DESIGNING-IN PERFORMANCE:
ENERGY SIMULATION FEEDBACK FOR EARLY STAGE DESIGN DECISION MAKING
By
Shih-Hsin Eve Lin
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ARCHITECTURE)
May 16, 2014
Copyright 2014 Shih-Hsin Eve Lin
ACKNOWLEDGMENTS
I would first like to acknowledge the constant support of my advisor, Dr. David J. Gerber who
offered me the opportunity to pursue this research and continued to provide guidance and
essential advice at very crucial points throughout my research. I would like to thank my committee
Prof. Marc Schiler and Dr. Burcin Becerik-Gerber for their contribution, support and constructive
feedback. I would also like to thank Prof. Douglas Noble for his encouragement and instigation,
and to Karen M. Kensek for her guidance and providing me the experiment environment which
constitutes an important part of my work.
Many thanks for the research opportunity and for all the support provided by the Autodesk IDEA
Studio. Special thanks to the following people who, with all their support and contributions, made
this research possible: Kimberly Whinna, Matt Jezyk, John Kennedy, Davi Astbury, Arjun Ayyar,
Nancy Clark Brown, Jeff Clayton, Jim Cowan, Aniruddha Deodhar, Mikako Harada, Rick Jones,
Zachary Kron, Ian Molloy, Lira Nikolovska, Jim Quanci, Melrose B. Ross, Lillian Smith, Gopinath
Taget, Barry Tsai, Tom Vollaro, Mark Webb, and Jason Winstanley.
I would like to thank Ms. Bei “Penny” Pan, the initial lead software developer, and Junwen Chen, Ke Lu, Shitian Shen, and Yunshan Zhu for their continued software development. I also want to thank the 2012 ARCH507 class, Xinyue Ma, Abdul Ali Khan, Ryan Conover, Aslihan Senel, Ian Andersen, and Rodrigo Shiordia, for their participation in the research. I would like to specially thank the participants of my case study, Gloria Lee and Nathan Swift from Swift Lee Office and Peter Simmonds from IBE Consulting Engineers, for the support and information they offered.
I wish to thank my friends and colleagues at the USC School of Architecture, Jae Yong Suk, Andrea Martinez, Yara Masri, Mic Matterson, Simon Chiu, Elizabeth Valmont, Jeffery Vaglio, Edward Losch, Myoboon Hur, Loretta Lee, and Ju Lee Kang, for their friendship and support.
I must acknowledge and give very special thanks to Laura Haymond, without whose intellectual and moral support this work would never have been completed. Many thanks for her constant companionship, care, and support. I must also express my thanks to the Haymond family for providing me with an environment full of love and support.
I must express my love and gratitude to my parents, Prof. Kai-Hsin Lin and Prof. Syi Su Lin, for their continuous support. Without their love and patience I would never have been able to accomplish this work.
To all of you, my most sincere thanks,
Shih-Hsin Eve Lin
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
ABBREVIATIONS & ACRONYMS

CHAPTER 1 INTRODUCTION
  1.1 PROBLEM STATEMENT
  1.2 OVERVIEW OF RESEARCH
    1.2.1 Research Hypothesis
    1.2.2 Research Aims, Objectives and Questions
    1.2.3 Research Scope & Definitions
    1.2.4 Premise Assumptions & Limitations
  1.3 OVERVIEW OF THE THESIS

CHAPTER 2 LITERATURE SURVEY, REVIEW, AND GAP ANALYSIS
  2.1 BUILDING DESIGN TECHNOLOGY
    2.1.1 From Computer-Aided Design (CAD) to Building Information Modeling (BIM)
    2.1.2 Parametric Design & Design Exploration
    2.1.3 Parametric Design & Building Performance
    2.1.4 Summary of Current Building Design Technology
  2.2 BUILDING ENERGY SIMULATION & DESIGN
    2.2.1 Obstacles Between the Design and Energy Simulation Domains
    2.2.2 Future Trends – Criteria for Designing-In Energy Performance
    2.2.3 Current Efforts in Overcoming the Obstacles Between Design and Energy Simulation Domain
    2.2.4 Summary of Building Energy Simulation & Design
  2.3 DESIGN AUTOMATION & OPTIMIZATION
    2.3.1 Multi-disciplinary Design Optimization (MDO)
    2.3.2 Evolutionary Design Techniques
    2.3.3 State of the Art of MDO in Building Performance Feedback
    2.3.4 Summary of Design Automation & Optimization
  2.4 SUMMARY OF THE LITERATURE REVIEW + RESEARCH’S POINT OF DEPARTURE

CHAPTER 3 RESEARCH METHODOLOGY
  3.1 RESEARCH METHOD OVERVIEW
  3.2 LITERATURE REVIEW
  3.3 PROTOTYPE TOOL DEVELOPMENT
  3.4 ESTABLISHING FRAMEWORK VALIDATION & EVALUATION METRICS
  3.5 HYPOTHETICAL CASE-BASED EXPERIMENT
  3.6 DESIGN PROFESSION CASE-BASED EXPERIMENT
  3.7 PEDAGOGICAL CASE-BASED EXPERIMENT

CHAPTER 4 EVOLUTIONARY ENERGY PERFORMANCE FEEDBACK FOR DESIGN (EEPFD): A MDO DESIGN FRAMEWORK FOR EARLY STAGE ENERGY SIMULATION FEEDBACK
  4.1 INTRODUCING EEPFD – THE PROPOSED DESIGNER-ORIENTED MDO DESIGN FRAMEWORK
  4.2 MANIFESTATION OF EEPFD
  4.3 THE PROTOTYPE TOOL DEVELOPMENT FOR EEPFD - H.D.S. BEAGLE
    4.3.1 H.D.S. Beagle Platform Selection & Integration
    4.3.2 The Implementation of a GA-based Multi-objective Optimization in H.D.S. Beagle
    4.3.3 H.D.S. Beagle Implementation Process
  4.4 EEPFD WORKFLOW
  4.5 SUMMARY OF EEPFD DEVELOPMENT

CHAPTER 5 HYPOTHETICAL CASE-BASED EXPERIMENTS
  5.1 INTRODUCTION OF HYPOTHETICAL CASE-BASED EXPERIMENTS
  5.2 HYPOTHETICAL CASE-BASED EXPERIMENT I: TECHNOLOGY AFFORDANCE
    5.2.1 The Technology Affordance Experiment Objective
    5.2.2 The Technology Affordance Experiment Description
    5.2.3 Results & Observations of the EEPFD’s Technology Affordance
    5.2.4 Summary of the EEPFD’s Technology Affordance
  5.3 HYPOTHETICAL CASE-BASED EXPERIMENT II: GA VALIDATION
    5.3.1 The GA Validation Experiment Objective
    5.3.2 The GA Validation Experiment Description
    5.3.3 Results & Observations of the GA Validation Experiment
    5.3.4 Summary of the GA Validation Experiment
  5.4 HYPOTHETICAL CASE-BASED EXPERIMENT III: COMPLEXITY VS. PERFORMANCE A
    5.4.1 The Complexity vs. Performance Experiment A Objective
    5.4.2 The Complexity vs. Performance Experiment A Description
    5.4.3 Results & Observations of the Complexity vs. Performance Experiment A
    5.4.4 Summary of the Complexity vs. Performance Experiment A
  5.5 HYPOTHETICAL CASE-BASED EXPERIMENT IV: COMPLEXITY VS. PERFORMANCE EXPERIMENT B
    5.5.1 The Complexity vs. Performance Experiment B Objective
    5.5.2 The Complexity vs. Performance Experiment B Description
    5.5.3 Results & Observations of the Complexity vs. Performance Experiment B
    5.5.4 Summary of the Complexity vs. Performance Experiment B
  5.6 HYPOTHETICAL CASE-BASED EXPERIMENT V: THE EEPFD BEST PRACTICE
    5.6.1 The EEPFD Best Practice Experiment Objective
    5.6.2 The EEPFD Best Practice Experiment Description
    5.6.3 Results & Observations of the EEPFD Best Practice Experiment
    5.6.4 Summary of the EEPFD Best Practice Experiment
  5.7 SUMMARY OF HYPOTHETICAL CASE-BASED EXPERIMENTS

CHAPTER 6 DESIGN PROFESSION CASE-BASED EXPERIMENT
  6.1 INTRODUCTION OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT
  6.2 METHOD AND PROCESS OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT
  6.3 DESIGN PROJECT BACKGROUND
  6.4 ENERGY SIMULATION FEEDBACK APPROACHES & COMPARISON
    6.4.1 In-House Analysis Process
    6.4.2 MEP Consultant Collaboration Process
    6.4.3 EEPFD Process
  6.5 SUMMARY OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT

CHAPTER 7 PEDAGOGICAL CASE-BASED EXPERIMENTS
  7.1 INTRODUCTION OF PEDAGOGICAL CASE-BASED EXPERIMENTS
  7.2 PEDAGOGICAL EXPERIMENT I: COMPUTATIONAL DESIGN TOOL COURSE SETTING
    7.2.1 The Pedagogical Course Experiment Objectives
    7.2.2 The Pedagogical Course Case Experimental Background
    7.2.3 The Pedagogical Course Case Experimental Design
    7.2.4 Results & Observations of the Pedagogical Course Case Experiment
    7.2.5 Summary of the Pedagogical Course Case Experiment
  7.3 PEDAGOGICAL EXPERIMENT II: DESIGN STUDIO SETTING
    7.3.1 The Pedagogical Studio Experiment Objectives
    7.3.2 The Pedagogical Studio Case Experimental Background
    7.3.3 The Pedagogical Studio Case Experimental Design
    7.3.4 Results & Observations of the Pedagogical Studio Case Experiment
    7.3.5 Summary of the Pedagogical Studio Case Experiment
  7.4 PEDAGOGICAL EXPERIMENT III: COMPUTATIONAL DESIGN WORKSHOP SETTING
    7.4.1 The Pedagogical Workshop Experiment Objectives
    7.4.2 The Pedagogical Workshop Case Experimental Background
    7.4.3 The Pedagogical Workshop Case Experimental Design
    7.4.4 Results & Observations of the Pedagogical Workshop Case Experiment
    7.4.5 Summary of the Pedagogical Workshop Case-based Experiment
  7.5 SUMMARY OF THE PEDAGOGICAL CASE-BASED EXPERIMENTS

CHAPTER 8 CONCLUSION + FUTURE WORK
  8.1 CONTRIBUTIONS TO THE BODY OF KNOWLEDGE
    8.1.1 Contribution 1: Functional Requirements of “Designing-in Performance” for Early Stage Design and Energy Performance
    8.1.2 Contribution 2: Evolutionary Energy Performance Feedback for Design (EEPFD)
    8.1.3 Contribution 3: Process Evaluation of EEPFD
  8.2 PRACTICAL IMPLICATIONS
  8.3 RESEARCH LIMITATIONS
  8.4 FUTURE WORK

BIBLIOGRAPHY

APPENDIX A SUMMARY OF SELECTED WORKS ON APPLYING OPTIMIZATION IN BUILDING DESIGN

APPENDIX B EXPERIMENT FRAMEWORK + QUESTIONNAIRE
  B.1 DESIGN PROFESSION CASE-BASED EXPERIMENT FRAMEWORK
  B.2 DESIGN PROFESSION CASE-BASED EXPERIMENT QUESTIONNAIRE
APPENDIX C H.D.S. BEAGLE ASSUMPTIONS + TEMPLATE
  C.1 CEA BUILDING TYPE OPTIONS AND LOAD PROPERTIES
  C.2 CEA SPACE TYPE OPTIONS AND LOAD PROPERTIES
  C.3 CEA CONCEPTUAL CONSTRUCTION OPTIONS AND THERMAL PROPERTIES
  C.4 H.D.S. BEAGLE EXCEL TEMPLATE SAMPLE

APPENDIX D SCENARIOS OF THE HYPOTHETICAL CASE-BASED EXPERIMENTS
  D.1 SCENARIO 1
  D.2 SCENARIO 2
  D.3 SCENARIO 3
  D.4 SCENARIO 4
  D.5 SCENARIO 5
  D.6 SCENARIO 6
  D.7 SCENARIO 7
  D.8 SCENARIO 8
  D.9 SCENARIO 9
  D.10 SCENARIO 10
  D.11 SCENARIO 11
  D.12 SCENARIO 12

APPENDIX E PEDAGOGICAL EXPERIMENT I
  E.1 PEDAGOGICAL EXPERIMENT I PROPOSAL
  E.2 PEDAGOGICAL EXPERIMENT I WORKSHEET DESIGN
  E.3 PEDAGOGICAL EXPERIMENT I - PART B: STUDENTS’ PARAMETRIC MODEL DESIGN
  E.4 PEDAGOGICAL EXPERIMENT I - PART B: STUDENTS’ PARAMETRIC MODEL DATA
  E.5 PEDAGOGICAL EXPERIMENT I – RAW ANSWERS PROVIDED THROUGH THE FINAL QUESTIONNAIRE

APPENDIX F PEDAGOGICAL EXPERIMENT III
  F.1 PEDAGOGICAL EXPERIMENT III – SURVEY QUESTIONNAIRES
    F.1.1 D1-Q1.pdf
    F.1.2 D2-Q1.pdf
    F.1.3 D2-Q2.pdf
    F.1.4 D3-Q1.pdf
    F.1.5 D3-Q2.pdf
    F.1.6 D4-Q1.pdf
    F.1.7 D4-Q2.pdf
  F.2 PEDAGOGICAL EXPERIMENT III – HANDOUTS TO STUDENTS: D2-3_ST_DM_FROM_EEPFD.PDF
  F.3 PEDAGOGICAL EXPERIMENT III – PARAMETER SETTINGS OF STUDENTS’ AAC-3 MODELS

APPENDIX G PUBLICATIONS GENERATED FROM THIS RESEARCH
LIST OF TABLES

Table 1-1: Major tasks in each phase of the building lifecycle.
Table 1-2: Outline of the research method and the corresponding research objectives and questions.
Table 2-1: Summary of obstacles & potential solutions between the design and energy simulation domains.
Table 2-2: Summary of obstacles & potential solutions aimed at closing the gaps of the current MDO approach used by designers at the early stage of the design process.
Table 3-1: Evaluation metrics used in this research.
Table 4-1: H.D.S. Beagle Excel template worksheets and functions.
Table 4-2: An example of spatial program parameters & scoring formula for a mixed-use building experimental case.
Table 4-3: An example of spatial program parameters & scoring formula for a residential building experimental case.
Table 4-4: Financial settings, parameters, and formulae in the Financial Model of H.D.S. Beagle.
Table 4-5: Adjustable financial parameters in H.D.S. Beagle.
Table 4-6: Energy setting parameters available for exploration by name, type, range, and changeability from global to local surface values.
Table 4-7: A comparison between the classic genetic algorithm and the GA applied in H.D.S. Beagle.
Table 4-8: An example of the list of parameters, types, and their exploration ranges for a GA run with 13 parameters of interest.
Table 5-1: Summary of the hypothetical case measures.
Table 5-2: H.D.S. Beagle’s level of detail – available building components & attributes according to the functionality of Revit and Revit’s CEA utilized by this research.
Table 5-3: Load comparison between the overall building type and the common corresponding space type as suggested by Autodesk. Information presented is organized according to the assumption values provided by CEA’s reference (Autodesk 2012b).
Table 5-4: Scenarios 7, 8, and 5’s solution space performance per generation.
Table 5-5: Exploration settings of Complexity vs. Performance Experiment B.
Table 5-6: Summary of the Best Practice Experiment results.
Table 6-1: Summary of collected data for the design profession case-based experiment.
Table 7-1: The research questions of the pedagogical case-based experiments and the correlated experiment set.
Table 7-2: Summary of students’ backgrounds for Pedagogical Experiment I.
Table 7-3: Summary of the recorded data for Pedagogical Experiment I.
Table 7-4: Statistical summary of the recorded times for Pedagogical Experiment I – Part A.
Table 7-5: Statistical summary of the recorded parametric value ranges explored by students for Pedagogical Experiment I – Part A.
Table 7-6: Summary of iteration numbers, exploration times, and the explored ranges of the generated solution space for Pedagogical Experiment I – Part A.
Table 7-7: Solution space of the validated results for Pedagogical Experiment I – Part A.
Table 7-8: Performance comparison of the solution space for hypothetical Scenario 10 between the pedagogical experiment results and the H.D.S. Beagle-generated results after a runtime of 3 hours, 7 hours, and 6 generations.
Table 7-9: Pedagogical Experiment I – Part B: quality evaluation summary of student setup for the two required parameters. Percentages provided based on the received work of 25 students.
Table 7-10: Pedagogical Experiment I – Part B: quality evaluation summary of student setup for the customized parameters. Percentages based on the received work of 25 students.
Table 7-11: Summary of the final evaluation of students’ parametric models.
Table 7-12: Summary of received student self-evaluations of their customized form-driving parameters’ ability to translate the desired design intent.
Table 7-13: Summary of students’ backgrounds for Pedagogical Experiment III.
Table 7-14: Curriculum outline of Pedagogical Experiment III.
Table 7-15: Designated exploration parameters and ranges of the AAC-2 manual exploration process.
Table 7-16: Summary of the recorded data for the overall Pedagogical Experiment III.
Table 7-17: Summary of data availability for each designated activity for each participant.
Table 7-18: Summary of Pedagogical Experiment III – AAC-1 Manual Exploration Process iteration numbers, exploration times, and the explored ranges of the generated solution space.
Table 7-19: Performance comparison of the Pareto solutions generated by Pedagogical Experiment I Part A, Pedagogical Experiment III AAC-1, and H.D.S. Beagle after 3-hour and 7-hour runtimes and 10 generations.
Table 7-20: Performance comparison of the solution spaces generated by Pedagogical Experiment I Part A, Pedagogical Experiment III AAC-1, and H.D.S. Beagle after 3-hour and 7-hour runtimes and 10 generations. Improvement is measured against the initial baseline objective score.
Table 7-21: Performance comparison of the valid solution spaces generated by Pedagogical Experiment I Part A, Pedagogical Experiment III AAC-1, and H.D.S. Beagle after 3-hour and 7-hour runtimes and 10 generations. Improvement is measured against the initial baseline objective score.
Table 7-22: Performance comparison of the selected designs from the manual exploration process and subsequent designs selected from the EEPFD pre-generated data. Improvements in the objective performances through EEPFD pre-generated data are highlighted. Pareto ranks are based on a solution pool consisting of only the 14 selected designs.
Table 7-23: Students’ proposed new exploration ranges and the resulting solution space performance boundaries after 6 generations. Exploration ranges exceeding the originally provided ranges are highlighted.
Table 7-24: Summary of Pedagogical Experiment III – AAC-2 Manual Exploration Process iteration numbers, exploration times, and the explored ranges of the generated solution space.
Table 7-25: Performance comparison of the solution spaces generated by Pedagogical Experiment III AAC-2 and H.D.S. Beagle after runtimes of 2.8 hours, 5 hours, and 30 generations. Improvement is measured against the initial baseline objective score. Percent of contribution to the Pareto solution pool is also provided.
Table 7-26: Performance comparison of the selected designs from the manual exploration process and subsequent designs selected from the EEPFD pre-generated data. Improvements in the objective performances through EEPFD pre-generated data are highlighted. Pareto ranking is based on a solution pool consisting of only the 6 selected designs.
Table 7-27: Students’ proposed new exploration ranges of AAC-2 and the resulting solution space performance boundaries after 6 generations. Exploration ranges exceeding the originally provided ranges are highlighted.
Table 7-28: Summary of the evaluation of students’ AAC-3 models.
Table 7-29: Design requirements, project size, and the GA settings for EEPFD exploration of students’ AAC-3 models.
Table 7-30: Pedagogical Experiment III AAC-3 performance comparison of the student-generated solution space vs. the H.D.S. Beagle-generated solution space in 2.3 hours.
Table 7-31: Ranking comparison of the AAC-3 solution space, including students’ initial designs, SGSS, and BGSS.
Table 7-32: Performance comparison of the solution spaces generated by Pedagogical Experiment III AAC-4 and the Beagle after a completed runtime of 2 hours. Improvement is measured against the initial baseline objective score. Percentage contribution to the Pareto solution pool is also provided.
Table A-1: Summary of selected works on applying optimization in building design for performance feedback.
Table C-1: CEA building type options and related energy load properties as available through Revit’s CEA at the time of the research. Data is reorganized from the source provided by Autodesk’s CEA reference (Autodesk 2012b).
Table C-2: CEA space type options and related energy load properties as available through Revit’s CEA at the time of the research. Data is reorganized from the source provided by Autodesk’s CEA reference (Autodesk 2012b).
Table C-3: CEA conceptual construction options and thermal properties as available through Revit’s CEA at the time of the research. The data is reorganized from the source provided by Autodesk’s CEA energy setting reference (Autodesk 2012c).
Table E-1: Pedagogical Experiment I – Part B – students’ answers & authors’ evaluation.
Table E-2: Evaluation of students’ parametric models for Pedagogical Experiment I – Part B.
LIST OF FIGURES

Figure 1-1: (a) Current use of performance simulations in the building design process; (b) illustration of the relationship between the influence of design decision changes and their cost during the overall design process. Images synthesized and redrawn by the author from the presentation of Ellis, Torcellini, and Crawley (2008) and the article of Cherry and Petronis (2009).
Figure 1-2: Illustration of the design process. Design squiggle image by Newman (2006).
Figure 1-3: Thesis structure overview.
Figure 2-1: Building simulation tools pre- and post-design process. Image re-plotted by the author based on DOE 2011 website information obtained from Attia (2012).
Figure 2-2: Developed building simulation tools divided by targeted users between 1997 and 2010. Image from Attia (2011).
Figure 2-3: Graphic summary of the review by Evins (2013) of computational optimization methods applied to sustainable building design. Image from Evins (2013).
Figure 2-4: Highlights of the research approaches and foci.
Figure 3-1: System development research process proposed by Nunamaker, Chen, and Purdin (1990). Diagram redrawn based on Nunamaker, Chen, and Purdin (1990).
Figure 3-2: Outline of the framework development method & process.
Figure 3-3: The six-step process for integrating design and energy simulation independent of the platform or tool used. These steps can be implemented manually or automatically, depending on the user’s selected tools and platforms.
Figure 3-4: Outline of the hypothetical case-based experiment method & process.
Figure 3-5: Outline of the design profession case-based experiment method & process.
Figure 3-6: Outline of the pedagogical case-based experiment method & process.
Figure 4-1: The proposed theoretical MDO design framework, Evolutionary Energy Performance Feedback for Design.
Figure 4-2: The development process of EEPFD & H.D.S. Beagle.
Figure 4-3: H.D.S. Beagle system overview.
Figure 4-4: H.D.S. Beagle automation loop developed in C# as a plug-in for Revit, using the Revit API, Excel API, and GBS SDK.
Figure 4-5: The working principle of EEPFD’s GA-based multi-objective optimization framework.
Figure 4-6: Utilized process for obtaining EUI values in EEPFD.
Figure 4-7: Example of a parametric model and the division of design geometry parameters into driving, driven, and fixed categories.
Figure 4-8: A sample design alternative with a “chromosome” composed of 13 “genes” corresponding to Table 4-8. The available selection of “gene” values is defined through the initial population size of 10, with an even distribution of parametric values across user-defined ranges of interest for each “gene”. Highlighted values are specific to the provided graphic example.
Figure 4-9: An example of the resulting exchange of parametric values as generated through the GA crossover mechanism.
Figure 4-10: Diagrammatic implementation of H.D.S. Beagle, as prepared by the author.
Figure 4-11: Illustrated process to prepare executable files for H.D.S. Beagle.
Figure 4-12: Screenshot of H.D.S. Beagle v20120903 user interface 1 – Design Parameter Settings UI: (left) the initial user interface; (right) the interface after loading the Excel template.
Figure 4-13: Screenshot of H.D.S. Beagle v20120903 user interface 2 – Energy Parameter Settings.
Figure 4-14: Screenshot of H.D.S. Beagle v20120903 user interface 3 – Individual Surface Energy Settings.
Figure 4-15: Screenshot of H.D.S. Beagle v20120903 user interface 4 – GA Settings.
Figure 4-16: Illustration of H.D.S. Beagle’s execution process. Diagram by the author.
Figure 4-17: Summary of the H.D.S. Beagle use process map.
Figure 4-18: EEPFD’s six-step process for integrating design and energy simulation.
Figure 4-19: Illustration of the GA-driven evaluation by H.D.S. Beagle of hypothetical Scenario 10 and a subset of the resulting solution pool.
Figure 5-1: Summary of the 12 hypothetical design scenarios, including design parameter problem scale, coupling, and geometric complexity. The table illustrates the averaged measurements of surface tessellation count (i.e., geometric complexity) and the time required for the automated energy analysis round trip, averaged over all offspring in all generations of each scenario.
Figure 5-2: Hypothetical Case-based Experiment I – illustrated experimental process.
Figure 5-3: Mapped observed energy analysis times with correlated geometry surface quantities, overlaid with 3D visualizations of each scenario’s tessellated energy model, initial design energy model surface counts, and averaged run times. Diagram by the author.
Figure 5-4: Sample of the data plots generated by MATLAB® through customized code developed by the research team. The plotted data pertain to experimental run_20120903_2339.
Figure 5-5: Summary of the GA run for Scenario 5, including parameters, explored ranges, and user-defined GA settings.
Figure 5-6: Parallel geometric and analytical data visualization illustrating three 2D plots of the data set for three offspring of Scenario 5.
Figure 5-7: Subset of Scenario 5 data illustrating the highest-ranking design alternatives for each objective from the overall solution space.
Figure 5-8: An example of a subset of the solution space for Scenario 5 in which the multiple objectives (EUI, NPV, and SPC) are calculated, ranked, and visualized for ease of manual decision making.
Figure 5-9: Illustration of Scenario 10’s models a, b, c, and d with increasingly geometrically complex initial designs, their parameters, and explored parametric ranges.
Figure 5-10: Observed improvements found in the energy performance feedback for Scenario 10 as they relate to the geometric complexity of the initial design.
Figure 5-11: The percentage improvement trend of Scenarios 7, 8, and 5’s solution space ranges over each generation, calculated with respect to the prior generation.
Figure 5-12: A sample of the geometric diversity of potential design solutions for a single design problem. Images provided by David Gerber.
Figure 6-1: Timeline of the competition project and the corresponding implementation stage for each of the three energy performance feedback approaches.
Figure 6-2: The pre-engineered kit-of-parts prototype design concept of the net ZEB school design. Image courtesy of Swift Lee Office (2012). The full-size image can be found in the poster prepared for the competition (Swift Lee Office 2012).
Figure 6-3: Examples of various kit-of-parts configurations compiled for different site orientations, space types, space arrangements, and solar screen patterns. Image courtesy of Swift Lee Office (2012).
Figure 6-4: Examples of daylighting analyses for different façade strategies, such as light shelves, louvers, and perforated metal screens.
Figure 6-5: Design guidance for active and passive design strategies summarized by the designers based on information provided by the MEP consultant. Image courtesy of Swift Lee Office (2012). The full-size image can be found in the poster prepared for the competition (Swift Lee Office 2012).
Figure 6-6: The final parametric model of the design profession case-based experiment.
Figure 6-7: Illustration of the EEPFD implementation process. Image courtesy of Swift Lee Office (2012). The full-size image can be found in the poster prepared for the competition (Swift Lee Office 2012).
Figure 6-8: Project comparative process maps with initial observations.
Figure 7-1: Simulation process comparison between EEPFD & the pedagogical experiment.
Figure 7-2: The given design requirement for Pedagogical Experiment I.
Figure 7-3: Initial parametric model for Pedagogical Experiment I as provided to each student.
Figure 7-4: The worksheet provided to students to record their decision-making process.
Figure 7-5: Excel worksheet calculator provided to students to calculate the performance of their generated design alternatives according to the three objective functions.
Figure 7-6: Worksheet provided to students to allow them to compare multiple design iterations.
Figure 7-7: Raw data and 3D visual representations of the generated solution space for Pedagogical Experiment I – Part A. Illustrated by the author.
Figure 7-8: Illustrated example of a linear manual exploration process as mapped through the pedagogical benchmark cases. Image reproduced by the author, based on the original, courtesy of USC Arch507 2012 student Abdul Ali Khan.
Figure 7-9: Illustration of the performance comparison of solution spaces generated by Pedagogical Experiment I – Part A and through EEPFD, after completing three-hour runtimes.
Figure 7-10: The solution space comparison for hypothetical Scenario 10 between the pedagogical experiment results and the prototype-generated results after a runtime of 3 hours, 7 hours, and 6 generations.
Figure 7-11: Illustration of the evaluation results summary of students’ parametric models.
Figure 7-12: Student self-evaluations of their customized form-driving parameters’ ability to translate the desired design intent.
Figure 7-13: Summary of students’ parametric designs with the performance range for both student- and H.D.S. Beagle-generated alternatives. NPV calculated in million USD and EUI in kBtu/ft²/yr.
Figure 7-14: Project site of Pedagogical Experiment II. Image source: Google Maps.
Figure 7-15: The design of the Pedagogical Experiment II process.
Figure 7-16: Pre-parameterization design process of Student A. Image courtesy of Xinyue Amber Ma.
Figure 7-17: Parametric design of Student A. Image re-diagrammed by the author based on Student A’s design.
Figure 7-18: Summary of the provided data sets from two GA runs for final design decision making.
Figure 7-19: Illustration of Student A’s decision-making process.
Figure 7-20: Experimental outline of Pedagogical Experiment III.
Figure 7-21: Summary of the AAC-1 EEPFD pre-generated data set provided for Pedagogical Experiment III.
Figure 7-22: The given design requirement for Pedagogical Experiment III AAC-2.
Figure 7-23: Second parametric design, AAC-2.
Figure 7-24: Summary of the AAC-2 EEPFD pre-generated data set provided for Pedagogical Experiment III.
Figure 7-25: Solution space comparison of Pedagogical Experiment I Part A, Pedagogical Experiment III AAC-1, and H.D.S. Beagle-generated results (Run_20131108_0004).
Figure 7-26: Students’ selected designs via the three exploration approaches: (1) selection from their manual exploration process; (2) selection from the EEPFD pre-generated solution space; and (3) improved design based on the provided EEPFD pre-generated dataset.
Figure 7-27: Comparison of the distribution of the solution spaces generated from EEPFD based on each student’s new exploration range.
Figure 7-28: Performance comparison of the solution space provided through EEPFD and the selected designs of SA07 and SA08 via three different approaches: (1) manual exploration; (2) selection based on the given EEPFD pre-generated dataset; and (3) improved design based on the given EEPFD pre-generated dataset.
Figure 7-29: Comparison of the distribution of the solution spaces generated from EEPFD based on each student’s new exploration ranges for the AAC-2 scenario.
Figure 7-30: Pedagogical Experiment III AAC-4 solution space comparison: students’ manually explored solution space vs. the H.D.S. Beagle-generated solution space within a 2-hour time limit.
Figure C-1: A sample parametric model for H.D.S. Beagle. Diagram by the author.
Figure C-2: Screenshot of H.D.S. Beagle’s Excel template – Sheet 1: Home Sheet.
Figure C-3: Screenshot of H.D.S. Beagle’s Excel template – Sheet 2: GeometryParam.
Figure C-4: Screenshot of H.D.S. Beagle’s Excel template – Sheet 3: LevelSetting.
Figure C-5: Screenshot of H.D.S. Beagle’s Excel template – Sheet 4: ProjectConstraints.
Figure C-6: Screenshot of H.D.S. Beagle’s Excel template – Sheet 5: SPCScoreParam.
Figure C-7: Screenshot of H.D.S. Beagle’s Excel template – Sheet 6: SPCFormula.
Figure C-8: Screenshot of H.D.S. Beagle’s Excel template – Sheet 7: FinancialParam.
Figure C-9: Screenshot of H.D.S. Beagle’s Excel template – Sheet 8: FinancialProForma.
ABSTRACT
With the continued advancement of computational tools for building design, performance has gradually claimed a more prominent role as a driving force behind design decisions. However, only limited direct energy performance feedback is currently available to designers early in the design process, where decision making has the highest potential impact on the overall design’s energy performance. This research therefore proposes a design process framework that provides designers with a “designing-in performance” environment, in which design decisions can be influenced by energy performance feedback during the early stages of the design process.
In response to the overall aim of this research, the first objective is to identify the most suitable method by investigating current and past efforts. An extensive literature review revealed that time constraints, interoperability issues between tools, and the need for expert domain knowledge are the primary obstacles faced by designers in exploring design alternatives with consideration for energy performance. Moreover, the evidence suggests that the Multidisciplinary Design Optimization (MDO) methodology presents the greatest potential to overcome these obstacles. This determination stems from the aerospace and automobile industries having successfully integrated multiple engineering domains through MDO to optimize and identify best-fit designs among competing objectives during the design process. As a result, it is the position of this research that providing designers with a designer-oriented MDO framework during the early stages of design will allow energy performance feedback to influence their design decision-making based on their design goals, thereby resulting in higher-performing designs. However, applications of MDO in the building industry are still in their infancy, especially in relation to bridging energy performance and design form exploration during the early stages of the design process. As the applicability of this approach during the design process has yet to be fully explored, this task forms the second objective of this research.
More specifically, the second research objective is to identify the proposed framework characteristics that would assist designers during the early stage design process and enable a “designing-in performance” environment. Also included in the second objective is the validation of the proposed framework against the identified criteria. In order to achieve this objective, this research first synthesizes the pertinent research findings to isolate the criteria for “designing-in performance” and to identify the gaps in extant approaches that hinder their applicability to the design process. Based on these results, the research presents the theoretical structure of the proposed early stage designer-centered MDO framework, entitled Evolutionary Energy Performance Feedback for Design (EEPFD), which incorporates conceptual energy analysis and design exploration of complex geometry through an automated evolutionary searching method. EEPFD is a semi-automated design exploration process, enabled by a customized genetic algorithm (GA)-based multi-objective optimization (MOO) approach, that provides energy performance feedback to assist design decision-making. In order to realize EEPFD for the purpose of validation and evaluation against the previously identified criteria, a prototype tool, H.D.S. Beagle, is developed to host the customized GA-based MOO algorithm. In H.D.S. Beagle, energy use intensity (EUI) is selected as the energy objective function. Also included are spatial programming compliance (SPC) and a schematic net present value (NPV) calculation for consideration in performance tradeoff studies. A series of hypothetical cases is used to form the initial framework, as well as to obtain and evaluate the technology affordance of H.D.S. Beagle. These hypothetical cases are also used as a means to assess whether EEPFD demonstrates the potential to meet the needs of early stage design, where rapid design exploration and accommodation of varying degrees of geometric complexity are needed. Based on these results, EEPFD can be considered suitable for further exploration in early stage design applications. Finally, the hypothetical cases are used to reaffirm the need for incorporating energy performance feedback during the early stages of the design process.
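For reference, the two quantitative objective functions take their standard textbook forms; the exact pro forma parameters and formulae used by H.D.S. Beagle are those detailed in Chapter 4 (Table 4-4) and Appendix C, so the following is only a minimal sketch in standard notation:

\mathrm{EUI} = \frac{E_{\text{annual}}}{A_{\text{gross}}}\ \left[\mathrm{kBtu/ft^2/yr}\right], \qquad \mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}

where E_annual is the predicted annual energy consumption, A_gross is the gross floor area, CF_t is the net cash flow in year t, r is the discount rate, and T is the analysis horizon.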
The last research objective is to evaluate the impact of the proposed framework and availability
of energy performance feedback on the early stage design process. To achieve this objective,
evaluation metrics are first established to provide the means and measurements by which to
conduct process evaluation and comparative studies in both the design profession and
pedagogical case-based experiments. In the design profession case-based experiment, EEPFD is
applied to a design competition open to professional design firms. In this case study, the chosen
design firm utilizes three approaches in pursuing higher performance design: (1) collaboration
with mechanical electrical and plumbing (MEP) consultants; (2) in-house analysis through
available analytical tools; and (3) EEPFD application. While this case revealed that the output of
these three approaches is not directly comparable, it is observed that EEPFD provides exploration
of a greater number of design alternatives and tradeoff studies compared to the two conventional
processes. The pedagogical case-based experiments conducted as a part of this study are divided
into three sets, based on the setting—a computational classroom, a design studio, and a
computational workshop setting. These experiments utilize the established benchmark process
to (1) compare the human decision process against the automated EEPFD; (2) observe the ability
of students to translate their design intent into a parametric model; and (3) gauge the impact of
the availability of energy performance feedback on the early stage design process. The results
obtained in these experiments indicate that EEPFD is capable of generating a solution space with
a higher Pareto-designated solution rate than the students can achieve. Moreover, it reduces by up to 50% the human error rate observed in the manual exploration process. It is also
observed that students are able to use the EEPFD-provided feedback to identify a design
alternative that is not only consistent with their design exploration intent, but also improves upon
the energy performance of their initial design. However, in all these case-based experiments,
designers—both students and design professionals—encountered difficulties in translating their
design intent into a parametric model compatible with the use of EEPFD. While this is
acknowledged, it is likely that increased familiarity with parametric modeling techniques would
overcome these observed difficulties.
The significant contribution of this research is EEPFD: a designer-oriented MDO framework enabling the exploration of complex geometric form in combination with energy performance feedback. This framework addresses the observed gaps in the currently available solutions, thereby enabling further exploration of MDO application to early stage design. Subsequently, this
research provides the means and measurements to explore and evaluate the application of EEPFD
to the early stages of design, identifying potentially advantageous adjustments from previously
implemented MDO frameworks and the early stage design process. Considering the nature of
early stage design in the architectural field—which consists of subjective and objective design
requirements, time constraints, uncertainty regarding design components, and the unique
conditions of each design problem—a best practice is proposed in applying EEPFD to early stage
design, which significantly differs from the more common MDO application method of seeking a
mathematically-defined convergence “best fit” solution set. Finally, this research provides a
“designing-in performance” environment in which designers can use available energy
performance feedback to influence their design decisions during the early stages of the design
process, where such decisions have the greatest impact on the overall design’s energy
performance.
ABBREVIATIONS & ACRONYMS
AEC Architecture, Engineering, and Construction
API Application Programming Interface
ASHRAE American Society of Heating, Refrigerating and Air-Conditioning Engineers
BIM Building Information Modeling
CAD Computer-Aided Design
CAE Computer-Aided Engineering
CEA Conceptual Energy Analysis
DOE U.S. Department of Energy
EA Evolutionary Algorithm
EUI Energy Use Intensity
EEPFD Evolutionary Energy Performance Feedback for Design
Excel Microsoft® Excel® 2010
FAR Floor Area Ratio
GA Genetic Algorithm
GBS Autodesk® Green Building Studio® web-based energy analysis service
GUI Graphic User Interface
GUID Globally Unique Identifier
HVAC Heating, Ventilation, and Air Conditioning
ID Identification
IFC Industry Foundation Classes
MEP Mechanical, Electrical, and Plumbing
MDO Multidisciplinary Design Optimization
MCDM Multi-criteria Decision Making
MOO Multi-Objective Optimization
NPV Net Present Value
OBJ Objective
Revit Autodesk® Revit® 2012/2013
RFI Request for Information
SDK Software Development Kit
SPC Spatial Programming Compliance
SoA USC School of Architecture at the University of Southern California
UI User Interface
USC University of Southern California
USD United States Dollar
Vasari Autodesk Project Vasari, a beta tool with an API and functionality similar to those of Revit, but which also offers beta functionalities that have not yet been incorporated into Revit
ZEB Zero Energy Building
CHAPTER 1 INTRODUCTION
1.1 PROBLEM STATEMENT
In the interest of promoting more sustainable design, the topic of building performance, especially
with respect to energy consumption, has become increasingly significant to the overall design
process in the current field of architecture. This shift in focus is mostly due to the fact that
buildings account for a majority of all consumed energy, nearly one half (48.7%) in the United
States (Architecture 2030 2011) and up to 40% of all energy consumption in the European Union
(The European Union 2012). However, during the early stages of the design process, designers currently have to make decisions based on insufficient energy performance feedback, at a time when such decisions are considered to have the greatest impact on the overall design's performance. Thus, this research aims to propose a new design approach capable of reducing design cycle latency and providing performance analyses to the initial stages of the design process through an increased generation of design alternatives that are easily assessed, ranked, and filtered according to different criteria.
While the overall performance of buildings is more substantially impacted by design decisions
made during the early design stages versus later stages (Bogenstätter 2000), design professionals are often confronted with difficulties in incorporating energy performance feedback into their early exploration of design alternatives (Crawley et al. 2008, Flager et al. 2009, Schlueter and Thesseling
2009). As shown in Figure 1-1, currently the majority of energy performance analyses are applied
to support the optimization of specific systems, e.g., HVAC sizing or code compliance, and are
conducted later in the design development phase, where the overall building form has been
defined (Hensen and Lamberts 2011, Toth et al. 2011). As a result of energy performance analyses
being used for specific system optimization, the target users of these analysis applications are
typically engineers who work within their own specialties, and therefore have limited opportunity
to gain and benefit from design knowledge across varying disciplines. While the increasing
demand for energy simulations is confirmed by the growing number of available energy
simulation tools, which has grown from 107 to 406 in the past 15 years (DOE 2013), less than 10%
of these tools can assist architects during the early design phases (Attia, Gratia, et al. 2012). The
majority of available energy simulation tools are designed for use during the post-schematic
design stages, where domain experts conduct performance evaluations a posteriori, rather than in
a rapid and automated design cycle (Schlueter and Thesseling 2009). However, as shown in Figure
1-1, at this stage, the opportunity to intervene where the most benefit can be achieved at the
least cost has been lost.
FIGURE 1-1: A. CURRENT USE OF PERFORMANCE SIMULATIONS IN THE BUILDING DESIGN PROCESS. B.
ILLUSTRATION OF THE RELATIONSHIP BETWEEN THE INFLUENCE OF DESIGN DECISION CHANGES AND
THEIR COST DURING THE OVERALL DESIGN PROCESS. IMAGES SYNTHESIZED AND REDRAWN BY THE
AUTHOR FROM THE PRESENTATION OF ELLIS, TORCELLINI, AND CRAWLEY (2008) AND THE ARTICLE
OF CHERRY AND PETRONIS (2009).
In addition to the paucity of tools targeting designers, model conversion and expert domain
knowledge are also required to produce usable results from current tools. The process of model
conversion is not well supported by existing design tools and typically requires expert translation
and interpretation of generated data, thereby contributing to the overall design cycle latency and
lack of design domain integration (Augenbroe et al. 2004). The design cycle latency is affected by
other issues, such as tool interoperability among different expert domains, intensive analysis time
requirements, and limitations of design cognition and complexity (Oxman 2008, Augenbroe 2002,
Attia, Hensen, et al. 2012). Consequently, performance assessments are typically performed after the initial design phase, where the analyses are based on a very limited set of design alternatives, rather than in support of early stage design decisions, where a broader range of potentially more optimal solutions may exist (Radford and Gero 1980, Hensen 2004).
Furthermore, design decisions are rarely driven by a single design objective, as designs typically
need to address several often competing objectives (Aish and Marsh 2011, Radford and Gero 1980,
Grierson 2008). This is consistent with the widely accepted view that design is typically
understood as an ill-defined problem (Simon 1973) with objectives that are often non-
commensurable and their relative importance difficult to evaluate (Grierson 2008). As a result,
there is an inherent need for tradeoff studies in order to assist in finding the “best” compromise
during the design process. However, these studies can become time-intensive and complicated, requiring input from multiple disciplines in order to provide relevant feedback; consequently, such studies are often minimized by necessity and their findings marginalized.
This research addresses the issue that current methods for integrating design with energy analysis
feedback only provide a limited set of results with little or no ability to generate tradeoffs across
competing design constraints and objectives. More specifically, it proposes that, through the use
of multidisciplinary design optimization (MDO), designers can more efficiently generate an
expansive design alternative solution space that is simultaneously ranked across competing
objectives, thus reducing uncertainty in the process of identifying optimal design solutions.
1.2 OVERVIEW OF RESEARCH
1.2.1 RESEARCH HYPOTHESIS
By enabling designers to benefit from a designer-oriented MDO framework during the early stages of design, the aim is to allow energy performance feedback to influence design decision-making in accordance with designers' design goals, thereby resulting in higher-performing designs, as defined by the MDO framework.
1.2.2 RESEARCH AIMS, OBJECTIVES AND QUESTIONS
The overall aim of this research is to propose a framework that can provide a “designing-in
performance” environment, where designers can integrate energy performance feedback to
support their decision-making process during the early stages of design. The framework should
be able to reduce design cycle latency and further assist in tradeoff studies by providing energy
performance feedback in conjunction with other performance measurements, such as financial
or spatial programming compliance. In addition, the framework should be applicable to the early
stages of the design process, helping designers identify higher-performing design alternatives, in
particular when such performance is an intended goal. In pursuit of this aim, the research has the
following three specific objectives, which can be achieved by addressing the associated research
questions:
1. To investigate the existing approaches to bridging the gap between design and energy
performance analysis and identify potential means by which to achieve the research aim.
In order to identify the potentially most suitable approach and formally define the required
functionalities of the proposed framework, a clear understanding of currently utilized
approaches, their areas of focus, and their successes and failures is needed. To achieve this
objective, the following research questions must be addressed:
1.1. What are the limitations and successes of identified approaches focusing on bridging design
and energy performance feedback?
There have been several efforts that focus on providing energy performance feedback in
order to support design decision-making. However, several issues still exist and have yet
to be overcome. To clearly understand the fundamental problems and their potential
solutions, this research will synthesize current efforts through a literature survey and
review.
1.2. Why have the identified approaches not been adopted by designers during the early
stages of design?
The efforts to bridge energy simulation feedback for design decision-making are made
with different objectives, and during different phases of the building lifecycle. As the
research aim is to provide the proposed framework to the designers in the early stages of
design, the research needs to identify existing gaps and address the challenges
encountered by previous approaches through a review of current designer-driven early
design stage precedents.
1.3 How can the research, with current technology, overcome the shortcomings of these
precedent approaches?
The potential of a technological solution to overcome the previously identified challenges
designers face when attempting to use energy simulation feedback for design decision-
making during the early stages of design needs to be investigated through synthesizing
current attempts. Through this investigation, this research can isolate the critical
components and necessary functionalities of the proposed framework to achieve the
research aim.
2. To identify the requirements for the proposed framework to be adopted by designers during the early stage design process and enable "designing-in performance," and to evaluate its ability to meet these requirements. The second objective of this research is to validate the proposed
framework by identifying criteria for both functionality and applicability before assessing the
ability of the proposed framework to meet these criteria. To achieve this objective, the
following research questions must be addressed:
2.1 What criteria should be used to examine and determine the proposed framework’s
validity for early stage design use?
A clear definition of the criteria a framework must meet to achieve the research aim is needed. Each stage of the design process is characterized by its own inherent needs. For
example, the early design phase might require that several building envelope design
concepts be considered, while later design stages might require analysis of individual
HVAC components. A thorough investigation of these requirements, including the metrics
by which to measure them, is critical to orienting the composition of the proposed
framework. These requirements and metrics should also be used when validating the
proposed framework.
2.2 Is the proposed framework capable of satisfying the requirements identified above?
After identifying the criteria, the next step is to ascertain whether the proposed
framework is able to fulfill the identified requirements. Therefore, a prototype tool with
the basic identified functionalities needs to be developed and validated against the
acknowledged criteria prior to evaluating the proposed framework in terms of its utility
during the early stages of the design process.
3. To evaluate the impact of the proposed framework and availability of energy performance
feedback on the early stage design process. To understand whether the proposed framework
is able to achieve the research aim that enables designers to design-in performance, the
impact of the framework on the design process needs to be investigated. Also in need of
investigation is the influence of having the proposed framework’s generated energy
performance feedback available during the early stage design process. For this investigation,
user feedback is an essential component to the initial assessment of the proposed framework
and the influence of the available energy performance feedback. However, currently explored
approaches primarily focus on the potential scope of application or the efficiency of the
technologies and algorithms. Consequently, it remains unclear how these precedent
approaches would affect the design process, or how potential improvements can be
implemented to achieve the research goal. To achieve this objective, the following research
questions must be answered:
3.1. What are the requirements and associated metrics that should be used to conduct
comparative studies in the design and evaluation of the proposed framework’s impact on
the early stage design process?
The design process is subject to individual preferences and varies due to the unique
nature of each project’s design requirements. In order for the impact of the proposed
framework to be measured, compared, and understood, clearly defined metrics and
measurements need to be established.
3.2 What is the impact of the proposed framework on the early stage design process
according to the previously established metrics?
Various approaches involving obtaining energy performance feedback for design
decision-making have been adopted in the design process, each having its pros and cons.
The most widely used examples include collaboration with engineering consultants, in-house performance analysts, and designers' self-conducted analyses. To understand
whether the proposed framework is better able to meet the research aim, a series
of case-based experiments and comparative studies of these various approaches will be
carried out, based on the previously established evaluation metrics.
3.3 How does the established framework and available energy performance feedback
support designers’ decision-making?
Previous research based on similar approaches has yet to determine if the provided
energy performance feedback influences the designer’s decision-making. Therefore, a
series of case-based experiments will be conducted as a part of this research, in order to
determine how the provided data supports designers’ decision-making. The interaction
between designers and the proposed framework can also be initially observed, so that
further adjustments can be suggested for future improvements.
1.2.3 RESEARCH SCOPE & DEFINITIONS
1.2.3.1 DESIGNING-IN PERFORMANCE
The concept of “designing-in performance” is defined in this research as the idea of utilizing
performance feedback to influence design exploration and subsequent decision-making under the
assumption of pursuing higher-performing design. Although the term “designing-in performance”
can refer to the inclusion of any type of performance feedback, it should be noted that, in this
research, the specific emphasis is on the inclusion of energy performance feedback.
To distinguish from the term “performance-based design”, which is generally used to describe a
performance-driven design method, “designing-in performance” in this research refers to
increased emphasis on providing a design environment, where a designer’s state of mind can be
naturally influenced by performance feedback. Performance-based design implies a design
process possessing an emphasis or singular design goal focusing on improved performance.
However, in this work, “designing-in performance” is specifically used in reference to providing
an environment where design performance feedback is available, irrespective of design emphasis
or design goals. The premise is that designers who have access to performance feedback will develop higher-performing designs, even if increased performance is not a designated design goal, as it would be in "performance-based design."
1.2.3.2 EARLY DESIGN STAGE
A building’s lifecycle can be categorized into four phases: Pre-Design Phase, Design Phase,
Construction, and Facility Management and Operation. During the Design Phase, activities can be
further broken down into Schematic Design, Design Development, and Construction
Documentation. For the purpose of this research, major tasks in each phase are categorized and
listed in Table 1-1 and illustrated in Figure 1-1. However, it should be noted that, while tasks may
be assigned to a specific phase, this does not exclude the possibility of overlapping tasks between
phases, or continuing advancement of a specific task throughout the design process. The table
presented here is meant to provide a basic idea regarding the tasks that might be involved at the various design phases referred to, and is not meant to be considered an exhaustive list of design-related tasks. The other purpose of this list is to indicate the availability of information, as perceived by this research, during the specified design phases.
Based on the categorization provided in Table 1-1, the early design stage that this research focuses
on corresponds to the Schematic Design phase activities listed directly following the Pre-Design
Phase. At this point, while design requirements and site context are available, geometric
components and massing have not yet been finalized.
TABLE 1-1: MAJOR TASKS IN EACH PHASE OF THE BUILDING LIFECYCLE.

Pre-Design
Major tasks: feasibility studies; cash flow analysis; initial cost estimates; space requirement analysis and planning; site context issues; building code and zoning analysis.
Potential involved professions: Owner, Architect.

Design: Schematic Design (SD)
Major tasks: massing study; conceptual geometry; conceptual rendering; proposal of candidate materials and systems.
Potential involved professions (all Design sub-phases): Owner, Architects, Engineers, Contractors, Subcontractors, Fabricators.

Design: Design Development (DD)
Major tasks: detailed floor plans; detailed major construction systems; evaluation of the acceptance of major systems.

Design: Construction Documentation (CD)
Major tasks: detailed demolition plans; site preparation and grading; specification of systems and materials; construction review.

Construction: Procurement and Construction Administration
Major tasks: detailed cost estimates; detailed scheduling; site logistics and on-site error response; construction management; close-out and hand-over documentation.
Potential involved professions: Owner, Architects, Engineers, Contractors, Subcontractors, Fabricators.

Facility Management/Operation
Major tasks: close-out and hand-over commissioning; space management; equipment maintenance and management; energy management; renovation and replacement evaluation.
Potential involved professions: Owner, Engineers.
1.2.3.3 DESIGN DOMAIN VS. ENERGY SIMULATION DOMAIN
Design Domain: In the context of this research, this phrase pertains to the knowledge, tools, and
activities used by architects and designers to provide overall geometric and spatial definitions of
architectural projects. This definition excludes knowledge, tools, and activities used for
performance analysis or related to the development of specialty systems, such as structure, MEP,
or HVAC.
Energy Simulation Domain: This phrase, in the context of this research, refers to the specific
activities, knowledge, and tools necessary for conducting energy simulations, regardless of actors.
1.2.3.4 CONCEPTUAL ENERGY ANALYSIS
In order for designers to utilize energy performance feedback during the early stages of the design
process, they should be able to conduct energy performance analysis based on the information
available during massing studies. In this research, this type of analysis is referred to as conceptual
energy analysis.
1.2.3.5 FRAMEWORK VS. TOOL
While the term “framework” is often used in computer science or other engineering fields in
reference to the system structure of a developed software or tool, in this research, it refers to a
basic conceptual structure of a design approach and process. The similarity is that, in both cases, a framework is "a basic conceptual structure (as of ideas)," as defined in the Merriam-Webster dictionary (Merriam-Webster 2013b). However, in order to provide clarity and avoid confusion,
in this thesis, “framework” is used when describing a conceptual structure of a design process,
whereas "(prototype) tool" refers to the system structure of a developed tool.
1.2.3.6 DESIGN GEOMETRY
During the design process, geometric exploration includes overall building form, interior layouts,
and façade configurations, such as window sizing and shade depth. This research focuses on
assisting in both the overall building form exploration and the exploration of façade configuration
elements, such as window sizing, shade depth, and louver or fin angle and distance.
1.2.3.7 OPTIMIZATION
In this research, there are two primary areas to which the term “optimization” is applied. While
the notion of optimization has been heavily driven by the use in math and computing, this
research seeks to reemphasize the process of optimization activities by returning to the origin of
the optimization theory:
"Man's longing for perfection finds expression in the theory of optimization. It studies how to describe and attain what is Best, once one knows how to measure and alter what is Good or Bad. Normally, one wishes the most, or maximum, and the least, or minimum. The word optimum, meaning "best," is synonymous with "most" or "maximum" in the former case, and with "least" or "minimum" in the latter. Optimum has become a technical term connoting quantitative measurement and mathematical analysis, whereas "best" remains a less precise word more suitable for everyday affairs. To optimize means to achieve the optimum, and optimization refers to the act of optimizing. Thus optimization theory encompasses the quantitative study of optima and methods for finding them."
(Beightler, Phillips, and Wilde 1979, 1)
Moreover, in his “Genetic Algorithms in Search, Optimization, and Machine Learning” Goldberg
states:
"Note that this definition has two parts: (1) we seek improvement to approach some (2) optimal point. There is a clear distinction between the process of improvement and the destination or optimum itself. Yet, in judging optimization procedures we commonly focus solely upon convergence (does the method reach the optimum?) and forget entirely about interim performance. This emphasis stems from the origins of optimization in the calculus. It is not, however, a natural emphasis… the most important goal of optimization is improvement."
(Goldberg 1989, 6-7)
He emphasizes that there are two parts in the definition of “optimization”, namely: (1) the process
of seeking the optimal, and (2) the optimal itself.
Following this intent, in this research, the term “optimization” is applied to two areas: (1) the
mathematically defined multi-objective optimization algorithm used; and (2) the designer-driven
optimization process, including the non-quantifiable elements of design, such as personal
aesthetic preference. Despite the discrepancies between these two uses of the term, it is
acknowledged that seeking “improvement” is the most important goal in both areas.
1.2.3.8 MOO, MDO, & MCDM
In the literature review, several terms are used to represent similar concepts, such as multi-
objective optimization (MOO), multi-criteria optimization (MCO), multi-disciplinary design
optimization (MDO), etc. However, the context and emphasis of these terms, as well as their
usage, varies significantly. Thus, for clarity, in this research, the following definitions are adopted:
Multi-disciplinary design optimization (MDO): In this research, MDO is an umbrella term used to
refer to a design method that involves parametric design coupled with a multi-objective
optimization algorithm. The term used in this research places emphasis on the description of the
process. This differs from the strictly defined multi-disciplinary design optimization method
originally developed by the aerospace industry.
Multi-objective optimization (MOO): This term is used as a general term for an optimization algorithm that deals with multiple competing objectives, i.e., a multi-objective genetic algorithm (MOGA); a minimal sketch of the dominance test underlying such algorithms follows at the end of this subsection.
Multi-criteria decision-making (MCDM): This term refers to the evaluation, rating, and overall
decision-making strategies, outside of the automated MOO.
This research centers on the MDO process. Thus, although both MOO and MCDM are applied,
they are not the primary focus.
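Although neither MOO nor MCDM is the primary focus, the dominance test at the core of MOO is simple enough to sketch. The following minimal Python fragment is illustrative only: the objective tuples are hypothetical placeholders loosely echoing the EUI, spatial programming compliance, and NPV objectives named earlier, and this is not the algorithm implemented in this research.

# Minimal sketch of Pareto dominance, the comparison underlying MOO/MOGA.
# All objectives are assumed to be formulated for minimization; a maximization
# objective such as NPV can be negated to fit this convention.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Filter a pool of objective-value tuples down to its non-dominated set.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Hypothetical tuples: (EUI, spatial non-compliance, negated NPV).
pool = [(120.0, 0.05, -1.2e6), (135.0, 0.00, -1.5e6), (140.0, 0.10, -1.0e6)]
print(pareto_front(pool))  # the first two tuples survive; the third is dominated

In a multi-objective genetic algorithm, this non-dominated ranking takes the place of a single scalar fitness, which is what allows competing objectives to be traded off rather than collapsed into one number.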
1.2.3.9 DESIGN PROCESS
The design process throughout a project’s lifecycle is messy and involves a veritable smorgasbord
of activities, as illustrated in Figure 1-2. Of particular interest for this research are the integration processes between geometric design and energy performance feedback for the purpose of early design decision-making. As a result, the scope of the research can be viewed as tackling one entangled part of the overall design process. Therefore, it should be noted that, when the design
process is mentioned in the research, it mainly focuses on the activities of obtaining energy
performance feedback for the generated design idea, or design alternatives for decision-making
within the massing study design stage. Thus, the manner in which designers utilize the information
and synthesize it with other performance or criteria considerations, and how they further proceed
to the next design stage, is outside the scope of this research.
FIGURE 1-2: ILLUSTRATION OF THE DESIGN PROCESS. IMAGE: THE DESIGN SQUIGGLE BY NEWMAN (NEWMAN 2006).
1.2.3.10 PERFORMANCE FEEDBACK AND FEEDBACK LOOPS
According to the Merriam-Webster dictionary, feedback is defined as:
“helpful information or criticism that is given to someone to say what can be done
to improve a performance, product, etc." (Merriam-Webster 2013a)
In this research, feedback is defined as a series of performance data associated with generated
potential design solutions. This feedback is subsequently utilized by two separate and distinct
feedback loops. The first feedback loop is through an automated system, which is driven by the
mathematical function and method of measurement, as defined by this research. The second
feedback loop is the designer’s reactive design decision-making process that commences once
the automation loop has been terminated.
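The relationship between the two loops can be sketched as follows. Every function and value in this Python fragment is a hypothetical stub; the sketch only illustrates the handoff from the automated loop to the designer's reactive selection, not any algorithm used in this research.

import random

def evaluate(x):                 # loop 1's quantified objective (to minimize)
    return (x - 0.7) ** 2

def evolve(pop):                 # one automated iteration: keep best half, mutate
    survivors = sorted(pop, key=evaluate)[: len(pop) // 2]
    return survivors + [p + random.gauss(0, 0.05) for p in survivors]

population = [random.random() for _ in range(20)]
for _ in range(30):              # feedback loop 1: automated and math-driven
    population = evolve(population)
# The automation loop has terminated; its solution space is handed over.

# Feedback loop 2: the designer reacts to the generated feedback, applying
# criteria outside the automated measurement (here a stand-in preference).
def designer_preference(x):
    return -abs(x - 0.5)         # hypothetical, non-quantified taste

chosen = max(population, key=designer_preference)
print(f"automated best: {min(population, key=evaluate):.3f}; designer pick: {chosen:.3f}")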
1.2.3.11 BENCHMARK, VALIDATION, & EVALUATION
As the following three terms are used extensively within this research, they are defined below in
order to provide clarity and consistency.
Benchmark: To provide a starting point by which to calibrate or evaluate against.
Validation: A process that has quantifiable values and metrics, i.e., numbers, by which to define and establish validity.
Evaluation: A process that involves combining quantifiable measurements with qualitative descriptions, observations, and discussions of non-quantifiable elements for the purpose of objective assessment.
1.2.4 PREMISE ASSUMPTIONS & LIMITATIONS
The known issues, limitations, and assumptions affecting this research are acknowledged and
discussed below.
1. Design Goals: It is the assumption of this research that pursuit of higher energy
performance is one of the design goals. Therefore, given the opportunity, designers will
choose more energy efficient designs.
2. Simulation analysis accuracy: This research intends to use existing energy simulation
engines to provide reliable performance analysis for design decision-making. While the
proposed framework is reliant on these analyses, it does not seek to validate or confirm
the accuracy of these results in any way. The fact that energy simulations rely heavily on
design assumptions notwithstanding, verifying the actual accuracy of the results is
outside the scope of this research. This research, instead, seeks to focus on the impact
the availability of such results has on the early stage design process, as opposed to the
results themselves.
3. Target user: The target user of this research is the architect or architectural designer,
whose viewpoint is taken in all decisions and discussions. In other words, while multi-
disciplinary domains and related technology are mentioned, the usability evaluation is
centered on designer use.
4. Focus and limitation of the prototype development: Any tools developed as a part of this
research should be considered prototypes meant to investigate existing problems,
potential solutions, and solution requirements. As such, they will be established according
to proposed ideas, with user interfaces and technical issues still requiring improvement
and fine-tuning.
5. Continuance throughout the process: The aim of this research is to assist designers in
utilizing platform tools when evaluating and defining an initial design, i.e., geometric
massing, before proceeding to the next design stage. Thus, the manner in which the
design proceeds to the next design stage is not within the scope of this research.
1.3 OVERVIEW OF THE THESIS
Based on the previously described three research objectives, the thesis is organized into eight
chapters, as illustrated in Figure 1-3 below. Table 1-2 outlines the structure of this thesis with its
corresponding research objectives, questions, and research methods.
FIGURE 1-3: THESIS STRUCTURE OVERVIEW.
Chapter 1 provides the introduction to this research. It includes the research problem statement,
research hypothesis, objectives, questions, scope, definitions, assumptions, and limitations.
Chapter 2 provides the literature and background review, focusing on the pertinent sources in the
related field. It thus provides the basis for identifying the gaps in the extant knowledge,
substantiating the proposed methodology, and situating the contributions of this research. Finally,
this chapter aims to answer the associated research questions, as indicated in Table 1-2.
Chapter 3 introduces an overview of the methodologies adopted by this research. It also describes
the development of the process evaluation metrics, which define the measurements and
recorded items that are the focus of this research and are later used for framework validation and
evaluation.
Chapter 4 introduces the proposed theoretical framework. This is followed by the technical
development of the necessary prototype tool as one instance by which to realize the proposed
framework for this research. This allows the focus to shift to the second and third research
objectives.
Chapter 5 provides the hypothetical case-based experiments that are used to test the prototype
tool, gauge the prototype’s technological affordance, and validate the framework for further
process evaluation.
Chapter 6 is dedicated to the design professional case-based experiment. This experiment is a
part of the process evaluation for the framework, performed by comparing the proposed process
against two other processes used during the course of the same project.
Chapter 7 provides the three pedagogical case-based experiments, including comparison studies
against the benchmark process, measurement of problem formulation, and students' utilization
of the generated feedback.
Finally, Chapter 8 presents the conclusions reached by this research based on the findings,
followed by the contributions made to the extant body of knowledge, and proposals for future
work.
TABLE 1-2: OUTLINE OF THE RESEARCH METHOD AND THE CORRESPONDING RESEARCH OBJECTIVES AND QUESTIONS.

Chapter 1: Introduction

Chapter 2: Background Review + Gap Analysis
Research objectives: OBJ 1, OBJ 2
Research questions:
1.1 What are the limitations and successes of identified approaches focusing on bridging design and energy performance feedback?
1.2 Why have the identified approaches not been adopted by designers during the early stages of design?
1.3 How can the research, with current technology, overcome the shortcomings of these precedent approaches?
2.1 What criteria should be used to examine and determine the proposed framework's validity for early stage design use?
Research methodology: literature review.

Chapter 3: Research Methods
Research objective: OBJ 3
Research question:
3.1 What are the requirements and associated metrics that should be used to conduct comparative studies in the design and evaluation of the proposed framework's impact on the early stage design process?
Research methodology: literature review; establish validation & evaluation metrics.

Chapter 4: Proposed Framework
Chapter 5: Hypothetical Case-based Experiment
Research objective: OBJ 2
Research question:
2.2 Is the proposed framework capable of satisfying the requirements identified above?
Research methodology: develop a prototype tool to fulfill the identified needs of the proposed framework (Chapter 4); hypothetical case-based experiment and analysis (Chapter 5).

Chapter 6: Design Profession Case-based Experiment
Chapter 7: Pedagogical Case-based Experiment
Research objective: OBJ 3
Research questions:
3.2 What is the impact of the proposed framework on the early stage design process according to the previously established metrics?
3.3 How does the established framework and available energy performance feedback support designers' decision-making?
Research methodology: literature review; establish measurement metrics and evaluation method; conduct design profession experiment; conduct pedagogical case-based experiments.

Chapter 8: Conclusion + Future Work
CHAPTER 2 LITERATURE SURVEY, REVIEW, AND GAP ANALYSIS
Approximately 1300 references were surveyed during the research period. The topics covered in the surveyed literature considered for the more in-depth review can be divided into the following three major categories:
1. Design Method, Process and Evaluation: Design activity analysis, design process
evaluation, design cognition, parametric design, Building Information Modeling (BIM),
generative design, etc.
2. Performance Based Design Method & Process: Energy simulation, energy simulation
feedback method, other performance feedback, design decision support approach, etc.
3. Design Automation & Optimization: Evolutionary algorithm, multi-objective optimization,
artificial intelligence, cloud computing, optimization, human computer interaction, etc.
While each of these three groups subsumes a variety of fields, the major focus, as it relates to the
present research, is on the following areas:
1. Building Design Technology
2. Building Energy Simulation & Design Integration
3. Design Automation & Optimization
This chapter is organized according to these three literary focuses, each serving as the foundation
on which this research is developed, as well as the starting point in the process of providing
answers to some of the previously presented research questions.
2.1 BUILDING DESIGN TECHNOLOGY
2.1.1 FROM COMPUTER-AIDED DESIGN (CAD) TO BUILDING INFORMATION MODELING (BIM)
The continuing advancement of building design technology is best exemplified by the industry
transition from CAD to BIM. While there are arguments pertaining to the technical differences
between CAD and BIM, from the standpoint of this research, the term “Building Information
Modeling (BIM)” has become synonymous with the new “Computer-Aided Design (CAD)”, in that
it represents the latest series of technologies and tools available to assist design.
The origins of the term “Building Information Modeling” can be traced back to the 1970s, when
3D solid modeling was first developed (Eastman et al. 2011). The technology concept was developed by Professor Eastman, evolving from his Building Description System into the Building Product Model
(Underwood and Isikdag 2009, Yessios 2004).
In other industries, such as mechanical engineering, aerospace, or the automotive industry,
product modeling technology has already been extensively applied in order to improve the
industry in question. The objective of this new technology, which treats a building as a product,
allows the “information” or “semantic” qualities to be associated with the “objects” or “elements”
within the model. As a result, this information model not only provides the geometric information
pertaining to a particular product, but is also able to describe the characteristics and capabilities
of individual elements defined within the project, including, but not limited to, how these
individual elements interact in order to form the product in question. While different BIM tools provide different levels of information using various methods of descriptive language, the overall concept of the "information model" remains consistent (Nederveen, Beheshti,
and Gielingh 2010, Eastman et al. 2011).
Following the continually evolving capabilities of technology, the capabilities of BIM have grown
from the original “building product model” to parametric and fabrication-enabled modeling. As a
result, synthesizing from Nederveen, Beheshti, and Gielingh (2010), and the M.A. Mortenson
Company (Eastman et al. 2011), the definition of BIM should be:
BIM is a model of information about a building (or a building project) in digital representation. It
comprises the information about the project and the components within. It has the ability to
ASSOCIATE ATTRIBUTES AND SEMANTICS, such as function, shape and material, to objects in the
model. It provides the PARAMETRIC ENVIRONMENT enabling the user to DEFINE THE
RELATIONSHIP between the elements within the model without undertaking programming-level
software development. It provides object-based parametric model functionality and enables user-defined parametric objects. The model is expected to be continually used later on for construction
and fabrication purposes.
The following list of essential BIM tool functionalities is composed from the characteristics compiled by Eastman et al. (2011) and Nederveen, Beheshti, and Gielingh (2010):
Digital: The information within the model is stored in digital form that is computer-interpretable.
Spatial: The information describes the spatial information of the objects in the model.
Building Components: The representation of real world building components within the BIM tool
possessing both geometry and attributes associated with them. As a result, the user understands
what these components refer to in the physical world through their direct representation within
the BIM tool.
Parametric Enabled: While objects are represented in digital form, BIM tools allow for the use of
parametric rules to describe an object and its relationship to other objects within the BIM tool.
Comprehensive Component Behavior Description: Objects within the model can possess data
describing particular behavioral traits. This can aid in communicating design intent, building
performance, constructability, financial analysis, etc.
Consistent, Non-redundant and Coordinated Data: No data is repeatedly stored in the model. As
a result, when the user updates the information in one view, all other views available will be
updated accordingly.
Measurable: The components in the model are quantifiable, dimension-able, and query-able.
Durable: Information available within the model can support building activities throughout the
entire building lifecycle.
Among the above, PARAMETRIC MODELING is the key element that extends BIM's abilities beyond object-based modeling.
“The basic idea is that shape instances and other properties can be defined and
controlled according to a hierarchy of parameters at the assembly and sub-
assembly level, as well as at an individual object level” (Eastman et al. 2011, 29)
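As a toy illustration of two of the characteristics listed above, attribute/semantic association and parametric relationships between elements, consider the following Python sketch. The class and attribute names are invented for illustration and do not correspond to any BIM tool's API.

from dataclasses import dataclass

@dataclass
class Wall:
    length_m: float
    height_m: float
    material: str = "concrete"       # a semantic attribute, not just shape

    @property
    def area_m2(self):               # measurable: derived and query-able
        return self.length_m * self.height_m

@dataclass
class WindowArray:
    host: Wall                       # parametric relationship to another element
    spacing_m: float = 3.0

    @property
    def count(self):                 # rule-driven: follows the host geometry
        return int(self.host.length_m // self.spacing_m)

wall = Wall(length_m=12.0, height_m=3.5)
windows = WindowArray(host=wall)
print(wall.area_m2, windows.count)   # 42.0 4
wall.length_m = 18.0                 # one change propagates; no remodeling
print(windows.count)                 # 6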
BIM definitively represents more powerful functionality when compared with conventional CAD. Thus, the introduction of new technology provides more opportunities, e.g., design automation and performance-based design, but also changes design methods and processes. At the same time, issues regarding the use of these new technologies also arise. This research uses BIM as the term to represent the most current design technology and looks into the needs and capabilities of the state of the art of BIM applications to identify the contributions and the gaps this research hopes
to address.
2.1.2 PARAMETRIC DESIGN & DESIGN EXPLORATION
In conventional practice, designers meet the functional and programmatic performance criteria
of the design requirements during the early stages, while other performance criteria are deferred
until later phases (Turrin, von Buelow, and Stouffs 2011). Those performance criteria that are considered during the early stages of design are examined only within narrow ranges and without quantitative measures, which decreases the quality of the final design solution (Chong, Chen, and
Leong 2009). In addition, the generation of an expansive design solution space during the early
design stages is also absent from the typical architectural design process (Turrin, von Buelow, and
Stouffs 2011), despite the evidence that repeated generation and evaluation of design
alternatives during the conceptual design stage leads to more satisfactory design solutions (Liu,
Chakrabarti, and Bligh 2003, Yi and Malkawi 2009). Generating alternative geometric
configurations is a significant advantage of parametric modeling, and the number of tools and methods that support architectural design is increasing (Gerber 2009, Burry and Murray 1997a,
Hesselgren, Charitou, and Dritsas 2007). Parametric modeling can decrease the time and effort
needed to change or modify the design at hand, and can also yield improved form finding (Aish
and Woodbury 2005). It facilitates the rapid change of geometric and non-geometric variables
according to a particular design logic (Shah and Mäntylä 1995a, Shea, Aish, and Gourtovaia 2005).
Furthermore, parametric design thinking and modeling enables a relational (i.e., correlative, integral, and explorative) design process (Menges 2011). It also aids in the setup of structural
alternatives in an efficient manner that allows design teams to communicate results affected by
altering parameter values and the subsequent impact on the potential design solution (Rolvink,
van de Straat, and Coenders 2010). Therefore, a primary building block of the present research is
the utilization of parametric design and modeling for the purpose of providing rapid design
iteration (Gerber 2009). However, it should be noted that, although generation of multiple design
alternatives is a major advantage, it introduces the requirement of evaluating each alternative
individually according to pre-set performance criteria.
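As a minimal sketch of the parameterization and alternative generation described above, the following Python fragment expresses a toy massing model as a function of a few driving parameters; all names and values are hypothetical, and alternative geometric configurations are produced by varying parameter values rather than by remodeling.

import itertools
import math

# A toy parametric massing model: the driving parameters fully determine the
# derived geometry, so changing a value regenerates a consistent alternative.
def massing(floor_area_m2, num_floors, aspect_ratio, window_to_wall):
    width = math.sqrt(floor_area_m2 / aspect_ratio)   # plan width (m)
    depth = aspect_ratio * width                      # plan depth (m)
    height = num_floors * 4.0                         # assumed 4 m floor-to-floor
    envelope = 2 * (width + depth) * height           # gross facade area (m^2)
    return {"width": width, "depth": depth, "height": height,
            "envelope_m2": envelope, "glazing_m2": envelope * window_to_wall}

# Sweeping the parameter ranges enumerates a space of design alternatives.
alternatives = [massing(2000.0, floors, ratio, wwr)
                for floors, ratio, wwr in itertools.product(
                    [5, 10, 15], [1.0, 1.5, 2.0], [0.3, 0.5])]
print(len(alternatives))  # 18 alternatives from three varied parameters

Each of these alternatives then requires individual evaluation against the pre-set performance criteria, which is precisely the burden that motivates the automated feedback pursued in this research.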
2.1.3 PARAMETRIC DESIGN & BUILDING PERFORMANCE
The parametric design capability of building design tools is progressively being pushed forward.
The drive behind this parametric design development stems from the acknowledged advantages
of the inherent ability to quickly generate design alternatives through the parameterization of
geometry (Gerber 2009, Burry and Murray 1997b). Aside from the previously mentioned benefits
of rapid design exploration, once the design intent has been rigorously defined through
parametric relationships, the automation of the exploration process becomes possible (Kilian
2006). This not only shortens design exploration cycle time (i.e., latency) and bypasses the issues
regarding interoperability, but also provides the opportunity for relationally incorporating
performance criteria as a part of the process. Thus, it is possible to integrate a greater number of
criteria, including performance objectives and constraints, thereby allowing for more extensive
feedback by which to evaluate and identify potentially optimal design alternatives. These
advantages of parametric design technology introduce more potential possibilities for designing
buildings with higher performance through rigorous setting of criteria for design constraints and
intent. The various analyses and simulations that can be employed to evaluate and examine
design solutions during the design phase are continually expanding. Some of the notable
examples are the Phare Tower by Morphosis Architects and the Hangzhou Stadium by NBBJ. Aish
and Woodbury’s work demonstrates how the automation of parametric design can significantly
reduce the time required for implementing changes and facilitating reuse (Aish and Woodbury
2005). Another precedent, the Generative Performance Oriented Design model (GenPOD), utilizes
the parametric form generation process to generate various performance envelopes and
demonstrates promising support for design decision-making (Grobman, Yezioro, and Capeluto
2008). Similarly, Rolvink et al. demonstrated the effectiveness of using parametric modeling
techniques in rapidly exploring design alternatives and associative structural performance in a
practice setting (Rolvink, van de Straat, and Coenders 2010).
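The automation noted above can be reduced to a generate-evaluate loop, sketched below in Python. Here evaluate_energy is a stand-in surrogate for an external simulation call and its coefficients are invented; a real workflow would submit each parametric alternative to an analysis engine and parse the returned metric.

import random

# Stand-in surrogate for an external energy simulation call (invented formula).
def evaluate_energy(params):
    return 100 + 80 * params["wwr"] + 10 * abs(params["orientation"]) / 180

def random_candidate():
    return {"wwr": random.uniform(0.2, 0.8),
            "orientation": random.uniform(-180.0, 180.0)}

# Automated exploration: generate alternatives, score each, and rank the pool,
# replacing a manual rebuild-and-simulate cycle for every single alternative.
pool = [random_candidate() for _ in range(50)]
ranked = sorted(pool, key=evaluate_energy)
best = ranked[0]
print(f"best surrogate EUI: {evaluate_energy(best):.1f} for {best}")

Coupling such a loop with the selection and variation operators of a genetic algorithm turns one-shot generation into the kind of automated multi-objective search outlined in Section 1.2.3.8.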
2.1.4 SUMMARY OF CURRENT BUILDING DESIGN TECHNOLOGY
With the advancement of current building design technology, Computer-Aided Design and
Engineering (CAD/CAE) tools enable architects and engineers to pursue higher performance in
building design by simulating different aspects of building performance, such as financial,
structural, energy, and lighting efficiencies. However, due to tool interoperability issues, design cycle latency, and domain expert disconnection, such performance feedback can rarely support design
decision-making during the early stages of the design process, where such decisions have a
disproportionate impact on the overall building performance versus later design stages. Thus, this
research posits that increasing design iteration can improve the possibility of identifying higher-
performing designs through the exploration of both geometric and non-geometric variables,
according to the established design objectives and constraints (Shea, Aish, and Gourtovaia 2005,
Akin 2001). It has been previously established that parametric modeling can facilitate rapid
changes in both geometric and non-geometric variables, according to a pre-rationalized design
logic (Shea, Aish, and Gourtovaia 2005, Shah and Mäntylä 1995b, Gerber 2007). This leads to the
key components that form the basis of this research, namely parameterization, design integration,
and design automation. By utilizing the advantages of these components, a design framework
should be able to rapidly explore design, increase correlation between disparate domains, rapidly
visualize the cause and effect of these correlated decisions, quantify understanding of the multi-
objective tradeoffs, and reduce design decision latency, given the transparency and automation
of the generation and evaluation of the design solution spaces.
2.2 BUILDING ENERGY SIMULATION & DESIGN
While constructability is often the first issue to be tackled by designers in developing complex
geometry (Glymph et al. 2004), this research aims to complement this obvious need with that of
informing these forms with performance criteria, such as energy use intensity, project economics,
or the environmental footprint. This is especially important in the early design stage, where the
decisions made can have a significant impact on the lifecycle cost of buildings (USGBC 2003).
While environmental simulation is not new to architecture, it has not been determined how often
and when it can be incorporated into the design workflow (Hensen et al. 2004). Previous studies
show that conventionally adopted performance-based analysis methods are rarely able to
support these early stage design decisions due to time limitations (Flager and Haymaker 2007).
Augenbroe (2002, 891) states, “the role of simulation tools in the design and engineering of
buildings has been firmly established over the last two decades.” He continues to credit energy
simulation “with speeding up the design process, increasing efficiency, and enabling the
comparison of a broader range of design variants, leading to more optimal designs” and, in
summary, providing better understanding of tradeoffs in the multi-objective design process
(Augenbroe 2002, 891).
It has been widely recognized that incorporating building performance feedback to support design
decision-making can improve the building performance throughout the building lifecycle, when
such goals are set as a priority. In addition, the use of simulation in the early design stage can lead to better design of energy-efficient buildings (Obanye 2006, Mahdavi et al. 2003, Morbitzer
2003, Robinson 1996, Hensen 1994, Bambardekar and Poerschke 2009). However, due to an array
of challenges, the boundaries between design and energy performance still persist. This section
reviews the current obstacles identified in the design and energy simulation domains, along with
the efforts and methods aimed at addressing these issues with the intent to support developing
the framework methodology for this research.
2.2.1 OBSTACLES BETWEEN THE DESIGN AND ENERGY SIMULATION DOMAINS
In order to solve the current obstacles between the design and energy simulation domains,
several authors have made an effort to organize the identified problematic areas, and those most
relevant to the present research are discussed below.
Obstacles between design and energy performance summarized by Welle, Haymaker, and Rogers
(2011) as:
"1. Long analytical model preparation time…; 2. Inaccurate conversion of architectural model to analytical model…; 3. Inconsistent conversion of architectural model to analytical model…; 4. Missing or invalid data in architectural model…; 5. Inconsistent analytical models for multidisciplinary analysis…; 6. Long analytical model simulation time…; 7. Poor coordination of analytical model outputs/inputs…; 8. Inconsistent performance metrics" (Welle, Haymaker, and Rogers 2011, 295)
Attia, Gratia, et al. (2012) categorize the barriers to integrating building performance simulation
(BPS) during early design phases as the following:
“1. Geometry representation in simulation tools…; 2. Filling input…; 3.
Informative support during the decision making…; 4. Evaluative performance
comparisons…; 5. Interpretation of results…; 6. Informed iteration” (Attia, Gratia,
et al. 2012, 5-6)
Bambardekar and Poerschke (2009) discussed the obstacles impeding architects, as the performers, from using energy simulation programs (ESP) during the early design stage (EDS) by
synthesizing the survey and interview results from Pedrini and Szokolay (2005), Mahdavi et al.
(2003), Lam, Huang, and Zhai (2004), Morbitzer et al. (2001), Trebilcock, Ford, and Wilson (2006)
and de Wilde, Augenbroe, and van der Voorden (2001):
“The observations of the surveys were: a) skepticism towards the potential of the
ESP to provide decision support, b) lack of simulation know-how and unfamiliar
working methods, c) perceived disconnect between the simulation process and
the architectural design process as the simplified ESP’s meant for EDS analysis
were used in the same stage as the more complex ESP’s i.e. in the detailed design
stage, d) the ESP’s were used mainly for load calculations and code compliance.
To experience firsthand the issues related to limited uptake within the design
process, a test design was undertaken by the authors. Similar observations were
experienced by the authors in addition to the facts that a) very limited guidance
is available to architects for understanding and integrating simulation as a design
tool in the EDS and b) selection of the ESP’s is a non-trivial task and requires better
guidance.” (Bambardekar and Poerschke 2009, 1306)
There are also issues regarding the usability of these energy simulation tools for designers. It has
been demonstrated by prior research that a majority of available energy simulation tools are
designed for use during the post-schematic design stages, where domain experts conduct
performance evaluations a posteriori and not in a rapid and automated design cycle (Schlueter and Thesseling 2009), as shown in Figure 2-1. While building energy simulation tools can be supportive
when integrated early in the design process, the currently available tools are not considered to
be “architect-friendly” for use by designers during early phases of the design (Attia et al. 2009,
Weytjens et al. 2011, Lam, Huang, and Zhai 2004, Riether and Butler 2008). This phenomenon can also be observed in the tools available to architects in the simulation tools database listed on the DOE website. "Out of the 389 BPS tool listed on the DOE website in 2010, less than 40 tools are
targeting architect during early design phases” (Attia 2011), as shown in Figure 2-2.
Based on these findings, this research groups the aforementioned obstacles into three major
categories to be discussed in greater depth: (1) Tools & Tool Interoperability; (2) Lack of Domain Knowledge; and (3) Support Decision-making.
FIGURE 2-1: BUILDING SIMULATION TOOLS PRE- AND POST- DESIGN PROCESS. IMAGE RE-PLOTTED BY THE
AUTHOR, BASED ON DOE 2011 WEBSITE INFORMATION OBTAINED FROM ATTIA (2012).
FIGURE 2-2: DEVELOPED BUILDING SIMULATION TOOLS DIVIDED BY TARGETED USERS BETWEEN 1997 AND
2010. IMAGE FROM ATTIA (2011).
2.2.1.1 TOOLS & TOOL INTEROPERABILITY
Extant research reveals that most of the energy simulation tools are designed for use by engineers.
More specifically, only 10% of the tools available are intended for architects’ use and only 1% of
these are able to support the early design stage (Attia 2012). In addition to the limited tool availability,
seamless integration between software programs is typically lacking. Thus, the necessary data
transfer between tools leads to the loss of information, whereby manual modification of models
is required when repeated use of design and energy simulation tools is needed. For example, an
energy simulation typically uses less sophisticated geometric models of a design to generate
feedback information. As there is no real-time linking between geometry and engineering
parameter, this often leads to manual adjustments being necessary by domain experts
(Sanguinetti et al. 2010). Restricted data exchange and multiple interfaces required for
multidisciplinary design problems are thus a pressing problem that needs to be solved (Holzer,
Tengono, and Downing 2007). While some of these issues can be resolved by using IFC or gbXML
formats in order to translate the geometry into space boundaries for energy simulation, presently,
these formats do not support complex geometries. As a result, in order to import models,
geometries often have to be simplified in order to be effective within the available energy
simulation tools.
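As a minimal illustration of what such a translation carries, the sketch below uses only Python's standard library to list the surface (space boundary) data in a gbXML export. The element and attribute names follow the public gbXML schema; the file name "model.xml" and the printed fields are hypothetical, not drawn from any cited tool.

```python
# A minimal sketch (not any cited tool): listing the surface data carried
# by a gbXML export. Element and attribute names follow the public gbXML
# schema; "model.xml" is a hypothetical export from a BIM authoring tool.
import xml.etree.ElementTree as ET

GBXML_NS = "http://www.gbxml.org/schema"
NS = {"gb": GBXML_NS}

tree = ET.parse("model.xml")
root = tree.getroot()

# Each <Surface> is a planar space boundary; curved geometry must already
# have been tessellated or simplified by the exporting tool.
for surface in root.iter(f"{{{GBXML_NS}}}Surface"):
    stype = surface.get("surfaceType", "unknown")
    points = surface.findall(".//gb:CartesianPoint", NS)
    print(f"{surface.get('id')}: {stype}, {len(points)} boundary points")
```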
2.2.1.2 LACK OF DOMAIN KNOWLEDGE
Design professionals are often unfamiliar with energy simulation tools, as the functionality of these tools lies outside their expertise. As a result, environmental simulation software is typically operated by experts familiar with the specialized nature of these tools (Aish and Marsh 2011).
In addition, successfully using simulation results to support design requires knowing when and how to use which tools. This knowledge is not the norm, in part because different design elements are under development throughout the design phase of a project. For example, the wall assembly or skin cannot be determined before the orientation has been decided. At this preliminary stage, energy simulation tools are not used for estimating energy bills, but rather for measuring the impact of specific design decisions on the overall energy performance of a design. For example, they can estimate the impact of a building’s orientation on the site, or how much energy can potentially be saved by using more efficient glazing versus a shading device. In sum, energy simulation tools provide information that allows designers to make more informed decisions during design development.
As is true with any tool, the results can only be as effective, or even as accurate, as the user is competent in using the tool. If designers are unable to utilize energy simulation tools during the schematic phase of design, the tools’ contribution at this phase is minimized. In addition, when new technology is first introduced into a field, there is typically a disconnect between the experience needed to use the tools effectively and the actual ability to operate them. Quite often, younger members of the field know how to use the tools but lack the experience to make informed decisions or to understand how to take information and feed it back into the design. More experienced members, on the other hand, often have the necessary expertise to understand the implications of the available information, but lack the technical skills to fully utilize new tools or access the full spectrum of information available to them. As a result, gaps emerge in feedback loops, which can only be eliminated once the industry as a whole is able to readjust itself to accommodate the new tools.
2.2.1.3 SUPPORT DECISION-MAKING
The design process is not linear. Ideas are bounced back and forth, choices are explored, more
ideas are introduced, and the loop continues. Designers are constantly trying new ideas and
evaluating previous ones throughout the entire design process. Energy simulation tools are meant
to provide one more way to evaluate the impact of these ideas on the overall design. In order to
maximize the effectiveness of energy simulation tools and their utility to the design process,
seamless transition from BIM to energy simulation tools and back to BIM must be enabled.
However, there is currently no seamless feedback loop from energy simulation tools back into
BIM. Any changes made to the design as a result of the energy simulations have to be manually
entered into the BIM tool separately, requiring labor and time to complete. This limits the amount of information energy simulation models are able to contribute at various stages of the design process. Prior studies have shown conventionally adopted performance-based analysis methods to be ill suited to supporting early stage design decisions due to time limitations (Flager and Haymaker 2007). They also suggest that a more supportive user interface for assisting design decisions and visualizing analysis results is needed (Pilgrim et al. 2003).
2.2.2 FUTURE TRENDS – CRITERIA FOR DESIGNING-IN ENERGY PERFORMANCE
In order to overcome the observed obstacles related to energy simulation tools, several
researchers have made an effort to identify tool and process requirements to provide
recommendations for any future tool development. By reviewing these precedents, this research
isolates the criteria to achieve a “designing-in performance” framework. The following is a
summarized review of some of these studies, presented chronologically.
Crawley et al. (1997) invited developers and users of the energy simulation tools to provide input
on next-generation building energy simulation tool requirements, with a particular focus on the
application, capability, methods, and structures for the calculation engine of EnergyPlus. User
interface was outside the discussion scope. The summarized conclusions are:
1. The first program application priority is “design” for both users and developers. The
second on the list is the application for “performance evaluation”.
2. Users (designers) would prefer specific answers from the simulated results and are less
concerned with the mechanics of the tools.
3. The first program capability priority is physical process models for both developers and users.
4. The program interface priority for the user group is “interoperability and integration”.
5. For program methods and structures, the developers’ workshop ranked pre- and post-processing methods as the first priority.
Lam, Wong, and Henry (1999) presented their review of the usage of performance-based
simulation tools for building design and evaluation in Singapore, based on an extensive industry
survey. Their recommendations to increase the use of energy simulation tools, based on the
survey results, are: “1) Development of integrative design support environment…; 2) Co-evolution of building delivery process and simulation tools…; 3) The use of performance-based regulatory systems.” (Lam, Wong, and Henry 1999)
Al-Homoud (2001) presented his review regarding the potential applications of computer
technology in the energy simulation and optimization of buildings based on the most common
building energy analysis techniques. He concluded:
“Future trends in computer-aided building energy modeling are expected to
involve: More accurate models with greater flexibility; Shorter time steps; More
user-friendly interfaces; Graphical representations; More comprehensive
programs covering all users’ needs; Interfacing with CADD packages; Design tools;
More utilization of optimization techniques.” (Al-Homoud 2001)
Augenbroe (2002) summarized the trend in building simulation. In his paper, the new tool “wish
list” is provided based on the research papers presented in 1997 and 1999 Building Simulation
Conference (Spittler and Hensen 1997, Nakahara and Hensen 1999) and the research of Clarke
(1999), Clarke and Hensen (2000), and Mahdavi (2001). The list includes:
“Rapid evaluation of alternative designs by tools that facilitate quick, accurate,
and complete analysis of candidate designs…; Design as a (rational) decision
making process enabled by tools that support decision-making under risk and
uncertainty…; Robust solvers for nonlinear, mixed and hybrid simulations, going
beyond the classical solving of a set of differential algebraic equations (DAE) …;
certification (the end user perspective), and code sharing (the developers
perspective)…” (Augenbroe 2002)
Lam, Huang, and Zhai (2004) conducted an assessment of five energy simulation tools intended for use in the early design phase. They summarized their findings on the factors that can improve the process between the design and energy simulation domains as follows:
1. Advantages of web-based energy simulation: The web-based energy simulation tool has
advantages in providing cloud collaboration and ease of maintenance.
2. Improved user interface: To increase the cognition and familiarity of the tools, user
interface should complement the concepts and processes of architectural design and
energy modeling.
3. Guidance documentation: Tool guidance and documentation is important to the usability
of tools.
4. Improved interoperability: This functionality is expected to ease the process and feedback loop between the design and energy simulation domains.
5. Rapid feedback: Obtaining feedback within allowable time is critical in determining the
utility of the analysis results.
6. Built-in database and library: Extensive library support and appropriate recommendations
for construction elements and materials are important for the designers, especially in the
early design phase. Comprehensive weather data should also be made available.
7. Visualization to support decision-making: The post processing functionalities in the
selected tools are limited to conventional numerical and graphical reports of values, such
as loads and temperatures. It would be desirable to develop visualizations that would
better facilitate qualitative understanding of the design performance to the user and
provide appropriate guidance in the context of early design decision-making.
Pedrini and Szokolay (2005) presented the results of a survey, which was developed based on the
theory of a designer's thinking and activities, with the aim of better understanding how to improve
energy performance and explaining why energy tools are not popular in the design domain. The
authors concluded their study by providing recommendations for improving the energy
simulation tools for early stage use, based on the survey results and findings of prior studies
(Pedrini 2003):
“The geometric modeling should be compatible with the schematic phase. It
could emphasize the use of 3D model from the beginning of the process, adopt
representative defaults, make use of intuitive interface, especially for drawings,
and the level of geometric details could gradually increase.
Parametric analysis could be optimized if the user defined the range and the
interval of values that a specific variable can assume. Then, the alternatives would
be automatically created and simulated.
The architects look for simple answers and the outputs should make the
comparison of different solutions easy and fast. Frequently, the most important
output is the performance.” (Pedrini and Szokolay 2005)
According to Attia et al. (2009), the most reiterated criteria for an energy simulation tool can be
categorized as “Architects Friendly”, based on the works by Crawley et al. (2008), Reinhart and
Fitz (2006), and Hopfe et al. (2005):
“(1) Usability and information management (UIM) of interface, (2) integration of
intelligent design knowledge-base (IIKB), (3) interoperability of building modeling
(IBM), and finally (4) the accuracy of the tool and its ability to simulate complex
and detailed building components (AASDC).” (Attia et al. 2009)
Attia, Hensen, et al. (2012) added “integration with building design process” as another selection criterion. In addition, the authors provided guidelines for a new tool for net zero energy building (NZEB) design, based on the results of prior surveys and interviews conducted in Egypt (Attia et al. 2011):
“Provide better guidance for design decisions to deliver NZEB in hot climates;
Enable sensitivity analysis to inform decision making and allow a variety of
alternatives to be created in short time; The comfort range criteria and design
strategies can be adjusted to respond to local definitions of indoor comfort, local
construction systems and local code requirements; Improve accessibility to
decision tools for small practices; Integrate the new tool with sufficiently
established, accurate tools; Match the cyclic design iterations and extend the
scope of tools to the conceptual phases of the design process; Allow connectivity
with established tools used by different disciplines and in later design stages; Very
easy to use and to learn, and adaptable for the less experienced with minimum
input.” (Attia, Gratia, et al. 2012)
Based on reviewing the efforts and research goals of these and other studies in this domain, this research determines that in order to provide a “designing-in performance” environment for designers, a design framework and tool requires a user-friendly environment, as well as (1) rapid generation of design alternatives; (2) rapid evaluation of design alternatives; (3) tradeoff analysis for competing criteria; and (4) a search method to identify design alternatives with better-fit performance.
Achieving the identified criteria requires not only overcoming the interoperability issue, in order to decrease design cycle latency during multi-domain integration, but also automating the generation and evaluation of design alternatives; a minimal sketch of such a generate-and-evaluate loop follows.
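The sketch below illustrates criteria (1), (2), and a rudimentary form of (4) under stated assumptions: run_energy_simulation() is a hypothetical stand-in for a call into an external simulation engine, and the parameter ranges and returned values are invented for illustration.

```python
# Minimal sketch of an automated generate-and-evaluate loop.
import itertools

orientations = [0, 90, 180, 270]   # degrees from north (illustrative)
glazing_ratios = [0.2, 0.4, 0.6]   # window-to-wall ratios (illustrative)

def run_energy_simulation(orientation, wwr):
    # Invented surrogate returning a fake EUI in kWh/m2-yr; a real framework
    # would write out an analysis model and invoke a simulation engine here.
    return 100 + 0.05 * orientation + 40 * wwr

alternatives = []
for orientation, wwr in itertools.product(orientations, glazing_ratios):
    eui = run_energy_simulation(orientation, wwr)
    alternatives.append({"orientation": orientation, "wwr": wwr, "eui": eui})

# A rudimentary "search": rank the generated alternatives by performance.
for alt in sorted(alternatives, key=lambda a: a["eui"])[:3]:
    print(alt)
```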
2.2.3 CURRENT EFFORTS IN OVERCOMING THE OBSTACLES BETWEEN THE DESIGN AND ENERGY SIMULATION DOMAINS
In addition to identifying obstacles and isolating the criteria required for future integration of the design and energy simulation domains, several efforts have been made to overcome these issues. These efforts are discussed below according to the following three categories, corresponding to this research’s grouping of the obstacles: (1) Tools & Tool Interoperability; (2) Lack of Domain Knowledge; and (3) Support Decision-making.
2.2.3.1 TOOLS AND TOOL INTEROPERABILITY
The extant studies in this field include those that attempted to improve user interfaces to increase the usability of the tools, or to facilitate greater data translation among different platforms. The majority of these efforts have focused on the critical obstacle of interoperability among different software platforms, applications, or user groups, including software developers and researchers. For example, the Industry Foundation Classes (IFC) were established to standardize model translations. Software developers are also addressing interoperability issues through their proprietary APIs and system functionality. In parallel, researchers have explored scripting interfaces and self-developed plugins attempting to solve existing interoperability issues between the design and performance analysis domains.
While solutions to tools and tool interoperability would ease the generation and evaluation of design alternatives, this effort is arguably insufficient (Mourshed, Kelliher, and Keane 2003). The information transferred from a BIM tool to an energy analysis tool is still a one-way trip. In other words, there is no way to carry the knowledge gained, or the design changes identified in the energy simulation tool, back into the design, except by manually re-entering the information. As a result, tradeoff analyses and intelligent searching methods, the two components that have also been identified as essential to facilitating a “designing-in performance” environment in early stage conceptual design, are not presently available.
2.2.3.2 LACK OF DOMAIN KNOWLEDGE – MULTIDISCIPLINARY DESIGN FRAMEWORK INTEGRATION
Another stream of research has focused on the interoperability issue among different expertise
domains to overcome the obstacles regarding insufficient domain knowledge. Examples of these
efforts can be found in data model and process standardizations (Eastman et al. 2009, Augenbroe
et al. 2004, Plume and Mitchell 2007), along with collaborative framework development
(Augenbroe 1992, Kalay 1998, Papamichael, LaPorta, and Chauvet 1997, Holzer 2010, Toth et al.
2011). These efforts focus on collaborative framework development to overcome interoperability
between expert domains and facilitate the inclusion of performance feedback. Some examples
are discussed below.
Pohl, Myers, and Chapman (1990) presented a prototype working model, Intelligent Computer-
Aided Design System (ICADS), with the intent to provide a supportive computational environment
for architectural and engineering disciplines by focusing on the development of an intelligent
interface between building designer and project knowledge database. The expert domains within
this prototype framework include structural support, thermal environment, daylighting, and
acoustics. Based on the project’s prior research, online access and expert support are the two main components missing from available CAD tools for providing a useful design environment.
Augenbroe (1992) presented a prototype concept of an integrated framework project, COMBINE
(Computer Models for the Building Industry in Europe). This project was developed from the
viewpoint of both designers and engineers with a focus on both the process and the product. Not
only were known issues addressed for interoperability between different domains by promoting
a standardized data model, but the project also aimed to provide a conceptual limited local
intelligence within design tools to facilitate support for designers at the early stages of the design
process.
Papamichael, LaPorta, and Chauvet (1997) presented a software environment, the Building Design
Advisor (BDA) with the intent to support the integrated use of multiple analysis and visualization
tools throughout the building design process, from the initial, conceptual, and schematic phases,
to the detailed specification of building components and systems. The development of BDA focused on a user interface for designers, serving as a medium that links analyses pertaining to different disciplines. The aim was to mitigate the insufficient domain knowledge needed to drive different analysis platforms in support of designers’ decision-making. The main efforts made in this framework include (1) an algorithm to control the overall process and data management; (2) an integrated data model; (3) interface media to link external simulation tools; and (4) a graphical user interface to facilitate data management.
Kalay (1998) presented a semantically rich computational framework, P3, with the focus on
facilitating the design decision-making through the collaboration and socialization among
different disciplines. The framework consists of three main components, namely (1) data sharing
via the World Wide Web; (2) individual knowledge data repositories of each specific discipline;
and (3) a central computational project manager, acting as medium and project organizer, aimed at filtering the necessary information for users in different positions. As such, participants can be exposed to all the information necessary to maximize their individual goals.
Mahdavi (1999) published a paper regarding a computational design support system, SEMPER,
toward the incorporation of simulation-based performance evaluation in building design. The
author developed a prototype that can link an object-oriented architectural model to multiple
simulation modules dynamically. The principal idea of the project is mapping corresponding modifications between different simulation models to bypass the obstacles deriving from the inherently heterogeneous data representations of different domains.
Lam et al. (2004) continued to extend the development of the SEMPER project into an internet-
based platform, SEMPER II, to facilitate geographically distributed multi-domain design
collaboration. A thermal simulation case study was provided to elaborate on the framework
structure and function. Recent efforts, such as DesignLink (Holzer 2010, Holzer and Downing
2010) and DEEPA (Toth et al. 2011), have also contributed to the development of multidisciplinary
design frameworks in a practical setting to further utilize the web-based interface in facilitation
of collaboration among multi-expert domains.
de Wilde (2004) initiated a project, design analysis integration (DAI), as part of his PhD research.
The aim of the DAI project is to use a prototype to demonstrate a developed strategy that provides
computational support during the building design process for rational design decisions regarding
the selection of energy saving building components. The goal is to improve the use of existing and
emerging building performance analysis tools by design and engineering teams. “Spearheads are
an improved functional embedding of performance analysis tools in the design process, increased
quality control for building analysis efforts, and exploitation of the opportunities provided by the
internet” (de Wilde 2004, 115).
O’Sullivan et al. (2004) presented a framework for building energy monitoring, analysis, and control (BEMAC). The aim of this initiative is to enable monitoring, analyzing, and controlling building energy consumption throughout the building lifecycle by combining a building product model with other existing building management and analysis tools.
In summary, the reviewed research efforts within this category all acknowledge that solving the interoperability issues among different platforms is a prerequisite for further progress in this field. Extant efforts to alleviate this issue have focused on either dynamically mapping data translations or standardizing data models among different expert domains. In addition, they all promote the need for an integrated platform that provides a supportive environment, and they acknowledge the need for a designer-specific, designer-friendly user interface. Capitalizing on rapidly developing web technology is another commonly acknowledged trend for overcoming the barriers posed by the heterogeneous nature of different domains’ data models and by geographic distribution. However, most of the frameworks discussed above are still very conceptual and in need of further development. While the intent of these frameworks is to support design decision-making in early project phases, with expert knowledge support to overcome the issues regarding insufficient domain knowledge, there is no focus on providing either tradeoff analyses or intelligent searching methods, the two components that have also been identified as essential to facilitating a “designing-in performance” environment. In addition, while some proposed frameworks are able to visualize different analyzed data with rapid design alternative generation, evaluation of the process of adopting these frameworks, whether by researchers or by practitioners, is still lacking. As a result, the usability of these frameworks requires further investigation.
2.2.3.3 SUPPORT DECISION-MAKING – INTELLIGENT AUTOMATED SYSTEM
As summarized by Augenbroe and Hensen (2004, 875-876), “Many aspirations remain to be
achieved, such as the support for rapid evaluation of alternative designs, better adaptation of
simulation tools to decision making processes, and team support of incremental design
strategies.” Clearly, there is a need for energy simulation tools to be more supportive for
designers during the decision-making process. To efforts made to overcome these obstacles tend
to emphasize on the data visualization and providing guidance towards decisions made based on
simulated results. However, an essential cause of obstacles in this category is the insufficient
feedback and the absence of cost-benefit (tradeoff) analysis, which would enable designers to
understand the impact of their decisions. To overcome these obstacles, the third group of efforts
in this research domain focuses on enabling rapid design alternative generation and evaluation,
combined with comparison of analysis results and tradeoff studies, to further support design
decision-making.
The studies in this category can be divided into two threads, namely (1) the development of
optimization techniques as intelligent searching methods that support tradeoff analyses for
identifying “best fits” across competing objectives, and (2) sensitivity/uncertainty analyses.
Among the works that promote using optimization algorithms to support decision-making is the study by Kolokotsa et al. (2009), who presented a comparison of online and offline decision
approaches that utilize multi-objective programming optimization techniques, multi-criteria
decision analysis techniques, and their combinations, in order to reach optimum solutions, rank
alternatives, or provide tradeoffs between the criteria. Similarly, Diakaki et al. (2010) presented a
multi-objective decision model to enable evaluation of design alternatives according to buildings’
annual primary energy consumption, annual carbon dioxide emissions, and the initial investment
cost. A simple case study was used in this work to demonstrate the functionality of the proposed
decision model. Johnson and Cardalda (2002) presented an integrated decision support system that combined an A* graph search algorithm with genetic algorithms (GAs) to analyze all possible renovation actions and their tradeoffs in order to develop the optimal solution. This category also includes several efforts that use optimization techniques to rapidly generate and evaluate design alternatives (Caldas and Norford 2002, Caldas 2008, Janssen 2009, Welle, Haymaker, and Rogers 2011, Flager et al. 2009, Yi and Malkawi 2009, 2012, Asadi et al. 2012a, Tuhus-Dubrow and Krarti 2010).
The efforts focused on using sensitivity and uncertainty analysis as a support for decision-making
view the design evolution process as the accumulation of decisions made under uncertainty, since
a series of ill-defined problems typically need to be determined through a human-oriented
process (Fenton and Wang 2006). As a result, sensitivity analysis (SA) can be used to support the decision-maker in identifying the most sensitive parameters, while uncertainty analysis (UA) can be used to qualify the outcomes of simulations and make the user aware of the risks associated with each option, in particular where they affect a specific performance aspect. The
extant studies focused on using SA/UA to support decision-making include the research
conducted by Struck et al. (2009), which provides empirical evidence of the high variability of the
option space that can be subjected to uncertainty and sensitivity analyses. Struck, Hensen, and
Kotek (2009) reported on a tool developed to support conceptual design with uncertainty
assessment. Hopfe (2009) conducted research evaluating the use of SA/UA in simulation as a
means to facilitate decision-making between competing design options and design optimization.
Yildiz et al. (2012) presented an approach for developing guidelines on sensitive and robust design
parameters to help architects to design low-rise apartment buildings. Attia, Gratia, et al. (2012)
presented the development of an energy-oriented software tool that both accommodates the
Egyptian context and provides sensitivity analysis aimed at facilitating decision-making for zero energy buildings. Burhenne et al. (2013) used a Monte Carlo-based methodology for uncertainty quantification that combines building simulation with cost-benefit calculation to enhance the design process or building operation and to support related decision-making.
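To make the mechanics of such Monte Carlo-based uncertainty analysis concrete, the sketch below propagates uncertain inputs through a surrogate model. The surrogate coefficients and input distributions are invented for illustration and do not come from any of the cited studies.

```python
# Illustrative Monte Carlo uncertainty sketch: propagate uncertain inputs
# through an invented surrogate energy model and report the output spread.
import random
import statistics

def annual_energy(u_wall, infiltration, occupancy):
    # Hypothetical surrogate standing in for a full building simulation.
    return 80 + 55 * u_wall + 30 * infiltration + 12 * occupancy

random.seed(42)
samples = []
for _ in range(5000):
    u_wall = random.gauss(0.35, 0.05)        # W/m2K, uncertain construction
    infiltration = random.uniform(0.3, 0.9)  # ach, uncertain airtightness
    occupancy = random.gauss(1.0, 0.2)       # schedule multiplier
    samples.append(annual_energy(u_wall, infiltration, occupancy))

mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"EUI estimate: {mean:.1f} +/- {sd:.1f} kWh/m2-yr (1 sigma)")
```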
2.2.4 SUMMARY OF BUILDING ENERGY SIMULATION & DESIGN
After reviewing the obstacles and the efforts made to overcome them in current energy
simulation and design domains, this research identifies the criteria for achieving designing-in performance. In sum, the solution should consist of a design framework with a user-friendly environment and the ability to facilitate (1) rapid generation of design alternatives; (2) rapid evaluation of design alternatives; (3) tradeoff analysis for competing criteria; and (4) a search method to identify design alternatives with better-fit performance.
The analysis of the currently proposed or developed solutions to these issues revealed that a multidisciplinary design optimization (MDO) framework, which couples multi-objective optimization (MOO) with parametric design, is the most promising route to the designing-in performance environment sought by this research. The use of parameterization and integrated automation can bypass the interoperability issues and insufficient domain knowledge. While sensitivity analysis can help decrease the uncertainties of often-changing and complexly coupled and de-coupled variables during the design process, design objectives are often non-commensurable, and their relative importance is difficult to evaluate prior to post-sensitivity analysis (Grierson 2008). In addition, even with sensitivity analysis in place, a method that can quickly identify higher-performing designs is still needed. Therefore, considering that time is still a dominant factor in determining the stopping points during the early design stage, this research questions the feasibility of sensitivity analysis alone as a primary approach to drive the generation of an early design solution space. As a result, a multi-objective optimization and search approach, inclusive of sensitivity analysis, is chosen as the foundation and focus of this research. In addition, a better user interface for data representation, the provision of guidance, and multi-criteria decision-making methods could further facilitate the process; however, they are not included in the scope of this research at this stage of the framework development. Table 2-1 summarizes the findings of this section.
TABLE 2-1: SUMMARY OF OBSTACLES & POTENTIAL SOLUTIONS BETWEEN THE DESIGN AND ENERGY SIMULATION DOMAINS.

Tools & Tool Interoperability
Causes:
1. Lack of designer-friendly tools to support early stage design (Attia et al. 2009, Weytjens and Verbeeck 2010, Lam, Huang, and Zhai 2004, Riether and Butler 2008).
2. Interoperability issues between design and energy simulation tools (Augenbroe et al. 2004, Bazjanac 2008, buildingSMART 2013).
Results:
1. Requires multiple platforms (Attia, Gratia, et al. 2012, Holzer, Tengono, and Downing 2007).
2. Requires manual updating or adjustment of whole or partial models in both platforms (Sanguinetti et al. 2010).
3. Loss of data during model translation (Welle, Haymaker, and Rogers 2011).
4. Inaccurate & inconsistent conversion between architectural and analytical models (Welle, Haymaker, and Rogers 2011).
5. Long analytical model preparation time (Welle, Haymaker, and Rogers 2011).
Solutions:
1. Platform integration (Crawley et al. 1997, Lam, Wong, and Henry 1999).
2. Designer-friendly platform (Al-Homoud 2001, Lam, Huang, and Zhai 2004).
3. Standardization of data exchange formats (Crawley et al. 1997, Eastman et al. 2009, Augenbroe et al. 2004, Plume and Mitchell 2007).
4. Automation of data translation (Lam, Huang, and Zhai 2004, Welle, Haymaker, and Rogers 2011).

Lack of Domain Knowledge
Causes:
1. Unfamiliarity with energy simulation programs (Bambardekar and Poerschke 2009).
2. Lack of energy simulation domain knowledge (Bambardekar and Poerschke 2009, Aish and Marsh 2011).
3. Lack of know-how to integrate energy performance feedback into the design process (Bambardekar and Poerschke 2009).
Results:
1. Energy simulations are usually used for post-design evaluation (Bambardekar and Poerschke 2009, Hensen and Lamberts 2011, Schlueter and Thesseling 2009).
2. Energy simulation results are usually not used to support early design decisions (Bambardekar and Poerschke 2009, Flager and Haymaker 2007).
Solutions:
1. Energy simulation guidance for designers is needed (Lam, Huang, and Zhai 2004).
2. Utilize domain-integrated platforms to obtain energy simulation experts’ input during the design process (Pohl, Myers, and Chapman 1990, Augenbroe 1992, Kalay 1998, Papamichael, LaPorta, and Chauvet 1997, Holzer 2010, Toth et al. 2011, Attia, Gratia, et al. 2012, Mahdavi 1999, Lam et al. 2004, de Wilde 2004).

Support Decision-making
Causes:
1. Insufficient feedback to support decision-making (Flager and Haymaker 2007).
2. Lack of relevant context for decision-making (Attia, Gratia, et al. 2012).
3. No 2D or 3D data visualization provided (Pilgrim et al. 2003).
Results:
1. No tradeoff studies provided (Pilgrim et al. 2003).
2. Feedback can hardly support design decision-making (Attia, Gratia, et al. 2012, Bambardekar and Poerschke 2009, Flager and Haymaker 2007).
Solutions:
1. Use MDO to increase feedback and provide tradeoff studies (Al-Homoud 2001, Kolokotsa et al. 2009, Diakaki et al. 2010, Johnson and Cardalda 2002, Caldas 2008, Janssen 2009, Welle, Haymaker, and Rogers 2011, Flager et al. 2009, Yi and Malkawi 2009, Asadi et al. 2012a, Tuhus-Dubrow and Krarti 2010).
2. Provide sensitivity analysis to identify the impacts of individual parameters (Attia, Gratia, et al. 2012, Fenton and Wang 2006, Struck et al. 2009, Struck, Hensen, and Kotek 2009, Hopfe 2009, Yildiz et al. 2012, Burhenne et al. 2013).
3. Provide a user interface with 2D/3D design and data visualization to support design decision-making (Pilgrim et al. 2003, Al-Homoud 2001, Lam, Huang, and Zhai 2004).
2.3 DESIGN AUTOMATION & OPTIMIZATION
Typically, the interpretation of the term “optimization” is limited to the narrow mathematical
definition of seeking a convergence of conditions in finding the maximum or minimum of a
function. This typical use of the term can be observed from its definition in the Merriam-Webster
dictionary:
“Definition of Optimization: an act, process, or methodology of making
something (as a design, system, or decision) as fully perfect, functional, or
effective as possible; specifically: the mathematical procedures (as finding the
maximum of a function) involved in this. … Field of applied mathematics whose
principles and methods are used to solve quantitative problems in disciplines
including physics, biology, engineering, and economics. Questions of maximizing
or minimizing functions arising in the various disciplines can be solved using the
same mathematical tools. In a typical optimization problem, the goal is to find the
values of controllable factors determining the behaviour of a system (e.g., a
physical production process, an investment scheme) that maximize productivity
or minimize waste….” (Merriam-Webster 2013c)
However, when applied to design problems, which are typically understood as ill-defined (Simon 1973) and which often involve competing, non-commensurable objectives, a set of Pareto optimal solutions is sought instead of a single optimal solution (Radford and Gero 1980, Coello Coello, Lamont, and Van Veldhuisen 2007, Grierson 2008, Mela, Tiainen, and Heinisuo 2012). In contrast to single objective optimization, Pareto optimization seeks a set of
improved upon without compromising at least one other criterion. This approach has been
extensively applied to other scientific fields, including engineering, economics, and logistics
(Coello Coello and Lamont 2004).
Pareto optimization, also known as multi-objective optimization, is often performed by formulating the multi-objective problem mathematically and relying on computational tools to seek optimality automatically through defined optimization algorithms. Mathematical convergence in this process represents a state in which no further improvement can be made under the defined conditions and through the application of the given measuring method.
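A minimal sketch of the underlying dominance test follows, in Python, for two invented minimization objectives (say, energy use and construction cost) and a brute-force filter for the non-dominated set. The candidate values are illustrative only.

```python
# Minimal sketch of Pareto (non-dominated) filtering for two minimization
# objectives, e.g. (energy use, construction cost). Values are invented.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = [(120, 950), (100, 1100), (140, 800), (125, 1000), (95, 1300)]

pareto_set = [c for c in candidates
              if not any(dominates(other, c) for other in candidates if other != c)]
print(pareto_set)  # (125, 1000) is dominated by (120, 950) and drops out
```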
As has been pointed out earlier, design is a multidisciplinary process akin to a balancing act between competing objectives, all vying for the greatest influence. With the advancement of technology and the increase in information fidelity and availability, the design process has become more complex, rather than less so. Consequently, multidisciplinary design consideration has become unavoidable. To manage the increased complexity and the growing number of competing objectives, a systematic problem-solving technique is needed. In numerous design and engineering fields, multidisciplinary design optimization (MDO) methods are being increasingly explored as a potential approach to tackle and manage these problems.
Whether single objective or multi-objective optimization is required, designers attempting to utilize optimization techniques for their design problems face several inherent issues, as summarized by Gero (1975). According to the author, the two main obstacles designers face when attempting to use optimization techniques are: (1) owing to the nature of architects’ training, formulating design problems mathematically is not the norm; and (2) architectural design is an endeavor in which some problems and considerations potentially cannot be formulated and described in mathematical form. While these issues remain relevant, the application of optimization approaches continues to grow and is becoming a trend in the building design industry, owing to technological advancements and the exponential growth in computing power. The increasing demands on building performance further add value to the utilization of an optimization approach during the design process. This trend can be observed in the review by Evins (2013). As shown in Figure 2-3, the use of optimization techniques grew four-fold from 2002 to 2012, and multi-objective optimization methods have become the norm from 2009 onward.
2.3.1 MULTI-DISCIPLINARY DESIGN OPTIMIZATION (MDO)
The original intent of MDO methodology is to “exploit the state of the art in each contributing
engineering discipline and emphasizes the synergism of the disciplines and subsystems”
(Sobieszczanski-Sobieski 1993). After years of development, “MDO methodology evolved means
by which such concerted action may be implemented in a systematic and mathematically-based
manner” (AIAA 1991). In general, MDO refers to optimization methods intended to solve design
problems that have several objective functions and incorporate a number of disciplines (Coello
Coello, Lamont, and Van Veldhuisen 2007). As defined by Poloni, MDO is "the art of finding the
best compromise" (Poloni and Pediroda 1997). This approach has been successfully adopted by
the aerospace industry and other engineering fields, such as the works of Laiserin (2008) and de
Weck (2012). The applications of this approach in the AEC field are also continuously growing
because of its potential to address some crucial issues found in the intersection between design
and performance analysis. For example, it has been shown to automatically generate and evaluate
design form alternatives, shorten the design cycle, and enhance the designer’s capacity for design exploration (Grobman, Yezioro, and Capeluto 2009). However, automatic updates on geometry
can only be achieved when seamless data transfer between the simulation and design tools is
enabled (Holzer, Tengono, and Downing 2007).
Current precedents in the AEC industry that apply the MDO concept can be grouped into two
main categories—multidiscipline collaboration and multi-objective optimization (MOO)
procedures. Examples of the extant studies that explored the collaboration aspects of the MDO
method between multiple disciplines and investigated the actual collaboration method among
various domain experts include the works of Toth et al. (2011) and Holzer (2010). This focus has
been previously discussed in Section 2.2.3.2. The other category applies the MDO procedure to optimize multiple, often competing, objectives through the incorporation of parametric modeling and multi-objective optimization algorithms. This research focuses on the latter aspect of MDO. While the collaborative component is not explicitly addressed, neither is it excluded; the assumption is that, through later development, the collaborative component can be re-emphasized.
FIGURE 2-3: GRAPHIC SUMMARY OF THE REVIEW OF EVINS (2013) FOR COMPUTATIONAL OPTIMIZATION
METHODS APPLIED TO SUSTAINABLE BUILDING DESIGN. IMAGE FROM EVINS (2013).
The most notable studies in this second area of MDO focus explored the application of the cost
optimization methods to assist architects in finding cost-efficient geometry during the early stages
of design (Jadid and Idrees 2007, Schoch, Prakasvudhisarn, and Praditsmanont 2011). Other works
focused on investigating the application of a multi-objective genetic algorithm for finding the
optimal solution among the tradeoffs between capital expenditure, operation cost, and occupant
thermal comfort in building design (Wright, Loosemore, and Farmani 2002). Similarly, MDO was
reviewed in a building design setting with thermal, structural, financial, and environmental
performance evaluation by integrating all the platforms via an IFC scheme (Geyer 2009). Other
applications of MDO can be found in optimizations of structures and energy performance for
classrooms (Flager et al. 2009), energy and thermal comfort in residential buildings (Magnier and
Haghighat 2010, Asadi et al. 2012c), and window sizing and placement for maximizing indoor
comfort (Suga, Kato, and Hiyama 2010). These precedents all demonstrate the potential ability of
MDO to assist in identifying higher-performing solution sets among multiple competing criteria.
Across these MDO applications, it is also acknowledged that designers need to be able to understand general performance trends, as well as variable sensitivities, in order to make informed decisions in guiding the optimization process. As a result, applications of MDO also promote the use of advanced plotting tools that enable multi-dimensional data visualization, such as Pareto (Grierson and Khajehpour 2002, Stump et al. 2004) and parallel coordinate (Parmee 2005, Parmee, Abraham, and Machwe 2008) plots; a minimal plotting sketch follows.
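As a hedged illustration, the sketch below draws a basic parallel-coordinate plot with matplotlib. The three objectives and their normalized values are invented; a library such as pandas also offers a ready-made parallel_coordinates helper as an alternative route.

```python
# Illustrative parallel-coordinate plot of competing objectives.
# The objective names and values are invented for this sketch.
import matplotlib.pyplot as plt

axes_names = ["EUI", "Daylight", "Cost"]
designs = [  # each row: one design alternative, values normalized to 0-1
    [0.2, 0.8, 0.6],
    [0.5, 0.5, 0.3],
    [0.9, 0.2, 0.1],
]

# One polyline per design alternative, one vertical axis per objective.
for row in designs:
    plt.plot(range(len(axes_names)), row, marker="o")
plt.xticks(range(len(axes_names)), axes_names)
plt.ylabel("normalized objective value")
plt.title("Parallel coordinates: one polyline per design alternative")
plt.show()
```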
2.3.2 EVOLUTIONARY DESIGN TECHNIQUES
In the applications of design optimization and MDO, the optimization algorithm is the key
component and the engine that drives the automation loop. Among various optimization
algorithms, evolutionary algorithms and evolutionary design techniques are deemed the most common means suitable for dealing with complex and ill-defined design problems.
Evolutionary Algorithms (EAs) are based on the natural evolution model formulated by Charles
Darwin (Bäck 1996). The population-based nature of EAs, which populates and generates a set of
optimal solutions in a single run, has been proven particularly useful and unique in solving multi-
objective optimization problems (Deb 2001). Originally, there were three mainstream algorithm families within EAs: Evolution Strategies (ESs), Genetic Algorithms (GAs), and Evolutionary Programming (EP) (Bäck 1996). The term “genetic algorithm” has since broadened and does not necessarily comply with Holland’s original theory or definition (Mitchell 1998). In this research,
the term “GA” is used in reference to a technique that simulates the evolutionary process, but
does not hold true to Holland’s original GA. GAs have become a common approach to problem-
solving across various problem domains over the last decade. Currently available global search
and optimization techniques have been divided by Coello Coello, Lamont, and Van Veldhuisen
(2007) into three categories—enumerative, deterministic, and stochastic. Since building
phenomena are very often nonlinear, which leads to discontinuous output (Wetter and Wright
2004), the stochastic approaches are typically more suitable to building applications. Among
various stochastic search methods, evolutionary computation (EC) is the generic term for the stochastic searching techniques, known as evolutionary algorithms (EAs), that simulate the natural evolutionary process. The primary reasons for their popularity include broad applicability,
ease of use, and capacity to explore numerous variables from a global perspective (Goldberg
1989).
Owing to the complexity of the building design field, GAs are considered a suitable means since
they are capable of generating large solution pools addressing multiple variables and providing
lists of optimum solutions, rather than a single solution (Haupt and Haupt 2004). As shown in
Figure 2-3, the genetic algorithm is the most common optimization method used in sustainable building design (Evins 2013).
“A genetic algorithm is a search technique adequate for searching noisy solution
spaces with local and global minima. Because it searches from a population of
points, not a single point, the probability of the search getting trapped in a local
minimum is limited. GAs start searching by randomly sampling within the solution
space, and then use stochastic operators to direct a hill-climbing process based
on objective function values.” (Caldas 2001)
“GA are not guaranteed to find the global optimum solution to a problem, they are satisfied with finding ‘acceptable good’ solutions to the problem. GAs are an extremely general tool, and so specific techniques for solving particular problems are likely to out-perform GAs in both speed and accuracy of the final result. GAs are something worth trying when everything else has failed or when we know absolutely nothing of the search space.” (Sivanandam and Deepa 2008)
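A minimal sketch of the loop both quotations describe: random initial population, fitness-based selection, crossover, and mutation. The bit-string encoding and the fitness function are invented stand-ins, not drawn from any of the cited tools.

```python
# Minimal sketch of a generational GA loop (illustrative, not a cited tool).
import random

random.seed(1)
GENES, POP, GENERATIONS = 16, 20, 40

def fitness(bits):
    # Invented objective: prefer bit-strings with many 1s, a stand-in for
    # "lower simulated energy use" in a real encoding.
    return sum(bits)

def crossover(a, b):
    # Single-point crossover at a random cut.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    # Flip each gene with a small probability.
    return [1 - g if random.random() < rate else g for g in bits]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(fitness(ind) for ind in population))  # best fitness found
```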
Besides John Holland’s work (Holland 1992), Goldberg is also recognized as a pioneer in the
application of GAs (Goldberg 1989). The development of an influential evolutionary architectural
theory and application of GAs in building design and complex geometry is credited to John Frazer
(Frazer 1995). John Gero, in turn, is credited with the application of GAs in design research and is well known for introducing the use of environmental performance in addressing multi-objective design problems and the use of Pareto optimization for enabling design
With regard to more specific performance criteria, Kicinger et al. have previously established a
detailed chronological classification of ECs used within the domain of structural design (Kicinger,
Arciszewski, and Jong 2005). In addition, there are other established applications of GAs to specific
performance optimization. For example, recent environmental applications of GAs to multi-
objective design optimization problems include the integration of CFD analysis and the coupling
of indoor comfort factors with external climate descriptors (Huang, Kato, and Hu 2012). Similarly,
Wang et al. and Charron et al. used GAs to achieve higher environmental performance in building design (Wang, Zmeureanu, and Rivard 2005, Charron and Athienitis 2006). Caldas and Norford also used a GA to optimize the thermal and lighting performance of a design solution (Caldas and Norford 2002). In addition, GAs have an established use in optimizing overall
energy performance, as demonstrated by Tuhus-Dubrow and Krarti, who developed a simulation-
optimization tool that couples a GA with building energy simulation in order to optimize building
shape and envelope (Tuhus-Dubrow and Krarti 2010). In another study, a multi-objective GA
method was applied to achieve economical optimization by minimizing energy consumption
(Znouda, Ghrab-Morcos, and Hadj-Alouane 2007). All of these studies justify the continued development of systematic optimization algorithms that can handle large solution sets with multiple competing objectives and generate better-performing solution sets.
Whether the problems to be addressed are single or multi-objective, the use of optimization algorithms in the building design field is increasing in frequency. However, researchers argue that seeking mathematical convergence during the building design “optimization” process may be unrealistic, considering the wicked nature of these problems (Broadbent 2000, Janssen 2004, Buchanan 1992). Even if mathematical convergence can be reached, the question of whether it is a global or a local optimum persists, since too many uncertainties remain. As a result, it is
suggested that the use of optimization techniques be concentrated more on exploring design
solutions, rather than finding the “best”, as long as improvements can be found against a base
case (Attia et al. 2013). As such, this research builds upon the use of a GA for design optimization
with the ability to generate a set of better-fit solutions. The intent is not to identify an optimal
solution per se, as that is often not possible, but rather to provide an environment in which
informed tradeoff decisions can be made more rapidly and with more certainty by the human
design team.
2.3.3 STATE OF THE ART OF MDO IN BUILDING PERFORMANCE FEEDBACK
While MDO has drawn the attention of the AEC industry for its potential to overcome obstacles that exist between the design domain and other performance analysis domains, gaps remain that prevent this approach from being adopted by designers during the early stages of the design process, especially between the design domain and energy simulation. Some of the current shortcomings of optimization techniques have been recently reviewed by Attia et al. (2013) and Evins (2013). Attia et al. (2013) focused on assessing the obstacles and needs for integrating building performance optimization tools in net zero energy building design, while Evins (2013) reviewed computational optimization methods that have been applied to sustainable building design. Both identified the use of GA-based multi-objective optimization methods as a trend. However, the application of building performance optimization in actual design processes by designers and practitioners needs to be explored in order to understand the actual benefits and difficulties an MDO lends to design decision-making. Some other gaps and findings are listed below.
From the interview report of Attia et al. (2013):
“Regarding the hard or technical obstacles, the interviewees’ comments and their frequency is listed as follows:
• Uncertainty of simulation model input (27)
• Long computation time (24)
• Missing information on cost, occupancy schedules, etc. (19)
• Difficulty of problem definition (objectives arrangement and constraint violation) (12)
• Missing environments integrating and linking simulation and optimization seamlessly (16)
• Low interoperability and flexibility of models for exchange between different design, construction, simulation, cost estimation and optimization tools (11)
• Lack of environment with friendly GUI allowing post processing and visualization techniques (7)
……
Interviewees mentioned many ideas that contrast the hard obstacles mentioned previously. However, some significant ideas on future feature of optimization tools include:
• Doing optimization in real time within a BIM model and allowing adjustment on the fly
• Allowing parallel computing to reduce computation time
• Develop better GUI and model the building in 3D
• Couple simulation and optimization
• Connect real physical building components performance to optimization models for better information on cost and occupancy etc.
• Allow automation of building simulation with some default templates and strategies
• Profit from the gaming industry by developing interactive optimization environments for example talking to an oracle friend or wizard that guides the optimization process for better input quality and error detection and diagnostics” (Attia et al. 2013, 120)
Attia further concluded that “At present, the integration of BPO into the design process is a
research issue. While this sample of experts confirms that BPO will add value to the design we do
not have the proof…More research is needed on the experience of designers with BPO” (Attia et
al. 2013). This statement supports the intuition and direction of this research.
While both reviews presented above identified some directions and needs of current MDO applications, the shortcomings of current MDO efforts as adopted by designers during early stage design are still unclear. As a result, referencing their work, a literature review of current MDO applications in the building performance domain was conducted to isolate the gaps in current efforts and determine the point of departure and focus of this research. The list of literature sources included for detailed review is presented in Appendix A: Table A - 1. Although works within the building energy simulation and design domain were preferred, studies related to other performance analysis domains were also included in the review. The main review focus was on each effort’s application design phase, objective functions, optimization approach, considered parameters, actor domain, and geometry complexity. The efforts that focused on the energy simulation domain from the designers’ point of view were further examined with respect to their application process, optimization approach, and explored cases.
A critical distinction of this research is the focus on the application of MDO in the architectural
design field, whereas previous studies have primarily focused on the building science or
engineering fields (Attia et al. 2013). Other recent representative efforts focused on designers
utilizing parametric design and optimization techniques in energy simulation and include
Janssen’s EPPD (Janssen 2009) and Caldas’ GENE_ARCH (Caldas 2008), along with the
collaborative works of Yi and Malkawi (Yi and Malkawi 2009, 2012). Janssen’s EPPD utilized an
asynchronous decentralized evolutionary approach to accelerate the feedback process with the
aim of making it easier for designers to use evolutionary algorithms. However, lack of flexibility
for designers to formulate their design problem and the amount of time required in generating
feedback are reported as remaining challenges for EPPD (Janssen 2009). While a user interface
for non-programmers has been developed to facilitate the process, there are currently no
usability studies presented outside of those conducted by the EPPD research team (Janssen,
Basol, and Chen 2011). In parallel, Caldas’s GENE_ARCH (Caldas 2008) is a GA-based multi-
objective optimization (MOO) design exploration tool that incorporates energy and daylighting
performance as objective functions. Thus far, it has been applied to examine façade
configurations and shape generations. While the stated purpose of GENE_ARCH is to assist
architects in pursuing more sustainable design, Caldas asserts that, “when a design is generated
and evaluated by GENE_ARCH, it is a whole building entity that is being assessed, not an initial
design concept or an abstract geometrical shape” (Caldas 2008). As a result, the ability of
GENE_ARCH to assist design exploration during the early design stage, where concept and form evaluation is needed, has not been adequately demonstrated. In order to extend the scope of the
design problems that can be explored through GENE_ARCH, it is further integrated with a shape
grammar to act as GENE_ARCH’s shape generation module (Caldas 2011). However, the usability
and flexibility of GENE_ARCH outside that of the research team has been neither explored nor
evaluated. In addition, during Caldas’s research, it was found that reductions in overall energy
consumption were observed in direct relation to the overall building size, which unfortunately led
to optimal designs only being identified as those minimizing space within the allowable design
constraints (Caldas 2005, Caldas 2006). As such, it can be argued that reliance on design
constraints alone is not sufficient to control optimization results. While the use of energy use
intensity (EUI), instead of overall energy consumption, as the objective function measurement
negates the issue of preference towards minimized program, a means of including an evaluation
of compliance with desired spatial programming areas is still needed to provide the tradeoffs
between expected performance and met design requirements. Furthermore, the geometries
explored through GENE_ARCH are currently limited to the stacking of simple orthogonal box
volumes with roof tapering and façade opening variations, with more complex geometries still
left unexplored (Caldas 2008, Caldas 2011). In an effort to increase the geometric complexity
available during the building performance optimization process, Yi and Malkawi utilized
hierarchical point-based relations to define the design form (Yi and Malkawi 2009). The proposed
design method was subsequently applied to include consideration of Computational Fluid
Dynamics (CFD) and energy simulation (Yi and Malkawi 2012). While this approach successfully
demonstrates the ability to explore more complex forms during the optimization process, it
requires designers to define their design concept as a series of hierarchical point-based
relationships. As this is not currently included in published design processes outside of the original
research team, there is concern regarding the usability and applicability of this method of
geometric definition during the design process.
While the current efforts of Janssen, Caldas, and Yi and Malkawi recognize the importance of form
exploration and its impact on energy use, some approaches are unable to accommodate effective
exploration of complex geometry (Janssen 2009, Caldas 2008), while others are only able to do so
through a customized encoding scheme (Janssen, Frazer, and Tang 2005, Caldas 2011) or specified
design logic (Yi and Malkawi 2009, 2012). This limits these frameworks' ability to be adopted and
further tested for their utility in early stage design, where the exploration of varying degrees of form
is needed. Subsequently published experimental case studies have relied solely on the
researchers in the role of the experimental user. Therefore, the impact of these frameworks on
the design process outside of a controlled environment remains an essential question.
Consequently, the applicability of these frameworks, and of MDO in general, to the early stage
design process has been left largely unexplored and unmeasured.
2.3.4 SUMMARY OF DESIGN AUTOMATION & OPTIMIZATION
The studies reviewed above have demonstrated the potential of adopting multidisciplinary design
optimization (MDO) methods to provide a performance feedback loop for supporting early design
stage decision-making (Flager et al. 2009, Welle, Haymaker, and Rogers 2011). However, extant
research exploring MDO in the AEC field has typically employed simplified geometry (Flager et al.
2009, Welle, Haymaker, and Rogers 2011), while works involving more complex geometry have
been limited to single domain optimization (Yi and Malkawi 2009). The few precedents that
explored both MDO and complex geometry have not included the energy performance domain
as a domain of interest (Turrin, von Buelow, and Stouffs 2011, Keough and Benjamin 2010). Where
the energy performance domain has been included for optimization, the relationship between
design form and energy performance has been largely excluded. Instead, the optimization process
has focused on mechanical systems (Pantelic, Raphael, and Tham 2012) and simple geometric
modifications, such as window sizing and placement (Fesanghary, Asadi, and Geem 2012, Hamdy,
Hasan, and Siren 2011a). Furthermore, the application of most of the developed approaches to
the overall design process remains largely unexplored. This gap in existing research indicates the
need for a MDO design framework that is able to incorporate both conceptual energy analysis
and exploration of complex geometry for the purpose of providing early stage design performance
feedback. Once this is accomplished, the established MDO design framework can be applied to
the overall design process, where its impact can be observed and evaluated. Table 2-2 summarizes
the gaps of current MDO precedents and the potential means of addressing them.
TABLE 2-2: SUMMARY OF OBSTACLES & POTENTIAL SOLUTIONS AIMED AT CLOSING THE GAPS OF CURRENT MDO
APPROACHES USED BY DESIGNERS AT THE EARLY STAGE OF THE DESIGN PROCESS.
Gap 1: Applicability for Early Stage Design
Causes:
1. Main actors are engineers & researchers, not designers (Attia et al. 2013, Evins 2013).
2. No suitable & flexible framework for designers (Attia et al. 2013).
3. Long analysis times to reach a mathematically convergent solution (Attia et al. 2013).
Results:
1. Simplified geometries typically explored (Attia et al. 2013, Yi and Malkawi 2009, Welle,
Haymaker, and Rogers 2011, Flager et al. 2009).
2. Existing frameworks have not been validated against the needs of the early stage design
process (Attia et al. 2013).
3. While experts confirm that MDO will add value to the design, no evidence supporting this
claim has been provided (Attia et al. 2013).
Solutions:
1. Develop a MDO framework for designers (Caldas 2008, Janssen 2009, Yi and Malkawi 2009).
2. Validate the framework against the needs of the early stage design process.

Gap 2: Lack of Domain Knowledge
Causes:
1. Designers' limited ability to formulate the initial design (Gero 1975, Attia et al. 2013).
2. Lack of understanding of the MDO mechanism and the use of generated data (Attia et al. 2013).
Result:
1. The applicability of MDO by designers to the design process remains unknown (Attia et al. 2013).
Solution:
1. Evaluate the framework by designers.

Gap 3: Support for Decision-making
Causes:
1. Too many Pareto optimal solutions (Attia et al. 2013).
2. Lack of a user-friendly user interface to support post-data processing and visualization
(Attia et al. 2013).
Results:
1. The use of generated data to support decision-making is still unknown.
2. Long post-data processing time (Attia et al. 2013).
Solutions:
1. MCDM techniques are needed (Fenton and Wang 2006, Grierson 2008, Mela, Tiainen, and
Heinisuo 2012, Opricovic and Tzeng 2007).
2. A user-friendly user interface for the framework is needed (Grierson and Khajehpour 2002,
Parmee 2005, Parmee, Abraham, and Machwe 2008, Attia et al. 2013).
3. Conduct case-based experiments to understand the efforts required for designers to utilize
generated data.
2.4 SUMMARY OF THE LITERATURE REVIEW + RESEARCH’S POINT OF DEPARTURE
In order to solve the complex and highly uncertain design problems encountered during the early
stages of the design process, this research posits that a MDO method coupling parametric modeling
with GA-based multi-objective optimization would serve as a potential solution. The MDO approach
also appears to have the greatest potential to overcome the barrier between the design and energy
simulation domains and to further achieve the identified criteria for "designing-in performance".
At present, however, most of the precedents have either placed a singular domain emphasis on
structural performance or detailed mechanical systems, or have been applied to simplified
geometric settings. In particular, the application of preliminary energy performance feedback to
support complex geometry has not been fully understood, and therefore remains undeveloped.
In addition, it has been noted that, in previous explorations, the majority of actors involved have
been either researchers or engineers, as opposed to architects, designers, or students. As such,
there is the question of applicability of the proposed approaches to early stage design, when the
main user in question becomes the designer. While the research focus has shifted to applying
MDO from a designers’ point of view, with the intent to assist early stage design, there are still
significant gaps regarding the flexibility and usability of the framework by designers. As a result,
the utility of the MDO framework to the designers during the early stages of the design process is
a research question that needs to be answered and addressed.
The key findings that emerged from the review of the relevant literature are summarized below.
1. A "Designing-in Performance" framework should provide a user-friendly interface and
enable:
a. rapid generation of design alternatives;
b. rapid evaluation of design alternatives;
c. trade-off analysis for competing criteria; and
d. a search method to identify design alternatives with better-fit performance.
2. The most promising means of achieving the designing-in performance criteria involve a MDO
framework that includes four essential components—parameterization, platform
integration, automation, and a GA-based multi-objective optimization algorithm.
3. In order to facilitate the process evaluation, the framework needs to be flexible and
adaptable for designers. As a result, a selection from commonly used platforms is
recommended as the choice of the integrated platform.
4. The framework needs to be validated against the needs of early stage design prior to
conducting the process evaluation, in particular with respect to:
a. the ability to accommodate formal variety and varying degrees of geometric
complexity;
b. the ability to provide improved performance feedback for multiple objective
functions.
5. In order to evaluate the applicability of the framework and the effectiveness of the
feedback in assisting designers’ decision-making, a process evaluation metrics and a
measurement method need to be established.
Figure 2-4 summarizes the gaps identified by the literature review and highlights the foci of the
present study. Based on the current gaps and the research goal, a designer-applicable MDO
framework first needs to be developed. Second, the established framework needs to be validated
against the needs of designers during the early stage of design. Third, process evaluation metrics
and measurements need to be established and defined. Fourth, the research can then evaluate the
processes the designer engages in when applying the framework, based on the established evaluation
metrics. Finally, the applicability and the impact of the established framework can be understood
and analyzed. It should be noted that several components—such as a user interface, user guidance,
data visualization, MCDM, etc.—are critical in bridging the gaps between the design and energy
simulation domains and mitigating the shortcomings of current MDO precedents. Although the
importance of these components is acknowledged, they have been excluded from the current research
scope.
FIGURE 2-4: HIGHLIGHTS OF THE RESEARCH APPROACHES AND FOCI.
CHAPTER 3 RESEARCH METHODOLOGY
3.1 RESEARCH METHOD OVERVIEW
The research process and methodology adopted in this study can be described by the systems
development research process proposed by Nunamaker, Chen, and Purdin (1990), as illustrated
in Figure 3-1. In this case, this research treats the proposed framework as a “system”—as
identified and proposed through the literature review. The research then proceeds to develop the
framework by utilizing the development of a prototype tool to further refine the functionalities
and workflow of the framework through an iterative testing and evaluation process. This research
method has been justified and adopted by several precedents, including Janssen (2004) and
Bukhari (2011), as suitable in meeting the present research objectives.
As outlined in Table 1-2, the overall research methodology and process adopted in this work can
be grouped into six research methods. The order of these activities is not an indication of the
chronological order of the research process, as several methods are pursued in parallel, as
illustrated in Figure 3-1. The in-depth description of each research method is introduced as follows:
1. Literature Review
2. Prototype Development
3. Establishing Framework Validation and Evaluation Metrics
4. Hypothetical Case-based Experiments
5. Design Professional Case-based Experiment
6. Pedagogical Case-based Experiments
FIGURE 3-1: SYSTEM DEVELOPMENT RESEARCH PROCESS, PROPOSED BY NUNAMAKER, CHEN, AND PURDIN
(1990). DIAGRAM REDRAWN FROM THE ORIGINAL.
3.2 LITERATURE REVIEW
A literature review is conducted in this research for the following purposes:
1. By reviewing current design technology and energy simulation tools, understanding of the
obstacles between design and energy simulation domains is increased, allowing for
identification of the potential means of addressing them.
2. By reviewing precedents’ predictions and recommendations regarding the future trend
of energy simulation tools, this research isolates the criteria to achieve a “designing-in
performance” environment.
3. By reviewing current efforts in solving these obstacles, the research can identify the most
suitable means as the proposed framework.
4. By reviewing the efforts that use similar approaches and assessing the gaps of current
efforts, the focus and contribution of the research can be identified.
5. By reviewing the current needs of early stage design, the framework criteria can be
identified.
6. By referencing prior research regarding the process evaluation and design activity analysis,
this research establishes the process evaluation method and measurement metrics for
the framework.
It should be noted that the literature review is conducted throughout the research and has been
updated continuously during the research period.
3.3 PROTOTYPE TOOL DEVELOPMENT
Informed by the findings of the literature review, which serve as the foundation on which the
conceptual framework is formed, a prototype tool that enables the proposed framework is developed
in order to demonstrate and support the research hypothesis. The prototype tool provides the
necessary functionalities identified in the literature review:
A MDO framework involving:
1. Automated system: Platform Integration + Automation + Multi-objective
Optimization
2. Interaction with designers: Problem Formulation + Multi-criteria Decision-
making, i.e., trade-off study
Framework development is an iteratively implemented process. The outline of the framework
development steps and processes is illustrated in Figure 3-2, while the detailed steps of the
development method and process are documented in Chapter 4.
FIGURE 3-2: OUTLINE OF THE FRAMEWORK DEVELOPMENT METHOD & PROCESS.
3.4 ESTABLISHING FRAMEWORK VALIDATION & EVALUATION METRICS
To measure the effectiveness of the MDO framework in the early stages of the design process, an
evaluation metric is needed in order to isolate the means by which the newly developed
simulation framework can be measured against the existing simulation frameworks. A broad
spectrum of disciplines has taken an interest in dissecting, analyzing, and measuring the design
process in order to identify possible means of improvement. The works of Eastman (1968), Akin
(1984), Cross, Christiaans, and Dorst (1996) and Gero and Mc Neill (1998) are the early
representative efforts in the architectural design field that attempted to measure, describe, and
analyze the design protocol and activities. Subsequently, Kalay (1999) and Stoyell et al. (2001)
started to address the performance-based design aspects of the design process by measuring the
impact of design activities on the building lifecycle performance. Some studies attempted to
evaluate robustness and the flexibility of the early stage design exploration process (Simpson et
al. 1996). Recently, the work of Clevenger and Haymaker (2011) emphasized the need to provide
metrics by which to assess design guidance. However, as summarized by Clevenger et al., "In
general, limited real-world data exist to evaluate the exploration performance achieved by actual
designers. This is due, primarily, to the fact that parallel or redundant explorations are not
performed across strategies in the real world due to limited project resources” (Clevenger,
Haymaker, and Ehrich 2013). In addition, the majority of these studies examined the design
process from a macro-scope approach, attempting to link individual activities with the overall
design results. However, the design process is subjective in nature and highly reliant on
individual preferences, making it ill suited to such top-down measurements as a means of providing
clarity in process evaluation. The subject of interest for this research is thus the integration
process between geometric design and energy performance feedback for the
purpose of early design decision-making. Therefore, the required metrics are established
specifically for the purpose of evaluating these processes and their ability to assist or hinder the
early design process.
In order to consider both the quantitative and qualitative values in application to the integration
of design and energy simulation process, while maintaining consistent results across different
project activities, this study narrowed the scope of design activities of interest to six steps, as
illustrated in Figure 3-3.
The identified six steps for energy simulation are:
1. Generate design configuration
2. Transfer design model to energy simulation model
3. Modify energy model and apply energy related attributes
4. Run analysis
5. Evaluate results
6. Execute design decisions based on available feedback
FIGURE 3-3: THE SIX-STEP PROCESS FOR INTEGRATING DESIGN AND ENERGY SIMULATION INDEPENDENT OF
PLATFORM OR TOOL USED. THESE STEPS CAN BE IMPLEMENTED MANUALLY OR AUTOMATICALLY,
DEPENDING ON USER’S SELECTED TOOLS AND PLATFORMS.
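Each of these steps can be logged with its duration and with whether it was performed manually or
automatically, which is the unit of process measurement used later in this chapter. A minimal
sketch of such a log record (Python; the structure is illustrative only, not a prescribed research
instrument):

    from dataclasses import dataclass

    SIX_STEPS = [
        "Generate design configuration",
        "Transfer design model to energy simulation model",
        "Modify energy model and apply energy related attributes",
        "Run analysis",
        "Evaluate results",
        "Execute design decisions based on available feedback",
    ]

    @dataclass
    class StepRecord:
        step: str          # one of SIX_STEPS
        minutes: float     # time taken (the speed measure in Table 3-1)
        automated: bool    # M/A flag; steps 2-3 become automated on integrated platforms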
These six steps are applicable even within the context of platforms that combine the design
geometry and energy simulation domains. When such a platform is utilized, steps 2 and 3 can be
considered as being completed through an automated process, rather than a manual one. In order
to evaluate the effectiveness of the six-step process, various factors must be taken into
consideration. Referencing the categorization method of Atman et al. (2005), this research
proceeded to categorize the types of measurements to be collected into four groups—design
problem, process, product, and actor domain and experience—as listed in Table 3-1. The aim was
to provide a basis by which to observe the impact of both technical
and human factors together, in order to determine the effectiveness or potential value of the
developed framework. One issue to note is that the developed evaluation metric was limited to
the available measurable items collected through the exploration of the experimental cases.
Measurements within the design problem category are values collected regarding the physical
aspects of the design and are further divided into two subcategories—project complexity and
design complexity. Project complexity refers to the project size, as measured in square feet, and
the number of types of program spaces, such as parking, commercial or residential, that are
included within the design problem. Design complexity refers to the number of surfaces required
to be included in the energy model, along with the number of available parameters, as provided
by the design problem. Process-based measurements are divided into four subcategories, namely
speed, feedback, simulation process observation, and exploration process. Speed is measured in
time taken to complete the specified activities, such as creating the design geometry, running the
energy analysis, and calculating the three objective scores. The feedback category records the
time taken for all three objective scores to become available for feedback purposes, i.e., enabling
a design decision or choice, and whether the method of obtaining the objective scores was manual
or automated. Simulation process observation describes the implementation method used when
completing the previously described six-step energy simulation process. Finally, exploration
process provides diagrams illustrating the decision-making patterns used by the hypothetical case
studies and the pedagogical benchmark cases. Product measures focus on evaluating the resulting
design alternatives of each process of interest. These include the number of design alternatives
generated during an 8-hour work period and the solution quality, as defined by energy use intensity
(EUI), net present value (NPV), and spatial programming compliance (SPC).
The user experience section documents the users’ background and experience in utilizing
parametric modeling and previous experience using H.D.S. Beagle.
The effectiveness of the feedback loop defined in this research has two separate levels—the
effectiveness of the algorithm, and the design effectiveness of the solution space. If the solution
space generated by the algorithm can continuously provide design alternatives with improved
performance—defined as Pareto solutions in terms of the EUI, NPV, and SPC scores—the feedback is
deemed effective at the algorithm level. Regarding the effectiveness of the solution space with
respect to design decision-making, two considerations apply: (1) whether the performance of the
designer's final selection in terms of EUI, NPV, and SPC improves upon the initial design, and
(2) whether the designer's adjusted exploration range results in a solution space with improved
performance. If either is the case, the feedback is deemed effective.
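Stated concretely, the Pareto test implied above treats one design alternative as dominating
another when it is no worse in all three objectives (lower or equal EUI, higher or equal NPV and
SPC) and strictly better in at least one. A minimal sketch (Python; illustrative only, as H.D.S.
Beagle implements this logic in C#):

    def dominates(a, b):
        # True if alternative a Pareto-dominates alternative b.
        # EUI is minimized; NPV and SPC are maximized.
        no_worse = (a["EUI"] <= b["EUI"] and a["NPV"] >= b["NPV"]
                    and a["SPC"] >= b["SPC"])
        strictly_better = (a["EUI"] < b["EUI"] or a["NPV"] > b["NPV"]
                           or a["SPC"] > b["SPC"])
        return no_worse and strictly_better

    def pareto_front(alternatives):
        # The non-dominated subset of a solution space.
        return [a for a in alternatives
                if not any(dominates(b, a) for b in alternatives if b is not a)]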
TABLE 3-1: EVALUATION METRICS USED IN THIS RESEARCH.
Experiment columns (√ = measurement collected): Hypothetical Case-based Experiment; Design
Profession Case-based Experiment (In-House, MEP, EEPFD); Pedagogical Case-based Experiment
(Course, Design Studio, Workshop).

Problem Measures
Project Complexity
  Project size (ft²): √ √ √ √ √ √ √
  Space type no. (number): √ √ √ √ √ √ √
Design Complexity
  Initial energy model surface no. (number): √ √ √ √ √
  Explored parameter no., Design/Energy (number/descriptive): √ √ √ √ √

Process Measures
Speed
  Time required to create design geometry (minutes): √ √ √ √ √ √ √
  Time required to run energy analysis (minutes): √ √ √ √ √ √ √
  Time required to calculate three objectives (minutes): √ √ √ √
Feedback Process
  Performance feedback time per result (hours): √ √ √ √ √ √ √
  Manual/Automated (M/A): √ √ √ √ √ √ √
Simulation Process
  Six-step implementation (diagram, descriptive): √ √ √ √ √ √ √
Exploration Process
  Decision-making patterns (diagram, descriptive): √ √ √ √ √ √ √
  GA settings (number): √ √ √ √ √

Product Measures
Feedback Quantity
  Feedback no./8 hrs. (number): √ √ √ √ √ √ √
Feedback Quality
  Initial design performance (EUI, NPV, SPC): √ √ √ √ √
  Solution space performance (EUI, NPV, SPC): √ √ √ √ √
  Output information (descriptive): √ √ √ √ √ √ √
  Final design performance (EUI, NPV, SPC): √ √

User Experience
  Actor domain (descriptive): √ √ √ √ √ √ √
  Parametric design (descriptive): √ √ √ √ √ √ √
  Energy simulation (descriptive): √ √ √ √ √ √ √
  H.D.S. Beagle (descriptive): √ √ √ √ √ √ √
3.5 HYPOTHETICAL CASE-BASED EXPERIMENT
During this research, five hypothetical case-based experiment sets are conducted. The first
experiment set is used to gauge the technology affordance of the prototype tool and framework
during the prototype development period. The second experiment set aims to validate the GA
against the criteria identified during the literature review to fulfill the needs of early stage design.
The two subsequent experiment sets are used to understand the relationship among complexity,
performance, and the subsequent relevance of the developed framework. The last experiment
set aims to observe the data generated through the framework as a means to propose the best
practice of using the framework during the early stage of the design process.
While the same research method and process are adopted in all hypothetical experiment sets, their
observational foci vary. The process outline is illustrated in Figure 3-4, while the detailed
experiment descriptions can be found in Chapter 5. As a part of this research, 12 test cases have
been prepared for testing during the five hypothetical case-based experiments. Each test case,
henceforth referred to as a "scenario", is defined and determined during the technology
framework development period. The measurement and metrics of the research are defined in
Section 3.4.
FIGURE 3-4: OUTLINE OF THE HYPOTHETICAL CASE-BASED EXPERIMENT METHOD & PROCESS.
The two critical criteria for evaluating the framework's applicability are the ability to:
1) provide a solution space with improved performance across the multiple competing
objective functions; and
2) be adaptable to a wide spectrum of design scenarios, in both typology and geometric
complexity.
3.6 DESIGN PROFESSION CASE-BASED EXPERIMENT
The design profession case-based experiment is conducted as part of the framework evaluation.
The purpose of this experiment is to compare and contrast the existing design processes with
those offered by the developed framework in order to evaluate the applicability and impact of
the framework on the early stage design process for a real-world design problem. This portion of
the research method is developed based on the work of Gerber (2007). The outline of the design
profession case-based experiment is illustrated in Figure 3-5.
FIGURE 3-5: OUTLINE OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT METHOD & PROCESS.
In order to ensure the comparability and completeness of the collected data, the research first
establishes a general case study framework prior to commencing the experimental work. In
addition, a semi-structured interview guideline is established to lead the pre-interview and post-
interview process. The case study framework and interview guidelines can be found in Appendix
B, while the detailed case description and experimental process are presented in Chapter 6.
3.7 PEDAGOGICAL CASE-BASED EXPERIMENT
The pedagogical experiment methods adopted by this research are inspired by the work of Gerber
and Flager (2011) and Flager, Gerber, and Kallman (2014), who used pedagogical experiments to
test the applicability of their new design approach and generated a means of measuring the
impact of the framework within the pedagogical experimental setting. In this research, three
types of pedagogical case-based experiments are conducted: (1) a computational design course
experimental setting; (2) a design studio experimental setting; and (3) a computational workshop
setting. Each experiment type is designed as part of the framework process evaluation. For the
first experimental setting, the benchmark process is designed, allowing observation of the
enabling or disrupting effects of the automation component of the developed framework on the
design process. In addition, the usability of the framework is evaluated through measuring the
problem formulation time, model evaluation, and a course survey. In the second experimental
setting, students act as the main users, yielding valuable information on the usability of the
framework by the established evaluation metrics. Pre- and post-interviews are conducted to elicit
students' views regarding the established framework. The last experimental setting is intended to
further extend the data set from the prior two experimental settings in order to validate previous
observations. Furthermore, the feedback measurements defined by the research are collected to
enhance the understanding of the framework utility to the process. The outline of the pedagogical
case-based experiment method is illustrated in Figure 3-6, while the detailed description of each
experimental setting can be found in Chapter 7.
FIGURE 3-6: OUTLINE OF THE PEDAGOGICAL CASE-BASED EXPERIMENT METHOD & PROCESS.
CHAPTER 4 EVOLUTIONARY ENERGY PERFORMANCE FEEDBACK FOR
DESIGN (EEPFD): A MDO DESIGN FRAMEWORK FOR EARLY
STAGE ENERGY SIMULATION FEEDBACK
4.1 INTRODUCING EEPFD – THE PROPOSED DESIGNER-ORIENTED MDO DESIGN
FRAMEWORK
The authors of the literature sources reviewed in the previous chapter have identified a promising
applicability of MDO to enable a “Designing-In Performance” environment by demonstrating the
ability of MDO to reduce the design cycle latency, solve interoperability issues, and provide better
performance results (Flager et al. 2009, Welle, Haymaker, and Rogers 2011). However, the
relationship between design form exploration and energy performance has been largely excluded
from previous research efforts. Furthermore, current attempts have yet to fully explore the
applicability of this approach in the context of the early stage design, where rapid exploration of
variety and alternatives is desired. In response to this gap in existing research, there is a need for
a designer-oriented MDO framework that incorporates both conceptual energy analysis and the
exploration of overall building geometric configurations for the purpose of providing early stage
energy performance feedback. To meet this need, a novel MDO framework, entitled Evolutionary
Energy Performance Feedback for Design (EEPFD), is proposed. While it is acknowledged that
MDO methodology is applicable to various objectives and types of performance feedback, this
research focuses on investigating whether the proposed MDO framework is capable of enabling
a “designing-in performance” environment, where energy performance feedback can influence
design decision-making.
The theoretical foundation of EEPFD is built upon five major components found in the precedents’
MDO methodology as introduced in Section 2.3. These components can be divided into two areas
of interest, one of which is within the automated loop itself, while the other is in the interaction
between designers and the automation system. The components within the automated loop are:
(1) platform integration; (2) automation; and (3) a GA-based multi-objective optimization
algorithm.
1. Platform Integration: Through platform integration, designers are able to work within their
familiar platform, thus avoiding the errors and inconsistencies in data input that arise
from unfamiliar user interfaces and insufficient domain knowledge.
The integrated platform can also overcome the issues of interoperability between design
and energy simulation platforms.
2. Automation: Automating the translation of a design model to an energy model helps to
overcome inconsistencies, input redundancy, and errors commonly occurring during the
model translation, simulation, and evaluation process, thereby reducing design cycle
latency.
3. GA-based multi-objective optimization algorithm: The incorporation of a GA-based multi-
objective optimization algorithm as the driving engine enables the automated system to
systematically search for best-fit solutions among the competing objectives. In addition,
it enables the inclusion of other performance
objectives as part of the tradeoff analysis, which has been identified through the
literature review as necessary to support design decision-making.
As a result, as illustrated in Figure 4-1, the proposed framework provides two interaction points
between designers and the automated systems. Prior to the design problem being explored
through the automated system, designers are required to define their design problem
mathematically, i.e., through parameterization. This provides the base on which the system can
automatically generate, analyze, and evaluate design alternatives until the automated process is
terminated. The second interaction point of EEPFD occurs at the end of the automation loop,
where a solution space containing all the explored design alternatives is provided, along with each
alternative’s performance analysis and evaluated and ranked results required for decision-making
support.
Before proceeding with the investigation of EEPFD, several items need to be formally defined,
such as the platforms to be integrated, the problem formulation method, the desired objective
functions, and the automation and optimization algorithm. Once the automation system is
formally defined, the two interaction points can be observed. In addition, when designers utilize
EEPFD, they can in theory design within their preferred design platform, using parametric design
to formally define their design problem, so that the automated system can generate and evaluate
the design alternatives and provide feedback results for tradeoff studies. As the currently
available tools are not capable of providing the functionalities of the desired automation loop
required to enable EEPFD, developing a prototype with these capabilities is recognized as
essential prior to proceeding with the investigation into the application of EEPFD to the early stage
design process.
FIGURE 4-1: THE PROPOSED THEORETICAL MDO DESIGN FRAMEWORK, EVOLUTIONARY ENERGY PERFORMANCE
FEEDBACK FOR DESIGN (EEPFD).
4.2 MANIFESTATION OF EEPFD
In order to explore EEPFD's applicability to the design process, a prototype tool is developed and
utilized as a part of this research, in order to provide the functionalities necessary for realizing
EEPFD. The utilized prototype tool, H.D.S. Beagle, was initiated and developed through
participation in Autodesk’s IDEA Studio residency program based on Prof. David Gerber’s proposal,
entitled “Design Optioneering: Variation-Exploration-Correlation” (Autodesk 2013). Participation
by the author in this research project served as the starting point of formally defining the feasible
functionalities and the steps and practical process of realizing EEPFD. In this regard, though
currently heavily reliant on H.D.S. Beagle for proof of concept, EEPFD can be considered
tool-independent. Ideally, the methodology described by EEPFD should be implementable through
H.D.S. Beagle or any other similar tool in the early stage design process.
The development of H.D.S. Beagle can be separated into three stages: (1) working prototype; (2)
implementation adjustment; and (3) prototype validation. The original project research
development team for H.D.S. Beagle consisted of the author, Prof. David Gerber, and PhD
candidate in computer science Bei Penny Pan. With the support of Autodesk’s IDEA Studio
residency program, the initial working prototype for H.D.S. Beagle was developed between May
23, 2011 and August 19, 2011. After this period, further development of the prototype was
overseen by the author with a group of four volunteers from the USC Computer Science
Department: Junwen Chen, Shitian Shen, Yunsan Zhu, and Lu Ke. The prototype version available
as of September 2012 (H.D.S. Beagle v20120903), at the conclusion of this second development
stage, can be considered the most recently available Beta version of H.D.S. Beagle. All further
evaluations and validations are based on the use of this Beta version; the third stage, prototype
validation, can be considered ongoing at the time of this research.
The remainder of this chapter introduces the overall development process for H.D.S. Beagle
and the steps and practical process of applying EEPFD with H.D.S. Beagle. Section 4.3 provides an
in-depth description of Beagle’s development while a presentation of the resulting steps and
practical process of applying EEPFD is provided in Section 4.4. Decisions regarding key
components of both the Beagle and EEPFD were made based on the experience and knowledge
of the research team. Through the initial proposal to Autodesk’s IDEA Studio, Prof. David Gerber
determined the domains of interest, as well as identified tool platforms to be used, and the
concept of the methodology. However, several tasks and detailed components were still in need
of explicit definition for the prototype to be functional. This necessary development process can
be broken down into ten steps, described below and illustrated in Figure 4-2.
1) Determination of H.D.S. Beagle’s mechanism: This step determined the location of the
proposed application’s engine. For example, the issue of whether the proposed application should
be a stand-alone application or should be integrated with another platform as a plug-in needed
to be addressed. After consulting with specialists from Autodesk, it was determined that the
proposed prototype tool would function as a Revit plug-in.
2) Enabling interoperability between Revit and Excel: Before the pursuit of automation, it was
necessary to establish the ability of the platforms to communicate. During this step, the research
team tested the functionality of the Revit API and Excel API as needed.
3) Automating generation of design alternatives according to desired tool capability: This step
identified the process of utilizing the communication between Revit and Excel, to be used in the
application to alter a parametric model in Revit, based on varying parametric values provided
through Excel. As a part of this step, it was necessary to identify the method to be utilized by users
to inform the system of the desired parameters and ranges. This determination affected the final
functionality of the platforms, as a part of step 8.
4) Determination of the objective functions: Despite the previously identified domains of interest,
the explicit definition, and means of measurement regarding the objective functions needed to
be addressed. At this point, all objectives and their respective means of measurement were
defined by the research team. The selected objective functions and their definitions can be found
in Section 4.3.2.1.
5) Automating the retrieval of energy analysis results: As energy analysis is included as part of
the desired objective function for the purpose of evaluating energy performance, the automation
method for obtaining analysis results from GBS needed to be defined.
6) Automating calculations of the three objective functions: After the determination of the
objective functions and their means of measurement, the research proceeded to automate this
process.
7) Determination of the variables suitable for exploration in H.D.S. Beagle: This step was
conducted in parallel with steps 3 to 6. During this time, the different types of variables that were suitable
for exploration through H.D.S. Beagle were identified and defined. In addition, the method of
variation for explicit categories of variables was defined.
8) Determination of platform functionalities: The definition of the platforms' functionalities was
continuously updated to reflect the decisions made during steps 3 to 7.
9) Implementation of GA-based multi-objective optimization algorithm: Once the platform
functionality was finalized and the objective functions had been determined, the implementation
of the automated GA-based MOO was addressed. Here, the implementation mechanism, as
decided by the research team, included encoding, population, crossover, mutation, selection, and
termination. During this step, adjustments to steps 3-8 were made as needed.
10) Determination of H.D.S. Beagle’s User Interface and finalization of EEPFD: Once the
automation loop was completed, the user interface of H.D.S. Beagle and the workflow of EEPFD
were finalized. During this step, necessary adjustments to steps 3-9 were implemented as needed.
FIGURE 4-2: THE DEVELOPMENT PROCESS OF EEPFD & H.D.S. BEAGLE.
4.3 THE PROTOTYPE TOOL DEVELOPMENT FOR EEPFD - H.D.S. BEAGLE
This section describes the development of H.D.S. Beagle. First, Section 4.3.1 provides an
introduction to the overall system functionality of H.D.S. Beagle, while a detailed description of
the customized GA-based multi-objective optimization algorithm in H.D.S. Beagle is given in
Section 4.3.2. Finally, the resulting final workflow for H.D.S. Beagle (v20120903) is outlined in
Section 4.3.3.
4.3.1 H.D.S. BEAGLE PLATFORM SELECTION & INTEGRATION
The development of EEPFD necessitates an ability to generate the desired automation and
optimization routine. In response, a prototype tool, H.D.S. Beagle, is developed in parallel as a
plugin for Autodesk® Revit® (Revit), integrating Autodesk® Green Building Studio® (GBS) and
Microsoft® Excel® (Excel) with Revit, as needed. The selection of the platforms is made based on
three considerations: (1) the research's focus on the energy, financial, and design requirement
domains; (2) the potential for future cloud-based applications; and (3) the bypassing of typically
encountered interoperability issues between Revit and GBS. The overall system architecture is
illustrated in Figure 4-3.
FIGURE 4-3: H.D.S. BEAGLE SYSTEM OVERVIEW.
Autodesk® Revit® 2013 (Revit) [i]:
Revit is a building information modeling tool selected to serve as the geometric parameterization
platform for H.D.S. Beagle, thereby enabling designers to define their building geometry by
providing a series of parameters that control the design’s geometric configurations. This platform
also serves as an insert point for the energy settings necessary for a conceptual energy analysis
through GBS. In this manner, H.D.S. Beagle is able to utilize the functionality of Revit to streamline
the translation of design models into analyzable energy models. As a result, H.D.S. Beagle
bypasses repetitive geometric modeling and parametric input typically required during
conventional energy analysis processes. This process also benefits from the Conceptual Energy
Analysis Setting dialog box designed in Revit, which intentionally employs a collection of simplified
energy settings in order to be more easily accessible for use by designers at the early stages of
design (Smith, Bernhardt, and Jezyk 2011).
Autodesk® Green Building Studio® Web-Based Energy Analysis Service (GBS)
GBS is a web-based energy analysis service that relies on the gbXML file format to securely
transfer building information between design tools, i.e., Revit and its web-based whole building
energy analysis engine DOE-2.2 (DOE 2011b). The web-based analysis capability of GBS allows
H.D.S. Beagle to execute analyses of individual designs in parallel over the cloud, versus analysis
of individual designs in sequence, resulting in reduced time requirements for the exploration
process.
Microsoft® Excel® 2010 (Excel)
Excel is a spreadsheet software selected to serve as a user interface proxy for H.D.S. Beagle, in
which designers can set up design parameter ranges of interest, constraints, spatial program
parameters, and the spatial programming compliance formulae. In addition, Excel is used by H.D.S.
Beagle to store financial parameters and relevant formulae. Excel is used for these purposes for
two primary reasons: (1) existing extensive familiarity of Excel within the professional field of
architecture, and (2) Excel's flexibility in organizing data—an H.D.S. Beagle requirement that was
recognized during the prototype's development period. As design problems are unique
combinations of parameters and constraints, the research team found that the most efficient
method by which to extract user-defined parameter ranges, level settings, and formulae for
calculation was through the generation of an Excel template formatted for H.D.S. Beagle. The
formatted template provides flexibility and extensibility necessary to accommodate the broadest
range of early stage design problems. In particular, this enables the customization of the SPC and
NPV formulae, in order to ensure the suitability of these settings for each design problem. A
description of the generated Excel template for H.D.S. Beagle is provided in Table 4-1. As H.D.S.
Beagle is still in the prototype development phase, it is likely that a generated User Interface
(UI) in Revit would eliminate the need for Excel by providing these functions directly.

[i] The Conceptual Energy Analysis (CEA) functionality utilized in this research was originally
adopted from a Beta program, Autodesk® Project Vasari (Vasari), which was later included in
Autodesk® Revit Architecture® 2012 and all subsequent versions. During the prototype development
period, a separate version of H.D.S. Beagle was originally developed for both Vasari and Revit,
with the critical difference being the communication method necessary for interaction with GBS.
While the overall functionality of both versions remains consistent and the communication method
used by Vasari is considered more favorable, due to the instability of the Vasari Beta, the Revit
version of H.D.S. Beagle is utilized to generate the data needed by this research. Therefore, all
references to H.D.S. Beagle refer exclusively to the Revit version and can be considered
representative of the most updated available functionality of the prototype.
TABLE 4-1: H.D.S. BEAGLE EXCEL TEMPLATE WORKSHEETS AND FUNCTIONS.
GeometryParam: Defines the parameter names and ranges of the modifiable parameters that relate
directly to architectural geometry.
LevelSetting: Defines the level settings of each mass geometry.
ProjectConstraints: Defines the project constraints, such as FAR, height, setback, etc. The
application will check the model according to the information provided in this worksheet.
SPCScoreParam: Defines the parameter information that will be used in the SPCFormula worksheet.
SPCFormula: Defines the calculation method of the design compliance score. Currently, the project
uses the difference between the designed space area and the required space area to rank the design.
FinancialParam: Defines the parameter information that will be used in the FinancialProForma
worksheet.
FinancialProForma: Defines the calculation method of the financial performance. Currently, the
project uses Net Present Value (NPV) as the basis of the financial score.
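To illustrate how such a template can drive the automated loop, the following minimal sketch reads
parameter names and ranges from the GeometryParam worksheet (Python with openpyxl, purely
illustrative; H.D.S. Beagle itself reads the template in C# through the Excel API, and the
three-column layout assumed here, name/minimum/maximum, is a simplification):

    from openpyxl import load_workbook

    def read_geometry_params(path):
        # Read parameter names and (min, max) ranges from the GeometryParam
        # worksheet; assumes one parameter per row starting at row 2.
        wb = load_workbook(path, data_only=True)
        ws = wb["GeometryParam"]
        params = {}
        for name, lo, hi in ws.iter_rows(min_row=2, max_col=3, values_only=True):
            if name is not None:
                params[name] = (float(lo), float(hi))
        return params

    # Example: ranges = read_geometry_params("design_problem.xlsx")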
H.D.S. Beagle v20120903
H.D.S. Beagle is developed in C# as a plugin for Revit by using the Revit API, GBS SDK, and Excel
API, as shown in Figure 4-4. This plugin not only serves as a means to integrate the three domains
of interest of this research, but also as a host for the evolutionary engine, enabling it to
automatically extract necessary information from each platform and automate the exploration
process through the application of a custom GA-based multi-objective optimization algorithm.
The following is a summary of the functions provided by H.D.S. Beagle v20120903:
1) Enables user-defined Genetic Algorithm settings, including initial population size, generation
population size, crossover ratio, mutation ratio, maximum generation numbers, and selection size.
2) Generates new design alternatives in Revit within user-designated parametric value ranges and
GA settings.
3) Connects to GBS for energy analysis. This function includes exporting design alternatives as
energy models in gbXML format (*.xml), sending energy models to GBS for energy analysis,
downloading results, and extracting relevant information for EUI and NPV calculations.
4) Calculates the SPC and NPV according to the definitions provided in Excel for all design
alternatives.
5) Updates geometry definitions and generates new design alternatives in Revit, according to the
populated parameter values generated by the optimization algorithm.
6) Provides the option to automatically assign the IDs for all surfaces to enable the exploration of
energy-related properties for individual surfaces of design alternatives.
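The user-defined GA settings listed under function 1 can be pictured as a small configuration
record. A sketch of the idea (Python; the field names mirror the settings listed above, while the
default values shown are illustrative placeholders rather than Beagle's actual defaults):

    from dataclasses import dataclass

    @dataclass
    class GASettings:
        # User-defined GA settings exposed by H.D.S. Beagle (function 1).
        # Default values below are placeholders for illustration only.
        initial_population_size: int = 20
        generation_population_size: int = 10
        crossover_ratio: float = 0.8
        mutation_ratio: float = 0.1
        max_generations: int = 10
        selection_size: int = 6

    # Example: settings = GASettings(max_generations=20)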
FIGURE 4-4: H.D.S. BEAGLE AUTOMATION LOOP DEVELOPED THROUGH C# AS A PLUG-IN FOR REVIT, USING
REVIT API, EXCEL API, AND GBS SDK.
4.3.2 THE IMPLEMENTATION OF A GA-BASED MULTI-OBJECTIVE OPTIMIZATION IN H.D.S. BEAGLE
The objective of EEPFD is to efficiently search and vary modifiable variables with the intent of
minimizing energy use intensity and maximizing net present value while adhering to project
design requirements.
One brute force solution is to traverse all possible combinations of the values of all design
variables in order to find ones meeting the designated optimal characteristics. However, such an
approach is time consuming by nature and the time requirement increases exponentially as the
number of design variables increases. For example, if a design problem is reduced to 10 design
parameters and 13 energy parameters with just a choice between 2 options per parameter, then
the solution space for this design would include 2^((10+13)) = 8,388,608 options in need of
analysis. If each solution takes 3 minutes to analyze, then a total of approximately 17,497 days
will be needed to analyze all potential solutions. However, in real design problems, parametric
options of each parameter typically exceed two, with solution pools increasing exponentially in
response. Therefore, the ability to isolate and identify optimal solutions in a more efficient
manner than the brute force solution is the main goal of this research.
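The arithmetic behind this estimate can be reproduced in a few lines (Python):

    # Exhaustive-search cost for a design problem with binary parameters.
    design_params, energy_params = 10, 13
    options_per_param = 2
    minutes_per_analysis = 3

    solution_space = options_per_param ** (design_params + energy_params)
    total_days = solution_space * minutes_per_analysis / 60 / 24

    print(solution_space)     # 8388608
    print(round(total_days))  # 17476 days, i.e., nearly 48 years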
In response, this research adopted and applied a genetic algorithm (GA) as the foundation for
EEPFD. As previously introduced, GAs are one of the optimization methods that are considered
effective at implementing a random search approach. By emulating the principles of evolution
through natural selection, when confronted with complex problems, GAs are able to search for
optimal solutions by adopting a “survival of the fittest” principle through the process of
reproduction, crossover, mutation, and selection. However, before a genetic algorithm can be
applied to generating a solution for any problem, the problem(s) must first be translated into a
series of objective functions defined through a mathematical format. Based on the problem itself,
potential solutions are represented by chromosomes and are coded into genes. The fitness of
each individual solution is then evaluated accordingly against the objective functions, followed by
the GA’s repetition of selection, reproduction, crossover, and mutation of surviving elements
(referred to as “individuals”) until termination criteria are met. Figure 4-5 illustrates the GA
process implemented in this research. The following sections provide an in-depth description of
critical key components, namely objective functions, problem formulation, encoding, population,
GA operators, evaluation, and termination criteria.
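To make this loop concrete before the detailed component descriptions, the following condensed
sketch shows the generic GA cycle of initialization, evaluation, selection, crossover, and mutation
(Python; deliberately simplified, with a single-objective placeholder in the selection step.
Beagle's actual operators and its multi-objective handling are described in the sections that
follow, and evaluate here stands in for the full Revit/GBS analysis):

    import random

    def run_ga(bounds, evaluate, pop_size=20, generations=10,
               crossover_ratio=0.8, mutation_ratio=0.1):
        # Generic GA loop: initialize, evaluate, select, reproduce, repeat.
        # bounds: dict mapping parameter name -> (min, max).
        def random_individual():
            return {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):                   # termination criterion
            scored = sorted(population, key=evaluate)  # placeholder fitness ranking
            parents = scored[:pop_size // 2]           # selection ("survival of the fittest")
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                if random.random() < crossover_ratio:  # crossover
                    child = {k: random.choice((a[k], b[k])) for k in bounds}
                else:
                    child = dict(a)
                if random.random() < mutation_ratio:   # mutation
                    k = random.choice(list(bounds))
                    child[k] = random.uniform(*bounds[k])
                children.append(child)
            population = parents + children
        return population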
FIGURE 4-5: THE WORKING PRINCIPLE OF EEPFD'S GA-BASED MULTI-OBJECTIVE OPTIMIZATION FRAMEWORK.
4.3.2.1 OBJECTIVE FUNCTIONS
The objective functions selected for this research are divided into spatial programming
compliance, energy performance, and financial performance. The spatial programming
compliance (SPC) score evaluates the meeting of the project-defined program requirements by a
generated design alternative. The energy use intensity (EUI) value evaluates the estimated energy
performance of the generated design alternative. Finally, the financial performance is provided in
the format of a schematic net present value (NPV) that is calculated according to the definition of
the financial pro forma for each generated design alternative. The three objective functions can
be formulaically expressed as follows:
S_obj = Max. SPC
E_obj = Min. EUI
F_obj = Max. NPV
where
S_obj = Spatial Programming Compliance Objective Function
E_obj = Energy Performance Objective Function
F_obj = Financial Performance Objective Function
SPC = Spatial Programming Compliance Score
EUI = Energy Use Intensity
NPV = Net Present Value
4.3.2.1.1 Spatial Programming Compliance Objective Function: Spatial Programming Compliance
Score (SPC)
Program requirements are usually the first criteria to be considered by a design project and are
typically a driving factor during the initial stages of the design process. While the flexible nature
of program requirements allows for compromise in the interest of meeting other design goals,
such as minimizing energy consumption (Caldas 2006), this component should be included in any
design exploration process involving tradeoff analysis studies. In response, SPC is selected by this
research as an objective function as a part of the tradeoff analysis. Table 4-2 provides an example
of the SPC calculation, as defined by this research. Input for the SPC calculation is provided
through three sources: (1) project requirements from user; (2) design model information from
generated design alternative; and (3) research-defined calculation formula. The first set of values
is provided by the designer through the Excel template in order to define the project’s spatial
requirements. In the example provided in Table 4-2, the project requirements are defined as
55,000 ft² of office area, 50,000 ft² of hotel, 40,000 ft² of retail, and 22,680 ft² of parking area. The
second set of inputs is extracted automatically by the Beagle from each design alternative,
according to each alternative’s generated spatial programming results. For example, a design
alternative for the requirements in Table 4-2 may have 45,000 ft² of office, 37,000 ft² of hotel,
42,000 ft² of retail, and 21,650 ft² of parking area. The final value set is derived from the input of
the previous two sets into the SPC formula, as defined in Table 4-2, also in an automated fashion.
In this case, the previously outlined design alternative would receive an SPC score of 86.5, thereby
indicating 86.5% compliance with the user-defined design program requirements. Table 4-3
provides an example of the SPC setup for a residential project, demonstrating the versatility of
this calculation method available to the user.
TABLE 4-2: AN EXAMPLE OF SPATIAL PROGRAM PARAMETERS & SCORING FORMULA FOR A MIXED-USE BUILDING
EXPERIMENTAL CASE.
SPATIAL PROGRAM PARAMETERS (VALUE, VALUE SOURCE)
Office Area Requirement: 55000* (1)
Hotel Area Requirement: 50000* (1)
Retail Area Requirement: 40000* (1)
Parking Area Requirement: 22680* (1)
Total Parking Area: Gross Floor Area: Parking** (2)
Total Office Area: Gross Floor Area: Office** (2)
Total Retail Area: Gross Floor Area: Retail** (2)
Total Hotel Area: Gross Floor Area: Hotel** (2)
SPATIAL PROGRAMMING COMPLIANCE SCORE FORMULA (3)
a = abs(Total Office Area - Office Area Requirement)/Office Area Requirement
b = abs(Total Hotel Area - Hotel Area Requirement)/Hotel Area Requirement
c = abs(Total Retail Area - Retail Area Requirement)/Retail Area Requirement
d = abs(Total Parking Area - Parking Area Requirement)/Parking Area Requirement
FINAL SCORE = 100*(1-(a+b+c+d)/4)
*value obtained from the Excel template (*.xlsx) defined by the user for each project.
**value obtained from the design model (*.rvt).
TABLE 4-3: AN EXAMPLE OF SPATIAL PROGRAM PARAMETERS & SCORING FORMULA FOR A RESIDENTIAL
BUILDING EXPERIMENTAL CASE.
SPATIAL PROGRAM PARAMETERS (VALUE, VALUE SOURCE)
Bedroom Area Requirement: 1020* (1)
Living Area Requirement: 1120* (1)
Bathroom Area Requirement: 368* (1)
Kitchen Area Requirement: 216* (1)
Garage Area Requirement: 528* (1)
Total Bedroom Area: Gross Floor Area: Bedroom** (2)
Total Living Area: Gross Floor Area: Living** (2)
Total Bathroom Area: Gross Floor Area: Bathroom** (2)
Total Kitchen Area: Gross Floor Area: Kitchen** (2)
Total Garage Area: Gross Floor Area: Garage** (2)
SPATIAL PROGRAMMING COMPLIANCE SCORE FORMULA (3)
a = abs(Total Bedroom Area - Bedroom Area Requirement)/Bedroom Area Requirement
b = abs(Total Living Area - Living Area Requirement)/Living Area Requirement
c = abs(Total Bathroom Area - Bathroom Area Requirement)/Bathroom Area Requirement
d = abs(Total Kitchen Area - Kitchen Area Requirement)/Kitchen Area Requirement
e = abs(Total Garage Area - Garage Area Requirement)/Garage Area Requirement
FINAL SCORE = 100*(1-(a+b+c+d+e)/5)
*value obtained from the Excel template (*.xlsx) defined by the user for each project.
**value obtained from the design model (*.rvt).
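Using the example values from Table 4-2, the SPC calculation can be verified directly (Python;
parameter names follow the table):

    def spc_score(required, actual):
        # SPC = 100 * (1 - mean relative deviation from the program requirements).
        deviations = [abs(actual[k] - required[k]) / required[k] for k in required]
        return 100 * (1 - sum(deviations) / len(deviations))

    required = {"office": 55000, "hotel": 50000, "retail": 40000, "parking": 22680}
    actual   = {"office": 45000, "hotel": 37000, "retail": 42000, "parking": 21650}
    print(round(spc_score(required, actual), 2))  # 86.57, the ~86.5 compliance cited above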
4.3.2.1.2 Energy Performance Objective Function: Energy Use Intensity (EUI)
As the goal of this research is to assist with early design decision-making by providing energy
performance feedback, the calculated energy use intensity (EUI) is selected, as it reflects the
estimated overall building energy consumption normalized by floor area. The DOE-2.2 simulation
engine, implemented
through Autodesk® Green Building Studio® web service, is employed for these building energy
performance simulations. However, it should be noted that relevant energy-related settings, such
as space types, construction materials, etc., are provided through the UI of Revit’s CEA. All
relevant options and energy-related properties are provided in Appendix C.1 to Appendix C.3 [i].
FIGURE 4-6: UTILIZED PROCESS FOR OBTAINING EUI VALUES IN EEPFD.
[i] According to the software development team of Revit's CEA, the assumptions of the energy simulation context are
based on the following: “Schedules (e.g. Occupancy) are based on the California Non-residential New Construction
Baseline Study 1999. Envelope thermal characteristics, Lighting Power Density, and HVAC efficiency are based on
ASHRAE 90.1 2007 and ASHRAE 90.2 2007. Equipment power density & Domestic Hot Water loads are derived from the
California 2005 Title 24 Energy Code. Occupancy density and ventilation values come from ASHRAE 62.1-2007. The
HVAC defaults for building type, size, and other miscellaneous building characteristics are based on the 2003
Commercial Buildings Energy Consumption Survey (CBECS).” (Smith, Bernhardt, and Jezyk 2011).
Figure 4-6 illustrates the process used in this research to obtain the energy performance
simulation results. This process relies on the automatic translation of the geometric model into
an energy model, which can, in turn, be analyzed by the simulation engine. This allows for an easy
transition between the geometrically generated design alternative and the simulation engine.
Once the energy model is provided to the web service and the analysis has been completed, the
energy simulation results can be downloaded for review.
For the energy simulation performed for each design alternative, a series of values of interest are
generated and are extracted by H.D.S. Beagle to contribute to the evaluation of the performance
of each design alternative. Despite the abundance of data available, this research focuses on
utilizing the total energy use intensity, as provided by the simulation engine. However, values
regarding expected electricity usage, cost of electricity, fuel usage, and fuel costs are also
extracted to contribute to the NPV calculations.
4.3.2.1.3 Financial Performance Objective Function: Net Present Value (NPV)
The primary goal of including this objective is to provide the means by which to evaluate potential
design alternatives, according to their estimated costs and prospective income. For this purpose,
a net present value formula is utilized to provide a design alternative’s financial performance
through a broad estimate of expected construction costs, operation costs, and generated revenue
values. The information pertaining to each design alternative is extracted from both the
generated geometry and the simulated energy analysis results. Construction costs are derived
from combining calculated material quantities from the generated geometry, with their
respective user-provided per unit prices. Operation costs are calculated by combining expected
fuel and electricity usage from the energy simulation results, with per unit costs provided by the
user. Finally, prospective income is derived from a user-defined value for each square foot of
specified program, combined with the calculated program quantities, based on the generated
geometry. Table 4-4 summarizes these elements and the means by which they are implemented
to generate the NPV value of each design alternative.
The list of available parameters contributing to the estimate of the construction cost of a design
alternative is generated by the research, based on available extractable information and
perceived relevancy. However, this list is flexible and extensible in nature, able to contract and
expand according to user preferences and the level of market cost detail available; the same
applies to the contributing parameters for the operation cost estimates. Table 4-5 provides an
exhaustive list of related financial parameters and their default settings, as listed in the Excel
template worksheet. This worksheet is used by H.D.S. Beagle to obtain the financial parameter
values and calculate the NPV accordingly. It should be noted that, due to this flexible nature, the
accuracy of the resulting NPV score is highly dependent on the user input for the specific location
and the timing of the market conditions.
TABLE 4-4: FINANCIAL SETTINGS, PARAMETERS, AND FORMULAE IN THE FINANCIAL MODEL OF H.D.S. BEAGLE.
COST CATEGORY FINANCIAL PARAM NAME FORMULA
CONSTRUCTION COST
Land Acquisition Unit Land Acquisition Cost*Lot Size
Structure Unit Structure Cost*Gross Volume
Floor Construction Unit Floor Construction Type Cost*Mass Floor Area
Slab Construction Unit Slab Construction Type Cost*Mass Slab Area
Exterior Wall Construction Unit Exterior Wall Construction Cost*Mass Exterior Wall Area
Exterior Wall - Underground Construction Unit Exterior Wall - Underground Construction Cost*Mass Exterior Wall - Underground Area
Interior Wall Construction Unit Interior Wall Construction Cost*Mass Interior Wall Area
Glazing Construction Unit Glazing Type Cost*Mass Glazing Area
Roof Construction Unit Roof Construction Cost*Mass Roof Area
Skylight Construction Unit Skylight Construction Cost*Mass Skylight Area
Shade Construction Unit Shade Construction Cost*Mass Shade Area
Circulation Cost Unit Circulation Cost*Gross Floor Area
HVAC System Cost Unit HVAC System Cost*Gross Volume
OPERATION COST
Annual Electricity Cost Electricity EUI*Electrical Cost*Gross Floor Area
Annual Fuel Cost Fuel EUI*Fuel Cost*Gross Floor Area
Annual Operating Cost Unit Operation Cost*Gross Floor Area
REVENUE
Annual Lease Payment Unit Lease Payment*Gross Floor Area
CASH FLOW
Cash Flow Time Span Fixed User Defined Value
Discount Rate Fixed User Defined Value
FINANCIAL SCORE
NPV = ( \sum_{t=1}^{T} C_t / (1 + r)^t ) - C_0
where
T = Cash Flow Time Span
r = Annual Rate of Return (Discount Rate)
C_0 = Construction Cost
C_t = Revenue - Operation Cost
Note:
Italic text – Value retrieved from user modifiable template.
Underline text – Value retrieved from the conceptual mass building (*.rvt).
Italic underline bold text – Value retrieved from the energy analysis results.
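As a worked illustration of the financial score defined in Table 4-4, the following minimal Python sketch computes the NPV under the simplifying assumption that annual revenue and operation cost remain constant over the cash flow time span; all names and the sample figures are illustrative only.

def net_present_value(construction_cost, annual_revenue,
                      annual_operation_cost, discount_rate, time_span):
    # NPV = (sum over t = 1..T of C_t / (1 + r)^t) - C_0,
    # with C_t = Revenue - Operation Cost held constant each year.
    annual_cash_flow = annual_revenue - annual_operation_cost
    discounted = sum(annual_cash_flow / (1 + discount_rate) ** t
                     for t in range(1, time_span + 1))
    return discounted - construction_cost

# Default cash flow settings from Table 4-5: 10% rate, 15-year period.
npv = net_present_value(construction_cost=5_000_000,
                        annual_revenue=900_000,
                        annual_operation_cost=150_000,
                        discount_rate=0.10, time_span=15)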
TABLE 4-5: ADJUSTABLE FINANCIAL PARAMETERS IN H.D.S. BEAGLE.
COST CATEGORIES FINANCIAL PARAM NAME AVAILABLE OPTION* DEFAULT VALUE
CONSTRUCTION COST
Unit Land Acquisition Cost 2400
Unit Structure Cost 40
Unit Floor Construction Type Cost
Lightweight Construction – High Insulation 100
Lightweight Construction – Typical Insulation 80
Lightweight Construction – Low Insulation 70
Lightweight Construction – No Insulation 60
High Mass Construction – Frigid Climate Slab Insulation 85
High Mass Construction – Cold Climate Slab Insulation 90
High Mass Construction – No Insulation 60
Unit Slab Construction Type Cost
High Mass Construction – Frigid Climate Slab Insulation 120
High Mass Construction – Cold Climate Slab Insulation 100
High Mass Construction – No Insulation 90
Unit Exterior Wall Construction Cost
Lightweight Construction – High Insulation 80
Lightweight Construction – Typical Cold Climate Insulation 70
Lightweight Construction – Typical Mild Climate Insulation 50
Lightweight Construction – Low Insulation 40
Lightweight Construction – No Insulation 30
High Mass Construction – High Insulation 100
High Mass Construction – Typical Cold Climate Insulation 90
High Mass Construction – Typical Mild Climate Insulation 85
High Mass Construction – No Insulation 82
Unit Exterior Wall - Underground Construction Cost
High Mass Construction – High Insulation 90
High Mass Construction – Typical Cold Climate Insulation 80
High Mass Construction – Typical Mild Climate Insulation 60
High Mass Construction – No Insulation 50
Unit Interior Wall Construction Cost
Lightweight Construction – No Insulation 20
High Mass Construction – No Insulation 30
Unit Glazing Type Cost
Single Pane Clear - No Coating 20
Single Pane - Tinted 25
Single Pane - Reflective 27
Double Pane Clear – No Coating 30
Double Pane - Tinted 35
Double Pane - Reflective 37
Double Pane Clear – LowE Cold Climate, High SHGC 40
Double Pane Clear – LowE Hot Climate, Low SHGC 50
Double Pane Clear - High Performance, LowE, High Tvis, Low SHGC 60
Triple Pane Clear - LowE Hot or Cold Climate 65
Quad Pane Clear - LowE Hot or Cold Climate 80
Unit Roof Construction Cost
High Insulation - Cool Roof 100
High Insulation - Dark Roof 90
Typical Insulation - Cool Roof 85
Typical Insulation - Dark Roof 80
Low Insulation - Cool Roof 55
Low Insulation - Dark Roof 50
No Insulation - Dark Roof 30
Unit Skylight Construction Cost
Single Pane Clear - No Coating 35
Single Pane - Tinted 40
Single Pane - Reflective 43
Double Pane Clear – No Coating 55
Double Pane - Tinted 60
Double Pane - Reflective 63
Double Pane Clear – LowE Cold Climate, High SHGC 65
Double Pane Clear – LowE Hot Climate, Low SHGC 80
Double Pane Clear - High Performance, LowE, High Tvis, Low SHGC 90
Triple Pane Clear - LowE Hot or Cold Climate 100
Quad Pane Clear - LowE Hot or Cold Climate 120
Unit Shade Construction Cost Basic Shade 20
Unit Circulation Cost 30
Unit HVAC System Cost 2-Pipe Fan Coil System, Chiller 5.96 COP, Boilers 84.5 eff 20
4-Pipe Fan Coil System, Chiller 5.96 COP, Boilers 84.5 eff 30
11 EER Packaged VAV, 84.5% boiler heating 40
12 SEER/0.9 AFUE Split/Packaged Gas, 5-11 Ton 50
12 SEER/7.7 HSPF Split Packaged Heat Pump 45
12 SEER/8.3 HSPF Packaged Terminal Heat Pump (PTHP) 65
Central VAV, Electric Resistance Heat, Chiller 5.96 COP 70
Central VAV, HW Heat, Chiller 5.96 COP, Boilers 84.5 eff 80
Residential 14 SEER/0.9 AFUE Split/Packaged Gas <5.5 ton 18
Residential 14 SEER/8.3 HSPF Split/Packaged Heat Pump 15
Residential 17 SEER/9.6 HSPF Split HP <5.5 ton 22
Underfloor Air Distribution 30
OPERATION COST Unit Operation Cost 2
REVENUE** Unit Lease Payment:Retail 700
Unit Lease Payment:Office 600
Unit Lease Payment:Hotel 500
Unit Lease Payment:Parking 300
CASH FLOW Annual Interest Rate 10%
Investment Period 15
*The available options are derived from the available conceptual construction options from the CEA platform.
**This category can be added by the user according to each project.
4.3.2.2 DESIGN PROBLEM FORMULATION
In order to apply a GA to a problem, a formal mathematical definition of the problem is needed.
Once the relationship between a problem and its respective solution pool is defined, the
systematic exploration and evaluation of the solution pool can be achieved. Parameters applicable
to this process can be divided into two primary categories—(1) those possessing a range of
acceptable values that define the solution pool of interest, and (2) those possessing given values
that are used to measure and evaluate individuals falling within that defined pool of interest.
Parameters possessing a range of acceptable values that are used to define the solution pool of
interest in the context of this research can be further divided into Design Parameters and Energy
Setting Parameters. Design Parameters drive geometric configurations and can be treated as
design-specific parameters to be defined by the user. Energy Setting Parameters, on the other
hand, are available through the Revit CEA settings platform. In addition, these two subcategories
can be considered genotype parameters, as they are used to define the “genes” later used by
EEPFD’s customized GA. Design Parameters and Energy Setting Parameters are described in detail
in the following section.
Parameters possessing a given value are used to measure and evaluate the performance of
individual potential design solutions according to EEPFD’s defined objective functions. These
parameters can be further divided into Spatial Program Parameters and Financial Pro Forma
Parameters, which are used by the SPC and NPV, respectively, as previously described. These
values are fixed in nature and are provided by the user.
4.3.2.2.1 Design Geometry Parameters
Design Geometry Parameters are used to define the associative parametric model and provide all
form-driving parameters, along with their acceptable ranges. The extent and configuration of
these parameters is dependent on the user preferences and should be configured with the
understanding that specifying wider ranges or describing the relationships in more depth would
yield a broader and deeper solution pool of interest.
This set of parameters can be further divided into three sections: driving, driven, and fixed. Driving
parameters can be considered as independent parameters possessing acceptable value ranges,
such as building height, number of levels, tapering or twisting factors, orientation, etc. Driven
design parameters do not possess a range of acceptable values and are directly dependent on
driving parameters. For example, retail space is defined as ground level, regardless of the resulting
number of office floors above. Fixed design parameters also possess only a single given value, and
are independent of driven and driving parameters. Figure 4-7 illustrates an example of parametric
models, where driving, driven, and fixed Design Geometry Parameters are all used to control form
and quantities. With respect to the GA, only driving parameters can later be used as “genes”.
In particular, consideration must be made regarding settings that affect both the resulting
geometric configuration and the energy performance of the potential design solution. For
example, space occupancy has a considerable impact on the expected energy usage. Therefore,
design geometry parameters must be established in such a way as to provide an association with
all generated levels and a space program type per level, in order to ensure consistency in the EUI
calculations.
4.3.2.2.2 Energy Setting Parameters
Energy setting parameters have a direct impact on the expected energy usage of the potential
design solution, such as quantity of glazing, or thermal properties of exterior wall construction.
However, the specific parameters of this type available to this research are limited by those
available through Revit and are described in Table 4-6.
As with Design Geometry Parameters, the level of depth explored by the Energy Setting
Parameters is directly dependent on the user. For example, the overall amount of glazing can be
applied to the building as a whole, or in varying amounts on specifically oriented surfaces, which
will have a varying impact on the calculated objective functions.
FIGURE 4-7: EXAMPLE OF PARAMETRIC MODEL AND DIVISION OF DESIGN GEOMETRY PARAMETERS INTO
DRIVING, DRIVEN, AND FIXED CATEGORIES.
TABLE 4-6: ENERGY SETTING PARAMETERS AVAILABLE FOR EXPLORATION BY NAME, TYPE, RANGE, AND
CHANGEABILITY FROM GLOBAL TO LOCAL SURFACE VALUES.
ID Energy Setting Parameter Name Parameter Type Variation Range/(Unit)
E1 Conceptual Construction
Enumeration NA
E1-1 Mass Exterior Wall
E1-2 Mass Interior Wall
E1-3 Mass Exterior Wall - Underground
E1-4 Mass Roof
E1-5 Mass Floor
E1-6 Mass Slab
E1-7 Mass Glazing
E1-8 Mass Skylight
Mass Shade
Mass Opening
E2 Target Percentage Glazing Real number [0,1]
E3 Target Sill Height Real number [0,*] (Project Length Unit)
E4 Shade Depth Real number [0,*](Project Length Unit)
E5 Target Percentage Skylight Real number [0,1]
E6 Skylight Width & Depth Real number [0,*](Project Length Unit)
*Value dependent on input from user
4.3.2.3 GA ENCODING
In applying a GA, there are three typical coding schemes—binary, numerical, and symbolic—with
the binary coding scheme being the original and most common. In the binary coding system, a
genotype may be represented as 001001101, whereby individual bits can be considered as “genes”
and binary strings as “chromosomes” of the genotype (Maher and Poon 1996). Therefore, in a GA
with a binary coding scheme, it is necessary to design a means of encoding features and
individuals into binary strings.
In the context of a design problem within H.D.S. Beagle, each individual stands for an architectural
geometric design alternative; genes thereby correspond to design variables. However, when this
approach is applied to multidimensional, high-precision numerical problems, some drawbacks are
encountered. Since each design has its own particular geometric design constraints, design
variables are restricted to fluctuate within a previously defined range. As a result, if design
variables are encoded as simple binary strings, it cannot be guaranteed that subsequent design
variables in generated offspring will adhere to previously provided ranges throughout the
crossover and mutation process. Therefore, this research employs the numerical coding scheme,
as it was determined to be the most suitable to supplying the flexibility needed by EEPFD. Such
an encoding method is widely employed as a means of solving practical problems (Beyer and
Schwefel 2002). The detailed comparison of the classic GA and the version adopted in H.D.S.
Beagle is shown in Table 4-7.
Through the numerical encoding scheme, each parameter of interest possessing a user-defined
range of acceptable values can be considered as a “gene”. A “chromosome” is then defined as a
series of “genes” with a value falling within acceptable range assigned to each “gene”. Therefore,
a design with 13 parameters of interest, as described in Table 4-8, will have design alternatives
with a chromosome comprised of 13 values, one assigned for each gene within the set exploration
range.
In the application of the numerical coding scheme, three types of parametric values could be
encountered: continuous numbers (real), discrete numbers (integers), and enumerations (e.g.,
different construction types (strings)). In addition, parameter values are confined to user-
established constraints during an engagement of the GA to yield valid results. Continuous
numbers can be considered as real numbers, with their values utilized exactly as generated by the
GA with no modification. For example, if the TwistAngle parameter, as listed in Table 4-8, is
defined as possessing a range of 0° to 90°, any value generated through either mutation or
crossover falling within this range would be considered valid, be it 54.3° or 45°. Discrete numbers,
however, are confined to integer values, despite the actual value generated by the GA. For
example, the parameter entitled Level#Hotel in Table 4-8 has a designated value of 4 to 6 stories.
However, as only integer values are of practical interest in this category, if a value of 5.3 is
obtained during the GA operation, this parameter would be assigned the nearest integer value,
i.e., 5. A similar approach is used for enumeration or string parameters. Enumeration parameters
are parameters that, instead of requesting a value, request that a selection be made from a text-
based list. For example, an enumeration parameter might only accept a construction type
selected from the provided options. In this case, an integer value is assigned to each option and
treated as such during the GA operation. After the operation is complete, the resulting integer
value is coordinated with the original text-based selection. For example, if there are 10 choices of
construction types available for Mass Glazing and a value of 3.2 is obtained through the GA
operation, then the 3rd choice on the construction type list would be assigned to this gene.
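The handling of the three parametric value types can be summarized in a short decoding routine. Below is a minimal Python sketch, assuming each gene is described by a type tag and, for enumerations, a 1-based option list; the structure is illustrative rather than H.D.S. Beagle's actual implementation.

def decode_gene(raw_value, gene_type, options=None):
    # "real": used exactly as generated (e.g., a TwistAngle of 54.3).
    # "integer": rounded to the nearest integer (5.3 -> 5 levels).
    # "enum": rounded to the nearest 1-based index into the option
    #         list (3.2 -> the 3rd construction type).
    if gene_type == "real":
        return raw_value
    if gene_type == "integer":
        return round(raw_value)
    if gene_type == "enum":
        index = min(max(round(raw_value), 1), len(options))
        return options[index - 1]
    raise ValueError("unknown gene type: " + gene_type)

glazing = ["Single Pane Clear - No Coating", "Single Pane - Tinted",
           "Single Pane - Reflective"]        # truncated option list
decode_gene(5.3, "integer")                   # -> 5
decode_gene(3.2, "enum", glazing)             # -> "Single Pane - Reflective"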
TABLE 4-7: THE COMPARISON BETWEEN CLASSIC GENETIC ALGORITHM AND THE APPLIED GA IN H.D.S.
BEAGLE
Genes: Classic GA – binary bit, domain {0, 1}; GA in H.D.S. Beagle – Design & Energy Parameter, domain: real numbers
Chromosome: Classic GA – binary string, e.g., 110101101; GA in H.D.S. Beagle – Design & Energy Parameter list, e.g., G1G2G3G4G5G6G7G8G9
Coding: Classic GA – 0 and 1 can be used as values for any genes in the chromosome; GA in H.D.S. Beagle – different Design & Energy Parameters have different value ranges
Order: Classic GA – the order of genes within chromosomes is important; GA in H.D.S. Beagle – the order of genes is non-influential
TABLE 4-8: AN EXAMPLE OF THE LIST OF PARAMETERS, TYPES, AND THEIR EXPLORATION RANGES OF A GA RUN
WITH 13 PARAMETERS OF INTEREST.
Parameter Name Type Unit Exploration Range
G1 Level#Hotel Integer N/A [4, 6]
G2 Level#Office Integer N/A [4, 6]
G3 Level#Retail Integer N/A [4, 6]
G4 Level#Parking Integer N/A [1, 3]
G5 TwistAngle Real ° [0, 90]
G6 TopSetback Real ft [1, 10]
G7 ScaleFactor Real N/A [0.8, 1.25]
G8 CanyonWidth Real ft [12, 30]
G9 BaseSetback Real ft [6, 15]
G10 Target Percentage Glazing Real N/A [0.2, 0.83]
G11 Shade Depth Real ft [0, 4.5]
G12 Target Percentage Skylight Real N/A [0, 0.45]
G13 Conceptual Construction – Mass Glazing String N/A 1. Single Pane Clear - No Coating
2. Single Pane - Tinted
3. Single Pane - Reflective
4. Double Pane Clear – No Coating
5. Double Pane - Tinted
6. Double Pane - Reflective
7. Double Pane Clear – LowE Cold Climate, High SHGC
8. Double Pane Clear – LowE Hot Climate, Low SHGC
9. Double Pane Clear - High Performance, LowE, High
Tvis, Low SHGC
10. Triple Pane Clear - LowE Hot or Cold Climate
FIGURE 4-8: A SAMPLE DESIGN ALTERNATIVE WITH A “CHROMOSOME” COMPOSED OF 13 “GENES”
CORRESPONDING TO TABLE 4-8. AVAILABLE SELECTION OF “GENE” VALUES DEFINED THROUGH THE
INITIAL POPULATION SIZE OF 10 WITH AN EVEN DISTRIBUTION OF PARAMETRIC VALUES ACROSS
USER-DEFINED RANGES OF INTEREST FOR EACH “GENE”. HIGHLIGHTED VALUES ARE SPECIFIC TO THE
PROVIDED GRAPHIC EXAMPLE.
4.3.2.4 POPULATION METHODS
In this research, three population mechanisms are used by the GA, one for the initial population
(gen = 0); one for the first generation (gen = 1); and one for populating all subsequent generations
(gen > 1). This approach is used to ensure that the initial population provides a full spectrum of
possible offspring before proceeding with optimization. For the initial population (gen = 0), the
GA is instructed to evenly assign parameter values among the resulting offspring, in order to
ensure that all gene values of interest are available for further exploration. For example, in Table
4-8, G8 (entitled CanyonWidth) dictates the width between the two resulting towers. The user-
defined range of interest has been designated as 12 ft to 30 ft, with the initial population to
contain ten offspring. Therefore, the algorithm will assign values for G8 in increments of 2 ft to
cover the full range of interest within the initial population (gen = 0). If the initial population
contained 15 offspring, then the explored value for G8 would be in increments of 1.2 ft. It should
be noted that the user-defined initial population should be substantial enough in size to ensure
that all values of interest are included in the initial population. For example, as there are 10
options for G13 in Table 4-8, the initial population size should exceed a value of 10 offspring.
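The even assignment of initial values can be sketched as follows. This minimal Python example spreads each gene's values uniformly across its user-defined range; dividing by (population size - 1) reproduces the 2 ft increments of the 10-offspring CanyonWidth example above, though the exact increment rule used by H.D.S. Beagle is an assumption here.

def initial_population(gene_ranges, population_size):
    # Generate gen = 0 by evenly distributing each gene's values
    # across its [low, high] range, one increment per offspring.
    population = []
    for i in range(population_size):
        individual = []
        for low, high in gene_ranges:
            step = (high - low) / (population_size - 1)
            individual.append(low + i * step)
        population.append(individual)
    return population

# CanyonWidth (G8) explored from 12 ft to 30 ft with 10 offspring:
# values 12, 14, 16, ..., 30 (increments of 2 ft).
gen0 = initial_population([(12, 30)], 10)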
To ensure the depth of diversity available to all subsequent potential offspring, the populating of
the first generation (gen = 1) is designed so that all individuals resulting from the initial generation
(gen = 0) have a chance to become a parent to the next generation. Beyond the initial and first
generations (gen > 1), all ensuing parents are chosen via tournament selection, coupled with
elitism favoring higher performing or more “fit” individuals. In other words, the more fit the
individual, the higher the probability of its selection as a parent for the next generation. In
addition, for all subsequent generations (gen > 0), crossover and mutation operators are used, as
opposed to the even distribution of potential values. However, the parameters affected by these
operators are randomly chosen, thereby allowing the same parents to potentially have multiple
and varying offspring. This ensures the diversity of the generated offspring (gen > 0).
4.3.2.5 GA OPERATORS
4.3.2.5.1 Selection
Two selection methods are used in this research in order to identify parents for successive
generations. The first method is reserved solely for application to the first generation (gen = 1),
where each individual from the initial population (gen = 0) is guaranteed to be used as a parent
for the first generation (gen = 1) at least once. A tournament selection method is used for
identifying parents for all subsequent generations (gen > 1).
A commonly used GA selection method is based on the use of fitness-proportionate selection,
where the probability of each individual being selected in based on its perceived fitness (Baker
1987). Thus, individuals with higher rankings are more likely to be selected than those ranked
lower, and so on. However, this method may lead to early convergence by premature super
individuals (Rudolph 1994). To avoid this drawback, a tournament selection method is chosen
instead, as it allows the tournament groups to be composed of randomly selected individuals
within the Parental Pool. All members in the tournament group are then evaluated, where the
highest-ranked individual is selected as a parent for the next generation (Miller and Goldberg
1995). This process is repeated until all necessary parents have been identified for the next
generation. In the case of EEPFD, the Parental Pool is not confined to only the members of the
most recent generation, but also includes all previously selected parents of previous generations.
This approach ensures that, if no improvement is perceived in the offspring, a high-performing
individual will survive subsequent generations. While the tournament selection method can be
considered slower and more complex, it allows for increased diversity without relying heavily on
random selection, while favoring “fitter” individuals. This is important to H.D.S. Beagle, as, while
a particular combination of genes may yield poor results in one generation, individual genes with
the potential to perform exceptionally in another combination may be identified in later
generations. A pseudo code for the tournament logic is described below.
Algorithm: Tournament Selection
Requirements:
Ranked Offspring Pool to be selected – offspring_pool
Selection Size – S
Tournament Size – k
Output:
Selected offspring— selected_pool
1. Initialize N, the number of selected offspring, to 0.
2. While N != S
3. tournament_pool ← RandomSelection(offspring_pool, k)
4. winner ← HighestRankingSelection(tournament_pool)
5. selected_pool.Add(winner)
6. offspring_pool.Delete(winner)
7. N ← N + 1
8. End while loop
9. Return selected_pool
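A runnable Python version of the tournament-selection pseudo code above might read as follows; it assumes each pool entry carries its Pareto rank, where a lower number indicates a fitter individual (rank 1 being the fittest), as defined in Section 4.3.2.6.

import random

def tournament_selection(offspring_pool, selection_size, tournament_size):
    # Repeatedly draw a random tournament group and promote its
    # best-ranked member until selection_size parents are chosen.
    pool = list(offspring_pool)              # (individual, rank) pairs
    selected = []
    while len(selected) < selection_size:
        group = random.sample(pool, min(tournament_size, len(pool)))
        winner = min(group, key=lambda entry: entry[1])  # lowest rank wins
        selected.append(winner)
        pool.remove(winner)                  # do not re-select a winner
    return selected

# e.g., pick 4 parents from a ranked pool using tournaments of size 3
ranked = [("design_A", 1), ("design_B", 2), ("design_C", 1),
          ("design_D", 3), ("design_E", 2), ("design_F", 4)]
parents = tournament_selection(ranked, selection_size=4, tournament_size=3)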
4.3.2.5.2 Crossover
As a GA operator, crossover refers to the exchange of parametric values between parents to breed
new combinations of these values in resulting offspring. Two strategies can be used to achieve
recombination, one of which is based on exchanging half or more of the design variables from
both parents to create two new individuals. In the second method, each design variable is
"mediated," with the new value computed as the average of the two initial values obtained from
the parents. Since the initial population (gen = 0) already uniformly distributed gene values across
the search space, the first option is utilized to populate new individuals.
In the traditional use of a GA, there are single or multiple point exchanging methods used with
respect to crossover. In the context of this research, this approach to providing crossover and
recombination of genes based on designated points of distribution for exchange is unable to
accommodate varying quantities of genes, as found in design problems. Instead, a fixed percent
of the available design variables is selected to be exchanged between parents, and is referred to
as the crossover ratio. The user-provided crossover ratio thus indicates the proportion of
parameters available for this exchange. For example, a designated crossover rate of 60% would
mean that 60% of the parameters would be randomly selected for exchange of values between
the parents. This process is illustrated in Figure 4-9, where the exchanged parameters can be
tracked as G1, G2, G3, G4, G9, and G10.
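The crossover-ratio mechanism can be captured in a few lines of Python. The sketch below randomly selects the designated proportion of genes and swaps their values between the two parents; it assumes both parents share the same gene ordering, consistent with Table 4-8.

import random

def crossover(parent_a, parent_b, crossover_ratio=0.6):
    # Swap a randomly chosen crossover_ratio share of gene values
    # between two parents, yielding two new offspring.
    gene_count = len(parent_a)
    swap_count = round(crossover_ratio * gene_count)
    swap_indices = random.sample(range(gene_count), swap_count)
    child_a, child_b = list(parent_a), list(parent_b)
    for i in swap_indices:
        child_a[i], child_b[i] = parent_b[i], parent_a[i]
    return child_a, child_b
# With 10 genes and a 60% ratio, 6 randomly chosen genes are exchanged,
# e.g., G1, G2, G3, G4, G9, and G10 as tracked in Figure 4-9.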
FIGURE 4-9: AN EXAMPLE OF THE RESULTING EXCHANGE OF PARAMETRIC VALUES AS GENERATED THROUGH THE
GA CROSSOVER MECHANISM.
4.3.2.5.3 Mutation
Mutation functions also exist to allow for diversification of the population and the formation of
variants. Unlike the crossover operator, which is limited in selection to the exact parametric values
provided by the parents, the mutation mechanism allows for the introduction of new parametric
values, provided that they do not breach the user-provided range of interest. This allows for the
probability of new gene combinations occurring in later generations and provides a means of
increasing the available diversity of the offspring.
The mutation operator is engaged after the crossover process, where the user-provided mutation
ratio determines the probability of the event of mutation being allowed or not for the offspring
of interest. For example, if the mutation ratio is designated as 1%, each offspring has a 1% chance
of a mutation event occurring. If a mutation event is determined to have occurred, then the GA
will randomly select a parameter and determine its value based on the original set range of
interest, thereby replacing the value that would otherwise have been inherited from the parents.
However, it should be noted that the optimal settings for both crossover ratios and mutation
ratios have yet to be determined and are in need of future study.
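A minimal Python sketch of this per-offspring mutation event is shown below; redrawing the mutated value uniformly within its range is an assumption, as the distribution used by H.D.S. Beagle is not specified here.

import random

def mutate(child, gene_ranges, mutation_ratio=0.01):
    # With probability mutation_ratio, replace one randomly chosen
    # gene with a fresh value drawn from its user-defined range.
    if random.random() < mutation_ratio:
        i = random.randrange(len(child))
        low, high = gene_ranges[i]
        child[i] = random.uniform(low, high)  # stays within the range
    return child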
4.3.2.6 EVALUATION
Each generated design alternative is evaluated according to the previously defined objective
functions. Since the multiple performance criteria of interest are often in direct competition, the
search towards the optimum becomes a matter of finding the best compromise through a study
of tradeoffs rather than the identification of a single optimized solution (Coello Coello, Lamont,
and Van Veldhuisen 2007). Therefore, a Pareto ranking method is utilized as an evaluation
mechanism. In the Pareto ranking method, the Pareto-Dominance (p<) concept is used to
compare two individuals. The superiority of one individual over another is decided by comparing
the two individuals’ performance across the multiple objectives. Below is the definition of the
Pareto-Dominance as applied to the previously defined three objective functions of EEPFD:
∀f ∈ {S_obj, E_obj, F_obj}: f(solution_1) ≤ f(solution_2)
∃f ∈ {S_obj, E_obj, F_obj}: f(solution_1) < f(solution_2)
⇒ solution_1 p< solution_2
According to this definition, if solution_1 is partially less than solution_2 (denoted by solution_1 p<
solution_2), then solution_1 dominates solution_2 in the order of rank. For example, if individual A has
the objective scores of (94, 160, 65) and individual B has the scores (97, 102, 82), then individual
B would be considered dominant, or more "fit," than individual A and would be ranked higher,
since individual B's objective scores are all considered more "fit" than individual A's. However, if
individual C has the objective scores of (90, 104, 85) and individual D has the scores (98, 153, 90),
then individuals C and D would be considered incomparable, or unable to dominate each other. In
this example this designation would be made due to individual D having better "fit" scores in SPC
and NPV but not having a better "fit" EUI score than individual C. Since all objective scores are
considered equal in priority, individual C and individual D cannot clearly dominate each other and
so would be assigned the same rank. In this research the ranking of an individual implies the
number of individuals within the same pool which are considered dominant to the individual in
question. Therefore, the fittest individual in a set of offspring would be assigned the ranking of 1,
with all other offspring following suit.
Rank_i = 1 + Num(individuals dominating individual i)
Consequently, higher-ranked individuals are more likely to be selected as a parent for the
reproduction process. All individuals that do not dominate each other are therefore assigned the
same ranking and have equal probability of being selected as parents for the next generation. The
specific Pareto ranking method adopted in this research can be found in Fonseca and Fleming’s
Pareto ranking method (Fonseca and Fleming 1993).
Algorithm: Ranking Calculation
Requirements:
Offspring Pool to be ranked – offspring_pool
Output:
Ranked Offspring Pool— ranked_pool
1. Let R be the current rank number
2. R ← 1
3. While offspring_pool is NOT empty
4. Best_OffspringPool ← Pareto-Ranking(offspring_pool)
5. Best_OffspringPool.Rank ← R
6. offspring_pool.Delete(Best_OffspringPool)
7. ranked_pool.Add(Best_OffspringPool)
8. R ← R + 1
9. End while loop
10. Return ranked_pool
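Combining the dominance definition with the rank formula, the evaluation step can be sketched compactly in Python. The example assumes objective tuples ordered as (SPC, EUI, NPV), with SPC and NPV maximized and EUI minimized, and implements the counting rank Rank_i = 1 + Num(dominators) rather than the iterative front-peeling of the pseudo code above.

def dominates(a, b, maximize=(True, False, True)):
    # True if a is at least as fit as b on every objective and
    # strictly fitter on at least one (Pareto dominance).
    at_least_as_fit = all((x >= y) if up else (x <= y)
                          for x, y, up in zip(a, b, maximize))
    strictly_fitter = any((x > y) if up else (x < y)
                          for x, y, up in zip(a, b, maximize))
    return at_least_as_fit and strictly_fitter

def pareto_rank(scores):
    # Rank_i = 1 + number of individuals that dominate individual i.
    return [1 + sum(dominates(scores[j], scores[i])
                    for j in range(len(scores)) if j != i)
            for i in range(len(scores))]

# Individuals A, B (B dominates A) and C, D (incomparable):
print(pareto_rank([(94, 160, 65), (97, 102, 82),
                   (90, 104, 85), (98, 153, 90)]))  # -> [3, 1, 1, 1]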
4.3.2.7 TERMINATION CRITERIA
Currently, three means of terminating the GA process are available. The first and second are
user-defined, through either a maximum iteration value being provided or a maximum runtime
being reached. The third is triggered when the GA reaches three generations
that have the same optimal result, i.e., no quantifiable improvement or difference is evident in
subsequent offspring. At this point, the GA will determine that the design has reached the optimal
set of solutions and terminate the process.
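These three criteria can be collected into a single guard function; the following Python sketch is illustrative, with only the stagnation threshold of three generations taken from the description above.

def should_terminate(generation, elapsed_hours, stagnant_generations,
                     max_iterations, max_runtime_hours):
    # Stop on max iterations, max runtime, or three successive
    # generations sharing the same optimal result (no improvement).
    return (generation >= max_iterations
            or elapsed_hours >= max_runtime_hours
            or stagnant_generations >= 3)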
4.3.3 H.D.S. BEAGLE IMPLEMENTATION PROCESS
During the technical development period, several hypothetical scenarios are utilized to refine and
determine the workflow of executing H.D.S. Beagle. These scenarios are further introduced in Chapter 5.
According to H.D.S. Beagle v20120903, the execution process can be separated into three main
parts: (1) H.D.S. Beagle Installation; (2) Executable Files Preparation; and (3) Execution of H.D.S.
Beagle. The overall implementation processes consisting of these parts, with detailed steps, is
illustrated in Figure 4-10.
FIGURE 4-10: DIAGRAMMATIC IMPLEMENTATION OF H.D.S. BEAGLE, AS PREPARED BY THE AUTHOR.
PART 1: H.D.S. Beagle Installation
H.D.S. Beagle v20120903 is currently a Revit plug-in and has not been packaged as a self-extracting
executable (*.exe) file that can be automatically installed with one click. As a result, the
installation of H.D.S. Beagle requires the following steps:
a. Unzip the H.D.S. Beagle package. This package contains five files: one Excel template
(*.xlsx), one sample Revit model (*.rvt), two *.dll files (USCAppletGBSAPI.dll and
ICSharpCode.SharpZipLib.dll), and the H.D.S.Beagle.addin file.
b. Relocate the USCAppletGBSAPI.dll and ICSharpCode.SharpZipLib.dll files to the desired
location for storing generated data.
c. Modify the *.dll file path in the H.D.S.Beagle.addin file. The *.dll file path should be changed
to the location where the *.dll files are placed.
d. Place the *.addin file in the Revit Addin folder. For Revit Architecture 2013, the Addin
folder is located at C:\ProgramData\Autodesk\Revit\Addins\2013
e. Run Revit. H.D.S. Beagle should now be available in Revit’s External Tool dropdown list.
It should be noted that, once installed, this process does not require repetition upon further use of
H.D.S. Beagle.
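Step c can also be automated with a small script. The sketch below assumes the *.addin manifest is standard Revit add-in XML whose <Assembly> elements store the *.dll paths; the file locations shown are examples only.

import xml.etree.ElementTree as ET
from pathlib import Path

ADDIN = Path(r"C:\ProgramData\Autodesk\Revit\Addins\2013\H.D.S.Beagle.addin")
DLL_DIR = Path(r"D:\BeagleData")   # where the two *.dll files were relocated

tree = ET.parse(ADDIN)
for assembly in tree.getroot().iter("Assembly"):
    # keep the dll file name, replace only its directory
    assembly.text = str(DLL_DIR / Path(assembly.text).name)
tree.write(ADDIN, encoding="utf-8", xml_declaration=True)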
PART 2: Preparation of Executable Files
Two files are needed in order to execute H.D.S. Beagle—the Revit model (*.rvt) that contains
parametric masses (*.rfa) defined by the user, and the corresponding Excel template (*.xlsx)
containing user-defined variation ranges of each parameter of interest and other constraints and
formula definitions. It should be noted that steps beyond those of typical parametric modeling
are required to ensure compatibility between a prepared model and H.D.S. Beagle. These steps
are outlined below.
Parametric Model Preparation:
a. Design parametric families. The overall building geometry must be designed as a
conceptual mass family (*.rfa) with parameters to control geometric variations.
Simultaneously, driving parameters of interest require designation in the provided Excel
template. It is necessary to set up the parameters associated with level and zoning
division, as these level- and zoning-related parameters and values are needed by H.D.S.
Beagle to automate the zoning assignment and level variations during a GA run.
Individual masses are not created separately for the following reasons: 1) All families
must have the same set of parameters, even if certain parameters are not directly
applicable to a specific space. 2) The relationship of each space must be preserved so that,
when a parameter value is changed, the resulting geometries will not conflict with each
other. 3) When using H.D.S. Beagle to automatically vary the number of levels and
level-to-level heights, a consistent setup is imperative, because H.D.S. Beagle uses a mass
family's Type Comments and the level name to identify the appropriate zone and assign
the proper space type for subsequent energy analyses.
b. Test and flex model geometry. To avoid breakage within the geometry, and thereby
disrupting the GA run, each driving parameter and its intended range should be tested
before engaging H.D.S. Beagle. Once confirmed, appropriate ranges of interest for each
driving parameter should be designated in the provided Excel template.
c. Divide mass families according to space type. To designate space programming
configurations, divide the mass into subdivision masses. Provide a name for each sub-mass
in the mass’s Type Comment property for identification purposes, so that H.D.S. Beagle is
able to recognize and assign space types accordingly.
d. Load divided mass families and assign levels. In a new Revit project file (*.rvt), load the
divided sub-masses (*.rfa) into the project. Add and define level names according to each
mass's Type Comments. In this way, each mass has its own set of levels with which to be
associated. It is important that the number of levels for each mass be no less than the
maximum value of that mass's level-number range of interest. For example, if in the Excel
template the maximum number of desired parking levels is 12, then there should be a
minimum of 12 parking levels designated in the project. The naming convention for this
process is SpaceType_Level#. In this way, H.D.S. Beagle is able to recognize the level,
adjust the level-to-level height, and assign the appropriate space type accordingly. After
defining the levels, place each mass on its designated level and associate the mass with
its own set of desired levels.
e. Assign project location and initial energy settings. Engage the "Enable Energy Model"
function available through Revit's Energy Analysis menu to ensure that an energy model
is consistently available during the GA run. In the "Energy Settings" dialogue box, disable
the use of "Core Offset" and "Divide Perimeter Zone", as these properties should be
controlled through zoning variation in the conceptual mass families. Also in the "Energy
Settings" dialogue box, provide project location information along with other relevant
energy information. The detailed setting method and process can be found on Autodesk's
help website (Autodesk 2012a).
Excel Template Preparation:
f. Input driving parameters of interest. In the first worksheet of the provided Excel template
for H.D.S. Beagle, Sheet 1: Geometry Parameter Worksheet (Geometry Param), define the
modifiable parameters by name and type according to the provided format. These
parameter names must match the parameters of interest in the *.rvt file, as any
discrepancies will result in these parameters being excluded during the GA run.
g. Define parameter ranges and constraints. Within the provided Excel template input the
ranges of interest for each parameter. It is recommended that these ranges be tested first
in order to avoid breakage within the design geometry during the GA run.
h. Set non-driving parameters and other formulae. In the remaining six worksheets provided
in the Excel template, provide requested information, formulae, and required
modifications before engaging H.D.S. Beagle. The summary of each worksheet’s
functionality can be found in Table 4-1. A sample of the worksheet format can be found
in Appendix C.4.
Figure 4-11 provides an illustrated summary of the overall process of preparing the needed
executable files for H.D.S. Beagle.
FIGURE 4-11: ILLUSTRATED PROCESS TO PREPARE EXECUTABLE FILES FOR H.D.S. BEAGLE.
PART 3: Execute H.D.S. Beagle
The steps necessary to successfully engage H.D.S. Beagle for a GA run are described below.
Currently, H.D.S. Beagle has four user interface windows, allowing users to specify various aspects
of their GA run. However, these windows can be bypassed at any time, resulting in the use of
default settings for any unspecified attributes.
a. Engage the 3D view environment in Revit. This is necessary, as current versions of Revit
will not allow for energy analysis outside of this environment. Confirm the engagement
of the “Enable Energy Model” function to ensure continuous availability of an energy
model during the GA run.
b. Automatically or manually assign individual surface IDs. H.D.S. Beagle v20120903
provides an optional pre-run function to automatically assign an ID to each individual
surface. This allows the energy properties of individual surfaces to be explored separately.
In this way, for example, the south-facing wall can have a different Target Percentage
Glazing and Conceptual Construction than the north-facing wall. Individual surface
properties will therefore be treated as different genes during the GA run. If individual
surface exploration is desired, this function needs to be executed prior to proceeding with
H.D.S. Beagle's main function. The surface IDs can also be assigned manually by users, if
only the exploration of a few specific surfaces is of interest. The ID of a surface is stored in
the mass surface's Graphical Appearance property, and the naming convention of an ID is
MassTypeComment_MassType_#. For example, the identification of the four orientation
surfaces of an orthogonal office wall will be Office_MassExteriorWall_0,
Office_MassExteriorWall_1, Office_MassExteriorWall_2, and Office_MassExteriorWall_3.
It should be noted that Step b can be skipped if individual surface exploration is not
desired.
c. Execute H.D.S. Beagle’s main function. The main function of H.D.S. Beagle can be accessed
from the Revit’s External Tool’s dropdown list.
d. User Interface 1. Once the main function of H.D.S. Beagle is executed, the first user
interface (UI) appears, as shown in the left image of Figure 4-12. Through this interface,
the user-defined constraint file (*.xlsx) is loaded. Once the loading is complete, users may
verify and adjust parameters of interest, along with their designated ranges and
constraints, as shown in Figure 4-12 on the right. The necessary information regarding
project size is also entered by users through this UI.
e. User Interface 2. Available energy settings parameters are accessed through UI 2 along
with their ranges of interest and constraints. The initial energy settings are loaded from
the *.rvt file but can be adjusted by users at this time. If not adjusted these range settings
will remain fixed throughout the GA run. UI 2 is illustrated in Figure 4-13.
FIGURE 4-12: SCREENSHOT OF H.D.S. BEAGLE V20120903 USER INTERFACE 1 – DESIGN PARAMETER SETTING
UI. (LEFT) THE INITIAL USER INTERFACE; (RIGHT) THE INTERFACE AFTER LOADING THE EXCEL TEMPLATE.
FIGURE 4-13: SCREENSHOT OF H.D.S. BEAGLE V20120903 USER INTERFACE 2 – ENERGY PARAMETER SETTINGS
FIGURE 4-14: SCREENSHOT OF H.D.S. BEAGLE V20120903 USER INTERFACE 3 – INDIVIDUAL SURFACE ENERGY
SETTINGS
f. User Interface 3. If individual surface IDs are assigned in the *.rvt file, their energy settings
can be accessed and adjusted individually in the third UI of H.D.S. Beagle. As illustrated in
Figure 4-14, users can selectively set the surfaces of interest. If no action is taken, or for
surfaces not otherwise designated, energy settings specified in UI 2 will be used instead.
g. User Interface 4. UI 4 provides the opportunity for users to set GA-related parameters for
H.D.S. Beagle, as illustrated in Figure 4-15. These settings include initial population size,
mutation ratio, crossover ratio, maximum iteration number, and selection size. After
completion, H.D.S. Beagle will proceed according to these user settings to breed the
design alternatives. The analyzed results can be found in the folder where the user placed
the *.dll files.
FIGURE 4-15: SCREENSHOT OF H.D.S. BEAGLE V20120903 USER INTERFACE 4 – GA SETTINGS
H.D.S. Beagle’s executing process is summarized in Figure 4-16.
FIGURE 4-16: ILLUSTRATION OF H.D.S. BEAGLE’S EXECUTION PROCESS. DIAGRAM BY THE AUTHOR.
The use of H.D.S. Beagle is separated into four main steps: (1) Prepare design model and
constraint file; (2) Set up H.D.S. Beagle; (3) Execute H.D.S. Beagle; and (4) Obtain data analysis
from H.D.S. Beagle for user evaluation. The overall workflow comprising these steps is illustrated
in Figure 4-17.
FIGURE 4-17: SUMMARY OF H.D.S. BEAGLE USE PROCESS MAP.
4.4 EEPFD WORKFLOW
The overall EEPFD process of using H.D.S. Beagle, as applied to a typical six-step simulation
process, is illustrated in Figure 4-18. While the figure presents the process of one GA run, a
project may consist of multiple GA runs, either allowing for multiple initial designs or
accommodating varying GA settings of interest. The resulting detailed workflow for each step is
described below.
FIGURE 4-18: EEPFD’S SIX-STEP PROCESS FOR INTEGRATING DESIGN AND ENERGY SIMULATION.
Step 1 Generate Design
This step includes two subcategories—the generation of the initial design, and the generation of
design alternatives. The initial design is developed through the preparation of the initial
executable design and constraints file, according to the project requirements. At this point, each
design case is provided a specific site, weather and climate information, and overall program
objectives. Parametric ranges of interest are also set in this step, once per GA run. The generation
of design alternatives is then produced through the automated process of exploring combinations
of parametric values within their set ranges and subsequent performance results by the GA.
Step 2 Transfer Model
The integrated platform enables the direct translation of the design geometry and related energy
settings into the energy simulation engine provided by GBS. As a result, an analyzable energy
model can be obtained directly, thereby proceeding with the energy simulation without additional
modification of geometry or energy-related attributes. In this process, H.D.S. Beagle automatically
converts and sends the design alternative to the GBS server to request and obtain a conceptual
energy analysis.
Step 3 Modify Energy Model
This step is bypassed, since the model transfer to GBS is automated by H.D.S. Beagle through
Revit, an automation unavailable through most non-integrated platforms.
Step 4 Run Analysis
This step is executed in two stages. In the first stage, the automatically translated energy model
of the design alternative is sent to GBS for analysis and the results are obtained. This part is
executed through Revit by H.D.S. Beagle after a design alternative is generated. Once the energy
analysis results are available, H.D.S. Beagle proceeds with extracting the relevant information
from the results and the design model to automatically calculate the SPC and NPV of the design
alternative. These scores are then paired with the EUI score provided by GBS for the next
evaluation step.
Step 5 Evaluate Results
The performances of all design alternatives within each generation are automatically evaluated,
ranked, and scored prior to the candidates being selected for the following generation by the GA.
Figure 4-19 illustrates an example of the Beagle’s genetically-driven evaluation process, where
the initial user-defined population of 10 was created, followed by 3 subsequent generations, bred
based on the performance of the previous generation. Once the user-requested number of
generations has been reached, the results are provided to the user for manual evaluation. At this
point, the GA run is considered complete.
Step 6 Execute Decision
Once the results are generated, there are two ways to proceed: (1) a designer manually
implements changes in the initial design of the executable design file, based on acquired
simulated results, and reengages the GA run; or (2) a design alternative is selected, based on the
multi-objective tradeoff analysis provided by H.D.S. Beagle, and the design proceeds to the next
stage of development.
FIGURE 4-19: ILLUSTRATION OF THE GA-DRIVEN EVALUATION BY H.D.S. BEAGLE OF HYPOTHETICAL SCENARIO 10
AND A SUBSET OF THE RESULTING SOLUTION POOL.
4.5 SUMMARY OF EEPFD DEVELOPMENT
The purpose of EEPFD is to provide easily accessible performance analysis in a tightly coupled
feedback loop with geometric and financial consideration, in order to assist in early design
decision-making with the intent of pursuing lower EUI design. To this end, EEPFD and H.D.S.
Beagle successfully demonstrate the ability to automate and integrate the design, energy
simulation, and financial model platforms through parameterization and algorithmic optimization.
In this way, typical interoperability issues are bypassed and a tradeoff study is provided by
automatically generating, evaluating, and ranking the available design alternatives. Once the
developmental stage is complete, EEPFD is tested and validated through a series of hypothetical,
pedagogical, and real world experiments, as described in the following chapters.
CHAPTER 5 HYPOTHETICAL CASE-BASED EXPERIMENTS
5.1 INTRODUCTION OF HYPOTHETICAL CASE-BASED EXPERIMENTS
During the development of EEPFD and H.D.S. Beagle, in order to define the process integration
workflow, 12 hypothetical design scenarios are created for testing. After the integration workflow
is defined, these scenarios are continuously used to assess EEPFD and H.D.S. Beagle’s functionality,
limitations, and ability to provide a “designing-in performance” environment that enables the
inclusion of energy performance feedback during the early stages of the design process. Utilizing
these design scenarios, a series of experiments are designed to test computational and
automation affordances. In addition, these experiments emulate the early energy simulation
process, as used through EEPFD, in order to observe the enabling and disruptive properties of
these components and their effect on the overall design process.
The summary of the 12 scenarios is provided in Figure 5-1. Each scenario represents a design
problem defined through a series of site constraints, program requirements, and geometrically
driving, fixed, and driven parameters. The geometric complexity of each scenario is represented
by the energy model surface count of the scenario’s initial design. The detailed definition of each
scenario can be found in 0.
The following is a summary of the experimental sets based on these hypothetical scenarios:
1. Technology Affordance: Ascertain the technology affordance of EEPFD and H.D.S. Beagle.
2. GA Validation: Validate the framework against the identified criteria for the early stage
design.
3. Complexity vs. Performance I: Understand how EEPFD can help resolve issues pertaining
to the inclusion of energy performance with increasingly complex projects.
4. Complexity vs. Performance II: Support the hypothesis that increased geometric
complexity leads to the increased relevance of utilizing EEPFD.
5. Best Practice of EEPFD: Further discuss and observe the potential use of the framework
as a recommended best practice of utilizing EEPFD for including energy performance
feedback during the early stages of the design process.
FIGURE 5-1: SUMMARY OF THE 12 HYPOTHETICAL DESIGN SCENARIOS, INCLUDING DESIGN PARAMETER
PROBLEM SCALE, COUPLING, AND GEOMETRIC COMPLEXITY. THE TABLE ILLUSTRATES THE AVERAGED
MEASUREMENTS OF SURFACE TESSELLATION COUNT, I.E., GEOMETRIC COMPLEXITY AND TIME
REQUIRED FOR AUTOMATED ENERGY ANALYSIS TO ROUND TRIP FOR THE AVERAGE OF ALL
OFFSPRING IN ALL GENERATIONS FROM EACH SCENARIO.
5.2 HYPOTHETICAL CASE-BASED EXPERIMENT I: TECHNOLOGY AFFORDANCE
5.2.1 THE TECHNOLOGY AFFORDANCE EXPERIMENT OBJECTIVE
This experiment set is designed and performed with the aim of understanding the technology
affordances of H.D.S. Beagle, based on the technologies available at the time of the release of
H.D.S. Beagle v20120903 and on the observations made during the technological development period.
The purpose of this experiment set is to account for current technology affordances, gauge the
potential for future development, and assess the impact on the resulting EEPFD workflow.
5.2.2 THE TECHNOLOGY AFFORDANCE EXPERIMENT DESCRIPTION
The first hypothetical experiment set is used to test H.D.S. Beagle and EEPFD during the
technology development period. It is an iterative process that utilizes the hypothetical cases to
adjust the functionality and workflow of H.D.S Beagle and EEPFD, as well as assess the current
technology affordance. The general experimental process is illustrated in Figure 5-2 below:
FIGURE 5-2: HYPOTHETICAL CASE-BASED EXPERIMENT I - ILLUSTRATED EXPERIMENTAL PROCESS.
Each scenario is created and numbered in chronological order. The first scenario was designed to
determine a functioning workflow that allowed for utilization of H.D.S. Beagle. In addition, this
scenario was used to explore the method by which to correlate generated design geometry with
energy settings. Correct correlation is a critical component with regards to energy calculations
and space programming requirements, since the occupancy type and associated energy assumptions
are essential to these types of calculations.
To further this interest, the first scenario is defined as a set of simple geometry with a courtyard
offset from a theoretical site. In order to parameterize the design problem with respect to
correlated energy calculations, several key components needed to be addressed, namely (1) the
ability to associate individually assigned space types to their corresponding occupancy type and
assumed energy consumption; (2) the ability to vary geometry and reflect these variations in
subsequent energy calculations; and (3) the ability to run these simultaneously through H.D.S.
Beagle.
Scenarios 2 through 6 are used to test H.D.S. Beagle’s ability to handle complex geometry and
identify the tool’s current limitations. The geometric complexity of these test scenarios ranges
from a simple orthogonal box to towers with double curvature and twisting factors. Complexity within
program requirements is also accounted for to include scenarios with single use requirements,
such as an office building, to mixed-use space requirements, including underground parking, retail,
hotels, etc.
After identifying the Beagle’s limitations, Scenario 7 was created to verify the viability of the
utilized custom genetic algorithm. This scenario employs a simple orthogonal box, thereby
minimizing expected run times, in order to verify the algorithm’s ability to generate potential
design solutions with improved performance expectations as defined by the user. Different user
defined GA settings were tested to confirm the validity of the algorithm. The observations
regarding the GA validation process are further discussed in Section 5.3.
During these hypothetical experimental runs, the technical affordance of the current framework
was summarized by analyzing the documented results in detail, according to the measurement
metrics established in Section 0 for all 12 scenarios.
5.2.3 RESULTS & OBSERVATIONS OF THE EEPFD’S TECHNOLOGY AFFORDANCE
Throughout the course of the research project, ending on October 5th, 2013, 1,310 GA runs were
documented. Each of these scenarios has been subjected to multiple GA runs with various GA
settings and parameter combinations. The summary of these experimental results is given in Table
5-1. It should be noted that results in Table 5-1 are the selected representative runs for each
scenario.
TABLE 5-1: SUMMARY OF THE HYPOTHETICAL CASES MEASURES.
Categories/Measures
Scenario no.
1 2 3 4 5 6 7 8 9 10 11 12
Design Problem Measures
Project complexity
Project Size (ft²) 167680 84680 167680 16500 31220 51000 3000 86000
Space type no. 4 4 4 4 1 2 4 1 1 3 1 1
Design complexity
Initial energy model surface no. (i)
794 1587 1042 824 7517 2451 270 1060 28 2086 197 338
Explored parameter no. (Design/Energy) 6/61 12/3 16/3 23/3 7/27 8/0 10/3 13/3 0/12 7/4 9/1 6/21
Process Measures
Speed
Time spent to run energy analysis (minutes) (ii)
1.46 3.25 2.95 2.96 28.64 8.36 1.01 2.90 0.53 4.15 0.91 1.70
Feedback method measures
Performance feedback time per result (minutes) (iii)
2 7 8 8 41 40 2 3 1 6 1 4
GA Setting
Initial Population 20 10 10 10 20 10 10 10 40 10 10 20
Crossover Ratio 0.5 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6
Mutation Ratio 0 0 0 0 0 0.006 0 0 0.006 0 0 0.006
Population Size 40 20 20 20 40 10 40 20 40 20 20 40
Selection Size 30 20 20 20 30 10 30 20 20 50 20 10
Maximum Iteration 20 0 0 20 1 10 40 20 20 50 5 10
Product Measures
Feedback quantity
Feedback no. per 8 hours 240 34 30 30 5 6 120 80 240 40 240 60
Feedback quality
(Initial/Solutions Space range)
NPV (Million USD)
528
155-754
71
16-71
92
N/A
132
74-525
538
142-555
(-94)
(-516)-84
738
76-741
565
113-769
(-73)
(-74)-(-71)
(-41)
(-40)-834
(-3)
(-4)-(-2)
34
(-57)-178
EUI (kBtu/ft²/yr)
55
45-68
56
51-77
63
N/A
62
43-83
65
55-88
57
52-67
48
42-88
61
49-79
56
53-104
173
56-233
64
51-99
54
47-99
SPC
75
24-94
6
0-50
3
N/A
5
5-88
88
38-99
54
(-76)-71
31
31-95
83
3-100
100
N/A
10
(-404)-88
99
46-100
99
48-99
Run ID
GBS_Run_20121008_1006
GBS_Run_20130403_1013
GBS_Run_20130405_1540
GBS_Run_20120903_2232
GBS_Run_20121018_1517
GBS_Run_20111010_0532
GBS_Run_20120903_2339
GBS_Run_20120902_1929
GBS_Run_20120202_0843
GBS_Run_20121028_0026
GBS_Run_20121125_1953
GBS_Run_20111205_1020
Note:
(i) The surface count is according to the energy model of the initial design geometry. During the GA process varying design options will have varying surface counts.
(ii) These time measurements were taken while generating the initial masses' energy models and include the time required to both transfer to and receive results from Green Building Studio through the Internet.
(iii) These recorded times include the time needed by H.D.S. Beagle to update new design options, calculate the three objective functions evaluating each design option, and proceed with tournament selection.
5.2.4 SUMMARY OF THE EEPFD’S TECHNOLOGY AFFORDANCE
The summary of current technology affordances of H.D.S. Beagle based on the technologies
available at the time of the release of H.D.S. Beagle v20120903, as determined through the
hypothetical case-based experiment set I, is provided below.
Level of Detail of the Model
As the selected parameterization platform, Revit is able to contain abundant building information.
However, due to the research scope and the Revit functionalities utilized, the level of detail of
the design model in this research is confined to that available through Revit’s massing study.
Building components, such as walls, floors, glazing, and roofing, are therefore represented by
surfaces, with availability limited to those provided directly through Revit’s CEA. Table 5-2 lists
the 11 contributing energy model components utilized in this research. The energy model
properties attached to the design model are likewise limited to the selection provided directly
by Revit’s CEA platform. For example, owing to these limitations, a whole building can only be
assigned one type of HVAC system. A detailed list of the currently available components within
the scope of interest for this research, along with their attached attributes and options, is
provided in Table 5-2.
TABLE 5-2: H.D.S. BEAGLE’S LEVEL OF DETAIL - AVAILABLE BUILDING COMPONENTS & ATTRIBUTES ACCORDING
TO THE UTILIZED FUNCTIONALITY OF REVIT AND REVIT’S CEA BY THIS RESEARCH.
(Entries are listed per massing component as Attribute Category: Attribute - Attribute Value/Options.)

Mass Geometry Components & Attributes
Mass Families
  Identity Data: Mass: Family and Type - User defined
  Identity Data: Mass: Type Comments - User defined
  Dimensions: User defined design parameters - User defined

Overall Project Energy Attributes
Building Energy Settings
  Common: Building Type - 33 Options (2)
  Common: Location - User defined project location
  Common: Ground Plan - User defined level
  Detailed Model (1): Export Category - Room
  Detailed Model (1): Export Complexity - Simple with Shading Surfaces
  Detailed Model (1): Include Thermal Properties - Yes
  Detailed Model (1): Project Phase - New Construction
  Detailed Model (1): Sliver Space Tolerance - 1’
  Energy Model - Building Services: Building Operating Schedule - 11 Options (2)
  Energy Model - Building Services: HVAC System - 12 Options (2)
  Energy Model - Building Services: Outdoor Air Information - User defined

Energy Model Components & Attributes
Mass Exterior Wall
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Exterior Wall Area - Automatically calculated according to the energy model
  Energy Model: Target Percentage Glazing - 0~1
  Energy Model: Target Sill Height - 0~user defined
  Energy Model: Shade Depth - 0~user defined
  Energy Model: Conceptual Construction - 9 Options (3)
Mass Interior Wall
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Interior Wall Area - Automatically calculated according to the energy model
  Energy Model: Conceptual Construction - 2 Options (3)
Mass Exterior Wall - Underground
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Exterior Wall Area - Automatically calculated according to the energy model
  Energy Model: Conceptual Construction - 4 Options (3)
Mass Roof
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Roof Area - Automatically calculated according to the energy model
  Energy Model: Target Percentage Skylight - 0~1
  Energy Model: Skylight Width & Depth - >0~user defined
  Energy Model: Conceptual Construction - 7 Options (3)
Mass Floor
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Floor Perimeter - Automatically calculated according to the energy model
  Dimensions: Floor Area - Automatically calculated according to the energy model
  Dimensions: Exterior Surface Area - Automatically calculated according to the energy model
  Dimensions: Floor Volume - Automatically calculated according to the energy model
  Dimensions: Level - Automatically calculated according to the energy model
  Energy Model: Conceptual Construction - 4 Options (3)
Mass Slab
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Floor Area - Automatically calculated according to the energy model
  Identity Data: Subcategory - Mass Floor
  Energy Model: Conceptual Construction - 3 Options (3)
Mass Glazing
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Window Area - Automatically calculated according to the energy model
  Identity Data: Subcategory - Mass Glazing
  Energy Model: Conceptual Construction - 11 Options (3)
Mass Skylight
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Skylight Area - Automatically calculated according to the energy model
  Identity Data: Subcategory - Mass Skylight
  Energy Model: Conceptual Construction - 10 Options (3)
Mass Shade
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Shade Area - Automatically calculated according to the energy model
Mass Opening
  Material and Finishes: Graphical Appearance - User defined
  Energy Model: Conceptual Construction - Air
Mass Zone
  Material and Finishes: Graphical Appearance - User defined
  Dimensions: Mass Zone Volume - Automatically calculated according to the energy model
  Dimensions: Mass Floor Area - Automatically calculated according to the energy model
  Energy Analysis: Space Type - 125 Options (2)
  Energy Analysis: Condition Type - 6 Options

Note:
1. Adjustable values according to user preference; however, these are the consistently fixed values used by this
research.
2. Available options with related assumption values used as found in the CEA reference (Autodesk 2012b) and
organized in Appendix C.1 and Appendix C.2.
3. Available options and corresponding thermal properties sourced from the CEA energy setting (Autodesk 2012c)
and organized in Appendix C.3.
Geometric Complexity
Regarding the geometric complexity of the designs, the complexity of a project is limited by GBS’s
energy simulation engine, rather than any perceived limitations within Revit. Despite Revit’s
ability to successfully translate complex geometric forms into an energy model, GBS can currently
only analyze 8192 exterior surfaces, 8192 interior surfaces, 8192 underground surfaces, 1024
shade surfaces, 8192 openings, and 4096 spaces (Autodesk 2012a). As a result, if the energy
model of the designed geometry exceeds any of these limits, the automation loop is disrupted. In
addition, an issue repeatedly encountered in the process of translating the design geometry to
an energy model must be considered. More specifically, when H.D.S. Beagle is engaged, this
process is performed automatically, so the user is not required to provide any further
manipulation in order to generate an analyzable energy model from the design geometry. A
designated geometric element in the design geometry, such as an exterior wall, should therefore
be recognized as such, maintaining its attributes accordingly within the energy model. However,
when a design model possesses high degrees of curvature, as explored in Scenarios 2 and 3,
mistranslation can occur. For example, a curved roof surface might be recognized as an exterior
wall surface during the transfer process. In this case, the surface in the energy model would
carry the attributes of the exterior wall instead of those assigned to the roof material. This
mistranslation, in turn, affects the accuracy of the analyzed results that depend on these
attributes to execute the energy performance calculations. These types of mistranslations can be
manually identified and corrected by examining the resulting energy model. However, during the
automated GA process, it is impossible to manually correct any identified issues in the generated
energy model of each design alternative. The extent of the impact this type of error has on the
accuracy of the generated results, and how this inaccuracy affects the GA process, is in need of
further investigation. However, the impact is believed to be statistically insignificant, as the
number of surfaces this mistranslation affects has been observed to be minor in the context of
complex geometric scenarios. Despite these issues, the Beagle has the capacity to successfully
enable the exploration of more complex geometry than is typically modeled in common energy
simulation programs, as demonstrated by Scenarios 2, 3, and 5.
FIGURE 5-3: MAPPED EXPERIENCED ENERGY ANALYSIS TIMES WITH CORRELATED GEOMETRY SURFACE
QUANTITIES, OVERLAID WITH 3D VISUALIZATIONS OF EACH SCENARIO’S TESSELLATED ENERGY
MODEL, INITIAL DESIGN ENERGY MODEL SURFACE COUNTS, AND AVERAGED RUN TIMES. DIAGRAM
BY THE AUTHOR.
Speed
The term “speed” in the context of this section refers to the time needed to generate a new design
alternative and obtain analyzed energy results. When utilizing H.D.S. Beagle, there is a direct
correlation between the documented run time and the calculated surface count of the energy
model. This linear relationship is illustrated in Figure 5-3. Currently, the average time required
for calculating each surface is approximately 0.0036 minutes (roughly 0.2 seconds), consistent
with the run times recorded in Table 5-1. While the correlation between the surface count and
the run time is evident, there is no observed correlation between the time required for the
energy analysis and the quantity of parameters, i.e., the scale of the design problem intrinsic to
each design scenario. However, there may be a correlation between the quantity of parameters
and the overall time required to reach the mathematically converging optimal design solution
pool. This potential correlation is in need of further exploration. Furthermore, according to the
present data, no correlation is observed between the project size and the run time; however,
further observation is required to confirm this finding. While the geometric complexity of the
design is still the dominant factor affecting the process speed, this limitation is determined not
by the tool itself, but by the technology available to operate H.D.S. Beagle. Consequently, a time
constraint remains the necessary stopping criterion, rather than the convergence of design
options or a user-defined number of GA iterations. However, as technology and computing power
advance, this issue will become less relevant.
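As a rough illustration of this linear relationship, the following minimal Python sketch (not part
of H.D.S. Beagle; the slope is the approximate fit described above) estimates the per-alternative
analysis time from an energy model’s surface count:

    # Minimal sketch: estimate per-alternative energy-analysis time from the
    # energy model's surface count, using the approximate linear fit above.
    MINUTES_PER_SURFACE = 0.0036  # assumed empirical slope from Figure 5-3

    def estimated_analysis_minutes(surface_count: int) -> float:
        """Rough per-alternative analysis time for a given surface count."""
        return surface_count * MINUTES_PER_SURFACE

    # Scenario 5's initial energy model has 7,517 surfaces, giving roughly
    # 27 minutes, close to the recorded average of 28.64 minutes.
    for surfaces in (270, 1060, 7517):
        print(surfaces, round(estimated_analysis_minutes(surfaces), 2), "min")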
Accuracy
The term “accuracy” is used in this context with respect to two major concepts. First, it refers to
the accuracy of the provided energy use calculations compiled through Revit’s CEA and GBS’s
simulation engine. Second, it describes the accuracy of information provided by H.D.S. Beagle,
which is subsequently used by Revit’s CEA and GBS accordingly.
In terms of the energy simulation services adopted in this research, GBS is not listed among the
tools officially validated through the BESTEST (DOE 2011a) at the time of this research. However,
DOE-2, the simulation engine utilized by GBS, has been widely reviewed and used to develop state,
national, federal, and international building energy efficiency standards, such as ASHRAE 90.1 and
California Title 24 (Hirsch 2009, LBNL 2008). According to Revit’s CEA development team, the
service they provide has been validated by an independent consultant, whose analysis of
energy use against regional and national commercial building benchmark data determined that
Revit’s CEA was within acceptable standards of practice for DOE-2 energy models (Smith,
Bernhardt, and Jezyk 2011). In addition, preceding research recognizes this service as suitable,
with high accuracy, for early stage Net Zero Energy Building design (Attia and Herde 2011).
The second concept of concern pertains to the accuracy of the information provided for analysis
by H.D.S. Beagle to Revit’s CEA. Currently, assumptions and generalizations are made during
conceptual analysis that may compromise the absolute accuracy of the results. In order for
H.D.S. Beagle to explore design alternatives with varying associated programs, a localized
occupancy designation is used instead of an overall building designation of “office” or “hotel”.
This means that low impact areas, such as hallways or storage areas, which are accounted
for in building designations, are not included in the localized space type designations used by
H.D.S. Beagle. Table 5-3 lists these differentiating assumption values for common building types
and their corresponding space types. It can be observed that the designated space type is
consistently assigned a higher load density than the overall building type. This results in an entire
floor being designated as a high impact area, which leads to an increased load density being used
to calculate the expected EUI.
However, the absolute accuracy of the EUI calculation is not significantly relevant to the overall
use of H.D.S. Beagle or EEPFD. Since these results are used as a basis for comparison, as opposed
to providing expected performance levels of the actual realized design, the ability to accurately
gauge the impact of a design decision on the overall design performance is more relevant. While
the resulting difference may not be large enough to change the final optimal solutions, this issue
should be considered as a potential topic for further exploration.
TABLE 5-3: LOAD COMPARISON BETWEEN OVERALL BUILDING TYPE AND THE COMMON CORRESPONDING SPACE
TYPE AS SUGGESTED BY AUTODESK. INFORMATION PRESENTED IS ORGANIZED ACCORDING TO THE
ASSUMPTION VALUES PROVIDED BY CEA’S REFERENCE (AUTODESK 2012B).

Building Type: Office vs. Space Type: Office - Open Plan
  People/100 m²: 3.5 vs. 5
  Lighting Load Density (W/m²): 10.76 vs. 11.84
  Power Load Density (W/m²): 13.99 vs. 16.10

Building Type: Retail vs. Space Type: Supermarket Sales Area - Retail
  People/100 m²: 10 vs. 8
  Lighting Load Density (W/m²): 16.14 vs. 22.60
  Power Load Density (W/m²): 20.44 vs. 10.8

Building Type: Hotel vs. Space Type: Living Quarters - Hotel
  People/100 m²: 2.5 vs. 10
  Lighting Load Density (W/m²): 10.76 vs. 11.84
  Power Load Density (W/m²): 18.29 vs. 5.8

Building Type: Parking Garage vs. Space Type: Parking Area Attendant Only - Parking Garage
  People/100 m²: 2.5 vs. 5
  Lighting Load Density (W/m²): 3.23 vs. 2.00
  Power Load Density (W/m²): 3.23 vs. 3.20

Building Type: Single Family vs. Space Type: Living Quarters - Dormitory
  People/100 m²: 0.945 vs. 10
  Lighting Load Density (W/m²): 10.76 vs. 11.84
  Power Load Density (W/m²): 10.76 vs. 5.8

Building Type: School or University vs. Space Type: Classroom or Lecture Training - Penitentiary
  People/100 m²: 25 vs. 65
  Lighting Load Density (W/m²): 12.91 vs. 13.99
  Power Load Density (W/m²): 16.14 vs. 5.8
GA Continuation
H.D.S. Beagle is a prototype that is still under development. As a result, the current version is
unable to compensate for interruptions in Internet access or for hardware or software
malfunctions. When an interruption is encountered, H.D.S. Beagle currently requires manual
intervention to restart the GA run from the most recently completed generation. It is intended
that further development of H.D.S. Beagle will resolve this issue and ensure the stability of the
automated process.
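A generation-level checkpoint is one plausible remedy. The sketch below is hypothetical (this is
not current Beagle behavior, and the file name and format are assumptions); it persists each
completed generation so that an interrupted run could resume without manual intervention:

    # Hypothetical sketch of generation-level checkpointing for GA continuation.
    import json, os

    CHECKPOINT = "ga_checkpoint.json"  # assumed file name

    def save_generation(gen_no, population):
        """Persist the most recently completed generation."""
        with open(CHECKPOINT, "w") as f:
            json.dump({"generation": gen_no, "population": population}, f)

    def resume_or_start(initial_population):
        """Restart from the most recently completed generation, if any."""
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                state = json.load(f)
            return state["generation"] + 1, state["population"]
        return 0, initial_population

    start_gen, population = resume_or_start([{"TwistAngle": 45.0}])
    for gen in range(start_gen, 20):
        # ... breed and evaluate `population` here ...
        save_generation(gen, population)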
Data Analysis & Visualization
Currently, the Beagle is able to automatically record and store performance data for each design
alternative, along with a 3D visualization based on user specification. In addition, a summary
table of all generated design alternatives, gene values, objective function scores, and genealogy
is available for review as an Excel file. However, the final Pareto ranking and the solution space
analysis still require manual matching of the performance results with each design alternative’s
3D visualization. Since this ability is not provided through H.D.S. Beagle, MATLAB® was employed
in this research to facilitate the process. The developed MATLAB® code extracts data from the
Excel file containing the summary results and generates four color-coded plots. One plot is a 3D
visualization of all three objective scores of each design alternative, color-coded according to
Pareto rank. The other three plots are 2D visualizations of pairwise objective score combinations,
also color-coded according to the Pareto ranking between the two objective scores of interest in
each 2D plot. Figure 5-4 depicts a sampling of these plot types. Moreover, MATLAB® currently
provides ranking of design alternatives according to the overall available solution space for
design decision-making. It is the intention that a future version of H.D.S. Beagle will generate
this analysis information and these visualizations without requiring the use of MATLAB® or
manual intervention. Since this function is not currently included in H.D.S. Beagle, steps are
included in EEPFD for users to analyze and visualize the generated data.
FIGURE 5-4: SAMPLE OF THE DATA PLOTS GENERATED BY MATLAB® THROUGH A CUSTOMIZED CODE
DEVELOPED BY THE RESEARCH TEAM. THE PLOTTED DATA PERTAIN TO EXPERIMENTAL
RUN_20120903_2339.
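To give a flavor of this post-processing step, the sketch below re-implements its core in Python
rather than MATLAB® (an illustrative stand-in for the research team’s code, not the code itself).
The dominance convention assumed here is maximize NPV and SPC and minimize EUI:

    # Illustrative Pareto ranking and color-coded plotting of three objectives.
    import matplotlib.pyplot as plt

    def dominates(a, b):
        """True if a = (npv, eui, spc) is at least as good as b on all
        objectives and strictly better on at least one."""
        ge = a[0] >= b[0] and a[1] <= b[1] and a[2] >= b[2]
        return ge and (a[0] > b[0] or a[1] < b[1] or a[2] > b[2])

    def pareto_ranks(points):
        """Peel non-dominated fronts iteratively; rank 0 is the Pareto front."""
        ranks, remaining, rank = {}, set(range(len(points))), 0
        while remaining:
            front = {i for i in remaining
                     if not any(dominates(points[j], points[i])
                                for j in remaining if j != i)}
            for i in front:
                ranks[i] = rank
            remaining -= front
            rank += 1
        return [ranks[i] for i in range(len(points))]

    # Hypothetical alternatives: (NPV in $M, EUI in kBtu/ft²/yr, SPC)
    alts = [(514, 55, 95), (440, 66, 44), (470, 58, 80), (505, 60, 90)]
    ranks = pareto_ranks(alts)
    plt.scatter([a[1] for a in alts], [a[0] for a in alts], c=ranks)
    plt.xlabel("EUI (kBtu/ft²/yr)"); plt.ylabel("NPV (Million USD)")
    plt.colorbar(label="Pareto rank"); plt.show()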
5.3 HYPOTHETICAL CASE-BASED EXPERIMENT II: GA VALIDATION
5.3.1 THE GA VALIDATION EXPERIMENT OBJECTIVE
The GA validation experiment is conducted during the prototype development period to ensure
that the algorithm can:
1. Provide a solution space with improved performance across the multiple competing
objective functions.
2. Be adaptable to a wide spectrum of design scenarios, both in typology and geometric
complexity.
It should be noted that, in contrast to other MDO applications, this research did not designate the
ability of the GA to identify an optimal solution through mathematically defined ideal
convergence as one of the validation requirements. This is because time constraints are the key
determinant of the stopping point of the early design exploration process, and because design
decision-making is inherently based on trade-offs and often subjective choice. Considering the
issue of time, the goal of EEPFD is to provide a design alternative pool with improved performance
by which to support informed design decision-making. Therefore, in this research, success is
defined as the observation that EEPFD consistently provides a design alternative solution pool
characterized by a measurable performance improvement within the time allowed.
5.3.2 THE GA VALIDATION EXPERIMENT DESCRIPTION
This experiment set uses the same collection of 12 scenarios, as tested through the previously
defined six step simulation process, and illustrated in Figure 4-18. For this research, these
scenarios are compiled in order to represent a geometric complexity range, from a simple
orthogonal box, to towers with double curvature and twisting factors. Complexity within program
requirements is also accounted for to include scenarios with single use requirements, such as an
office building, to mixed-use space requirements, including underground parking, retail, hotels,
etc. This spectrum is compiled by the author, as it emulates the types of complexity that real world
design problems might present as challenges to architects, or might be encountered in a design
studio setting.
In order for EEPFD to be considered valid for early stage design, the following two criteria must be
met:
1. The ability to generate a better performing solution space regardless of the stopping
point.
2. The ability to generate a better performing solution space irrespective of varying
complexity.
In response, a series of tests are performed with varying gene combinations, GA settings, and
designated stopping points. The obtained results are examined in detail by analyzing the
performance of each solution space of each generation of each GA run.
The measurement of improvement is assessed by Pareto ranking the solution space based on the
three objective functions (EUI, SPC, and NPV). If EEPFD can continuously generate new Pareto
solutions with the progression of each generation, then EEPFD is determined capable of
generating improved solutions. In addition, each objective’s performance boundaries within the
solution space are used as an additional improvement indicator. According to the measurements
established by this research, the three objective scores of the initial design serve as the basis for
comparison, whereby the performance ranges of the three objective scores are used to represent
the performance of each scenario’s solution pool.
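A minimal sketch of this improvement indicator (assumed logic, not the Beagle’s own
implementation) counts how many members of each new generation enter the cumulative Pareto
front; the dominance convention assumed is maximize NPV and SPC and minimize EUI:

    # Count the new Pareto solutions contributed by each generation.
    def dominates(a, b):
        ge = a[0] >= b[0] and a[1] <= b[1] and a[2] >= b[2]
        return ge and (a[0] > b[0] or a[1] < b[1] or a[2] > b[2])

    def new_pareto_count(archive, generation):
        """Merge a generation into the pool; return how many of its members
        are non-dominated in the combined pool, plus the merged pool."""
        pool = archive + generation
        front = [i for i, p in enumerate(pool)
                 if not any(dominates(pool[j], p)
                            for j in range(len(pool)) if j != i)]
        return sum(1 for i in front if i >= len(archive)), pool

    # Hypothetical (NPV, EUI, SPC) tuples per generation
    archive = []
    for gen in ([(440, 66, 44)], [(470, 60, 70), (450, 64, 50)], [(514, 55, 95)]):
        n_new, archive = new_pareto_count(archive, gen)
        print("new Pareto solutions this generation:", n_new)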
5.3.3 RESULTS & OBSERVATIONS OF THE GA VALIDATION EXPERIMENT
The summary results of all 12 scenarios are presented in Table 5-1. This table shows that, despite
each scenario’s different complexity and GA settings, combined with different stopping points,
the resulting solution space boundaries presented superior results when compared to the initial
design’s performance. This demonstrates that EEPFD is able to generate better performing
solutions for each objective. In this section, the results of three scenarios are presented for an
in-depth discussion. These three are selected from the original twelve based on how well they
represent the design problem complexity spectrum covered by the original set of generated
scenarios.
Scenario 7 employs a simple orthogonal box with an initial energy model surface count of 270
faces, thereby minimizing the expected run times, while still including four program space types
that contribute to the overall design problem complexity. This scenario serves as the base case to
ensure the accuracy of the automatically calculated results for each objective function by
comparing them with manually calculated results. This scenario is also used to verify that the
customized algorithm, i.e., the population, selection, and Pareto rank evaluation, has been
properly encoded by examining the automation loop step by step. With this simple geometry,
H.D.S. Beagle is able to generate 1610 offspring according to the user defined GA settings, as
recorded in Table 5-1, with an average speed of 1.01 minutes per result. Through these
generations, measurable improvements can be observed by comparing the initial design’s
performance with the generated solution space. According to Table 5-1, SPC improves from 31 to
95, EUI from 48 to 42 kBtu/ft²/yr, and NPV from 738 to 741 million USD.
Scenario 8 is selected for its singular program requirement, i.e., office space, and yet broader
range of driving geometric parameters, and hence a different design complexity formulation. The
scenario is limited to geometrically orthogonal elements of moderate complexity, resulting in an
energy model surface count of 1,060 faces, approximately a fourfold increase compared to
Scenario 7. However, it is not limited to a single volumetric extrusion and therefore
introduces volume-to-volume interaction into the analytical process. Based on the explored 13
geometric parameters, H.D.S. Beagle is able to generate potential design solutions that exhibit
improved performance of the defined objectives: SPC from 83 to 100, EUI from 61 to 49
kBtu/ft²/yr, and NPV from 565 to 769 million USD.
Finally, Scenario 5 is selected due to its geometric complexity in order to observe EEPFD’s
performance when required to address a more geometrically complex problem, with an energy
model surface count of 7,517 faces. Scenario 5 is modeled after an existing design studio project,
in order to reflect the type of geometrically complex problems that are encountered in real world
design or design studio applications. While an increased run time per iteration of 28.64 minutes
is recorded, H.D.S. Beagle is still able to provide a design alternative solution pool with improved
performance—SPC from 44 to 95, EUI from 66 to 55 kBtu/ft²/yr, and NPV from 440 to 514 million
USD.
To determine if EEPFD provides such a continuing improvement in solution pool performance, the
design alternatives for the three designated scenarios are recorded over 20 generations, as shown
in Table 5-4. These results indicate that, in each subsequent generation, new Pareto solutions are
established, resulting in a continuous improvement in the performance. Here, this is understood
as better-fit compromises for the set objective function boundary conditions, i.e., those shown in
Table 5-1, of the generated design alternatives. Therefore, it can be confirmed that, if time
constraints dictate the stopping point of the generation of design alternatives, the Pareto
solutions of the latest generation will provide an improvement over the previous generation. As
a result, irrespective of the determined stopping point, EEPFD provides a solution pool with
improved performance for consideration by the designer during the decision making process.
TABLE 5-4: SCENARIO 7, 8, AND 5’S SOLUTION SPACE PERFORMANCE PER GENERATION.
(Values per row are listed in order: Initial, then Generations 0 through 20.)

Scenario 07 - GBS_Run_20120903_2339
  New Pareto Solution No.: N/A, 4, 11, 8, 15, 13, 17, 16, 14, 15, 21, 9, 17, 16, 12, 12, 8, 13, 11, 13, 9, 11
  Pareto Solution (%): N/A, 40.0, 27.5, 20.0, 37.5, 32.5, 42.5, 40.0, 35.0, 37.5, 52.5, 22.5, 42.5, 40, 30, 30, 20, 32.5, 27.5, 32.5, 22.5, 27.5
  Solution Space Improvement*
    NPV (Million USD): 738.0, 472.8, 374.6, 0.0, 69.7, 0.0, 0.0, 7.7, 8.0, 0.5, 0.0, 15.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    EUI (kBtu/ft²/yr): 48.0, 5.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    SPC: 31.4, 38.4, 9.7, 6.3, 8.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0

Scenario 08 - GBS_Run_20120902_1929
  New Pareto Solution No.: N/A, 9, 7, 8, 12, 15, 16, 17, 15, 17, 17, 16, 14, 16, 13, 15, 15, 12, 18, 13, 12, 13
  Pareto Solution (%): N/A, 90.0, 35.0, 40.0, 60.0, 75.0, 80.0, 85.0, 75.0, 85.0, 85.0, 80.0, 70.0, 80.0, 65.0, 75.0, 75.0, 60.0, 90.0, 65.0, 60.0, 65.0
  Solution Space Improvement*
    NPV (Million USD): 565.3, 28.7, 2.0, 47.6, 48.5, 35.6, 41.6, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    EUI (kBtu/ft²/yr): 61.0, 9.3, 1.6, 0.0, 0.5, 0.0, 0.4, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    SPC: 83.2, 12.9, 1.3, 1.1, 1.2, 0.0, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0

Scenario 05 - GBS_Run_20121018_1517
  New Pareto Solution No.: N/A, 2, 2, 5, 5, 4, 7, 5, 6, 8, 6, 5, 7, 7, 5, 4, 1, 2, 3, 2, 0, 3
  Pareto Solution (%): N/A, 20.0, 16.7, 41.7, 41.7, 33.3, 58.3, 41.7, 50.0, 66.7, 50.0, 41.7, 58.3, 58.3, 41.7, 33.3, 8.3, 16.7, 25.0, 16.7, 0.0, 25.0
  Solution Space Improvement*
    NPV (Million USD): 439.5, 171.8, 90.4, 77.7, 3.5, 28.3, 38.5, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 8.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    EUI (kBtu/ft²/yr): 66.3, 8.9, 0.0, 0.0, 1.3, 0.0, 0.0, 0.0, 1.4, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
    SPC: 43.9, 23.6, 8.6, 16.3, 0.0, 0.5, 0.0, 1.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0

*Solution space improvement is measured by comparison with the prior generation; the “Initial” entry in each improvement row lists the initial design’s objective score.
5.3.4 SUMMARY OF THE GA VALIDATION EXPERIMENT
The two critical components for validating EEPFD as a design methodology suitable for further
evaluation during early stage design are (1) the ability to provide an improved solution space
within the available or comparable time limit, and (2) the ability to adapt to a wide spectrum of
design scenarios, i.e., geometric complexity. Through the presented experimental runs, EEPFD
demonstrates the ability to fulfill both of these critical components. As illustrated in Table 5-4,
the framework was able to provide an improved solution space from one generation to the next,
thereby ensuring an improved solution space independent of the user-defined cutoff time,
assuming that at least two generations (Gen 0 and Gen 1) are allowed to be populated. As
illustrated in Figure 5-1 and Table 5-1, EEPFD demonstrates the ability to be adaptable to a wide
spectrum of design scenarios, from simple boxes to complexly curving and interacting towers,
while providing a solution space with an improved performance for each. This trend of
improvement is observed throughout the twelve hypothetical scenarios tested through EEPFD to
date. Therefore, since both critical components are addressed, this research determines that
EEPFD can be considered suitable for further study. Subsequently, this research proceeds to
investigate EEPFD’s usability by designers outside of the original research team to establish
whether designers can actually incorporate and utilize the generated data and design alternatives
to support their design decision-making during the early stages of the design process.
5.4 HYPOTHETICAL CASE-BASED EXPERIMENT III: COMPLEXITY VS. PERFORMANCE EXPERIMENT A
5.4.1 THE COMPLEXITY VS. PERFORMANCE EXPERIMENT A OBJECTIVE
The purpose of this experiment set is to observe the impact of complex geometry on performance,
i.e., NPV, EUI, and SPC, and to demonstrate the potential value of the results generated by EEPFD
to support design decision-making.
5.4.2 THE COMPLEXITY VS. PERFORMANCE EXPERIMENT A DESCRIPTION
Scenario 5, with an initial design composed of a twisting double tower, is selected due to its
geometric complexity in order to observe the performance of the established framework when
confronted with a more geometrically complex problem. This scenario is modeled after an existing
design studio project, in order to reflect the type of geometrically complex problems that might
be encountered in real world design applications. This hypothetical scenario is defined as
possessing a twisting double tower divided by a wind canyon atop a shared plinth. The parametric
model is composed of four program space types—parking, retail, office, and hotel—along with
nine geometric driving parameters, as shown in Appendix D.5. Once the design problem is
formally defined, the initial model is used by the Beagle to generate and breed design alternatives.
In addition to the defined nine driving geometry parameters, the parameters available for
exploration include window opening percentage, shade depth, and skylight percentages. Figure
5-5 summarizes the explored parameters and parametric ranges of the generated data set.
Run ID: GBS_Run_20121106_0644

GA Settings
  Initial population = 10
  Population size = 12
  Crossover ratio = 0.6
  Mutation ratio = 0
  Max. iteration no. = 50
  Selection size = 12
  Actual iteration no. = 30

Explored parameters & ranges
  Level#AboveGround [11, 18]
  Level#UnderGround [1, 3]
  TwistAngle [0, 90]
  TopSetback [1, 10]
  ScaleFactor [0.8, 1.25]
  CanyonWidth [12, 30]
  BaseSetback [6, 15]
  TargetPercentageGlazing [0.2, 0.83]
  ShadeDepth [0, 4.5]
  TargetPercentageSkylights [0, 0.45]
FIGURE 5-5: SUMMARY OF THE GA RUN FOR SCENARIO 5 INCLUDING PARAMETERS, EXPLORED RANGES, AND
USER DEFINED GA SETTINGS.
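As a minimal sketch of how such a parameter set can be encoded for the GA (hypothetical helper
code mirroring the names and ranges of Figure 5-5, not Beagle source), each design alternative is
simply a sampled value per explored parameter:

    # Encode the explored parameters of Figure 5-5 as genes and sample an
    # initial population. (Hypothetical sketch, not Beagle code.)
    import random

    # (name, lower bound, upper bound, is_integer), per Figure 5-5
    GENES = [
        ("Level#AboveGround", 11, 18, True),
        ("Level#UnderGround", 1, 3, True),
        ("TwistAngle", 0, 90, False),
        ("TopSetback", 1, 10, False),
        ("ScaleFactor", 0.8, 1.25, False),
        ("CanyonWidth", 12, 30, False),
        ("BaseSetback", 6, 15, False),
        ("TargetPercentageGlazing", 0.2, 0.83, False),
        ("ShadeDepth", 0.0, 4.5, False),
        ("TargetPercentageSkylights", 0.0, 0.45, False),
    ]

    def random_alternative():
        """One design alternative: a value for every explored parameter."""
        return {name: (random.randint(lo, hi) if is_int
                       else random.uniform(lo, hi))
                for name, lo, hi, is_int in GENES}

    initial_population = [random_alternative() for _ in range(10)]  # = 10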
5.4.3 RESULTS & OBSERVATIONS OF THE COMPLEXITY VS. PERFORMANCE EXPERIMENT A
The average time needed to obtain the analysis results for a single design alternative was
approximately 28.64 minutes. This measurement includes the time required for updating the
geometry, sending the updated geometry to the cloud-based GBS server for energy analysis,
receiving the requested energy analysis results, extracting the necessary data from the updated
geometry, and calculating the three objective scores based on the analysis results and the
extracted data. After a run consisting of 20 generations, a measurable improvement in the
solution space performance relative to that of the initial design model was observed: the NPV
improved from 440 to 514 million USD, the EUI
improved from 66 to 55 kBtu/ft²/yr, and the SPC improved from 44% to 99% compliance. At the
end of the run, 63 Pareto solutions were identified out of the 250 generated design alternatives
during the 120 hours of active runtime. Each design alternative is provided with a 3D image
visualization, along with three analyzed objective scores and three trade-off data plots, which
indicate the performance of the alternative, as illustrated in Figure 5-6. These results
demonstrate that the Beagle can provide the opportunity for designers to explore a broader range
of alternatives with more data and performance feedback support than conventionally available.
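To make this evaluation sequence concrete, the sketch below mirrors the loop in Python with stub
functions (all names are hypothetical stand-ins for the Beagle/Revit/GBS calls; only the order of
steps follows the text):

    # Per-alternative evaluation loop, with hypothetical stubs for each step.
    from dataclasses import dataclass

    @dataclass
    class EnergyResults:
        eui: float  # kBtu/ft²/yr

    def update_parametric_model(genes):        # stub: drive Revit parameters
        return {"floor_area": 86000.0, **genes}

    def run_cloud_analysis(geometry):          # stub: round trip to GBS
        return EnergyResults(eui=55.0)

    def net_present_value(geometry, results):  # stub: financial model
        return 514.0  # million USD

    def spatial_program_compliance(geometry):  # stub: vs. program requirements
        return 99.0

    def evaluate_alternative(genes):
        """Update geometry, run the energy analysis, score the objectives."""
        geometry = update_parametric_model(genes)
        results = run_cloud_analysis(geometry)
        return {"EUI": results.eui,
                "NPV": net_present_value(geometry, results),
                "SPC": spatial_program_compliance(geometry)}

    print(evaluate_alternative({"TwistAngle": 45.0}))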
Further examination of the generated data revealed trends and relationships between the
selected parametric values and the performance scores. One observed point of interest is that
two of the explored driving parameters, CanyonWidth and TwistAngle, demonstrate a reduced
impact on estimated EUI values compared to TargetPercentageGlazing. Moreover, it can be noted
that, in this specific scenario, SPC possesses a noticeable sensitivity to the value of FloorNumber.
This indicates that each modifiable parameter has an uncoupled but significant effect on each
defined objective. As a result, the research suggests that uncertainty can be decreased by
conducting a sensitivity analysis of each parameter. Another finding is that offspring with better
EUI values exhibited glazing area ratios of around 10-20%. These results demonstrate that the
availability of the solution space and the solutions’ performance for comparison and sensitivity
analysis can enable designers to understand the impact of each parameter on a specific
performance value. As such, this data can assist designers in making informed decisions.
Figure 5-7 illustrates a subset of Scenario 5, which includes the highest-ranking design alternatives
for each objective from the overall solution space. The first column shows the eight design
alternatives with the highest NPV value; those with the lowest EUI value are shown in the
second, and the third presents those with the best design score. The last column shows the design
alternatives with the best ranking when considering all three objectives according to the Pareto
ranking method. This is another example confirming that EEPFD is able to provide the context
for designers to make informed decisions based on the project objectives. For example, if
Offspring ID 0448 is selected from the solution pool, it can be selected with the knowledge that
the trade-off for its superior NPV score is an approximately 10% increase in EUI versus the lowest
observed EUI, that of Offspring ID 0457.
FIGURE 5-6: PARALLEL GEOMETRIC AND ANALYTICAL DATA VISUALIZATION ILLUSTRATING 3 2D PLOTS OF DATA
SET FOR 3 OFFSPRING OF SCENARIO 5.
FIGURE 5-7: SUBSET OF SCENARIO 5 DATA ILLUSTRATING THE HIGHEST-RANKING DESIGN ALTERNATIVES FOR
EACH OBJECTIVE FROM THE OVERALL SOLUTION SPACE.
5.4.4 SUMMARY OF THE COMPLEXITY VS. PERFORMANCE EXPERIMENT A
The ability of EEPFD to provide feedback when confronted with complex problems can be
considered critical to the overall effectiveness of the framework when applied to the architectural
design process. Since mental rules of thumb or simple simulation calculations are often less
effective when applied to more complex design problems, the ability to measure the impact of
geometric variations on the NPV, EUI, and SPC of alternative designs is typically minimized.
Thereby, in this context, EEPFD provides the potential ability of exceeding conventional means of
generating and analyzing design alternatives. In addition, this experiment demonstrates that the
generated results can provide the context for designers to make informed decisions during the
early stage of the design process. Figure 5-8 illustrates a subset of Scenario 5’s Pareto solution
space. The highlighted scenarios represent the designer’s final selection. While the selection of a
particular design alternative for the final scenario may not be based solely on the optimized
performance for any or all of the three objectives, EEPFD provides an opportunity for
understanding the impact of a particular decision on the overall expected performance of a
chosen design. As a result, complex geometric design alternatives can be analyzed both
aesthetically and with respect to their performative properties through EEPFD.
FIGURE 5-8: AN EXAMPLE OF A SUB SET OF THE SOLUTION SPACE FOR SCENARIO 5 IN WHICH THE MULTI-
OBJECTIVES —EUI, NPV, AND SPC—ARE CALCULATED, RANKED, AND VISUALIZED FOR EASE OF
MANUAL DECISION MAKING.
5.5 HYPOTHETICAL CASE-BASED EXPERIMENT IV: COMPLEXITY VS. PERFORMANCE
EXPERIMENT B
5.5.1 THE COMPLEXITY VS. PERFORMANCE EXPERIMENT B OBJECTIVE
The second set of experiments pertaining to the issue of complexity vs. performance aims to
demonstrate the potential value in the ability to explore increasingly complex geometry. It also
assesses the impact of such geometry on the overall expected performance of the generated
design solution space, with particular interest regarding the impact on calculated EUI
performance.
5.5.2 THE COMPLEXITY VS. PERFORMANCE EXPERIMENT B DESCRIPTION
Scenario 10 is chosen as the test scenario to explore the impact of introducing geometric
complexity on objective performance, i.e., NPV, EUI, and SPC. The experiment begins with the
creation of initial designs with varying degrees of complexity. As illustrated in Figure 5-9, Model
(a) is the base model formed by extruding square profiles from the bottom of the site plane. The
width of the square profile, W, is driven by the SiteSetback parameter. For Model (b), there is an
additional top profile to define the building form. The top profile is driven by the bottom profile
through a scale parameter, ScaleTop. Model (c) increases the complexity of Model (b) by adding
a middle profile, which is driven by the base square profile through another scale factor,
ScaleMid. Lastly, Model (d) is the original Scenario 10, in which an additional rotation parameter,
R1, is added to Model (c) to define the rotation angle of the middle and top profiles.
After the model preparation, all aforementioned models are tested through EEPFD, using the
same GA settings, energy parameters, and variation ranges. The only difference is the increasing
number of their corresponding driving geometric parameters. The settings of each run are
summarized in Table 5-5.
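A minimal geometric sketch of this buildup (hypothetical helper code, not the Revit parametric
model itself) shows how each model adds one driving parameter to the profile stack:

    # Profiles for Models (a)-(d): (a) bottom square only; (b) adds ScaleTop;
    # (c) adds ScaleMid; (d) adds rotation R1 applied to the mid and top profiles.
    import math

    def square(width, rotation_deg=0.0):
        """Corner points of a centered square profile, optionally rotated."""
        r, half = math.radians(rotation_deg), width / 2.0
        pts = [(-half, -half), (half, -half), (half, half), (-half, half)]
        return [(x * math.cos(r) - y * math.sin(r),
                 x * math.sin(r) + y * math.cos(r)) for x, y in pts]

    def profiles(site_width, setback, scale_top=None, scale_mid=None, r1=0.0):
        """Bottom/(middle)/(top) square profiles driving the building form."""
        w = site_width - 2 * setback  # profile width W from SiteSetback
        stack = [square(w)]
        if scale_mid is not None:
            stack.append(square(w * scale_mid, r1))
        if scale_top is not None:
            stack.append(square(w * scale_top, r1))
        return stack

    model_d = profiles(150.0, setback=30.0, scale_top=1.5, scale_mid=0.8, r1=90.0)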
5.5.3 RESULTS & OBSERVATIONS OF THE COMPLEXITY VS. PERFORMANCE EXPERIMENT B
Figure 5-10 illustrates the results obtained from this experiment set. The base run of Model (a)
provides a benchmark energy performance EUI level of 145 kBtu/ft²/yr. The solution space range
for Model (a), when varying only the three energy-related parameters, was 101 to 236 kBtu/ft²/yr.
Following the introduction of geometry-based parameter variations, this range improved to 46 to
132 kBtu/ft²/yr. Subsequently, increases in the complexity of the geometry were introduced in
Models (b), (c), and (d). The results depicted in Figure 5-10 indicate that the solution space of
Model (d), the most geometrically complex of the four models, provides improved energy
performance as measured by the calculated EUI values. Model (d) also provides a solution space
with improved performance for the NPV and SPC objectives beyond the levels reached with the
first three models.
FIGURE 5-9: ILLUSTRATION OF SCENARIO 10’S MODELS A, B, C, AND D WITH INCREASINGLY GEOMETRICALLY
COMPLEX INITIAL DESIGNS, THEIR PARAMETERS, AND EXPLORED PARAMETRIC RANGES.
TABLE 5-5: EXPLORATION SETTINGS OF COMPLEXITY VS. PERFORMANCE EXPERIMENT B

Site Location: 8788 W Sunset Blvd, West Hollywood, CA 90069, USA
Site Constraints: Lot Area = 22,500; FAR = 6.5; Height = 0~180 ft
Program Requirement: Office = 20,000; Hotel = 20,000; Retail = 11,000

Explored Parameters (Type/Unit)       Base Run     Model (a)    Model (b)    Model (c)    Model (d)
Target Percentage Glazing (Number)    [0.2, 0.83]  [0.2, 0.83]  [0.2, 0.83]  [0.2, 0.83]  [0.2, 0.83]
Shade Depth (Length/ft)               [0, 4.5]     [0, 4.5]     [0, 4.5]     [0, 4.5]     [0, 4.5]
Target Percentage Skylight (Number)   [0, 0.45]    [0, 0.45]    [0, 0.45]    [0, 0.45]    [0, 0.45]
RetailFr# (Integer)                   N/A          [1, 5]       [1, 5]       [1, 5]       [1, 5]
OfficeFr# (Integer)                   N/A          [2, 5]       [2, 5]       [2, 5]       [2, 5]
HotelFr# (Integer)                    N/A          [2, 5]       [2, 5]       [2, 5]       [2, 5]
SiteSetback (Length/ft)               N/A          [15, 60]     [15, 60]     [15, 60]     [15, 60]
ScaleTop (Length/ft)                  N/A          N/A          [0.5, 2]     [0.5, 2]     [0.5, 2]
ScaleMid (Length/ft)                  N/A          N/A          N/A          [0.5, 2]     [0.5, 2]
R1 (Angle/°)                          N/A          N/A          N/A          N/A          [0, 180]
5.5.4 SUMMARY OF THE COMPLEXITY VS. PERFORMANCE EXPERIMENT B
Through this experiment it can be observed that, despite Model (d) possessing the most complex
geometry of the four (Figure 5-10), it also provides the solution with the best energy performance
result (EUI = 44 kBtu/ft²/yr). This preliminary experiment demonstrates the potential benefit of
exploring increasingly complex geometry in order to identify both potentially higher and lower
performing design alternatives as the basis of decision-making. Another possible application of
EEPFD in need of further study stems from observing the distinct needs of early stage design
versus those of other industries or later design phases. When design requirements and energy
parametric settings were identical but conceptual designs varied considerably, a wide range of
resulting performance boundaries was observed. This implies a direct relationship between the
initial conceptual design, with its set variations, and the resulting performance boundaries
outlining the potential performance levels of the generated design iterations. While the
application of MDO to other fields may be made with the intent of optimizing a single design,
early conceptual architectural design demands diversity. Therefore, the ability of EEPFD to rapidly
determine the performance potential of multiple competing conceptual designs for the same
design requirements may be more applicable than pursuing a single optimized solution space.
However, the full impact of this observation on EEPFD is in need of further study.
FIGURE 5-10: OBSERVED IMPROVEMENTS FOUND IN THE ENERGY PERFORMANCE FEEDBACK FOR SCENARIO 10 AS
THEY RELATE TO THE GEOMETRIC COMPLEXITY OF THE INITIAL DESIGN.
5.6 HYPOTHETICAL CASE-BASED EXPERIMENT V: THE EEPFD BEST PRACTICE
5.6.1 THE EEPFD BEST PRACTICE EXPERIMENT OBJECTIVE
The GA validation experiment revealed that, during the GA run, the optimal performance
boundaries of each objective score can typically be obtained after only a few generations. This
implies that the performance potential of each design scenario could be identified prior to
reaching an optimal solution defined by mathematical convergence. If this supposition is true, the
performance boundaries could provide the designer with the desired context in which to gauge
any individual solution’s performance. Moreover, they may be more relevant to supporting early
design decision-making than the often overpopulated Pareto solution pool. As a result, this
experiment set focuses on observing the timing of obtaining the optimal performance objective
score boundaries for each design problem.
5.6.2 THE EEPFD BEST PRACTICE EXPERIMENT DESCRIPTION
This experimental process is identical to the previously described GA validation procedure
involving the hypothetical design scenarios; however, the observational focus is shifted to
answering the question:
When are the optimal EUI, NPV, and SPC boundaries reached for different design problems during
the GA run?
To answer this question, and to observe the timing of obtaining the performance boundary
conditions, each test scenario is run until the maximum defined generation is reached, or until
the previously defined termination criteria are met.
5.6.3 RESULTS & OBSERVATIONS OF THE EEPFD BEST PRACTICE EXPERIMENT
This research first examines the results from the GA validation experiment, as shown in Table 5-4.
As can be observed, the optimal performance boundary, or maximum potential, for each domain
can be obtained in fewer than 10 generations through the GA. For example, in Scenario 5, the EUI
and SPC show no measurable improvement after the 8th and the 6th generations, respectively.
After these boundaries have been established, the Pareto curve is more densely populated over
the subsequent generations. As Figure 5-11 illustrates, using Scenarios 7, 8, and 5 as examples,
the most significant percentage improvement is typically observed in the first five generations.
FIGURE 5-11: THE PERCENTAGE IMPROVEMENT TREND OF SCENARIOS 7, 8, AND 5’S SOLUTION SPACE RANGE
OVER EACH GENERATION, CALCULATED WITH RESPECT TO THE PRIOR GENERATION.
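A small sketch of this observation used as a stopping heuristic (assumed logic; the tolerance is
illustrative) identifies the generation after which an objective’s boundary stops improving, using
per-generation improvement rows like those in Table 5-4:

    # Find the generation after which an objective's boundary stops improving.
    def boundary_generation(improvements, tolerance=0.0):
        """Index of the last generation whose improvement exceeds `tolerance`."""
        last = -1
        for gen, delta in enumerate(improvements):
            if delta > tolerance:
                last = gen
        return last

    # Scenario 5's EUI improvements for generations 0-20 (Table 5-4)
    eui = [8.9, 0, 0, 1.3, 0, 0, 0, 1.4, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    print(boundary_generation(eui))  # -> 8: no EUI gains after the 8th generation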
To further confirm this finding, a sample of the translated conceptual designs initially generated
by students in Pedagogical Experiment I Part B was employed in retesting these scenarios, with a
focus on recording the generation number at which the optimal performing boundary can be
identified. It should be noted that these initial designs were developed in response to identical
design requirements and an identical quantity of allowable design parameters. Further
descriptions and conditions of Pedagogical Experiment I, including the original design problem’s
definition, are provided in Section 7.2.
Table 5-6 provides a summary of the scenarios used as the test sample in this experiment. The
results show that, for most of the tested scenarios, the performance boundary can be obtained
before the 15th generation, with the exception of Scenario ST12, in which the EUI’s performance
boundary is reached at the 22nd generation. However, following an in-depth examination of the
data for Scenario ST12, a solution with EUI = 41.82 kBtu/ft²/yr can be found at the 4th generation,
while the boundary condition identified at the 22nd generation is EUI = 41.80 kBtu/ft²/yr. This
indicates that the marginal gain obtained at the 22nd generation might not be significant enough
to warrant running more than four generations. As a result, the current testing results can be
interpreted as supporting the previously identified finding that the performance boundaries can
be identified in fewer than 10 generations. However, further research and analysis need to be
conducted in order to understand whether the number of explored parameters and the
generation number at which the boundaries can be identified are correlated.
TABLE 5-6: SUMMARY OF THE BEST PRACTICE EXPERIMENT RESULTS.
During the early stages of the design process a variety of initial conceptual designs are typically
explored, ranging considerably with respect to geometric compositions. Figure 5-12 provides a
sample of the type of range that may be explored during the early stages of design for a particular
design problem. Unfortunately, typical optimization methods are only used to optimize one of
these solutions, once a conceptual design has been determined to be of interest and in need of
further development. However, a significant difference in the potential performance of varying
conceptual designs generated for the same design problem has been observed, as demonstrated
in Table 5-6. While, using MDO, an improvement from the initial design’s performance was
achieved in each case, a limit to the potential for further improvement was noted. For example,
the highest performing design for ST01 is only capable of reaching an NPV score of 275 million
USD while the highest performing NPV for ST03 is 461 million USD. Therefore, it can be
extrapolated that a medium performing design solution for ST03 could be on par with a high
performing design solution for ST01. These findings further support the potential relevance of
using EEPFD during the early stages of design, since EEPFD can provide the ability to rapidly
explore a wide variety of geometrically diverse conceptual designs and generate the context in
which the low or high performance potential of designs can be identified and pursued in
accordance with the design problem’s objectives.
FIGURE 5-12: A SAMPLE OF THE GEOMETRIC DIVERSITY OF POTENTIAL DESIGN SOLUTIONS FOR A SINGLE DESIGN
PROBLEM. IMAGES PROVIDED BY DAVID GERBER.
5.6.4 SUMMARY OF THE EEPFD BEST PRACTICE EXPERIMENT
Based on the current research results, the limits on the performance potential of a solution space
can be identified in fewer than 10 generations, as opposed to reaching the mathematical
convergence criteria, which typically require run times of up to 1,000 generations. Consequently,
the performance boundaries of a solution space are much more rapidly available to provide the
context in which to gauge any individual solution than otherwise possible. In addition, these
boundaries can be considered more relevant to the early stage design process, when various
conceptual designs are being explored, than the potential of any individual design solution. Even
under the strict conditions of Pedagogical Experiment I Part B, which limited conceptual designs
to a single volume with a fixed number of allowable design parameters, a wide range of the
conceptual design performance boundaries was observed. Therefore, there is the potential that,
with a greater diversity between explored conceptual designs, there would be an even more
dramatic range in the performance potential between competing conceptual designs. This
potential in performance would be a new variable that EEPFD would be able to add to the design
decision-making process, which has previously not been offered through current tools and
methods. However, this subject is in need of further investigation, research, and definition of
practical application to real world design scenarios.
5.7 SUMMARY OF HYPOTHETICAL CASE-BASED EXPERIMENTS
The purpose of EEPFD is to use MDO to provide the context in which easily accessible energy
performance feedback can be included in the constant trade-off analysis inherent to the early
stages of the design process. Through the discussed experimental runs, the Beagle repeatedly
demonstrates the ability to provide a greater number of design alternatives for consideration,
with continually improving performance, than is typically available through conventional
methods. In this way, this research successfully reduced the design cycle latency between the
design and
energy analysis domain by automating and integrating the platforms of the design, energy
simulation, and financial models. The availability of the analysis pertaining to these three domains
provides the means by which to perform an extensive trade-off study otherwise unavailable at
the early stage of design.
In addition, this research demonstrates that, as parametric design problems increase in their
complexity, so do the benefits of using EEPFD. As the pool of potential design solutions increased
exponentially, H.D.S. Beagle was able to provide a sampling means that extended beyond
conventional manual approaches. As a result, an improvement in the performance of the
generated solution space for geometrically complex design problems is observed.
Overall, the tested cases demonstrate a successful generation of solution pools according to the
designer-defined exploration method. H.D.S. Beagle was able to successfully handle broken
geometry and modify space programming within the envelope while still varying the building
envelope itself. However, technical issues regarding run time, allowable surface count, and other
concerns, as discussed in Section 5.2, still restrict the application of H.D.S. Beagle. These issues
are expected to be resolved with the advancement of available technology.
H.D.S. Beagle is still in the beta phase. As such, there are certain issues that have yet to be resolved.
One pertains to the current notion of “optimization”, which typically refers to a mathematical
convergence between competing objectives (Goldberg 1989). However, the concept of
optimization can originally be described as the ability “to attain what is Best once one knows how
to measure and alter what is Good or Bad” (Choi and Loftness 2012). H.D.S. Beagle is not currently
able to provide the solution space representing the mathematical convergence of objectives, but
rather provides a means of measuring and altering what is “Good or Bad” through repeated use
of the customized GA in order to identify overall improved performance potential. The intent is to
provide an expanded solution pool by which to support informed design decision-making under
the assumption that higher performing design is an overall goal. However, it should be noted that,
with the improvement of technology, the identification of convergence solution pools, or the
Pareto front solutions, is possible.
Future avenues for research on EEPFD include the handling of parameters and optimization of the
GA for architectural design problems. Of particular interest is the process of translating design
problems into formulaic parameters that can be explored through EEPFD. Since some elements
of design are resistant to such a translation, a full investigation of these elements is underway in
order to maximize potential benefits and to more accurately gauge the relevance of EEPFD to the
design process. A second continuation of the research presented here includes further
development of the optimization algorithm, data visualization, and designer interfaces.
Overall, this series of hypothetical case-based experiments validates the GA used by EEPFD and
the Beagle and demonstrates EEPFD’s ability to be applied to a wide spectrum of design problems. In
addition, these experiments validate the premise on which a proposed best practice application
of EEPFD to early stage design is based. Since all validation and critical component criteria, as
previously defined, are met, EEPFD can be considered suitable for further study regarding its
usability by designers and relevance of generated energy performance feedback to the early
stages of the design process.
CHAPTER 6 DESIGN PROFESSION CASE-BASED EXPERIMENT
6.1 INTRODUCTION OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT
After the determination that EEPFD is suitable for further exploration by designers during the
early stages of the design process, and inspired by the work of Gerber (2007), this research
continues to evaluate EEPFD through a design profession case-based experiment, in which design
professionals are the primary drivers of the design process. The purpose of this experiment is to
address the gap identified in precedents of MDO applications, which have previously been used
primarily by engineers and research teams. Furthermore, this experiment is intended to support
the research aim of understanding whether the energy performance feedback provided through
EEPFD is able to influence and support designers’ decision-making. Therefore, the professional
case-based experiments are conducted as a means to initially observe the interaction between
EEPFD and designers and to address the third research objective, as described in Chapter 1,
namely “to evaluate the impact of the proposed framework and availability of energy
performance feedback on the early stage design process.”
As briefly described in Section 3.6, the design of the professional case-based experiment utilizes
a series of semi-structured interviews and consistent experiment steps, thus allowing all the
collected cases and data to be compared and contrasted against each other. While only one
professional case-based experiment is conducted during this research period, the same method
and process can be repeated to collect additional data in the future, which would provide further
support for the present findings. Fundamentally, the drivers of the overall design of the
experiment are the following research questions:
- What is the impact of the proposed framework on the early stage decision process?
- How do EEPFD and the provided energy performance feedback support designers’
decision-making?
The case presented in this section is a design competition project for a net zero energy school
design. During the competition, the designers utilized three approaches to acquire energy
performance feedback for the purpose of assisting in their design decisions. The three approaches
employed by the designers were in-house energy analysis, collaboration with MEP consultants,
and the use of EEPFD. Through a comparative study of these three approaches adopted by the
designers, the applicability and impact of EEPFD during the early stages of the design process is
presented.
6.2 METHOD AND PROCESS OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT
In order to provide a consistent basis for comparison, the six-step simulation process, as described
in Chapter 1 and 3, between the design and energy simulation domains is applied to all simulation
processes throughout this experiment. In addition, this six-step process serves as the driving
factor in determining the data collection method and the analysis approach during the design
profession case-based experiment.
The target cases for the design profession case-based experiment are cases from architectural
design firms that have previously utilized various means of obtaining energy performance
feedback for their design decision-making. While the design phase in which the energy simulation
is utilized is not restricted to the early design stage, early stage use is preferred. In addition, the
ideal design professionals should also be able to participate in, and apply, the new approach,
EEPFD, to their
existing cases or to their new project. If design firms and their projects meet these requirements,
they can be selected for participation in this experiment. Once the participants are selected, the
design profession case-based experiment can proceed, based on the case study structure, as listed
in Appendix B.1, through the three stages described below:
1. Pre-EEPFD
This stage is conducted in order to understand the typical design process of the subject firm when
involving energy performance feedback prior to the introduction of EEPFD. To do so, a request for
information (RFI) letter is drafted with a brief description of the research intent and a non-
disclosure form, prepared to avoid inclusion of any sensitive materials. In addition, a case study
framework and a list of required information, as listed in Appendix B.1, are structured as
guidelines for ensuring that all relevant data is collected. The RFI comprises two categories, one
for general firm and project information, and the other for research-specific information. The
general firm and project information includes firm size, project typology, project location, design
team structure, collaboration method, etc. The research-specific information includes design
intent, project graphics, digital models, and applied technologies. The information is designed to
be collected through semi-structured interviews with the design teams. A full list of all requested
information for both categories can be found in Appendix B.1.
2. EEPFD
This stage is performed in order to help the design team incorporate EEPFD into their design
process. The actual EEPFD implementation method depends on the firm's background, in terms of
familiarity with Revit, parametric design, and energy simulation. During the implementation
period, a member of the research team, who is an expert at implementing EEPFD, serves as a
consultant, helping the firm to resolve any difficulties. The author served as a consultant for the
experiment conducted during this research period, which also facilitated collection of all relevant
data during the EEPFD implementation process. The EEPFD implementation process can be
broken into four steps: (1) to present and educate the design team in understanding the
mechanism behind EEPFD; (2) to help the design team identify appropriate applications to the
selected project and determine each application’s approach; (3) to provide the necessary support
during the design team’s implementation of EEPFD, such as assistance in formulating their design
problem into a compatible format with EEPFD; and (4) to help the team utilize the generated data,
as well as understand whether the EEPFD-generated data can support their decision-making, and
whether their decision-making is influenced by the energy performance feedback through EEPFD.
Through these steps, the applicability of EEPFD by designers can be observed.
3. Post-EEPFD
After the EEPFD implementation stage, another semi-structured interview is conducted with the
design team members, in order to collect their feedback regarding EEPFD and any final thoughts
they have regarding the process they have experienced. The information collected at this stage
can serve as the means for evaluating the framework and provide suggestions for future
improvements. The outline of the structured questions can be found in Appendix 0.
6.3 DESIGN PROJECT BACKGROUND
This section presents a professional case-based experiment of a design competition project for an
educational net zero energy building (ZEB) design. ZEB can be more strictly defined in the context
of site energy, source energy, energy cost, or energy emissions (Torcellini et al. 2006). However,
irrespective of the definition or metrics used, the design concept of a net ZEB is to design a
building that maximizes energy efficiency in order to minimize overall energy demands, which are
then met through renewable energy resources. Therefore, the maximum reduction in energy
demand achievable through building design can be considered as the fundamental first priority
design criterion for ZEB projects. For this reason, the contribution of energy analysis tools takes
on a more prominent role in the process of designing a net ZEB (Hayter et al. 2001) and should
thus be integrated early in the process, when design decisions have the greatest impact on the
overall expected building performance. In this way, maximizing efficiency opportunities can be
developed prior to the exploration of renewable energy plans. This need is in accordance with the
experiment participants’ requirements outlined in Section 6.2, designating this project as a
suitable experiment subject for the EEPFD implementation.
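Under the site energy definition, for example, this net zero condition can be written as a simple annual balance. The following is a standard formulation consistent with the definitions cited above, offered for orientation rather than as this project's specific metric:

```latex
% Net ZEB (site energy): over one annual cycle, on-site renewable
% generation must offset the building's total energy consumption.
\[
  \sum_{t=1}^{8760} E_{\mathrm{consumed}}(t) \;-\; \sum_{t=1}^{8760} E_{\mathrm{generated}}(t) \;\le\; 0
\]
```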
The initial competition design requirements were to provide a K-12 school design with
approximately 30,000 square feet of usable program space using a method that facilitates easy
adaptability to multiple sites throughout the greater Los Angeles area. While a specific site was
still designated as the intended project site, flexibility was insisted upon, not just due to the
multiple site adaptability requirement, but also to allow for future reconfiguration for various
educational uses, such as library, media center, open education space, multi-purpose room, or
food services. In addition to these requirements, the designers decided to expand upon the
provided sustainability considerations to pursue a net ZEB configuration for each site.
Based on the designers’ timeline, the development of the project spanned through two successive
competition entries that can be separated into three major stages: (I) schematic conceptual
proposal; (II) schematic design to design development; and (III) design enhancement, as
illustrated in Figure 6-1. The first two stages of the design development were intended to meet
the requirements of the initial competition, as previously outlined. The third stage of the design
development was used to refine the design for entry to a global competition. During design stage
I, the designers proposed a concept of assembling a building with pre-engineered readymade kit-
of-parts core and shell components to provide the flexibility necessary to accommodate different
site conditions or space requirements, as shown in Figure 6-2. This kit-of-parts concept permits
flexible façade configurations to achieve the best building performance based on different site
locations and orientations. Figure 6-3 provides a sample of possible design variations that
demonstrate how the kit-of-parts concept can accommodate different site orientations, space
type requirements, and solar screen patterns. After being selected as a finalist, the designers
proceeded to advance their design towards achieving net ZEB. At this point in the design
development, energy performance simulation was introduced as a part of the design process to
facilitate the demonstration of how the proposed kit-of-parts concept can achieve ZEB in practice.
As illustrated in Figure 6-1, three approaches were adopted by the designers to acquire energy
performance feedback regarding the design’s overall energy consumption through an estimated
energy use intensity calculation. Other building environmental performance components, such as
natural daylighting and thermal comfort, were included for consideration as a part of the final
design evaluation in pursuit of the net ZEB goal. The first two approaches, in-house analysis and
MEP consultant collaboration, were conducted as a part of the design stage II, while EEPFD
implementation was included in the design stage III.
Despite the introduction of energy performance feedback during the schematic phase of the
design, its incorporation was not utilized to explore form configuration. Instead, energy
performance feedback was limited to assisting in façade configuration through combining
desirable kit-of-parts components, such as skylights, solar screens, light shelves, etc., along with
optimal compositions of these components for varying site conditions. As a result, all three energy
performance feedback approaches focused on one standard classroom unit as the analysis target,
instead of including the whole building analysis as a part of the exploration process. The whole
building energy analysis was conducted at the end of the design stage to ensure code compliance
and to establish whether the final design met the design goals.
During this comparative study, the energy simulation process was broken down into six steps,
which were used in all three approaches in order to provide a consistent basis for comparison.
The data required for qualitative and quantitative analysis regarding the design problem, process,
and product was collected and compiled into the established metrics, previously detailed in
Chapter 3.4. Table 6-1 summarizes the data recorded during these exploration processes.
FIGURE 6-1: TIMELINE OF THE COMPETITION PROJECT AND THE CORRESPONDING IMPLEMENTATION STAGE FOR
EACH OF THE THREE ENERGY PERFORMANCE FEEDBACK APPROACHES.
FIGURE 6-2: THE PRE-ENGINEERED KIT-OF-PARTS PROTOTYPE DESIGN CONCEPT OF THE NET ZEB SCHOOL
DESIGN. IMAGE COURTESY OF SWIFT LEE OFFICE (2012). THE FULL SIZE IMAGE CAN BE FOUND IN
THE POSTER PREPARED FOR THE COMPETITION (SWIFT LEE OFFICE 2012).
FIGURE 6-3: EXAMPLES OF VARIOUS KIT-OF-PARTS CONFIGURATIONS COMPILED FOR DIFFERENT SITE
ORIENTATIONS, SPACE TYPES, SPACE ARRANGEMENTS, AND SOLAR SCREEN PATTERNS. IMAGE
COURTESY OF SWIFT LEE OFFICE (2012).
TABLE 6-1: SUMMARY OF COLLECTED DATA FOR DESIGN PROFESSION CASE-BASED EXPERIMENT.
Recorded Data (Data Type)

Design Problem Measurement
  Project Complexity:
    1. Project size: ft²
    2. Space type number: Number
  Design Complexity:
    1. Energy model surface count: Number
    2. Explored parameter numbers: Number/descriptive

Design Process Measurement
  Speed:
    1. Time spent to create design geometry: Minutes
    2. Time required to transfer to energy model: Minutes
    3. Time required to clean up energy model: Minutes
    4. Time required to run energy analysis: Minutes
    5. Performance feedback time per result: Minutes

Design Product Measurement
  Feedback method:
    1. Feedback number per day: Number
    2. Feedback information: Descriptive

Actor
  Actor Experience:
    1. Parametric model experience: Descriptive
    2. Energy simulation domain experience: Descriptive
6.4 ENERGY SIMULATION FEEDBACK APPROACHES & COMPARISON
6.4.1 IN-HOUSE ANALYSIS PROCESS
During the in-house analysis process, the designers explored different design ideas through
varying mediums and platforms, including hand sketches, Rhino, and CAD. During this design stage,
the author worked as the in-house specialist, assisting in conducting building performance
analyses of various design scenarios, as provided by the designers. Two major building
performance analyses were performed by the author for the purpose of providing performance
feedback and supporting designers’ decision-making, namely: (1) daylighting analysis and (2)
whole building energy analysis. For each design alternative provided by the designers, the author
first created a 3D Revit model based on the supplied 2D CAD drawings from which to run the desired
simulations. For daylighting analysis purposes, a *.dxf format 3D model was exported from the
built Revit model (*.rvt) and imported into Autodesk® Ecotect® 2011, as shown in Figure 6-4. For
the whole building energy simulation, a simplified Revit 3D model with defined space properties
was exported as a *.gbXML format energy model and imported into eQuest for the subsequent
energy analysis.
[Figure 6-4 panels: Revit model; Ecotect daylight analysis (average performance); Ecotect daylight analysis (March 20th, 9AM); light shelves only; 40% perforated metal screen.]
FIGURE 6-4: EXAMPLES OF DAYLIGHTING ANALYSES FOR DIFFERENT FAÇADE STRATEGIES, SUCH AS LIGHT
SHELVES, LOUVERS, AND PERFORATED METAL SCREENS.
For both lighting and energy analysis, the initial analysis model required manual rebuilding of
necessary building components in Revit from scratch. In addition, energy/lighting simulation
attributes were manually entered into both Ecotect and eQuest individually, since there is no
interoperability between the utilized design platforms and the energy simulation platforms. This
process required approximately six hours to complete as a part of the initial analysis. Once the
initial analysis was obtained, the design was manually altered in both design and energy
simulation platforms according to designers’ directions. Once adjustments were made, another
analysis was run in order to determine any measurable differences between the designs. The time
required to generate the design alternatives and their energy models was dependent on the type
and extent of modifications to the initial design requested by the designers. According to the
recorded data, on average, the modification process required approximately four hours per
design alternative. Due to the time-consuming nature of this process, only four scenarios were
fully analyzed. As a result, the in-house process was unable to keep up with the pace of design, or
provide the ability to isolate direct cause and effect of the design changes to the expected
performance in order to assist with design decision-making. Therefore, the in-house analysis
could only be used in validating design decisions, as opposed to expanding the set of explored
design alternatives.
6.4.2 MEP CONSULTANT COLLABORATION PROCESS
Following the in-house analysis process, an MEP engineer was employed to assist in generating a
more efficient design configuration using the designer-proposed adaptable module. The
assumption was that the MEP engineer would be able to provide a more efficient starting point
for the design, assist in providing suggestions for optimizing the design’s geometric configuration,
and finally provide the necessary HVAC strategies for achieving a ZEB design. During the process,
the designers provided the MEP consultant with only one design scenario, including basic space
layout and the schematic space programming composition. The MEP engineer then extracted the
relevant data before proceeding with the energy calculations using a proprietary spreadsheet
system. This proprietary spreadsheet system was the MEP engineer’s major design iteration tool
in understanding the impact of design alternatives on energy performance and the relationship
between different parameters, such as thermal comfort, solar heat gain, natural ventilation, and
natural daylighting. In this case, the spreadsheet system served as the energy model, with building
geometry recorded in a text format. According to the engineer, the initial data transfer from the
designer-supplied scenario to the spreadsheet with code-compliant energy property setup
required approximately three hours to complete. For subsequent design alternatives, the
manipulation of the single classroom’s energy-related parameters in the spreadsheet took about
15 to 20 minutes per iteration. After approximately 20 iterations were explored, the MEP engineer,
based on their accumulated experience, compiled the feedback for the designer in the form of
design guidance regarding building components’ thermal properties, window area ratio, shade
depth, skylight placement, and HVAC systems, as illustrated in Figure 6-5.
FIGURE 6-5: DESIGN GUIDANCE FOR ACTIVE AND PASSIVE DESIGN STRATEGIES SUMMARIZED BY THE DESIGNERS
BASED ON THE INFORMATION PROVIDED BY THE MEP CONSULTANT. IMAGE COURTESY OF SWIFT
LEE OFFICE (2012). FULL SIZE IMAGE CAN BE FOUND IN THE POSTER PREPARED FOR THE
COMPETITION (SWIFT LEE OFFICE 2012).
During this collaboration process, the MEP engineer was able to provide suggestions for
optimizing systems within a single design configuration within a day, whereby the guidance was
based on the designers' design strategies. While the provided guidance assisted the designers in
meeting the design goal, it could not support the designers' understanding of the impact of their
design decisions, especially when confronted with a new design configuration not covered by the
provided guidance. Instead, the new configuration required another round of iterations
with the MEP engineer and another set of design guidelines. Due to the level of detail provided in
each set of design guidance and the inclusion of the expert domain of the MEP engineer, the
designers were able to advance the design to the next stage of development. Therefore, in
comparing the in-house analysis with that of the guidance and support from the MEP engineer,
the latter was observed to be more thorough and complete for achieving the net ZEB design goal.
6.4.3 EEPFD PROCESS
The implementation of EEPFD was introduced to the designers during the third stage of their
design development. While energy performance feedback was made available through the prior
two approaches, the ability of these approaches to provide relevant information at the speed
necessary for supporting the designers’ rapid determination of optimal configurations for
different site conditions was still undetermined. As a result, the implementation of EEPFD was
explored and researched by the designers and the research team to understand whether EEPFD
could provide a suitable alternative approach.
Two experiments were implemented through EEPFD during this stage, namely (1) the overall
façade configuration vs. orientation, and (2) the detailed façade configuration of a standard
classroom unit. For both explorations, the overall design form remained fixed. As a result, the
optimization algorithm was used to find the best compromise between minimizing energy use
intensity and maximizing the net present value.
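In this setting, the "best compromise" is a Pareto set rather than a single optimum. A standard statement of the problem, written with NPV negated so that both objectives are minimized over the space X of parametric configurations, is given below; this is the generic bi-objective form consistent with the objectives named above, not a transcription of H.D.S. Beagle's internal representation:

```latex
% Bi-objective formulation (NPV negated so both objectives are minimized):
\[
  \min_{x \in X} \bigl( f_1(x),\, f_2(x) \bigr), \qquad
  f_1(x) = \mathrm{EUI}(x), \quad f_2(x) = -\,\mathrm{NPV}(x)
\]
% Standard Pareto dominance relation used to compare candidates:
\[
  x \prec y \iff f_i(x) \le f_i(y)\ \text{for } i = 1,2
  \ \text{and}\ f_j(x) < f_j(y)\ \text{for at least one } j
\]
```

The optimizer's output is then the set of non-dominated configurations, from which the designers select a compromise.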
The first experiment was conducted by the author as an exemplary showcase to demonstrate
the potential benefit of the tool to the designers. During this experiment, EEPFD was utilized to
explore the optimal configuration of each building side’s glazing area ratio, sill height and depth
of shading devices for five different overall site orientations—0, 22.5, 45, 67.5, and 90 degrees
from true north. In this experiment, a minimum of 500 design configuration alternatives were
generated for each site orientation within six hours, as opposed to the average analysis time of
four hours per iteration required by the in-house process.
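For a sense of scale, such an exploration can be organized as a scripted sweep of the façade parameters for each orientation. The sketch below is illustrative only: the parameter names and ranges are hypothetical stand-ins, and H.D.S. Beagle samples this space adaptively through its GA rather than uniformly at random.

```python
import random

# Hypothetical facade parameter ranges (illustrative, not the project's exact setup).
RANGES = {
    "glazing_ratio": (0.1, 0.9),    # glazing area ratio per building side
    "sill_height_ft": (0.0, 5.0),   # sill height of openings
    "shade_depth_ft": (0.0, 5.0),   # depth of shading devices
}
ORIENTATIONS = [0.0, 22.5, 45.0, 67.5, 90.0]  # degrees from true north

def sample_design():
    """Draw one candidate configuration uniformly within the ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

def build_pool(n_per_orientation=500):
    """Generate candidates per orientation; in EEPFD each candidate would
    be sent to the energy simulation for an EUI/NPV evaluation."""
    pool = []
    for bearing in ORIENTATIONS:
        for _ in range(n_per_orientation):
            design = sample_design()
            design["orientation_deg"] = bearing
            pool.append(design)
    return pool

print(len(build_pool()))  # 2500 candidates: 500 for each of the five orientations
```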
Following the showcase experiment, the second experiment was conducted in order to emulate
the process of EEPFD’s implementation by the designers. The aim was to obtain optimal
configurations of the kit-of-parts module for the project’s future development. During this
process, the designers were first required to formally define their design problem in the form of
a parametric model. As with the previous two approaches, the design problem definition was
limited to optimizing one standard classroom unit using the defined kit-of-parts. As parametric
design had not been a part of the designers’ practice prior to this experiment, the author served
as the consultant to assist in the translation of the design into a parametric model. Due to
unfamiliarity with parametric modeling, the Revit design platform, and the inherent limitations of
both, a week and four iterations were needed before the parametric model could be finalized.
The parameters explored for the façade configuration pertained to customized opening sizes,
solar screen depth, density, and mounting distance from the building, as illustrated in Figure 6-6.
Following the completion of the parametric model, necessary supplemental information
regarding financial estimates, material properties, etc., was compiled by the author. In order to
closely emulate the future implementation process, the financial model of this experiment was
calibrated according to the actual project cost estimation. In addition, the material and HVAC
assignments were based on the guidelines provided by the MEP consultant. Figure 6-7 illustrates
the entire EEPFD implementation process.
FIGURE 6-6: THE FINAL PARAMETRIC MODEL OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT.
FIGURE 6-7: ILLUSTRATION OF THE EEPFD IMPLEMENTATION PROCESS. IMAGE COURTESY OF SWIFT LEE OFFICE
(2012). FULL SIZE IMAGE CAN BE FOUND IN THE POSTER PREPARED FOR THE COMPETITION (SWIFT
LEE OFFICE 2012).
Using EEPFD, 12 GA runs with varying parametric ranges, GA settings, and stopping points were
completed. As a result, 2,082 design alternatives were generated, each requiring less than one
minute to complete. The solution space improved from the initial EUI = 70.08 to 69.30 kBtu/ft²/yr
and NPV from -0.51 to -0.48 million USD. After the completion of the runs, the author provided
the final trade-off analysis (as illustrated in Figure 6-7), along with 3D design images, assisting the
designers in their final decision-making. According to the generated data, while an improved
quantity of feedback over the prior two approaches was evident, the designers requested more
guidance for discerning desirable results from the abundantly populated solution pool. However,
the designers indicated a positive response to the inclusion of 3D imaging of all the design
alternatives, along with the energy performance feedback, which was not available through either
the in-house analysis or through the MEP engineer. As a result, the designers were able to include
aesthetic preference in their trade-off analysis when examining the generated results.
6.5 SUMMARY OF THE DESIGN PROFESSION CASE-BASED EXPERIMENT
This chapter presented a design profession case-based experiment of three approaches to include
energy performance feedback during a prototypical school design with the goal of net ZEB. While
these three approaches of in-house analysis, collaboration with a MEP engineer, and EEPFD
cannot necessarily be considered as comparable based on their respective inputs/outputs, several
trends can be identified. First, the level of detail necessary for each approach was found to
influence the applicability of each to the early stages of design and its ability to support design
decision-making. The feedback received from the MEP engineer was considerably more detailed
and narrower in scope than that of the in-house analysis or EEPFD. However, the MEP engineer
provided general design guidance, rather than performance analysis of a specific design iteration.
In this respect, only EEPFD provided feedback specific to a particular design alternative in a
manner enabling the designers to gauge the impact of their design decisions on the design’s EUI.
Second, the time necessary for the implementation of any of these approaches must be
considered. The in-house analysis required approximately four hours per design alternative, while
EEPFD required less than a minute. While MEP results were received from the engineer within a
day, they were only applicable to a single base design with limited available alternatives. Finally,
only EEPFD provided an automated trade-off analysis among competing objectives. In addition,
despite the MEP engineer’s guidance, the actual components of the design were left to the
designer to optimize. EEPFD, however, provided the ability to analyze and explore various
combinations of these components rapidly in order to isolate desirable configurations. Figure 6-8
summarizes the quantifiable measurements collected during this design profession case-based
experiment.
Based on these trends, EEPFD presents the most potential for providing the rapid performance
feedback necessary to pursue the maximum reduction in energy demand achievable through
conceptual design exploration, previously defined as the first priority design criterion for ZEB projects.
However, the prototype currently used by EEPFD is limited in scope with respect to the inclusion
of performance considerations relevant to net ZEB design. For example, natural daylighting is not
included in the objectives, or as contributing to the EUI calculation. If these additional
performance considerations can be included, EEPFD’s rapidly provided feedback could support
design decision-making in pursuit of net ZEB design.
It should be noted that EEPFD is intended to be implemented during the conceptual phase of
design. However, in this design profession case-based experiment, EEPFD was introduced during
the design development stage, at the time when the range of exploration of interest had already
been previously narrowed with the guidance of the MEP engineer. As a result, in this case, the
EEPFD implementation is considered as a means of fine-tuning a previously optimized design.
Thus, it would be interesting to observe the implementation of these three explored approaches
for including energy performance feedback concurrently, rather than consecutively, as this would
allow assessing the effect of each on the early stage design process and the yielded design
performance results.
FIGURE 6-8: PROJECT COMPARATIVE PROCESS MAPS WITH INITIAL OBSERVATIONS.
CHAPTER 7 PEDAGOGICAL CASE-BASED EXPERIMENTS
7.1 INTRODUCTION OF PEDAGOGICAL CASE-BASED EXPERIMENTS
This chapter continues to address the gap found in the precedents of MDO applications. As these
have previously been limited to primary use by engineers or research teams, the incorporation of
MDO into the design process by designers was largely unexplored. In addition, this research aims
to determine whether the provided energy performance feedback through EEPFD is able to
influence and support designers’ decision-making. Thus, inspired by the work of Gerber and Flager
(2011) and Flager, Gerber, and Kallman (2014), this research continues to evaluate EEPFD through
a series of three pedagogical case-based experiments, in which students are the primary users.
While each of the three pedagogical experiment sets conducted as a part of this research possesses
individual sub-objectives and observational focuses, the fundamental design of the experiment sets
is to address the third research objective of evaluating “the impact of the proposed framework
and availability of energy performance feedback on the early stage design process”, as described
in Chapter 1. Therefore, the pedagogical experiments are designed to address the following two
research questions:
- What is the impact of the proposed framework on the early stage decision process?
- How do EEPFD and the provided energy performance feedback support designers’
decision-making?
In order to answer these two questions, the pedagogical experiment sets are devised to address
a series of sub-questions, as laid out in Table 7-1. The following is an overview of
the three pedagogical experiment sets conducted by this research as part of EEPFD's process
evaluation:
1. Pedagogical Experiment I: Computational Design Tool Course Setting
The first pedagogical experiment is conducted within a design computational tool course
environment, with students who are in the process of learning both simulation tools and
their application to design. The interest of this experimental set is to observe any
measurable effects of the introduction of EEPFD, absent the element of automation enabled by
H.D.S. Beagle. In addition, the aim is to assess the students' ability
to translate their design intent into a parametric model for further exploration.
2. Pedagogical Experiment II: Design Studio Setting
The second pedagogical experiment set is conducted within an emulated design studio
setting where students are instructed in the application of EEPFD for use in conceptual
design exploration. The aim is to observe the impact of the implementation of EEPFD on
this emulated conceptual design studio.
3. Pedagogical Experiment III: Computational Design Tool Workshop Setting
The third experiment set is designed with the focus on expanding the data pool of the
first two pedagogical experiment sets in order to further validate the observations made
during the two prior experiments. This experiment set also expands the measurements,
in order to understand the feedback and observe how the EEPFD-generated data and
energy performance feedback can support and influence design decision-making.
The following sections provide the objectives, methods, and processes for these experimental
sets, along with observed results and summary.
TABLE 7-1: THE RESEARCH QUESTIONS OF THE PEDAGOGICAL CASE-BASED EXPERIMENTS AND THE CORRELATED
EXPERIMENT SET.
Main Research Questions, Sub-questions, and Correlated Experiment Set (PE = Pedagogical Experiment):

1. What is the impact of the proposed framework on the early stage decision process?
   1.1. Does EEPFD improve the process in terms of feedback time, analysis speed, and generated solution pool size? (PE I & III)
   1.2. Does EEPFD assist the designer in identifying higher performing design when compared to the process without EEPFD? (PE I, II & III)
   1.3. What is the usability of the framework to the designers?
      1.3.1. Can designers formulate their problem? (PE I & II)
      1.3.2. What effort is needed for designers to formulate their problem? (PE I & II)
      1.3.3. Does the formulated parametric model fully express their design intent? (PE I & II)
      1.3.4. Can designers execute the Beagle? What is the length of the learning curve? (PE II)
2. How do EEPFD and the provided energy performance feedback support designers' decision-making?
   2.1. Can the provided data support designers' decision-making? (PE II & III)
   2.2. Does the energy performance feedback provided through EEPFD influence designers' decision-making? (PE II & III)
7.2 PEDAGOGICAL EXPERIMENT I: COMPUTATIONAL DESIGN TOOL COURSE SETTING
7.2.1 THE PEDAGOGICAL COURSE EXPERIMENT OBJECTIVES
Pedagogical Experiment I is conducted within a computational design course setting, with
students who are in the process of learning both simulation tools and their application to design.
This experiment is divided into Part A and Part B, each utilizing the same user group and same
platforms over which the experiments are conducted.
Part A is conducted in order to compare the application of EEPFD without the automation element
(and therefore reliant on human decision-making) against the application of EEPFD with the
automation element (reliant on MOO decision-making) to the same design problem. This first part
of the experiment is conducted with the purpose of understanding the human decision-making
process and contrasting it with the evolutionary process of the MOO. For
comparison purposes both the exploration times and the quality of the generated design solution
pools are documented.
Part B is conducted in order to observe the ability of the user group, in this instance the design
students attending the course, to translate their design intent into a parametric model before
proceeding with the process utilized in Part A. Once completed, comparisons are made with
respect to the geometric diversity of the initial designs provided by students, the relationship of
each initial design and the quality of the resulting solution space. In addition, comparisons are
also made between the resulting solution space provided by the students and that generated by
applying EEPFD to each student’s initial parametric design.
7.2.2 THE PEDAGOGICAL COURSE CASE EXPERIMENTAL BACKGROUND
This experiment was conducted through the course entitled Arch507: Theories of Computer
Technology during the Spring Semester of 2012, as taught by Prof. Karen Kensek at the School of
Architecture, University of Southern California (SoA USC). The course was first offered in the
Spring Semester of 2008 to both undergraduate and graduate students of architectural design,
building science, structural engineering, and construction management disciplines. This course,
consisting of sixteen weekly three-hour lectures, was originally conceived to introduce BIM with
Autodesk® Revit® as the major tool platform. The course subsequently included other related up-
to-date design technologies and functionalities, such as parametric modeling, interoperability
with other analytical tools for sustainable design, construction documentation, and 4D
construction simulations. The organization and course content was altered each semester
according to the most relevant up-to-date computational technologies and functionalities, as
determined by Prof. Karen Kensek. In 2011, the course began to include conceptual massing and
conceptual energy simulations for early stage design exploration. This addition, entitled BIM
Analytics, provided the opportunity to implement the Pedagogical Experiment I as a part of the
standing curriculum.
The original objective of the BIM Analytics course was to introduce the students to the analytical
functionality, as provided by the Autodesk Project Vasari (Vasari) Platform. This content included
conceptual massing, parametric modeling, solar radiation, wind tunnel, and energy analysis. The
intent was not to teach students how to conduct in-depth simulations, but rather to allow each
individual to explore the tool’s capabilities in providing different types of simulated analysis
results. While not explicitly covered during the course, this information was presented as a
potential means of assisting in design decision-making in pursuit of higher performing building
design for the students’ future use. This topic, among others, as explicitly addressed as a part of
the modified curriculum as necessary to implement the Pedagogical Experiment.
During the development of EEPFD, the author served as a Teaching Assistant for Arch507 during
the Spring Semesters of 2010, 2011, and 2012, under the instruction of Prof. Karen Kensek. During
this period, the author was made familiar with all course materials, objectives, and lectures. With
the support of Prof. Karen Kensek, the details of Pedagogical Experiment I and the method of
implementing it through the BIM Analytics lecture section were developed by the author. The
original proposal can be found in Section E.1, while the detailed experiment design is described
in Section 0. After finalizing the experiment design, Pedagogical Experiment I was conducted on
March 23, 2012.
The three-hour class lecture was separated into two sessions. The first session was delivered by
the instructor and provided an introduction to the analytical functions of Vasari. The second
session was instructed by the author and is the experiment’s starting point. The lecture content
of the second session is presented in further detail in Section 0.
According to the organization of the Arch507 course, the Pedagogical Experiment I was
conducted during the 10th week of the 2012 Spring Semester. At this point in time, students had
been introduced to all necessary platforms and basic parametric modeling techniques for a
minimum of three months. Of the 27 students enrolled in the ARCH507 course, 26 participated in
the experiment, as one was absent at the time. However, one colleague, who was not enrolled in
the class, subsequently volunteered to participate in the experiment. Therefore, the resulting study
population included 27 individuals, of whom 17 were Master of Architecture candidates, 8 were
Master of Building Science candidates, 1 was an undergraduate student pursuing a Minor in
Architecture, and 1 was a Master of Building Science graduate. The
background and previously acquired experience of each participant is provided in Table 7-2.
TABLE 7-2: SUMMARY OF STUDENTS’ BACKGROUNDS FOR THE PEDAGOGICAL EXPERIMENT I.
ID | Background | Learn Revit® (Month) | Learnt Other BIM or Parametric Tool (Y/N) | Time (Month) | Other Tools Learnt
ST01 | MArch, BSArch | 6 | Y | 24 | AutoCAD, Grasshopper, Rhino
ST02 | MArch | 4 | Y | 12 | Grasshopper
ST03 | MArch | 6 | N | N/A | N/A
ST04 | MArch | 6 | Y | 12 | Grasshopper
ST05 | MArch, BSArch | 3 | Y | 24 | Grasshopper, Panelling Tools, T-Splines
ST06 | MBS | 3 | N | N/A | N/A
ST07 | MArch, BSArch | 3 | Y | 12 | Grasshopper
ST08 | MArch, BArch | 12 | Y | 24 | Digital Project, Grasshopper, Rhino scripting
ST09 | MBS, BS in Civil Engineering | 6 | N | N/A | N/A
ST10 | MArch | 4 | N | N/A | N/A
ST11 | MArch, BSArch | 5 | Y | 24 | Grasshopper, Maya, Processing, Rhino scripting
ST12 | MArch | 3 | N | N/A | N/A
ST13 | MArch, BArch | 6 | Y | 6 | Digital Project
ST14 | MBS | 3 | N | N/A | N/A
ST15 | MArch, BSArch | 24 | Y | 6 | Grasshopper
ST16 | MBS, BArch | 24 | Y | 2 | Grasshopper
ST17 | MArch, BArch | 3 | N | N/A | N/A
ST18 | MBS, BArch | 3 | N | N/A | N/A
ST19 | MBS, BArch | 3 | N | N/A | N/A
ST20 | MBS, BArch | 2 | Y | 36 | ArchiCAD
ST21 | MArch, BS in Design | 12 | Y | 48 | ArchiCAD, Digital Project, Grasshopper, Rhino
ST22 | B.S. in Business Administration, Minor in Architecture | 6 | Y | 6 | Rhino
ST23 | MBS, BArch | 60 | Y | 24 | ArchiCAD, Grasshopper
ST24 | MArch | 12 | Y | 24 |
ST25 | MArch, BS in Environmental Design | 12 | Y | 12 | ArchiCAD, Digital Project, Grasshopper
ST26 | MArch, BS in Architecture and Economics | 12 | Y | 12 | Digital Project, Grasshopper
ST27 | MBS, BArch | 1 | N | N/A | N/A
Note:
BS: Bachelor of Science
BArch: Bachelor of Architecture
BSArch: Bachelor of Science in Architecture
MArch: Master of Architecture
MBS: Master of Building Science
7.2.3 THE PEDAGOGICAL COURSE CASE EXPERIMENTAL DESIGN
The experiment aims to achieve two major goals, one of which is the previously stated experiment
objective of serving as a benchmark case for the evaluation of EEPFD. The second goal is to
introduce the participating students, via this lecture and assignment, to the process of using
analytical results and assess their effect on design decision-making. In addition, the students
are afforded the opportunity to explore their design options within the context of Designing-In
Performance, where multiple competing objectives require consideration. Therefore, the
composition and content of Pedagogical Experiment I are required to provide students the
necessary knowledge and skills as needed to fulfill these two objectives. In addition, the design of
the assignment needs to provide the means by which students can record all necessary data for
cross-comparison purposes.
To the end of fulfilling the first outlined objective of serving as a benchmark case against which
EEPFD can be evaluated, the simulation process used by EEPFD requires adjustment in order to
be suitable for use in Pedagogical Experiment I. Figure 7-1 illustrates both the simulation process
used by EEPFD and the adjusted process designed for Pedagogical Experiment I. The significant
simulation process difference is predicated on the fact that the pedagogical experiment is run in
a classroom setting, where students do not have real-time or parallel access to H.D.S. Beagle.
Therefore, the last three steps need to be performed manually by the participants, rather than
automated through the GA-based multi-objective optimization process provided through
H.D.S. Beagle.
Once the simulation process is determined, the experiment is grouped into three major activities:
(1) course lecture and hands-on practice by students; (2) assignment; and (3) obtaining data from
the students for analysis and cross-comparison. As all activities are designed to utilize the same
platforms as EEPFD, students’ exploration results can be considered comparable to those
obtained through EEPFD. The three platforms utilized are Revit, Excel, and GBS. Revit serves as
the parametric design platform, while Excel serves as the platform for calculation of the three
design objective scores. Finally, GBS is used, as accessed through Revit’s Conceptual Energy
Analysis, as the energy simulation engine, capable of generating all the necessary energy analysis
results. In order for students’ generated design alternatives and the performance results to be
comparable with the results generated by the Beagle, the Excel objective function calculator is
provided using the same SPC and NPV formulas as the Beagle. Furthermore, in order to ensure the
appropriate comparability of all generated results, both participants and the Beagle are provided
identical design requirements, as illustrated in Figure 7-2.
FIGURE 7-1: SIMULATION PROCESS COMPARISON BETWEEN EEPFD & PEDAGOGICAL EXPERIMENT.
FIGURE 7-2: THE GIVEN DESIGN REQUIREMENT FOR THE PEDAGOGICAL EXPERIMENT I.
Despite the students’ prior three months of experience using Revit and parametric modeling
components, instruction regarding the use of Revit to translate design intent into a parametric
model/form, methods of conducting massing studies, and the use of Revit Conceptual Energy
Analysis is needed as part of the implementation of Pedagogical Experiment I. As a result, the
contents of the lecture and assignment are divided into four major portions:
1) Parametric Modeling: The method by which to create a parametric model using Revit’s
conceptual massing environment.
2) Objective Function Calculation: The method by which to conduct Conceptual Energy Analysis,
identify relevant information within the Revit model, and input this information into the Excel
objective function calculator, in order to calculate the three objective functions—EUI, SPC, & NPV.
3) Design Exploration: The process through which students can manipulate their parametric
model and explore their design alternatives.
4) Design Evaluation: The method of evaluating design alternatives according to the Pareto Rank
Evaluation method (a sketch of this ranking logic follows the list).
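The ranking follows the standard non-dominated sorting idea: rank-1 designs are those not dominated by any other design in the pool; removing them and repeating yields rank 2, and so on. Below is a minimal sketch, assuming each alternative is reduced to the triple (EUI, -SPC, -NPV) so that all three objectives are minimized; it is a generic illustration of the method, not H.D.S. Beagle's implementation.

```python
def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly
    better on at least one, with all objectives expressed as minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(scores):
    """Assign a Pareto rank to each objective vector: rank 1 is the
    non-dominated front, rank 2 the front after removing rank 1, etc."""
    remaining = set(range(len(scores)))
    ranks = {}
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return [ranks[i] for i in range(len(scores))]

# Hypothetical alternatives as (EUI, -SPC, -NPV); lower is better on every axis.
pool = [(89.0, -95.0, -60.0), (75.0, -80.0, -20.0),
        (95.0, -40.0, -70.0), (90.0, -90.0, -55.0)]
print(pareto_ranks(pool))  # [1, 1, 1, 2]: the last design is dominated by the first
```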
A supplemental assignment handout that provides both a step-by-step tutorial of the entire
assignment and worksheets containing all requested documented data is provided to the students
prior to the lecture. The complete handout can be found in Appendix E.2. In addition to the
handout, students are also supplied a default Revit parametric model and an Excel file containing
an objective function calculator and digital worksheet for recorded results. After acquiring all
necessary supplemental materials, students are led by the author through the previously
described four lecture portions.
The first portion centers on providing step-by-step instructions to generating a parametric model
in class, so that students have the necessary skills to translate design intent into a parametric
model. The parametric model taught is the hypothetical case Scenario 10, as illustrated in Figure
7-3. The second portion aims to instruct the students in the necessary steps to obtain the three
objective function scores for any generated design. According to the experiment and utilized tool
platforms, the first objective function, EUI, is obtained by utilizing the Conceptual Energy Analysis
(CEA) function provided by Revit. The second and third objective functions, SPC and NPV, are
obtained by inputting the mass information and the analyzed energy use information into the
provided objective calculator in the given Excel file. The detailed tutorial presented can be found
in the handout in Appendix E.2 and is described later in this section. Next, students are instructed
in how to manipulate the parametric values of the given Revit model in order to generate design
alternatives before repeating the previously described analysis procedure. Finally, students are
introduced to the Pareto Rank Evaluation method in order to provide the means of identifying
“better” designs according to this experiment’s definition. At the end of the lecture, students are
expected to have the ability to (1) generate a parametric model from scratch; (2) calculate the
three objective functions utilizing the Revit’s mass scheduling, CEA analysis results and the given
Excel objective function calculator; (3) explore design alternatives by manipulating parametric
values; and (4) determine performance of their design alternatives according to the definition of
the experiment.
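The Excel calculator is essentially a deterministic post-processor of the CEA output and the mass schedule. The sketch below shows the general shape of such a calculator; the NPV uses a textbook discounted cash flow and the SPC a simple area-match ratio, both of which are illustrative stand-ins rather than the Beagle's exact formulas, and in practice the EUI is reported directly by the CEA rather than recomputed.

```python
def eui(annual_energy_kbtu, floor_area_ft2):
    """Energy use intensity: annual energy use normalized by floor area.
    Shown for completeness; CEA reports this value directly."""
    return annual_energy_kbtu / floor_area_ft2

def spc(provided_areas, required_areas):
    """Illustrative spatial programming compliance: average percentage match
    of provided to required area per space type, capped at 100% per type."""
    ratios = [min(provided_areas[k] / required_areas[k], 1.0) for k in required_areas]
    return 100.0 * sum(ratios) / len(ratios)

def npv(initial_cost, annual_net_income, discount_rate, years):
    """Textbook discounted cash flow NPV; a stand-in for the calculator's
    financial model, not the Beagle's exact formula."""
    return -initial_cost + sum(
        annual_net_income / (1.0 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical inputs for one design alternative:
print(eui(2.1e6, 30000))                           # 70.0 kBtu/ft2/yr
print(spc({"office": 9000.0, "retail": 4000.0},
          {"office": 10000.0, "retail": 4000.0}))  # 95.0 percent
print(round(npv(12e6, 1.5e6, 0.06, 20)))           # ~5.2 million USD
```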
FIGURE 7-3: INITIAL PARAMETRIC MODEL FOR PEDAGOGICAL EXPERIMENT I AS PROVIDED TO EACH STUDENT.
After the lecture, students are asked to complete the tasks outlined in the provided handout. This
includes a questionnaire documenting students’ previous experience in both parametric modeling
and the use of Revit, an assignment divided in two primary parts (Part A and Part B), and a final
questionnaire documenting students’ experience throughout the entire experiment.
Part A of the assignment asks students to explore the given parametric design in the provided
Revit file with the objective of identifying a higher performing design alternative within the given
parametric ranges. In this way, students are able to generate design alternatives through a manual
interpretation of EEPFD, thereby providing results suitable for cross comparison purposes. Part B
asks the students to generate their own design intent regarding the provided design requirements
and translate it into a parametric model. This facilitates the observation of the translation process
of the design intent into a parametric model, which is a critical component of EEPFD. For both Part A
and Part B, students are required to document their exploration process, including varied
parameter values, calculated objective scores, time spent on calculating objective scores and
exploration process map. In addition, the documentation of the design intent and the
parameterization process map is required for Part B.
Table 7-3 summarizes the recorded data for the entire assignment. The full details of the
assignment can be found in Appendix E.2. For the purpose of documenting their thought and
exploration process, students are asked to provide an illustration of their process. An illustration
is also requested as a means of documenting the parameterization process and problem
formulation. However, strict guidelines regarding the formatting of these acceptable illustrations
are not provided. Despite this variation in the formatting of the collected data, these illustrations
are deemed viable for inclusion in the subsequent review and analysis.
TABLE 7-3: SUMMARY OF THE RECORDED DATA FOR PEDAGOGICAL EXPERIMENT I.
Recorded Data (Data Type)

Background Questionnaire:
  1. Education background: Enumeration
  2. Experience using Revit: Number (Time)
  3. Experience using other parametric software: Y/N, Number (Time), Enumeration

Part A & Part B:
  1. Explored parametric values for each iteration: Number
  2. The three objective function scores of explored design alternatives: Number (EUI, SPC, NPV)
  3. Time required to obtain and calculate each objective function: Number (Time)
  4. Time required to obtain the final design: Number (Time)
  5. Design exploration process: Image

Additional Recorded Data in Part B:
  1. Three design intents prior to modeling their own parametric model: Description
  2. Design parameterization process map: Image
  3. Parametric model design: Revit File
  4. Created parameters: Enumeration
  5. Initial value and variation range of each parameter: Numbers
  6. Parameters' ability to represent the desired design intent: Y/N, Percentage

Final Questionnaire:
  1. Do you think the parametric modeling can help you to explore a greater number of design iterations more effectively? Why?: Description
  2. Do you think the feedback results (design, financial, and energy) can support your decision-making?: Description
For comparison purposes, an adjusted benchmark process based on EEPFD’s workflow, as
outlined in Section 4.4, is developed. An in-depth description of the adjusted benchmark process
for the Pedagogical Experiment I is provided below.
Step 1 Generate Design
In Part A of this experiment, the initial design is provided, so that the generation of design
alternatives is based on the manual manipulation of the provided parameters within their set
parametric ranges. Students are given nine parameters by which to explore potential design
alternatives, of which five drive the overall building form and geometry and four are built-in
energy parameters available through the Revit conceptual energy analysis platform. The provided
model and parametric ranges are illustrated in Figure 7-3. Figure 7-4 provides a sample of the
worksheet given to students to record their progress.
In Part B of this experiment, while the students are provided the same design requirements, each
initial design is generated by the students individually. The generation of design alternatives is
based on the manual manipulation of these designated parameters within their set parametric
ranges. Students are allowed to designate three form-driving parameters, combined with two
required form-driving parameters, for a total of five form-driving parameters and their ranges. All
students’ designs and their set parameters and parametric ranges are given in Appendix 0. The
built-in energy parameters, as defined in Part A, remain unchanged. Students are asked to modify
the provided worksheet as needed, in order to record their decision-making process.
FIGURE 7-4: THE WORKSHEET PROVIDED TO STUDENTS TO RECORD THEIR DECISION MAKING PROCESS.
Step 2 Transfer Model
Once a design alternative has been generated, students are asked to analyze the performance of
their new design option by generating an energy analysis through Revit using the conceptual
analysis tool that sends and retrieves the resulting energy analysis from Green Building Studio.
Step 3 Modify Energy Model
This step is bypassed as the model transfer is automatically provided by Revit to Green Building
Studio.
Step 4 Run Analysis
Energy use analysis results provided by Green Building Studio are generated automatically once
the conceptual energy analysis is requested through Revit. Thus, once these are acquired,
students can record the EUI levels in the provided Excel worksheet, along with other design
attributes, as explained in their experimental set-up and illustrated in Figure 7-5. The input
provided by the students is subsequently used to score their design alternatives according to the
three previously defined objectives, namely EUI, SPC, and NPV.
FIGURE 7-5: EXCEL WORKSHEET CALCULATOR PROVIDED TO STUDENTS TO CALCULATE THE PERFORMANCE OF
THEIR GENERATED DESIGN ALTERNATIVE ACCORDING TO THE THREE OBJECTIVE FUNCTIONS.
Step 5 Evaluate Results
In this step, the design attributes and the resulting scores are documented in a provided Excel
Worksheet to allow for comparison against previously generated results, as illustrated in Figure
7-6. This analysis is thus available for students to review before proceeding with any changes in
the designs deemed of interest.
FIGURE 7-6: WORKSHEET PROVIDED TO STUDENTS TO ALLOW THEM TO COMPARE MULTIPLE DESIGN
ITERATIONS.
Step 6 Execute Decision
Modifications of the design are made according to the designer’s interpretation of the generated
results. During this period, students are asked to record their progress, along with the time
required to complete each stage of the process (a sketch of such a timing log follows the list):
1. Updating Design Option: Time required to modify geometry, level settings, and space settings.
2. Energy Analysis: Time recorded as the period between requesting the conceptual energy
analysis through Revit and receiving the results from GBS.
3. Objective Calculations: Time required to manually input necessary data into the Excel
calculator in order to obtain objective calculations (including time previously recorded for
Energy Analysis).
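Because each iteration logs these three durations, summary statistics of the kind reported later in Table 7-4 (minimum, maximum, mean, and population standard deviation) can be computed directly from the worksheet records. A minimal sketch with hypothetical log entries:

```python
from statistics import mean, pstdev

# Hypothetical per-iteration log: minutes spent on each recorded stage.
log = [
    {"update": 10, "energy": 6, "objectives": 8},
    {"update": 5, "energy": 12, "objectives": 6},
    {"update": 15, "energy": 4, "objectives": 9},
]

for stage in ("update", "energy", "objectives"):
    times = [entry[stage] for entry in log]
    # Mirrors the MIN. / MAX. / AVERAGE / STDEV.P columns of Table 7-4.
    print(stage, min(times), max(times),
          round(mean(times), 2), round(pstdev(times), 2))
```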
7.2.4 RESULTS & OBSERVATIONS OF THE PEDAGOGICAL COURSE CASE EXPERIMENT
7.2.4.1 RESULTS & ANALYSIS OF PEDAGOGICAL EXPERIMENT I - PART A
In Part A of the experiment, data was collected from 27 students, who generated 118 different
design options in total, through the provided parametric model with a minimum of three
iterations per student. The time the students required to complete this part of the experiment
ranged from 0.33 to 7 hours, with an average of approximately 3 hours.
7.2.4.1.1 Students’ Exploration Results of Pedagogical Experiment I – Part A
Table 7-4 provides a statistical summary of the times recorded by students, as required to
complete the different stages of this experiment.
Table 7-5 summarizes the explored parameters and their ranges, as documented by the students.
Table 7-6 presents each student’s maximum number of iterations, time spent on the overall
process, and the resulting solution space range. Lastly, Figure 7-7 illustrates the students' resulting
data points and their exploration samples.
TABLE 7-4: STATISTICAL SUMMARY OF THE RECORDED TIMES FOR PEDAGOGICAL EXPERIMENT I – PART A.
MIN. MAX. AVERAGE STDEV.P
Time spent on updating design options (Minutes) 2 45 8.61 7.87
Time spent on analyzing energy model (Minutes) 1 35 7.97 7.46
Time spent on calculating three objectives (Minutes) 2 25 7.51 3.99
Total time spent for each feedback (Minutes) 5 55 16.4 8.99
TABLE 7-5: STATISTICAL SUMMARY OF THE RECORDED PARAMETRIC VALUE RANGES EXPLORED BY STUDENTS
FOR PEDAGOGICAL EXPERIMENT I – PART A.
Assigned Explored Parameters, with statistics of students' input listed as MIN. | MAX. | MEDIAN | AVERAGE:

Design Parameters
  Floor# (Integer, N/A, range [3-15]), broken out by space type:
    Retail#: 1 | 4 | 1 | 1.4
    Office#: 2 | 12 | 3 | 3.7
    Hotel#: 1 | 9 | 3 | 4.1
  SiteSetback (Length, ft, range [15-60]): 10* | 65* | 50 | 46
  ScaleMid (Number, N/A, range [0.1-2]): 0.1 | 2 | 0.9 | 1
  ScaleTop (Number, N/A, range [0.1-2]): 0.1 | 4* | 0.9 | 1
  R1 (Angle, °, range [0-270]): 0 | 270 | 50 | 56

Energy Parameters
  Target % Glazing (Number, N/A, range [0.1-0.9]): 0* | 0.9 | 0.4 | 0.4
  Target Sill Height (Length, ft, range [0-5]): 0 | 9.7* | 2.5 | 2.7
  Shade Depth (Length, ft, range [0-5]): 0 | 8* | 2 | 2.1
  Target % Skylights (Number, N/A, range [0-0.9]): 0 | 0.9 | 0.05 | 0.1
* Input provided by students beyond designated parameter range.
TABLE 7-6: SUMMARY OF ITERATION NUMBERS, EXPLORATION TIMES, AND THE EXPLORED RANGES OF
GENERATED SOLUTION SPACE FOR PEDAGOGICAL EXPERIMENT I – PART A.
Student ID | Iteration # | Exploration Time (Hour) | NPV (Million Dollars): MIN. MAX. RANGE | EUI (kBtu/ft²/yr): MIN. MAX. RANGE | SPC: MIN. MAX. RANGE
S1 4 5 -41.0 -8.8 32.1 70.4 174.9 104.5 9.6 35.2 25.5
S2 4 4 -29.9 138.8 168.7 81.0 246.8 165.8 18.1 97.9 79.9
S3 4 3.5 53.7 138.1 84.4 66.8 137.2 70.4 57.4 85.7 28.3
S4 4 1 -32.9 60.9 93.8 79.4 173.0 93.6 16.1 89.3 73.3
S5 4 4.5 -22.1 114.4 136.5 91.9 195.3 103.3 23.5 98.0 74.5
S6 4 3 -45.0 118.5 163.5 70.2 175.3 105.1 7.0 90.2 83.1
S7 4 3 -13.2 414.4 427.6 69.4 147.4 78.0 -151.3 28.6 179.8
S8 12 3 -46.3 758.4 804.7 56.0 226.7 170.7 -415.2 35.2 450.4
S9 4 2 -10.0 196.8 206.8 95.1 180.2 85.1 10.6 96.5 86.0
S10 3 2.25 -46.1 18.9 65.0 101.7 209.9 108.2 5.8 56.1 50.2
S11 13 5 -46.3 758.4 804.7 56.0 226.7 170.7 -415.2 35.2 450.4
S12 4 4 -39.3 120.0 159.3 83.7 154.3 70.5 13.0 40.6 27.6
S13 4 7 75.5 84.6 9.1 82.4 152.2 69.8 82.6 95.0 12.4
S14 3 3 -35.4 -30.6 4.8 108.9 248.5 139.6 14.5 17.4 2.9
S15 3 4 75.7 101.4 25.7 79.3 125.8 46.5 85.4 98.5 13.1
S16 5 6 -40.9 108.0 148.9 70.2 173.8 103.6 9.7 77.7 68.1
S17 5 2 66.6 85.3 18.7 75.9 89.9 14.0 80.0 91.9 11.9
S18 3 2.5 74.6 79.3 4.7 68.0 75.4 7.3 96.9 97.7 0.9
S19 4 3.3 -21.1 152.5 173.6 74.8 123.5 48.8 25.5 96.7 71.2
S20 4 0.33 -48.8 -12.5 36.2 166.8 250.5 83.7 3.9 33.6 29.7
S21 3 1.5 -45.0 -36.3 8.7 157.2 229.9 72.7 7.0 13.2 6.2
S22 3 3 -21.8 81.9 103.7 94.9 128.7 33.8 25.1 96.9 71.8
S23 3 0.58 -44.3 44.6 88.9 95.2 292.3 197.1 7.4 41.6 34.2
S24 4 3 -47.0 27.7 74.7 112.4 217.1 104.7 5.5 60.7 55.3
S25 3 2 -48.0 340.6 388.5 116.0 227.8 111.8 -116.1 29.8 145.9
S26 3 1 -32.6 -9.1 23.5 71.5 133.2 61.7 16.9 34.9 18.0
S27 4 1 78.7 99.5 20.7 65.0 119.0 54.0 84.7 99.3 14.6
MIN. 3.0 0.3 -48.8 -36.3 4.7 56.0 75.4 7.3 -415.2 13.2 0.9
MAX. 13.0 7.0 78.7 758.4 804.7 166.8 292.3 197.1 96.9 99.3 450.4
AVERAGE. 4.6 3.0 -10.4 161.0 175.4 89.1 179.4 92.4 -24.5 65.0 90.2
STDEV.P 2.4 1.6 45.8 199.3 210.3 26.3 53.3 46.2 124.6 30.6 112.7
FIGURE 7-7: RAW DATA AND 3D VISUAL REPRESENTATIONS OF THE GENERATED SOLUTION SPACE FOR
PEDAGOGICAL EXPERIMENT I – PART A. ILLUSTRATED BY THE AUTHOR.
In order to ensure valid and comparable results, the following steps were taken (a sketch of this screening logic follows the list):
1. All results obtained through parametric values exceeding the designated ranges were
excluded from the comparison studies, which resulted in elimination of 17 out of the
initial 118 iterations.
2. Raw parametric values provided by students were automatically calculated through the
prototype calculator to confirm accuracy of the corresponding calculations, as recorded
by the students.
3. Where discrepancies were identified and a numerical trend was consistent, the updated
results, which were recalculated by the prototype calculator, were included in the
comparison with the results generated by the prototype.
4. Where discrepancies were identified but the numerical trend was inconsistent, the
student-recorded data was preserved for the design process analysis, but removed from
the comparison with the prototype-generated results.
Based on these criteria, 34 iterations were excluded from the result calculations and the
subsequent comparison against prototype-generated results.
Table 7-7 summarizes the validated objective scores of the remaining solution pool generated
by student exploration.
TABLE 7-7: SOLUTION SPACE OF THE VALIDATED RESULTS FOR PEDAGOGICAL EXPERIMENT I – PART A.
NPV (Million USD) | EUI (kBtu/ft²/yr) | SPC
MIN. -47.9 66.8 -151.3
MAX. 340.6 292.3 99.3
AVERAGE 42.4 128.7 46.6
STDEV.P 69.2 53.6 45.0
In this experiment, the students were asked to optimize the design according to the same
objectives used by the Beagle—minimize EUI, while maximizing NPV and SPC. The average
exploration time recorded was three hours, with approximately five generated iterations. No
correlation was observed among the solution quality, number of iterations, and time required.
However, two of the students, S8 and S11, who recorded the greatest number of iterations, also
recorded solutions with optimal performance in NPV and EUI when compared to the results
achieved by other students. This finding implies that, while a greater number of iterations offers
no observable guarantee of higher performing results, it does appear to increase the potential for
identifying higher performing solutions.
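One way to quantify this kind of observation is a rank correlation between iteration count and best achieved score. The sketch below uses SciPy's Spearman test on placeholder arrays; the numbers are illustrative stand-ins, not the experiment's actual per-student records.

from scipy.stats import spearmanr

iterations = [12, 4, 3, 13, 4, 4, 3, 3, 5, 5]       # iterations per student (placeholder)
best_npv   = [758.4, 196.8, 18.9, 758.4, 120.0,
              84.6, -30.6, 101.4, 108.0, 85.3]      # best NPV per student (placeholder)

# A rho near zero with a large p-value would support the "no correlation" reading.
rho, p = spearmanr(iterations, best_npv)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")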
The participating students were also asked to document their exploration and decision-making
process. The individual approaches employed by the 27 students can be divided into two broad
categories—random and linear. Random exploration was utilized by students who began their
optimization process by randomly selecting parametric values with little consideration of the
given objectives before comparing results. After a few iterations, students adopting this approach
would typically proceed to select their highest performing model, based on the available feedback.
The other approach was linear, employed by students who would typically begin by analyzing an
initial design alternative of interest before varying parameters with the intent of improving its
performance with respect to the three overall objectives. Next, they would
analyze their new model, review feedback, and proceed with a new variation, based on the results
obtained. Figure 7-8 provides an illustration of a linear exploration process. In this case, a shift in
intent is evident after each iteration analysis. For example, after the first stage, the intent shifts
from maximizing the SPC score to adjusting with the aim of improving energy performance.
However, it can be noted that these adjustments were heavily reliant on the user’s intuition,
combined with previously generated results. Consequently, due to the project’s complexity, the
fruition of the user’s adjusted intent was not always realized through subsequent iterations, as
illustrated in Figure 7-8.
FIGURE 7-8: ILLUSTRATED EXAMPLE OF A LINEAR MANUAL EXPLORATION PROCESS AS MAPPED THROUGH THE
PEDAGOGICAL BENCHMARK CASES. IMAGE REPRODUCED BY THE AUTHOR, BASED ON THE ORIGINAL,
COURTESY OF USC ARCH507 2012 STUDENT ABDUL ALI KHAN.
7.2.4.1.2 Exploration Result Comparison of Pedagogical Experiment I – Part A
With respect to comparison of the solution pools, two areas are of particular interest: first, the
quality of the solution pool generated through EEPFD within the same time constraint as the
average time students spent during their manual exploration process; and second, the
improvement in the quality of the EEPFD-generated solution pool when given extended time and
additional generations.
Figure 7-9 provides the performance ranges of the solution pools generated by the validated
student results and the results generated through EEPFD during a three-hour period. This duration
was selected, as it corresponded to the average time recorded by the students. According to these
results, the prototype was able to provide a solution pool with a 26.8% increase in the measured
NPV and a 13.7% reduction in the calculated EUI. However, the pedagogical solution pool was
able to provide a 22.2% more compliant spatial programming set, due in part to the instant SPC
feedback available while exploring the design options through the Revit
platform. Overall, when ranked according to their performance for all three objectives, 36.7% of
the design alternatives generated by the prototype were designated as Pareto optimal solutions,
while only 26.9% of the student-generated design alternatives received this designation.
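To make the Pareto designation concrete, the following sketch identifies the non-dominated (Rank 1) alternatives for the three objectives, maximizing NPV and SPC while minimizing EUI. The function names and sample triples are illustrative only; they do not reproduce H.D.S. Beagle's internal implementation.

def dominates(a, b):
    """True if alternative a = (npv, eui, spc) is no worse than b on every
    objective and strictly better on at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] >= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and strictly_better

def pareto_front(alternatives):
    """Return the alternatives not dominated by any other (the Rank 1 set)."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

pool = [(340.6, 128.7, 46.6), (42.4, 66.8, 99.3), (-47.9, 292.3, -151.3)]
print(pareto_front(pool))  # the first two triples are mutually non-dominated

With three competing objectives, a large share of a pool can be mutually non-dominated, which is why the Pareto percentage, rather than any single best score, is used to compare the two solution pools.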
It should be noted that the ranked solutions provided by the students included all 67 results
generated by all 27 students, excluding only the results previously designated invalid. In addition,
an uneven distribution of designated Pareto solutions was observed among the student-
generated results. It was observed that students using a random exploration process had more
dramatic performance variations of their subsequent iterations when compared to their initial
designs, as shown in Table 7-6 and Figure 7-7. In contrast, students using a linear exploration
process typically displayed a series of incremental changes in the performance of their
subsequent iterations. This resulted in some students being able to produce multiple Pareto
solutions, while others produced only one. This finding implies that the probability of generating Pareto
design alternatives is highly dependent on a combination of factors, including the quality of the
individual’s initial design, the chosen exploration method, and the exploration strategies
employed. In contrast, when H.D.S. Beagle was used, consistent generation of higher performance
design alternatives was noted. This approach also enables the exploration of the full spectrum of
potential parametric variables without being restricted by individual preferences. As illustrated
in Table 7-8 and Figure 7-10 below, the prototype was able to continue to improve the
performance of the solution space through increased generations.
TABLE 7-8: PERFORMANCE COMPARISON OF THE SOLUTION SPACE FOR HYPOTHETICAL SCENARIO 10 BETWEEN
THE PEDAGOGICAL EXPERIMENT RESULTS AND THE H.D.S. BEAGLE-GENERATED RESULTS AFTER A
RUNTIME OF 3 HOURS, 7 HOURS, AND 6 GENERATIONS.
SOLUTION SPACE RANGE            SGSS       BGSS
                                           3 HRS       7 HRS       6th GEN
NPV (Million USD)         MIN.  -48        -41         -41         -41
(Initial: -41)            MAX.  341        432         584         835
                      Improved  382        473         626         875
EUI (kBtu/ft²/yr)         MIN.  67         59          59          56
(Initial: 174)            MAX.  292        233         233         233
                      Improved  107.0      115.1       115.1       117.8
SPC                       MIN.  -151       -134        -266        -404
(Initial: 10)             MAX.  99         81          83          88
                      Improved  90         72          73          78
Pareto (%) (Students/Beagle),
3 OBJ                                      26.9/36.7   22.4/40.0   19.4/37.2
Note: "Improved" denotes the margin between the initial design's score and the best score in the corresponding solution space.
FIGURE 7-9: ILLUSTRATION OF THE PERFORMANCE COMPARISON OF SOLUTION SPACES GENERATED BY THE
PEDAGOGICAL EXPERIMENT I-PART A AND THROUGH EEPFD, AFTER COMPLETING THREE-HOUR
RUNTIMES.
FIGURE 7-10: THE SOLUTION SPACE COMPARISON FOR HYPOTHETICAL SCENARIO 10 BETWEEN THE PEDAGOGICAL
EXPERIMENT RESULTS AND THE PROTOTYPE-GENERATED RESULTS AFTER A RUNTIME OF 3 HOURS, 7
HOURS, AND 6 GENERATIONS.
7.2.4.2 RESULTS & ANALYSIS OF PEDAGOGICAL EXPERIMENT I - PART B
In Pedagogical Experiment I Part B, data was collected from 25 of the 27 students enrolled in the
course. The remaining two students failed to complete this section of the
assignment, thereby making their datasets unavailable. Time spent for this part of the experiment,
as recorded by the students, ranged from 1 to 13 hours, corresponding to an average of 5.5 hours.
This included time necessary to create the individual parametric models, as well as that devoted
to the design exploration through at least four iterations.
During this part of the experiment, students were required to create their own parametric model
using five form-driving parameters, of which two—FloorNumber and SiteSetback—were the
required elements. As these two designated parameters had been previously taught in the class,
students were already familiar with their set-up requirements, although the designated range of
parametric values for these two parameters was left to student discretion. The other three
requested form-driving parameters were available for full customization by the students based
on the requirements of their design intent. The students were asked to document their design
intent prior to composing their own parametric model as a reflection of said design intent.
Afterwards, students were asked to evaluate the ability of their customized parameters to reflect
their design intent. The original documentation format can be found in Appendix E.2.
7.2.4.2.1 Evaluations of Students’ Parametric Models
Once all parametric models were collected from students, the author proceeded to determine
their quality based on two key aspects of interest. One aspect pertained to the quality of the
parameters included in the model, covering a student’s ability to set up and define the two
pre-designated parameters, as they were taught in class, along with the setup of the three
customized form-driving parameters. In particular, the link defined between these parameters
and the resulting geometry was scrutinized. The second key aspect pertained to the overall
performance of the model within the parametric ranges designated by each student for his/her
model, as well as an examination of the quality of all driven parameters included in the design.
Essentially, this step was included in order to evaluate compliance with experiment requirements
and provide the means of screening for any received parametric designs unsuitable for cross-
comparison studies.
The quality of each parameter’s setup was evaluated through the author’s examination of said
parameter before being designated to one of three tiers, namely “Poor”, “Acceptable”, and
“Good”. Here, “Poor” designation was given if the connection between the parameter and
geometry was not well defined. In practical application, this meant that the parameter was unable
to directly modify the design geometry. Parameter setups receiving a designation of “Acceptable”
were observed as having only a partial connection to the design geometry. For example, an
“Acceptable” SiteSetback parameter may be able to influence the building footprint, while failing
to influence the overall building design. Finally, parameter setups receiving a designation of
“Good” were observed as having a direct connection to the overall building geometry.
Table 7-9 summarizes the resulting quality evaluation of the first two designated form-driving
parameters, FloorNumber and SiteSetback, as required of all students. The functional setup for
these two parameters was specifically taught during the class; thus 18 out of the 25 students were
able to receive a designation of “Good” for both of these parameter setups. Of the seven
remaining students, six received a “Poor” designation and one “Acceptable” with regard to the
setup for the required FloorNumber parameter. This can be attributed to the significantly greater
number of steps required for the proper setup of this parameter, in comparison to the SiteSetback
required parameter. Regarding the SiteSetback parameter, the seven students that received
“Acceptable” were successful in applying the parameter to the base of their design, thereby
allowing the parameter to modify the building footprint. However, they failed to link the
parameter to the overall building geometry. Therefore, overall, the setup for the two required
parameters can be considered as having a 72% success rate.
TABLE 7-9: PEDAGOGICAL EXPERIMENT I – PART B: QUALITY EVALUATION SUMMARY OF STUDENT SETUP FOR
THE TWO REQUIRED PARAMETERS. PERCENTAGES PROVIDED BASED ON THE RECEIVED WORK OF 25
STUDENTS.
Quality of Generated Parameter          Poor        Acceptable   Good
Required Parameter 1: FloorNumber       6 (24%)     1 (4%)       18 (72%)
Required Parameter 2: SiteSetback       0 (0%)      7 (28%)      18 (72%)
In reference to the three form-driving parameters customized by each student (73 in total), Table
7-10 provides the summary of parameters receiving each previously defined quality designation.
As two student-defined parameters out of the anticipated 75 were not received, only 73 student-
defined parameters were available for evaluation. Out of the received 73 parameters, 64% were
set up correctly by students, as evaluated by the author.
TABLE 7-10: PEDAGOGICAL EXPERIMENT I – PART B: QUALITY EVALUATION SUMMARY OF STUDENT SETUP FOR
THE CUSTOMIZED PARAMETERS. PERCENTAGES BASED ON THE RECEIVED WORK OF 25 STUDENTS.
Quality of Generated Parameter     Poor        Acceptable   Good
Number Count of Each Category      7 (10%)     19 (26%)     47 (64%)
After the assessment of all individual parameters, the overall parametric model was subjected to
evaluation. The purpose of this evaluation was twofold. The first purpose was to provide the basis
for evaluating students’ ability to create their own parametric designs after receiving a 1.5 hour
long lecture. The second purpose was to discern the comparability of the student-generated
solution pool to each other, as well as to the EEPFD-generated solution pool. For these evaluation
purposes, each student’s design model was assessed in four categories, namely Parameter
Accuracy, Model Robustness, Driving Parameter Compliance, and Accuracy of Calculated Data. Of these
four categories, scores for Parameter Accuracy and Model Robustness were used to roughly
gauge each student’s parametric modeling ability. Driving Parameter Compliance and Accuracy of
Calculated Data were used to determine whether the student-generated data was comparable
with each other, as well as with the data generated by H.D.S. Beagle. This assessment was
followed by an overall evaluation score being assigned to each student’s design model, reflecting
the number and extent of changes each design model was subjected to in order to ensure
compatibility with the Beagle. The assessment is summarized in Table 7-11, while the detailed
evaluation of each student’s parametric model can be found in Appendix 0.
1) Parameter Accuracy: This overall assessment was made through an evaluation of all student-
provided driving, driven, and fixed parameters found in each student’s parametric model, as
received at the end of the experiment. This evaluation focused on the quality and capability of all
student-provided parameters by observing the correlation between each parameter and its actual
impact on the design geometry. A direct and intended impact on the design geometry by a
parameter was thus necessary in order for the parameter to be considered as meeting assignment
requirements. In the case of a parameter exhibiting an unintended impact, or failing to provide
the intended impact on the design geometry, the parameter was deemed as not meeting the
assignment requirements. This evaluation was based on the author’s judgment and the
interpretation of the intent of the student provided parameters. Once the assessment was
complete, a designation of either “Poor”, “Acceptable”, or “Good” was assigned to the parameter
group provided by each student. A rating of “Poor” was assigned when less than 50% of the
parameters within a student’s model met the requirements. Similarly, “Acceptable” was assigned
when more than 50% of the parameters met the requirements, and a designation of “Good” was
assigned only when all of the parameters provided by a student met all the requirements (a
sketch encoding these thresholds follows this list).
2) Model Robustness: This assessment was used to determine whether a student’s design model
was able to maintain its integrity throughout the design exploration process. This determination
was made by observing the model’s components in two categories, one of which pertained to the
parametric ranges provided by the student, and included the assessment of whether any
combination of those ranges compromised the design model’s integrity. A designation of “Yes” or
“No” was assigned at this point. The second category pertained to any rules utilized by students
in order to ensure the design model’s integrity throughout the exploration process. A
designation of “Yes” or “No” was then assigned based on observations regarding the nature of
these rules. A “No” was assigned if no rules were introduced other than the rule provided by the
assignment. A “Yes” was assigned if additional rules were introduced, regardless of performance.
3) Driving Parameter Compliance: This evaluation was based on whether or not students followed
the instruction that required inclusion of five form-driving parameters in their design. The objective
of this assessment was to ensure a consistent, comparable quantity of explored parameters
among the students’ design models, so that the students’ exploration processes and solution
spaces could be considered comparable to each other. If a student’s observed driving parameter quantity
was not compliant with the requirements, two steps were taken. First, the student’s results were
excluded from the cross-comparison study of all student-generated results. Second, modifications
were made by the author, as needed to ensure compatibility with H.D.S. Beagle for further studies.
Finally, a “Yes” or a “No” was assigned based on the author’s observations of each student’s
design model.
4) Accuracy of Calculated Data: This evaluation was divided into two parts in order to verify the
validity of the student-provided results. The first part aimed to verify that each student’s model was
set up according to the instructions, in order to ensure that accurate values were provided for
calculation purposes. In the second step, the author verified that all resulting values were properly
utilized in the requested calculations to generate performance scores. A “Yes” or a “No” was
assigned for each of these two steps accordingly. Only students receiving a “Yes” for both steps
were considered to have provided results suitable for inclusion in subsequent comparison studies.
Students failing to receive a “Yes” for both steps had their data recalculated by the author in order
to ensure comparability to the results generated by H.D.S. Beagle.
5) Final Evaluation: Once the assessment in all four categories was complete, an overall quality
evaluation of each student’s design model was determined. This final evaluation was provided
through a grade of “A” to “D”, reflecting the level of suitability of the model for direct use with
H.D.S. Beagle. A grade “A” indicated that the model and exploration range could be used with
H.D.S. Beagle directly with no further modifications. A grade “B” indicated that modifications were
necessary to either parametric ranges or rules to ensure the integrity and increase robustness of
the design model. A grade “C” implied that modifications were necessary to the model’s set-up,
according to the observed student design intent. Finally, a grade “D” indicated that the design
intent could not be determined based on the data provided by the student and the necessary
modifications needed to be made at the author’s discretion. Thus, models receiving a grade of “D”
cannot be considered as having the original student design intent preserved.
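Returning to the Parameter Accuracy thresholds referenced in item 1, the rating rule could be encoded as a small hypothetical helper such as the one below; it is not part of the study's tooling, and since the text does not specify how a share of exactly 50% was treated, assigning that boundary case to "Poor" here is an assumption.

def rate_parameter_group(n_meeting, n_total):
    """Map the share of requirement-meeting parameters to a rating."""
    if n_meeting == n_total:
        return "Good"        # all parameters meet the requirements
    if n_meeting / n_total > 0.5:
        return "Acceptable"  # more than half meet the requirements
    return "Poor"            # half or fewer (boundary case assumed)

print(rate_parameter_group(5, 5))  # Good
print(rate_parameter_group(3, 5))  # Acceptable
print(rate_parameter_group(2, 5))  # Poor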
The summary of evaluation results is presented in Table 7-11 and is illustrated in Figure 7-11. No
direct correlation was observed between students’ documented experience with Revit and the
quality of their parametric model. Among the eight students who indicated having more than a
year’s experience with Revit, only 37.5% (3 students) received grade “B” for their model, while the
remaining 62.5% (5 students) received grade “C” as their final evaluation. In contrast, among the
remaining 17 students who indicated less than six months of Revit experience, 64.7% (11 students)
received grade “B” for their model, 5.9% (1 student) received grade “C”, and 29.4% (5 students)
received grade “D”. It should be noted that no students documented their experience in Revit as
ranging between six months and one year. These results indicate that increased familiarity with
Revit does not directly translate into the ability to produce a higher-quality parametric model.
However, a distinct lack of experience in Revit does imply an increased difficulty in the production
of a parametric model of usable quality. In addition to the students’ prior experience with Revit,
their prior parametric modeling experience was also examined.
TABLE 7-11: SUMMARY OF FINAL EVALUATION OF STUDENTS’ PARAMETRIC MODELS.
Evaluation Category (Scale)                          Summary
Parameter Accuracy (Poor/Acceptable/Good)            P: 9 (36%); A: 6 (24%); G: 10 (40%)
Model Robustness, Rule (Y/N)                         N: 11 (44%); Y: 14 (56%)
Model Robustness, Variation Range (Y/N)              N: 25 (100%); Y: 0 (0%)
Driving Parameter Compliance (Y/N)                   N: 13 (52%); Y: 12 (48%)
Accuracy of Calculated Data, Model Setup (Y/N)       N: 11 (44%); Y: 14 (56%)
Accuracy of Calculated Data, Calculations (Y/N)      N: 13 (52%); Y: 12 (48%)
Final Evaluation (A-D)                               A: 0 (0%); B: 14 (56%); C: 6 (24%); D: 5 (20%)
When asked to identify their parametric modeling experience, 17 of the 25 students indicated
having prior experience using other platforms (these are not the same 17 students who indicated
less than six months of experience with Revit). Among these students, 52.9% (9 students) received
grade “B”, 29.4% (5 students) received grade “C”, and 17.7% (3 students) received grade “D”.
Among the eight students indicating no prior parametric modeling experience, 62.5% (5 students)
received grade “B”, 12.5% (1 student) received grade “C”, and 25% (2 students) received grade
“D”. These results indicate that prior parametric modeling experience does not necessarily imply
the production of a higher quality model in this context. Instead, there is
an indication that prior parametric modeling experience with other platforms may pose an
obstacle, rather than an advantage. Among the five students that received a final evaluation of
grade “D”, 60% (3 students) indicated prior parametric modeling experience, but less than six
months experience with Revit. The remaining two students indicated no prior experience in either
parametric modeling or Revit.
FIGURE 7-11: ILLUSTRATION OF THE EVALUATION RESULTS SUMMARY OF STUDENTS’ PARAMETRIC MODELS.
7.2.4.2.2 Design Intent Translation Analysis
This section examines feedback received from the 25 students regarding their perception of their
ability to translate their design intent into a parametric model. First, students were asked to
provide a brief description of their design intent. Then, during this self-evaluation, students were
asked to assess each of their customized form-driving parameters by answering the following
questions: (1) Is this parameter from your original design intent? (Y/N); (2) If yes, by what
percentage can this parameter express your design intent?
The received data was compiled and is summarized in Table 7-12 and Figure 7-12. The data
analysis revealed that, of the evaluated custom form-driving parameters, 86% were able to
represent the student’s original design intent (calculated from the received “yes” answers to the
first prompt per custom form-driving parameter). However, despite this indication, the majority
of the students indicated that the custom form-driving parameters were only able to express their
design intent by 26-50%. This feedback indicates that the students struggled to fully define their
design intent through parameterization, possibly due to inexperience with parametric modeling,
which can be remedied through extended exposure. However, there also exists the possibility of
aspects of design being non-translatable into parametric definition. It should be noted that, based
on the available data, it cannot be determined whether the cause lies in a lack of parametric
modeling experience or in the limitations of parametric modeling itself.
Overall, several concerns regarding potential bias within the currently acquired data should be
noted. First, as design was not the focus of this class, students were not evaluated on their design
intent, as would be the case in a design studio setting. As such, time was not spent on the
development of design intent, and was not requested of the students outside of the context of
this exercise. As the purpose of this exercise was to put into practice skills and knowledge
imparted through previous course instruction, there is the possibility of students limiting their
design intent to these specific applications. In addition, there is a possibility of undocumented
influence of prior knowledge of parametric requirements on students’ design intent. Finally, there
is a potential for bias among the student sample, as the class this experiment was conducted in is
an optional computer course. As such, a predilection of the participating students towards the use
of computational environments for design is likely, and students not sharing this predilection are
likely to be absent from this data set.
TABLE 7-12: SUMMARY OF RECEIVED STUDENT SELF-EVALUATIONS OF THEIR CUSTOMIZED FORM-DRIVING
PARAMETERS’ ABILITY TO TRANSLATE THE DESIRED DESIGN INTENT.
                   Is this parameter from your        If yes, by what percentage can this parameter
                   original design intent? (Y/N)*     express your design intent? (%)**
                   Yes          No                    0-25        26-50       51-75       76-100
Quantity
(Percentage %)     64 (86%)     10 (14%)              12 (21%)    20 (36%)    10 (18%)    14 (25%)
*Based on the 74 received parameter evaluations
**Based on the 56 received evaluations
FIGURE 7-12: STUDENT SELF-EVALUATION OF THEIR CUSTOMIZED FORM DRIVING PARAMETERS’ ABILITY TO
TRANSLATE THE DESIRED DESIGN INTENT.
7.2.4.2.3 Comparison of the Exploration Results
After evaluating the students’ models, the author applied the necessary modifications to each
model in order to run the student’s design through EEPFD and further compare each student’s
exploration results with the H.D.S. Beagle-generated results. The detailed information regarding
each student’s parametric design, the modifications made by the author, and subsequent
explorations can be found in Appendix 0. The diversity of the received designs is illustrated in
Figure 7-13, along with the final performance ranges and the comparable H.D.S. Beagle-generated
performance ranges after a 3-hour exploration period.
It can be observed that, for all 25 models, H.D.S. Beagle was able to generate a better performing
solution space than the students’ exploration process within the designated 3-hour time period.
FIGURE 7-13: SUMMARY OF STUDENTS’ PARAMETRIC DESIGNS WITH THE PERFORMANCE RANGE FOR BOTH
STUDENT- AND H.D.S. BEAGLE- GENERATED ALTERNATIVES. NPV CALCULATED IN MILLION USD
AND EUI IN KBTU/FT²/YR.
7.2.4.3 FINAL QUESTIONNAIRE SUMMARY OF PEDAGOGICAL EXPERIMENT I
The last part of Pedagogical Experiment I was a questionnaire, which required the participants to
provide a written response to two questions. These responses were collected from 23 students
and are summarized below. The originally collected data can be found in Section E.5.
Q1: Do you think the parametric model can help you to explore a greater number of design
iterations more effectively? Why?
In response to this question, 22 of the 23 students provided positive feedback, indicating that the
parametric model allowed them to explore design iterations more effectively. The one remaining
student did not provide a distinctly positive or negative response, but instead pointed out that
the benefits of parametric modeling were dependent on a designers’ goal and computer skills.
Among the 22 positive responses, time saving was cited by students as the most common reason
they thought the parametric model could increase the effectiveness of the exploration process.
These students pointed out that parametric modeling expedited the process of generating new
design alternatives without demanding a complete remodel of the initial design. Thus, as the
students required less time for design modifications, more time could be devoted to system
development. The next cited positive characteristic was that the integrated performance
feedback process facilitated identification of higher performing designs. Students attributed this
to the ability to make quick modifications and observe how those changes influenced the overall
building’s visual and performance properties. By changing elements of the parametric model,
designers were able to improve upon each design and observe the results instantaneously, which
influenced their next design. Students indicated that this process was helpful in predicting the
most efficient overall design within the defined parametric ranges for the number of floors,
glazing, etc. In particular, two students mentioned that this exploration method was especially
beneficial in dealing with complex systems and geometry, where the relationship between building
form and performance was difficult to anticipate.
One of the respondents mentioned that this integrated performance-based parametric exploration
process solved the conventionally recognized interoperability issues between different expert
domains. This particular respondent cited “speed and accuracy” as potential benefits of the
information transfer. Another student pointed out that this process had the potential to generate
new and unexpected design forms, as it allowed manipulating different combinations of the
adjustable parameters.
Among the students that provided positive responses, two indicated that, in order for the
parametric modeling exploration process to be considered efficient, an effective parametric
model was required. Two other students described difficulties encountered while creating their
parametric models. However, they mentioned that, once they had overcome the learning curve,
parametric modeling was “a huge help.” Another two students expressed their concerns
regarding the relevancy of the process to an academic setting, but acknowledged its viability in
application to professional practice. Furthermore, some students mentioned that the time
allocated to learning how to use the tool was inadequate. While the current process appeared
improved, it required repeatedly going back and forth between Excel, Revit, and GBS. This was
remarked as being “quite a hassle.” Finally, one student indicated that it would be helpful if more
time were allocated to the experiment, as this would enable exploring a greater number of parametric
combinations, and likely result in increased geometric variations.
Q2: Do you think the feedback results (design, financial, and energy) can support your decision-
making?
Among the responses provided, all but two students indicated a positive inclination towards the
impact of the feedback provided in this exercise on their decision-making process. However, there
were several points of concern that were repeatedly singled out in the provided responses. For
example, some students stated that a large number of repeated runs were necessary in order to
isolate the impact of specific parameters on performance, thereby enabling improvement in the
overall score. This apparently posed a challenge in identifying the combinations of parametric
values that would lead to an increased overall performance. It was noted that the ability to
associate specific design approach techniques with potentially improved performance would
provide more weight to the results. This suggests that, if the impact of specific parameters on the
overall performance can be identified, the uncertainty of the design exploration process can be
reduced, thereby streamlining a designer’s efforts and interests.
Another consistent issue observed by the students pertained to the concept of tradeoffs between
three performance objectives, which were deemed of equal importance. As a result, students
expressed their difficulty distinguishing “good or bad” with the given three scores. This implies
that the Pareto Rank evaluation method was somewhat counterintuitive to their design approach,
suggesting that a hierarchy of the three objectives would have simplified the process. Students
also expressed an interest in identifying the expected performance boundary of each objective in
order to provide a basis for their own evaluation of the resulting objective scores for their
individual design iterations.
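As a hypothetical illustration of the hierarchy the students describe, the three scores could be compared lexicographically, for example EUI first, then NPV, then SPC. The ordering and the sample values below are arbitrary; this is offered only as a contrast to the Pareto ranking actually used.

# Lower EUI is better; higher NPV and SPC are better, hence the negation.
alternatives = [
    {"id": "A", "npv": 120.0, "eui": 83.7, "spc": 40.6},
    {"id": "B", "npv": 101.4, "eui": 79.3, "spc": 98.5},
    {"id": "C", "npv": 108.0, "eui": 79.3, "spc": 77.7},
]
ranked = sorted(alternatives, key=lambda d: (d["eui"], -d["npv"], -d["spc"]))
print([d["id"] for d in ranked])  # ['C', 'B', 'A']

Such an ordering always yields a single "best" alternative, which addresses the difficulty the students reported, but it does so by discarding exactly the trade-off information that the Pareto ranking is designed to preserve.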
Among the concerns expressed by the students regarding potential drawbacks of the design
exploration process was the possibility of too much feedback information, as this was likely to
hinder the design process and limit the designers’ freedom in their design exploration. Another
issue mentioned pertained to the inclusion of the aspects of the design process that are outside
the realm of computational analysis.
Finally, some respondents commented that these particular feedback results might be more
suited to utilization by clients or builders, rather than designers. While the sustainable
environmental nature of some of the feedback was considered relevant for design, it was pointed
out that the financial model appeared to be too detailed and was therefore less relevant at this design
stage.
7.2.5 SUMMARY OF THE PEDAGOGICAL COURSE CASE EXPERIMENT
As previously described, the three noticeable gaps in current research efforts are flexibility,
usability, and design process evaluation. As illustrated in Figure 7-13, EEPFD demonstrates its
ability to accommodate a wide variety of geometrically diverse designs, while still providing design
alternatives with improved performance. In addition, EEPFD’s encoding method to identify “genes”
utilizes standard parametric design techniques within an industry-standard building information
modeling design environment as a means of ensuring designer usability. As a result, with
minimum instruction, students were able to prepare a parametric design compatible with
exploration through EEPFD. While issues remain regarding the ability to successfully translate
design intent into a parametric model, this may be mitigated through increased exposure to
parametric modeling design approaches. However, there also exists the possibility of aspects of
design intent not being translatable into parametric definition. This is an issue that, even with
extensive experience in parametric modeling, is not currently resolvable based on current tool
capability and issues of design idiosyncrasies and computing in general. It should be noted that
the exact cause behind this issue cannot be asserted at this time, based on the available data.
Furthermore, the performance comparison between EEPFD and manually generated design
alternatives illustrated EEPFD’s ability to provide higher performing design alternatives within a
given time limit. This is even more evident in the further improved results within the
designated performance objectives when given extended time. Given that time typically
dominates early design exploration, it can be extrapolated that the reduction in computation time
necessary to generate the desired results would further enhance the suitability of the framework
to the early stage design process. This provides initial confirmation of the utility of the
optimization techniques during the early stage design process. Nonetheless, future research is
needed to obtain and explore more empirically defined data regarding the application of EEPFD
to the design process within a design studio setting and outside of academia. The findings of such
studies would help gauge the impact of EEPFD on the overall design process.
7.3 PEDAGOGICAL EXPERIMENT II: DESIGN STUDIO SETTING
7.3.1 THE PEDAGOGICAL STUDIO EXPERIMENT OBJECTIVES
Pedagogical Experiment II is designed with the intent to emulate a design studio setting in which
to observe the implementation of EEPFD. The intent is to observe the impact of applying EEPFD
to the early stage design process pertaining to a design studio problem, thereby providing the
basis on which to evaluate EEPFD’s applicability to the design process and its ability to assist in
design decision-making. The most significant aspect of this experiment is that this is the first time
that users outside of the research team will have direct interaction with H.D.S. Beagle, where
previous implementation has been reserved solely for the author. Therefore, this experiment can
be considered as the initial full application of EEPFD to the early stage design process. Hence, it is
expected that aspects of EEPFD identified as in need of modification will be
included in the resulting feedback of this experiment. In addition, this experiment is the first
application of EEPFD to a single-family residential design problem, as before only larger-scale
projects, including a mixed-use program, were explored. This allowed for the observation of
EEPFD’s applicability to the singular challenges of single-family housing.
7.3.2 THE PEDAGOGICAL STUDIO CASE EXPERIMENTAL BACKGROUND
This experiment was conducted during the summer session of 2012 under the supervision of Prof.
David Gerber. The students participating in this experiment were members of the SoA USC’s
Graduate Research Scholar Program (GRS), which enables academic learning by providing
students the opportunity to assist in academic research led by a Professor. In the summer of 2012,
a second year Master of Architecture candidate (Student A) and a fifth year Bachelor of
Architecture candidate (Student B) were assigned to Prof. David Gerber. Student A was familiar
with the Revit platform, but had only been recently introduced to parametric modeling through
completion of Arch507 in spring of 2012. Student B, while unfamiliar with the Revit platform, had
acquired parametric modeling experience through the completion of Arch417 Computer
Programming in Architecture, as taught by Prof. David Gerber. Pedagogical Experiment II was
outlined by Prof. David Gerber and implemented under the supervision of the author.
7.3.3 THE PEDAGOGICAL STUDIO CASE EXPERIMENTAL DESIGN
As part of Pedagogical Experiment II, the two participants are asked to design a single-family
residence to be located along Wonderland Park Avenue in Los Angeles, CA, as illustrated in Figure
7-14. Prof. David Gerber, acting as both the owner and the studio instructor, is responsible for
providing all the necessary project requirements. Student A and Student B are then asked to fulfill
these project requirements, under Prof. Gerber’s design guidance. One guideline is to avoid
any curved geometric design elements, in order to avoid previously encountered errors in the
utilized tool platforms.
Based on the program requirements, the single-family home of interest should include 4
bedrooms, 3 full bathrooms, a 2-car garage, as well as living, dining, and kitchen areas, totaling
no more than 3,000 ft². All room areas are subject to designer preference, with bedroom
dimensions not exceeding 20 ft x 20 ft. A 10 ft setback from all site boundaries is also specified.
The overall design goals include meeting all design requirements combined with consideration for
environmental conscientiousness.
For the purpose of evaluation, a benchmark model is provided by the author, based on the existing
site conditions and calibrated against the current residents’ electric and gas bills. All material selections,
space type assignments, and other financial, construction-oriented specifications are provided to
the students prior to the implementation of the experiment, and the students are instructed to keep
these consistent throughout the experiment. In this manner, student designs can be compared with the
benchmark case to evaluate the performance improvements of their designs.
FIGURE 7-14: PROJECT SITE OF PEDAGOGICAL EXPERIMENT II. IMAGE SOURCE FROM GOOGLE MAPS.
The experiment is divided into four stages, as illustrated in Figure 7-15: (1) pre-parameterization;
(2) parameterization stage; (3) H.D.S. Beagle stage; and (4) final design decision and project
conclusion.
FIGURE 7-15: THE DESIGN OF PEDAGOGICAL EXPERIMENT II PROCESS.
Pre-parameterization Stage
Parameterization Stage
H.D.S. Beagle + EEPFD
Decision Making Stage
1) Pre-parameterization Stage: 2 weeks (5/16/2012- 5/30/2012)
During the first week, students are asked to propose their design ideas according to the set design
requirements. Prof. Gerber acts as both project owner and studio instructor, and is responsible for
providing critique and recommendations for design modifications. During the second week,
students proceed with finalizing their conceptual design ideas and begin identifying parameters
of interest regarding the exploration of their adjusted design. During this stage in the design
process, students are allowed to utilize their platform of choice for conceptual design exploration.
In Pedagogical Experiment II, Rhino is chosen by both Student A and Student B. They are also
asked to document their design process, describing how they developed their design ideas. At the
end of the second week, interviews are conducted, allowing the students to describe their
educational background, environmental design knowledge, tools previously utilized for design,
design process experience in previous design studios, and the design process used in this specific
design studio.
2) Parameterization Stage: 2.5 months (5/31/2012-8/17/2012)
During the previous stage, students are asked to identify parametrically suitable areas of
exploration for their conceptual design. In this stage, they are asked to model their designs in
Revit, according to their proposed method of parameterization. In addition, the students are
requested to record their exploration intent, parameterization process, encountered difficulties,
responses to encountered challenges, simplifications made, and different parameterization
approaches attempted.
3) Learning H.D.S. Beagle Stage: 4 hours (on 6/19/2012)
During the development of the students’ parametric models, a lecture session is scheduled to instruct
the students in H.D.S. Beagle’s application to their designs. At this point, students should have a simplified
version of their parametric model suitable for testing purposes. The content of the lectures
includes the concept behind H.D.S. Beagle, building Beagle executable Revit models, setting up
required Excel templates, and testing their own parametric model once ready.
4) Final Design Selection
During this stage, students are asked to select a final design based on all available information.
The design chosen by each student is then compared to the existing residence benchmark model
provided by the author. Finally, students are asked to take part in a second interview, aimed at
eliciting their overall experience using EEPFD and H.D.S. Beagle.
7.3.4 RESULTS & OBSERVATIONS OF THE PEDAGOGICAL STUDIO CASE EXPERIMENT
Initially, two students were chosen to take part in this experiment. However, during the
experimental process, Student B was unable to complete the required work, as the initial design
could not be translated into a parametric model compatible with H.D.S. Beagle within the
research period afforded to this experiment. While the direct cause of this outcome cannot be
verified, the student’s parametric modeling capabilities, geometric complexity of the initial design,
and technological issues encountered with the Revit platform can all be considered as
contributing factors. As such, Student B’s contribution to this experiment is invalid and the
observational focus remains solely on Student A.
During the pre-parameterization process, the designer chose Rhino as the preferred design
platform. Based on the designer’s design logic, and over the course of two weeks, a total of four
major iterations were explored and presented for discussion with the client (i.e. research team),
as illustrated in Figure 7-16. Based on this discussion, a parameterization concept is proposed by
the designer, who then proceeds to explore the shading, the openings, and the spatial
composition of each space in the design.
After the determination of intent for the parameterization model, the designer proceeds to define
the parametric model in Revit, according to the proposed parameterization logic and initial design
concept. The final parametric model generated over the course of 2.5 months is illustrated in
Figure 7-17. This recorded time includes the designer’s required time to become familiar with the
use of Revit for conceptual design through a trial and error period. As one of the secondary goals
of this experiment is to observe the ability of a designer to translate the intended design into a
parametrically oriented, mathematically defined form, the designer was asked not to make any
geometric simplifications of the original design geometry for the sake of expediency. As
such, the complexity of the original design geometry, combined with the designer’s unfamiliarity with
parametric design and with the use of Revit for parametric design, can be considered as
contributing factors to the extended parameterization process. Another contributing
factor can be identified in the trial and error period necessary to define the design’s constraints
file in order to maintain both the design intent and the model robustness during the optimization
process, since the current version of H.D.S. Beagle will prematurely terminate if the geometry
breaks. According to the feedback given by the designer, if a simplification of the north façade
had been allowed, the parameterization process could have been completed within two days with
the designer’s prior level of Revit experience.
After the completion of the parameterization process, the H.D.S. Beagle instructional stage is
documented as a 4-hour lecture session. During these four hours, the designer becomes familiar
with the process of setting up the executable files necessary for engaging H.D.S. Beagle in a GA
run. The designer is also informed of the EEPFD optimization mechanism and the significance of
all generated data. While improved energy performance in the design is one of the designated
design goals, the designer is given absolute discretion in organizing the design priorities and
balancing the design objectives.
FIGURE 7-16: PRE-PARAMETERIZATION DESIGN PROCESS OF STUDENT A. IMAGE COURTESY OF XINYUE AMBER
MA.
FIGURE 7-17: PARAMETRIC DESIGN OF STUDENT A. IMAGE RE-DIAGRAMMED BY THE AUTHOR BASED ON STUDENT
A’S DESIGN.
For the final model selection, the data sets from two GA runs were provided to the designer for
decision-making purposes, as illustrated in Figure 7-18. One GA run maintained a fixed glazing
configuration along the north- and south-facing facades, thereby only allowing exploration for
varying amounts of glazing along those facing east and west. The second GA run included an
exploration of varied glazing percentages applied consistently across all four facades. It should be
noted that, in both of these GA runs, exploration of the percent glazing of each façade
independent of the other facades was not enabled. Both data sets provided solution pools with
an improved performance when compared to the benchmarked initial design.
Although the ranking of each design alternative for the design scenario was provided, the designer
did not limit the analysis to the design alternatives receiving the highest ranking from the provided
data set. Instead, all resulting design alternatives were considered in the context of the generated
solution pool. The final design decision process is described below, and is based on the
information received from the designer for this experiment.
The initial decision made by the designer was to discard the data from Data Set I with the fixed
north and south facades. This was done in response to a consistently observed increase in the EUI
for the overall data set when compared to the solution pool for Data Set II. Secondly, the designer
deemed the overall NPV range acceptable, thereby indicating that primary decision-making
was based on EUI and SPC results.
From the overall Data Set II, the designer observed that the best EUI performance of the solution
pool corresponded to 43.82 kBtu/ft²/yr. Therefore, the designer narrowed the desired solution
pool to the design alternatives with EUI performance in the 43 to 45 kBtu/ft²/yr range. The
solution pool was then further restricted to only include design alternatives with an SPC score
exceeding 95. In this final solution pool, the designer also included the only design alternative to
receive the highest available SPC score, regardless of the EUI. From this narrowed solution pool,
the final design was selected based on aesthetic properties through the designer’s analysis of the
provided 3D images of each design alternative. The designer’s decision-making process is
illustrated in Figure 7-19. The objective scores of the final selected design include NPV = -2.38
million USD, EUI = 52.04 kBtu/ft²/yr, and SPC = 99.29. It should be noted that, while the final design
was not within the initially designated EUI range, it did have an improved performance across all
three objectives when compared to the initial benchmark design. Once this final selection was
made, the designer proceeded to the next stage in design development with the generated Revit
massing model.
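The narrowing logic described above can be summarized in a short sketch. The field names and sample records below are illustrative stand-ins for the Excel data set the designer actually worked with.

def narrow_pool(pool, eui_band=(43.0, 45.0), spc_floor=95.0):
    """Keep alternatives within the target EUI band whose SPC exceeds the
    floor, plus the single highest-SPC alternative regardless of its EUI."""
    shortlist = [d for d in pool
                 if eui_band[0] <= d["eui"] <= eui_band[1]
                 and d["spc"] > spc_floor]
    best_spc = max(pool, key=lambda d: d["spc"])
    if best_spc not in shortlist:
        shortlist.append(best_spc)  # kept regardless of its EUI
    return shortlist

pool = [
    {"id": 1, "eui": 43.82, "spc": 96.1},   # illustrative values
    {"id": 2, "eui": 44.60, "spc": 91.0},
    {"id": 3, "eui": 52.04, "spc": 99.29},  # highest SPC in the pool
]
print([d["id"] for d in narrow_pool(pool)])  # [1, 3]

The final choice among the shortlisted alternatives was then made on aesthetic grounds from the provided 3D images, outside of any such computable filter.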
7.3.5 SUMMARY OF THE PEDAGOGICAL STUDIO CASE EXPERIMENT
This case study is the first documented attempt to integrate MDO into the design process by a
designer user, as opposed to prior case studies with engineers or research team members as
the primary users. As such, this case study provides initial observations with respect to the impact
of MDO on the early design process when implemented by a designer. The design process
observed in this case study demonstrated the effectiveness of EEPFD in supporting informed
decision-making despite the volatile subjective nature of the design process. By providing a
context in which design alternatives can be evaluated, EEPFD allows designers to organize their
priorities based on individual preferences or project requirements. In this experiment, despite the
dominance of aesthetic preference as the determining factor for the final design, an improvement
in all three objective scores with respect to those pertaining to the initial design was observed.
Aside from aesthetic exploration, the designer indicated that EEPFD provided an opportunity to
learn through previously inaccessible performance feedback on the relationships between design
elements and their impact on the resulting design performance. Therefore, in this case study,
EEPFD succeeded in providing a “designing-in performance” environment, where design with an
improved performance was achieved. In addition, while this case study indicates that
familiarization with EEPFD does not require an extended learning period, prior familiarity with
parametric design concepts and processes is a necessary prerequisite.
FIGURE 7-18: SUMMARY OF THE PROVIDED DATA SETS FROM TWO GA RUNS FOR FINAL DESIGN DECISION
MAKING.
FIGURE 7-19: ILLUSTRATION OF STUDENT A’S DECISION-MAKING PROCESS.
7.4 PEDAGOGICAL EXPERIMENT III: COMPUTATIONAL DESIGN WORKSHOP SETTING
7.4.1 THE PEDAGOGICAL WORKSHOP EXPERIMENT OBJECTIVES
Pedagogical Experiment III is conducted within a 4-day computational design workshop setting,
whereby the participants are students interested in learning Revit and Vasari. The goal of this
experiment is to further support and evaluate the applicability of EEPFD for the early stage design
process. According to prior research results, EEPFD is able to (1) overcome previously discussed
interoperability issues; (2) explore complex geometry; (3) increase feedback; (4) identify better-fit
results; and (5) provide trade-off studies for competing objectives. However, the tested problems
and experiment participants are still limited. In addition, the effectiveness of the feedback in
supporting the designers’ decision-making is still in question. As a result, the design of this
experiment focuses on the following six objectives:
1. To repeat Pedagogical Experiment I Part A in order to collect
additional data needed to confirm observations made during prior experiments.
2. To expand the research data set to include a new design scenario, i.e., one not used in
Pedagogical Experiment I Part A.
3. To expand the experimental variables to allow students to set up their own initial models
and exploration ranges. To observe the impact of these varying initial models, explored
parameters, and parametric ranges.
4. To expand the experiment to include individual surface exploration as a part of the
exploration process, thereby increasing potential complexity of design problems.
5. To assess the effectiveness of the feedback generated by EEPFD by observing decisions
made by students based on the EEPFD-generated data set.
6. To assess the effectiveness of the feedback generated by EEPFD through observations
made regarding the student-provided settings for subsequent GA runs, based on the
results received through EEPFD.
7.4.2 THE PEDAGOGICAL WORKSHOP CASE EXPERIMENTAL BACKGROUND
The design of this experiment is confined to fit the curriculum designated for a 4-day summer
workshop in Shanghai, China, during the summer of 2013, instructed by Dr. David Gerber, in which
six students from Tongji University took part. Also included in this data set are two students, one
graduate and one undergraduate, from the SoA USC, who performed the experiment in parallel
at USC, under the supervision of the author. The summary of the participants’ background can be
found in Table 7-13.
TABLE 7-13: SUMMARY OF STUDENTS’ BACKGROUNDS FOR THE PEDAGOGICAL EXPERIMENT III
ID     Background              Learn Revit®   Learnt Other BIM or      Time      Other Tools    Energy Sim.   Energy Sim.   Other Sim.    Other Sim.
                               (Month)        Parametric Tool (Y/N)    (Month)   Learnt         Experience    Tool          Experience    Tools
SA01   MArch, BS in Civil      12             N                        N/A       N/A            N             N/A           Y             SAP, PKPM, 3D3S
SA02   BS in LA                0              Y                        6         Grasshopper    N             N/A           N             N/A
SA03   BS in UP                0              N                        N/A       N/A            Y             Ecotect       N             N/A
SA04   MArch, BArch            0              Y                        6         Grasshopper    N             N/A           N             N/A
SA05   MArch, BArch            0              Y                        12        Grasshopper    Y             Ecotect       N             N/A
SA06   MBA, BS in Economics    0              N                        N/A       N/A            N             N/A           N             N/A
SA07   BArch                   0              N                        N/A       N/A            Y             HEED          Y             HEED
SA08   MArch, BArch            0              Y                        12        Grasshopper    Y             Ecotect       Y             Autodesk Algor
Note:
BS in LA: Bachelor of Science in Landscape Architecture
BS in UP: Bachelor of Science in Urban Planning
BArch: Bachelor of Architecture
BSArch: Bachelor of Science in Architecture
MS in LA: Master of Science in Landscape Architecture
MArch: Master of Architecture
7.4.3 THE PEDAGOGICAL WORKSHOP CASE EXPERIMENTAL DESIGN
The aim of the summer workshop is to give the attending students an opportunity to learn the Revit
platform and Vasari. As such, this experiment provided basic instructions regarding the use of
these platforms, modeling and analytical capabilities of these platforms, and an introduction to
the concept of a “designing-in performance” design process. Similar to Pedagogical
Experiment I, this experiment involves three major activities: (1) curriculum design; (2) lecture and
data collection; and (3) data analysis. The design of the curriculum is based on the limited material
students are expected to learn and retain in each one-day session while focusing on the objectives
of this research. Each day has specific course objectives, research objectives, handouts, and tasks
designed prior to the workshop’s implementation. The outline of the curriculum is illustrated in
Figure 7-20 and Table 7-14.
FIGURE 7-20: EXPERIMENTAL OUTLINE OF PEDAGOGICAL EXPERIMENT III
TABLE 7-14: CURRICULUM OUTLINE OF THE PEDAGOGICAL EXPERIMENT III.
DAY 1
  Learning Objective & Major Activities: Course introduction; tool installation; introduction of Revit, GBS, and Vasari.
  Handouts to Students: D1-1_ST_OVERVIEW.pdf; D1-2_ST_SOFTWARE.pdf; D1-Q1.pdf; D1-3_ST_INTRO.pdf
  Files to Students: Software.exe
  Lecture Notes: D1_AM.pptx; D1-1_LN_OVERVIEW.pdf; D1-2_LN_SOFTWARE.pdf; D1_PM.pptx; D1-3_LN_INTRO.pdf
  Collect Data: D1-Q1.pdf
  Research Objective: Obtain students’ background information.

DAY 2
  Learning Objective & Major Activities: Parametric conceptual mass AAC-1.rvt; EEPFD manual exploration of AAC-1.rvt; Pareto rank; decision making from EEPFD-generated data.
  Handouts to Students: D2-1_ST_PARAM_MASS1.pdf; D2-2_ST_MANUAL_EEPFD.pdf; D2-3_ST_DM_FROM_EEPFD.pdf
  Files to Students: AAC-1_MODEL.zip; AAC-1_EEPFD_DATA.zip
  Lecture Notes: D2_AM.pptx; D2-1_LN_PARAM_MASS1.pdf; D2_PM.pptx; D2-2_LN_MANUAL_EEPFD.pdf; D2-3_LN_DM_FROM_EEPFD.pdf
  Collect Data: AAC-1.rvt; AAC-1.zip (AAC-1.xlxs, AAC-1.rvt, AAC-1.jpg, D2-Q1.pdf); D2-Q2.pdf
  Research Objective: Measure students’ ability to create accurate parametric models by following the instructions; collect feedback for the AAC-1 model; solution space comparison, manual vs. Beagle; final selection comparison; resulting solution space from the adjusted exploration range.

DAY 3
  Learning Objective & Major Activities: Parametric conceptual mass AAC-2.rvt; EEPFD manual exploration of AAC-2.rvt; based on AAC-2.rvt, define a new initial model and variation range.
  Handouts to Students: D3-1_ST_PARAM_MASS2.pdf; D3-2_ST_MANUAL_EEPFD.pdf; D3-3_ST_SETUP NEW EXPLORATION.pdf
  Files to Students: AAC-2_MODEL.zip
  Lecture Notes: D3_AM.pptx; D3-1_LN_PARAM_MASS2.pdf; D3_PM.pptx; D3-2_LN_MANUAL_EEPFD.pdf; D3-3_LN_SETUP NEW EXPLORATION.pdf
  Collect Data: AAC-2.rvt; AAC-2.zip (AAC-2.xlxs, AAC-2.rvt, AAC-2.jpg, D3-Q1.pdf); AAC-3.zip (AAC-3.xlxs, AAC-3.rvt)
  Research Objective: Measure students’ parametric modeling capability; understand how different design complexity levels affect students’ ability to utilize performance feedback during the design process; collect different initial models and exploration ranges from students to further validate the need for best practices of MDO in the early stage of the design process.

DAY 4
  Learning Objective & Major Activities: EEPFD manual exploration with individual surfaces; decision making from EEPFD-generated data for AAC-2.rvt; extra learning, e.g., Dynamo (t.b.d.); other performance analyses provided by Vasari (t.b.d.).
  Handouts to Students: D4-1_ST_INDIVIDUAL SURFACE.pdf
  Files to Students: AAC-2_EEPFD_DATA.zip
  Lecture Notes: D4_AM.pptx; D4-1_LN_INDIVIDUAL SURFACE.pdf; D4_PM.pptx; D4-2_LN_EXTRA.pdf
  Collect Data: AAC-4.zip (AAC-4.xlxs, AAC-4.rvt, AAC-4.jpg, D4-Q1.pdf); D4-Q2.pdf
  Research Objective: Measure the effect of increasing complexity (AAC-2 with individual surfaces), human vs. computer; collect feedback for the AAC-2 model.
Each day of the workshop is separated into morning and afternoon sessions. Based on the learning
and research objectives, the required lecture notes and handouts are designed and given to
students in each session.
Day 1 – Introduction: The Day 1 program aims to assist students in installing the required software
and help students become familiar with the working platforms, Revit and Green Building Studio.
In addition, the background survey D1-Q1 is conducted to collect the students’ background
information regarding their major, parametric modeling skill levels, and any prior experiences in
analytical tool use. The provided survey document can be found in Appendix F: F.1. The summary
of the background questionnaire is organized in Table 7-14.
Day 2 – AAC-1 Parameterization, Design Exploration & Design Decision Making: Three major
activities are incorporated into the Day 2 curriculum. The first major activity aims to introduce the
students to the parametric conceptual mass, AAC-1.rvt. This is the same parametric model as
Scenario 10, as well as the model used in Pedagogical Experiment I Part A. The data collected in
this session is used to increase the data pool previously collected from Pedagogical Experiment I Part A. During
this activity, students are given step-by-step instructions in creating their first parametric model.
The second major activity pertains to instructing students in the manual EEPFD exploration
process. This process, as described in Section 7.2.3, is explained to students through handouts
and lectures on how to generate design alternatives, calculate three objective functions, Pareto
rank their results, and record their explorations using the given Excel calculator. However, unlike
in Pedagogical Experiment I Part A, where students were allowed discretion in their exploration
times, during this experiment, all workshop participants are allowed the same amount of time for
their manual EEPFD exploration process. At the end of the manual exploration process, a
questionnaire, D2-Q1.pdf, is given to collect their feedback regarding their explorations. The
questionnaire can be found in Appendix F: F.1. Finally, after the manual EEPFD exploration
process, an H.D.S. Beagle pre-generated data set, AAC-1_EEPFD_DATA, is given to students in order
to measure how the feedback from EEPFD is able to assist students in their design decision-making.
The given dataset, as summarized in Figure 7-21, includes an Excel file containing 730 design
alternatives along with their objective scores and overall rank information, four trade-off study
images, and 3D images of all Rank 1 design alternatives. A lecture and detailed handout
regarding how to use the data set are also given to the students prior to implementing their design
decision process. The given handout, D2-3_ST_DM_FROM_EEPFD.pdf, can be found in Appendix
F: F.2. This decision-making process consists of four steps the students are asked to follow. First,
students are asked to consider different decision strategies based on the type of content available
in the dataset, but prior to exploring the dataset itself. Second, students are given a chance to
explore the given dataset and select a design alternative. After making a selection, students are
asked to record the differences between their hypothesized design-decision making strategy and
their actual design-decision making strategy. Next, students are given the opportunity to improve
their selected design based on the given dataset. Finally, students are asked to define a new set
of exploration ranges of interest, based on the provided dataset. During their selection, a
questionnaire, D2-Q2.pdf, is given to the students, who are asked to record their final selection
and feedback regarding the decision-making process enabled through EEPFD. The given
questionnaire can be found in Appendix F: F.1.3.
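The Pareto ranking that underpins both the Excel calculator and the pre-generated dataset can be illustrated with a short sketch. The following Python snippet is a minimal illustration under the stated objective directions (EUI minimized; NPV and SPC maximized); it is not the ranking code used by H.D.S. Beagle or the provided calculator, and the sample values are illustrative.

```python
# Minimal Pareto-ranking sketch for three objectives (illustrative only):
# maximize NPV and SPC, minimize EUI.

def dominates(a, b):
    """True if a is at least as good as b on every objective and
    strictly better on at least one."""
    no_worse = (a["NPV"] >= b["NPV"] and a["SPC"] >= b["SPC"]
                and a["EUI"] <= b["EUI"])
    better = (a["NPV"] > b["NPV"] or a["SPC"] > b["SPC"]
              or a["EUI"] < b["EUI"])
    return no_worse and better

def pareto_rank(alternatives):
    """Rank 1 = non-dominated; Rank 2 = non-dominated once Rank 1
    is removed; and so on."""
    remaining = list(alternatives)
    rank = 1
    while remaining:
        front = [a for a in remaining
                 if not any(dominates(b, a) for b in remaining if b is not a)]
        for a in front:
            a["rank"] = rank
        remaining = [a for a in remaining if all(a is not f for f in front)]
        rank += 1
    return alternatives

# Illustrative alternatives (placeholder values, not a full experiment pool):
designs = [
    {"id": "alt1", "NPV": 174.4, "EUI": 51.9, "SPC": 28.2},
    {"id": "alt2", "NPV": 329.9, "EUI": 47.2, "SPC": -71.4},
    {"id": "alt3", "NPV": 100.0, "EUI": 60.0, "SPC": -80.0},
]
for d in pareto_rank(designs):
    print(d["id"], "Rank", d["rank"])   # alt1 and alt2: Rank 1; alt3: Rank 2
```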
FIGURE 7-21: SUMMARY OF THE AAC-1 EEPFD PRE-GENERATED DATA SET PROVIDED FOR PEDAGOGICAL
EXPERIMENT III.
Day 3 – AAC-2 Parameterization, Design Exploration, & Design Decision Making + AAC-3 Design
Exploration: Day 3 is similar to Day 2 in that it offers three major activities. However, the design
scenario for Day 3 varies from the one provided for Day 2, as shown in Figure 7-22, Figure 7-23,
and Table 7-15. The given scenario is named AAC-2, and has a horizontally oriented program, as
opposed to AAC-1’s vertical arrangement. For scenario AAC-2, students are instructed step-by-
step in creating their parametric model. As the manual exploration process is identical to the one
introduced on Day 2, no further instruction is provided. A questionnaire, D3-Q1.pdf, is given to
obtain their feedback regarding this process. The questionnaire can be found in Appendix F: F.1.4.
Following the exploration of AAC-2, students are asked to determine their own initial parametric
ranges of interest, based on the same parametric design, and name it AAC-3. At this point, each
student’s AAC-3 is different, as specific modifications to the design are allowed, and each
student’s exploration range is expected to individually vary. After setting their initial model and
exploration ranges, the students are also asked to explore their own AAC-3 based on these
changes. At the end of this process, another questionnaire is given, allowing the students to
provide the data and the feedback pertaining to this process.
Site location: 8788 Sunset Boulevard, West Hollywood, CA 90069
Site Area: 75 m × 150 m = 11,250 m² (121,093.5 ft²)
Design Requirements:
Retail Area: 5,500 m² (59,201 ft²)
Office Area: 10,000 m² (107,639 ft²)
Hotel Area: 10,000 m² (107,639 ft²)
Site Constraints:
Floor Area Ratio = Total Floor Area / Site Area < 6.5
Building Height < 54.86 m (180')
Design Goals/Objectives:
a. Meet the design requirements: maximize the Spatial Programming Compliance (SPC) score
b. Maximize the energy performance: minimize Energy Use Intensity (EUI)
c. Maximize the financial performance: maximize Net Present Value (NPV)
FIGURE 7-22: THE GIVEN DESIGN REQUIREMENT FOR THE PEDAGOGICAL EXPERIMENT III AAC-2.
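The site constraints above amount to a simple validity screen on each design alternative. The following is a minimal sketch of that screen, assuming hypothetical function and argument names rather than any part of the actual toolchain:

```python
SITE_AREA_M2 = 11_250.0   # 75 m x 150 m site
MAX_FAR = 6.5             # total floor area / site area must stay below 6.5
MAX_HEIGHT_M = 54.86      # 180 ft height limit

def is_valid(total_floor_area_m2: float, building_height_m: float) -> bool:
    """Screen a design alternative against the stated site constraints.
    Hypothetical helper for illustration; the names are assumptions."""
    far = total_floor_area_m2 / SITE_AREA_M2
    return far < MAX_FAR and building_height_m < MAX_HEIGHT_M

# Example: a 70,000 m2 scheme at 50 m passes; an 80,000 m2 scheme does not.
print(is_valid(70_000.0, 50.0))   # True  (FAR ~ 6.22, under both limits)
print(is_valid(80_000.0, 50.0))   # False (FAR ~ 7.11 exceeds 6.5)
```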
TABLE 7-15: DESIGNATED EXPLORATION PARAMETERS AND RANGES OF AAC-2 MANUAL EXPLORATION PROCESS.
Parameter Name | Min | Max
Design Parameters:
RetailMidWidth 16.4ft (5m) 75.5ft (23m)
OfficeMidWidth 16.4ft (5m) 75.5ft (23m)
HotelMidWidth 14.8ft (4.5m) 39.4ft (12m)
VoidOffsetWidth 13.1ft (4m) 72.2ft (22m)
OfficeStartFr# 1 3
RetailFr# 1 3
OfficeFr# 1 3
HotelFr# 1 10
Energy Parameters:
Target Percentage Glazing 10% 82%
Shade Depth 0ft (0mm) 3.6ft (1097.3 mm)
FIGURE 7-23: SECOND PARAMETRIC DESIGN AAC-2.
Day 4 – AAC-4 Design Exploration with individual surface configuration variation + AAC-2 Design
Decision through EEPFD pre-generated data: This day is divided into two major activities. First, a
lecture is given to instruct students in how to explore designs with individual surface variations
and explain the relevance of such exploration. During this process, a model AAC-4 is generated
from the previously generated AAC-2. Students are then asked to manually explore their
individual designs. At the end of this exploration, a questionnaire, D4-Q1.pdf, is given to obtain
their feedback regarding this process. The questionnaire can be found in Appendix F: F.1.6. In the
afternoon, a data set for AAC-2, pre-generated through EEPFD, is given to students, in order to
understand how the given data set can assist their design decision-making and the effectiveness
of the provided information. Figure 7-24 provides the summary of the data set. During this process,
a questionnaire, D4-Q2.pdf, is given for students to record their results and their final thoughts
regarding this process. The questionnaire can be found in Appendix F: F.1.7.
The overall collected data throughout the experiment is summarized in Table 7-16.
FIGURE 7-24: SUMMARY OF THE AAC-2 EEPFD PRE-GENERATED DATA SET PROVIDED FOR PEDAGOGICAL
EXPERIMENT III.
TABLE 7-16: SUMMARY OF THE RECORDED DATA FOR THE OVERALL PEDAGOGICAL EXPERIMENT III.
Recorded Data (with Data Type)

Background Questionnaire (D1-Q1.pdf):
1. Education background: Enumeration
2. Experience using Revit: Number
3. Experience using other parametric software: Y/N, Number, Enumeration
4. Experience using energy simulation programs: Y/N, Number, Enumeration
5. Experience using other analytic tools: Y/N, Time, Enumeration

Manual Exploration: exploration records (AAC-1, AAC-2, AAC-3, AAC-4) and questionnaires for AAC-1, AAC-2, AAC-4 (D2-Q1.pdf, D3-Q1.pdf, D4-Q1.pdf):
1. Explored parametric values for each iteration: Number
2. The three objective function scores of explored design alternatives: Number
3. Time required to obtain and calculate each objective function: Number
4. Time required to obtain the final design: Number
5. Design exploration process: Image
6. Final thoughts and suggestions regarding the manual exploration process: Description

Additional Recorded Data for AAC-3 (D3-Q2.pdf):
1. New initial design: Revit file
2. Parametric value of each parameter for the new initial design: Number
3. Method to decide the initial model: Description
4. Method to ensure the robustness of the model: Description

Decision Making through EEPFD Questionnaires (D2-Q2.pdf, D4-Q2.pdf):
1. Decision-making strategies prior to the availability of the provided dataset: Description
2. Decision-making strategies after reviewing the pre-generated dataset: Description
3. Strategy change factors: Description
4. Final selection's offspring ID and objective scores: Number
5. Parametric values and objective scores of the improved design after the selection: Number
6. Design improvement strategy: Description
7. The considered relevance of the provided data during the design improvement process: Y/N, Description
8. New exploration ranges for each parameter: Number
7.4.4 RESULTS & OBSERVATIONS OF THE PEDAGOGICAL WORKSHOP CASE EXPERIMENT
While 13 participants were initially registered for the workshop, some did not attend the entire
4-day program. In addition, due to a combination of unexpected technical challenges, such as lack
of internet connectivity, and a miscalculation in the time required for the outlined tasks, the 4-
day schedule was not strictly adhered to. However, despite these discrepancies and in the interest
of clarity, tasks that were assigned to Day 3 or Day 2 will still be referred to as such even if they
were completed at a later time. Table 7-17 summarizes the available results pertaining to each
participant, for each designated activity. The following discussion is based on this collected data
and is categorized according to the specific scenarios.
TABLE 7-17: SUMMARY OF DATA AVAILABILITY FOR EACH DESIGNATED ACTIVITY FOR EACH PARTICIPANT.
ID | Day 1: D1-Q1.pdf | Day 2: AAC-1.rvt, AAC-1_MANUAL, D2-Q1.pdf, AAC-1_EEPFD, D2-Q2.pdf | Day 3: AAC-2.rvt, AAC-2_MANUAL, D3-Q1.pdf, AAC-3_MANUAL, D3-Q2.pdf | Day 4: AAC-4_MANUAL, D4-Q1.pdf, AAC-2_EEPFD, D4-Q2.pdf
SA01 √ √ √ √ √ √ √ √
SA02 √ √ √ √ √ √ √ √
SA03 √ √ √ √ √ √ √
SA04 √ √ √ √ √ √ √
SA05 √ √ √ √
SA06 √ √ √ √ √
SA07 √ √ √ √ √ √ √ √ √ √
SA08 √ √ √ √ √ √ √ √ √ √
7.4.4.1 AAC-1 SCENARIO RESULTS & COMPARISONS
AAC-1 Manual Exploration Process
For the AAC-1 model, the students were first asked to engage in the manual exploration process,
previously described for Pedagogical Experiment I. This activity was completed by
eight students, who collectively explored 70 design alternatives. However, only 50 design
alternatives were calculated correctly during their manual exploration process, resulting in a
28.57% human error rate. The documented exploration times and solution space ranges for the
eight participants are summarized in Table 7-18. This recorded data indicates that the average
exploration time required by these students is similar to that observed during Pedagogical
Experiment I Part A, with each student devoting on average approximately three hours to exploration. Further data
analysis was performed by comparing the collected results with the solution spaces resulting from
Pedagogical Experiment I Part A, AAC-1, and H.D.S. Beagle-generated results. These results can be
considered comparable, since each group was provided the same design problem, design
constraints, parametric ranges, and instructions. Table 7-19 and Table 7-20 provide a summary of
all the explored alternatives through Pedagogical Experiment I Part A, Pedagogical Experiment III
AAC-1, and EEPFD. This includes solutions that violate the provided project constraints, as
previously discussed in Section 7.2.3.
In Table 7-19, the design alternatives generated by all three solution spaces are ranked against
each other. When compared against this all-inclusive pool, the percentage of Pareto solutions
contributed by each group can be determined. In this environment, H.D.S. Beagle was able to
identify a higher percentage of solutions designated as Pareto solutions than either of the
pedagogical groups. Moreover, when H.D.S. Beagle is allowed to run the GA for a longer period,
it provides a greater number of solutions. When the two pedagogical groups are ranked together
with the solution space generated by H.D.S. Beagle in three hours of runtime, 30 design
alternatives are generated by H.D.S. Beagle, with 40% of these solutions designated as Pareto
solutions. When combined with the solution spaces generated over a 7-hour runtime and through
10 generations, over 60% of Beagle's solutions are designated as Pareto. With the assumption of
technological progression, it can be expected that more solutions could be generated by H.D.S.
Beagle within the 3-hour period, hence increasing the number of Pareto-designated solutions.
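As a small worked example of this bookkeeping, each percentage in Table 7-19 is a group's Rank 1 count divided by its own solution count once the merged pool has been ranked. A minimal sketch using the counts from the 3-hour comparison row of Table 7-19 (the group labels are illustrative):

```python
# Per-group Pareto share as reported in Table 7-19: after ranking the
# merged pool, count each group's Rank 1 solutions against its own total.
# Counts below come from the 3-hour comparison row of Table 7-19.
totals = {"PedIII_AAC1": 66, "PedI_PartA": 120, "Beagle_3hr": 30}
rank1  = {"PedIII_AAC1": 24, "PedI_PartA": 29, "Beagle_3hr": 12}

for group, n in totals.items():
    share = rank1[group] / n
    print(f"{group}: {rank1[group]} of {n} solutions are Pareto ({share:.0%})")
# PedIII_AAC1: 24 of 66 (36%); PedI_PartA: 29 of 120 (24%); Beagle_3hr: 12 of 30 (40%)
```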
However, despite the increase in Pareto-designated design alternatives, H.D.S. Beagle was not
able to find a solution that exceeded the performance boundaries of the pedagogical groups,
as illustrated in Table 7-20. These results indicate that the exploration by the workshop
participants identified higher performing solutions with respect to the NPV and EUI, but not the
SPC, when comparing the results with the solution spaces generated by the students that took
part in Pedagogical Experiment I Part A. Between these two groups a significant discrepancy in
the highest-performing EUI and SPC is not observed. However, an anomaly is observed in the
AAC-1 exploration pool, whereby one student explored an extreme solution that achieved the
highest NPV score among all solutions. This specific design, however, also had the lowest
documented SPC score and is thus considered invalid when examined against the allowable floor
area ratio and the height limit of the project site.
Table 7-21 provides a comparison of all the solution spaces of each group consisting of only valid
design alternatives. In this scenario, design alternatives are considered “valid” if they adhered to
all provided design constraints. Based on these results, H.D.S. Beagle was able to generate a
solution space with better performing EUI and NPV. However, the highest SPC score is still found
in the student-generated solution space. This result is consistent with the findings reported for
Pedagogical Experiment I Part A. One explanation for this observation is that, due to the utilized
platform, students were able to observe the direct impact their design decisions had on the SPC
score of their design. This led to optimization of the SPC becoming the primary goal, with
optimization of the EUI and NPV becoming secondary, as opposed to the equal treatment all
three objectives receive during H.D.S. Beagle's GA run. This result supports the need to provide
instant feedback for EUI and NPV, in line with that available for the SPC, in order to allow for
equal consideration of all objectives during the design decision-making process.
TABLE 7-18: SUMMARY OF PEDAGOGICAL EXPERIMENT III – AAC-1 MANUAL EXPLORATION PROCESS’S
ITERATION NUMBERS, EXPLORATION TIMES, AND THE EXPLORED RANGES OF THE GENERATED
SOLUTION SPACE.
Student ID | Iteration # | Exploration Time (Hour) | NPV (Million USD): Min. / Max. / Range | EUI (kBtu/ft²/yr): Min. / Max. / Range | SPC: Min. / Max. / Range
SA01 8 2.5 -26.1 380.3 406.4 49.5 105.0 55.5 -124.0 61.8 185.8
SA02 12 2 -41.7 161.2 202.9 50.8 142.7 91.8 9.1 80.0 71.0
SA03 15 N/A -40.6 731.6 772.3 47.4 103.5 56.1 -365.0 45.5 410.6
SA04 7 6 46.8 83.7 37.0 53.9 65.9 12.1 75.4 95.4 19.9
SA05 10 3 105.9 401.3 295.4 47.8 65.9 18.2 -139.7 77.5 217.2
SA06 9 2.5 -48.8 1908.5 1957.4 45.0 261.2 216.2 -1166.5 7.2 1173.7
SA07 4 2 -44.3 112.6 156.9 51.1 141.3 90.3 7.8 74.6 66.8
SA08 8 3 19.1 154.8 135.7 50.3 67.8 17.5 42.7 99.1 56.4
MIN. 4 2 -48.8 83.7 37.0 45.0 65.9 12.1 -1166.5 7.2 19.9
MAX. 15 6 105.9 1908.5 1957.4 53.9 261.2 216.2 75.4 99.1 1173.7
AVERAGE. 9.1 3 -3.7 491.8 495.5 49.5 119.2 69.7 -207.5 67.6 275.2
STDEV.P 3.1 1.3 52.5 572.1 591.8 2.5 61.2 62.7 386.0 27.9 359.7
TABLE 7-19: PERFORMANCE COMPARISON OF THE PARETO SOLUTIONS GENERATED BY THE PEDAGOGICAL
EXPERIMENT I PART A, PEDAGOGICAL EXPERIMENT III AAC-1, AND H.D.S. BEAGLE AFTER 3-HOUR
AND 7-HOUR RUNTIMES AND 10 GENERATIONS.
Columns: Pedagogical Experiment III AAC-1 | Pedagogical Experiment I Part A | H.D.S. Beagle Run_20121108_0004 | Total
Ranked with Beagle 3 HRS.: Pareto Solution No. 24 (36%) | 29 (24%) | 12 (40%) | 65; Solution No. 66 | 120 | 30 | 216
Ranked with Beagle 7 HRS.: Pareto Solution No. 24 (36%) | 27 (23%) | 45 (64%) | 96; Solution No. 66 | 120 | 70 | 258
Ranked with Beagle 10 GEN.: Pareto Solution No. 20 (30%) | 26 (22%) | 81 (62%) | 129; Solution No. 66 | 120 | 130 | 318
TABLE 7-20: PERFORMANCE COMPARISON OF THE SOLUTION SPACES GENERATED BY THE PEDAGOGICAL
EXPERIMENT I PART A, PEDAGOGICAL EXPERIMENT III AAC-1, AND H.D.S. BEAGLE AFTER 3-HOUR
AND 7-HOUR RUNTIMES AND 10 GENERATIONS. IMPROVEMENT IS MEASURED AGAINST THE INITIAL
BASELINE OBJECTIVE SCORE.
Columns: Pedagogical Experiment III AAC-1 | Pedagogical Experiment I Part A | H.D.S. Beagle Run_20121108_0004: 3 HRS. | 7 HRS. | 10 GEN.
NPV MIN. -48.8 -48.8 -40.8 -40.8 -40.8
(Million USD) MAX. 1908.5 759.9 835.6 835.6 835.6
INITIAL: -40.8 IMPROVED 1949.4 800.7 876.4 876.4 876.4
EUI MIN. 45.0 42.8 46.1 45.6 45.6
(kBtu/ft²/yr) MAX. 261.2 166.2 114.2 114.2 114.2
INITIAL: 101.2 IMPROVED 56.2 58.4 55.1 55.6 55.6
SPC MIN. -1166.5 -415.2 -444.9 -444.9 -444.9
MAX. 99.1 99.3 87.9 87.9 87.9
INITIAL: 9.7 IMPROVED 89.5 89.6 78.2 78.2 78.2
TABLE 7-21: PERFORMANCE COMPARISON OF THE VALID SOLUTION SPACES GENERATED BY THE PEDAGOGICAL
EXPERIMENT I PART A, PEDAGOGICAL EXPERIMENT III AAC-1, AND H.D.S. BEAGLE AFTER 3-HOUR
AND 7-HOUR RUNTIMES AND 10 GENERATIONS. IMPROVEMENT IS MEASURED AGAINST THE INITIAL
BASELINE OBJECTIVE SCORE.
Columns: Pedagogical Experiment III AAC-1 | Pedagogical Experiment I (ARCH507) | H.D.S. Beagle Run_20121108_0004: 3 HRS. | 7 HRS. | 10 GEN.
NPV MIN. -48.8 -48.8 -40.8 -40.8 -40.8
(Million USD) MAX. 249.9 197.4 315.3 329.9 329.9
INITIAL: -40.8 IMPROVED 290.7 238.2 356.1 370.7 370.7
EUI MIN. 47.8 46.2 46.1 46.1 46.1
(kBtu/ft²/yr) MAX. 261.2 166.2 114.2 114.2 114.2
INITIAL: 101.2 IMPROVED 53.4 55.0 54.3 54.3 54.3
SPC MIN. -25.6 -12.5 -72.3 -72.3 -80.6
MAX. 99.1 99.3 87.9 87.9 87.9
INITIAL: 9.7 IMPROVED 89.5 89.6 78.2 78.2 78.2
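The "improved" rows in Tables 7-20 and 7-21 follow directly from the boundary values: for the maximized objectives (NPV and SPC) the improvement is the best score minus the initial baseline, while for the minimized EUI it is the baseline minus the best (lowest) score. A brief sketch of this bookkeeping, with the helper name being an illustrative assumption:

```python
# "Improved" values in Tables 7-20/7-21 relative to the initial baseline:
# maximized objectives (NPV, SPC): improvement = best - initial;
# minimized objective (EUI): improvement = initial - best.

def improvement(best: float, initial: float, minimize: bool = False) -> float:
    return initial - best if minimize else best - initial

# Values from the Pedagogical Experiment III AAC-1 column of Table 7-20:
print(round(improvement(1908.5, -40.8), 1))               # NPV: 1949.3 (table: 1949.4, from unrounded source values)
print(round(improvement(45.0, 101.2, minimize=True), 1))  # EUI: 56.2, matching the table
print(round(improvement(99.1, 9.7), 1))                   # SPC: 89.4 (table: 89.5, from unrounded source values)
```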
FIGURE 7-25: SOLUTION SPACE COMPARISON OF PEDAGOGICAL EXPERIMENT I PART A, PEDAGOGICAL
EXPERIMENT III AAC-1 AND H.D.S. BEAGLE GENERATED RESULTS (RUN_20131108_0004).
AAC-1 Decision-making through EEPFD Pre-generated Data
In this activity, seven participants provided the requested responses by completing the survey
D2-Q2.pdf.
The first two sections of the questionnaire focus on understanding whether the students were
able to select a better fit final design compared to their manual exploration process. Table 7-22
summarizes the comparison of design selections from each student’s manual exploration process
and EEPFD pre-generated data. A note regarding each student’s decision-making strategy is also
provided, as documented by the students during their decision-making process. When ranking
their selected design based on the manual process and that using the EEPFD pre-generated data,
the majority of the students were able to identify a Pareto solution in both approaches, with the
exception of SA03’s manual exploration and SA07’s EEPFD exploration process. SA03 was able to
identify a higher performing design through EEPFD, which improved the original design from Rank
3 to Rank 1. On the other hand, SA06 was able to find a better-fit valid offspring. Overall, six out of
the seven students were able to use the EEPFD pre-generated data to identify Pareto solutions;
the exception was SA07's selection, which was given Rank 2. The subsequent
analysis of his decision-making strategies revealed that his documented selection criteria were
not based on the use of the three objective functions as the primary goal, but rather on the
rotation factors, as the student desired to emphasize his aesthetic objective. This shift in focus
resulted in his design selection not being ranked as a Pareto solution.
If the design alternatives for both approaches are compared, no clear dominance of the design
alternatives provided by EEPFD is revealed. However, when put in the context of each student's
student’s decision-making strategy, it is evident that EEPFD was able to provide improved
performance in alignment with each student’s outlined priorities. Thus, EEPFD demonstrates an
ability to find better-fit solutions based on each student’s individual goals. This direct relationship
between the outlined design decision strategy and the resulting design alternative’s performance
is not readily observable in the students’ manual exploration process.
The third section of the questionnaire asked the students to improve their design by utilizing the
EEPFD pre-generated dataset. The responses provided in this section are summarized in Figure
7-26. The ranking information provided is based on a solution pool consisting of all the designs
generated by all three approaches. In this case, no new Pareto solutions were generated during
the improvement process, with all the solutions receiving Rank 1. The observed exception is SA07,
since his first declared objective was to maintain the rotation parameters for reasons of aesthetic
preference. Although the generated solutions do not clearly dominate their prior design selection,
the provided data is able to further support the students’ decision making towards their design
goal. This conclusion is drawn from the answers provided to the questionnaire item “Does the
data help improve your design?” received a positive response from all seven participants. When
elaborating on their experience, some students commented on the questionable efficiency
regarding the provided data, while others stated that they felt more confident regarding their
design decisions and were thus able to improve their design in alignment with their stated design
priorities.
TABLE 7-22: PERFORMANCE COMPARISON OF THE SELECTED DESIGNS FROM THE MANUAL EXPLORATION
PROCESS AND SUBSEQUENT DESIGN SELECTED FROM THE EEPFD PRE-GENERATED DATA.
IMPROVEMENTS IN THE OBJECTIVE PERFORMANCES THROUGH EEPFD PRE-GENERATED DATA ARE
HIGHLIGHTED. PARETO RANKS BASED ON SOLUTION POOL CONSISTING OF ONLY THE 14 SELECTED
DESIGNS.
Columns: ID | Approach (1 = manual exploration; 2 = selection from EEPFD pre-generated dataset) | NPV (Million USD) | EUI (kBtu/ft²/yr) | SPC | Decision-making priority during the decision-making process via EEPFD pre-generated data | Violation of site constraints (Y/N) | Rank
SA01 | 1 | 174.4 | 51.9 | 28.2 | n/a | N | 1
SA01 | 2 | 329.9 | 47.2 | -71.4 | Valid Rank 1 > NPV+EUI > NPV+SPC | N | 1
SA02 | 1 | 161.2 | 50.8 | 44.9 | n/a | N | 1
SA02 | 2 | 300.9 | 51.9 | -51.1 | NPV>EUI>SPC | N | 1
SA03 | 1 | 9.2 | 77.7 | 45.5 | n/a | N | 3
SA03 | 2 | 104.6 | 48.0 | 78.9 | EUI+SPC>NPV | N | 1
SA04 | 1 | 83.7 | 54.3 | 93.8 | n/a | N | 1
SA04 | 2 | 79.5 | 49.4 | 97.7 | SPC>EUI+NPV | N | 1
SA06 | 1 | 1908.5 | 56.8 | -1166.5 | n/a | Y | 1
SA06 | 2 | 79.2 | 71.3 | 99.3 | SPC>EUI+SPC>NPV | N | 1
SA07 | 1 | 112.6 | 51.1 | 74.6 | n/a | N | 1
SA07 | 2 | 91.8 | 71.9 | 71.1 | Aesthetics (R=270°)>EUI | N | 2
SA08 | 1 | 80.4 | 50.3 | 96.0 | n/a | N | 1
SA08 | 2 | 83.1 | 55.4 | 95.3 | SPC>EUI>NPV | Y | 1
FIGURE 7-26: STUDENTS’ SELECTED DESIGN VIA THE THREE EXPLORATION APPROACHES: (1) SELECTION FROM
THEIR MANUAL EXPLORATION PROCESS; (2) SELECTION FROM EEPFD PRE-GENERATED SOLUTION
SPACE; AND (3) IMPROVED DESIGN BASED ON THE PROVIDED EEPFD PRE-GENERATED DATASET.
In the last section of this part of the questionnaire, the students are asked to propose a new set
of exploration ranges that they would be interested in exploring if they were allowed to run
EEPFD themselves, based on the previously provided dataset. The exploration ranges they set up
are summarized in Table 7-23 below. From this table, it can be observed that the new exploration
ranges most of the students proposed were within the original ranges, thereby narrowing the
explored solution pool. The exceptions to this are highlighted in Table 7-23. This observation
implies that the provided data helped students eliminate unwanted options, and the associated
parametric values, during this process.
While there are seven responses, only the settings provided by six students could be run through
the EEPFD process. This is due to SA07's intent to explore more geometrically diverse design
alternatives, which failed to maintain the robustness of the model during the exploration process.
As a result, SA07's new range settings are excluded from further comparisons.
Thus, the remaining six students’ new exploration ranges were run through H.D.S. Beagle for up
to 6 generations with the same GA settings as used previously. Table 7-23 summarizes each
student’s resulting solution space performance in terms of the performance boundaries for the
NPV, EUI, and SPC. Also provided is the number of the Rank 1 solutions, obtained from ranking all
new design alternatives with the original solution space. Figure 7-27 provides the distribution
comparison of these students’ solution spaces based on their new exploration settings.
The results show that the performance boundaries of each student's solution space are limited
by their narrower exploration settings. Despite these more focused ranges, the new solution
spaces did not result in better performing boundaries. None of the new solutions provided a
superior NPV, and only SA08 generated a better performing EUI solution. In terms of the SPC
score, only SA01's new solution space provided design alternatives with a better SPC when
compared with the original solution space. In addition, as demonstrated in Figure 7-27, the newly
generated solution spaces are more clustered when compared to the original solution space.
However, the clusters in these new solution spaces do not necessarily yield new Pareto solutions
when ranked against each other and against the original solution space. As Table 7-23 shows, the
new solution spaces generated based on the exploration settings of SA03 and SA04 do not have
any Pareto solutions when ranked against the design alternatives from the other solution spaces.
However, the new exploration settings of other students, namely SA01, SA02, SA06, and SA08,
did produce new Pareto solutions. It can be observed from Table 7-23 and Figure 7-27 that SA08's
solution space provides more Pareto solutions in the NPV and EUI trade-off study and SA01's
solution space has a better SPC score. However, when considering all three objectives, SA02 has
the highest Pareto solution rate.
Overall, the current results indicate that, if only the performance boundary of the solution space
and the Pareto solutions generated are considered, the newly generated solution spaces cannot
be viewed as an improvement. However, one significant factor that cannot be included at this
time due to its unavailability is the design intent behind each proposed set of exploration ranges.
As such, this experiment fails to measure students’ expected results (goals) when they set up the
new exploration ranges. Therefore, as the design exploration intent is lacking, the success or
improvement of the new solution space cannot be accurately determined. In future iterations of
this pedagogical experiment, the design exploration intent should be included at this stage, as this
will enhance the understanding of the effectiveness of the feedback provided by EEPFD.
TABLE 7-23: STUDENTS’ PROPOSED NEW EXPLORATION RANGES AND THE RESULTING SOLUTION SPACE
PERFORMANCE BOUNDARIES AFTER 6 GENERATIONS. EXPLORATION RANGES EXCEEDING THE
ORIGINALLY PROVIDED RANGES HAVE BEEN HIGHLIGHTED.
Students’ new exploration parameters and variation ranges
Parameter Name | Unit | Initial | Original | SA01 | SA02 | SA03 | SA04 | SA06 | SA07 | SA08
SiteSetback ft 60 [15,60] [16,49] [15,25] Fix Fix [16,20] Fix [23,49]
R1 ° 50 [0,270] [0,90] [20,50] [0,180] [0,180] [85,100] [0,1080] [0,90]
ScaleMid .9 [.4,2.2] [.6,1.2] [1.1,1.2] [.8,1] [.4,1.5] [.6,1] [5,10] [.5,1.5]
ScaleTop .6 [.4,2.2] [.6,1.2] [.3,.6] [.6,1] [.4,1.5] [.4,0.5] [5,10] [.5,1.5]
RetailFr# 1 [1,4] [1,3] [1,2] [1,4] [1,3] [3,4] Fix [1,3]
OfficeFr# 3 [2,8] [2,6] [3,5] [4,8] [2,6] [6,8] Fix [2,6]
HotelFr# 3 [2,8] [2,6] [5,8] [4,8] [2,8] [3,5] Fix [2,6]
Glazing% .4 [.1,.82] [.2,1] [.3,.5] [.1,.6] [.1,.9] [.6,.7] Fix [.1,.7]
Sill Height ft 2.5 [0,4.5] [2,3.9] [2.5,3.5] Fix [1,3.9] [10,15] Fix [0,0.3]
Shade Depth ft 1 [0,3.6] [0.3,3.3] [0.4,0.6] Fix [0.3,3] Fix [3.3,32.8] [0,2.8]
Skylights% .05 [0,.45] [0,1] Fix Fix Fix Fix Fix [0,.1]
Solution Space Performance
Initial Original SA01 SA02 SA03 SA04 SA06 SA07 SA08
NPV -40.8 Min. -40.8 40.8 40.8 40.8 -48.5 40.8 n/a 40.8
Million USD Max. 835.2 285.5 476.3 -19.0 4.6 392.5 n/a 624.8
EUI 106.7 Min. 45.6 48.2 46.0 64.1 69.7 52.4 n/a 42.3
kBtu/ft²/yr Max. 114.2 121.4 106.7 140.3 139.2 106.7 n/a 106.7
SPC 9.7 Min. -444.9 -62.2 -193.0 9.7 4.1 -136.4 n/a -316.0
Max. 87.9 93.9 36.1 27.2 44.4 74.9 n/a 83.0
Total
Solution no. 574 82 82 82 82 82 82 n/a 82
Pareto no.
NPV,EUI,SPC 156 33 22 49 0 0 23 n/a 29
NPV+EUI 5 2 0 0 0 0 0 n/a 3
NPV+SPC 98 18 18 33 0 0 18 n/a 11
EUI+SPC 9 1 3 0 0 0 0 n/a 5
FIGURE 7-27: COMPARISON OF THE DISTRIBUTION OF THE SOLUTION SPACES GENERATED FROM EEPFD BASED
ON EACH STUDENT’S NEW EXPLORATION RANGE.
7.4.4.2 AAC-2 SCENARIO RESULTS & COMPARISONS
AAC-2 Manual Exploration Process
In this part of the exercise, five responses were received. During this activity, students were asked
to explore the AAC-2 parametric model with the same initial settings and exploration ranges as
described in Section 7.4.3. Table 7-24 summarizes their resulting exploration numbers, times and
performance of each solution space in EUI, SPC, and NPV. Based on the collected results, students'
average exploration time was 2.8 hours, with seven design alternatives explored on average. This
exploration pace is slower than the recorded exploration speed of AAC-1, which averaged 9.1
iterations within three hours. This can be attributed to the additional steps required to update all
three masses to generate a new design alternative. Moreover, as only 7 out of the 17 recorded
design alternatives were calculated correctly, this resulted in a 58.5% human error rate for this
activity, roughly twice the human error rate observed in the AAC-1 manual exploration process.
Despite this observed increase in the human error rate, half
of the students responded through the survey that they found this exploration easier than AAC-1
due to parameters in this scenario being more independent in nature and having less influence
on each other. In addition, the prior experience of the AAC-1 manual exploration activity provided
them a better foundation for formulating strategies, as they are more familiar with the
exploration process. However, the other half of the students responded through the survey that
this exploration process was harder, due to the necessary repetitive input for three different
masses, as required for this specific scenario. In addition, as a greater number of parameters were
involved in this scenario, these students found it harder to understand the direct impact of each
individual parameter on the objective functions.
TABLE 7-24: SUMMARY OF PEDAGOGICAL EXPERIMENT III – AAC-2 MANUAL EXPLORATION PROCESS
ITERATION NUMBERS, EXPLORATION TIMES, AND THE EXPLORED RANGES OF THE GENERATED
SOLUTION SPACE.
Student ID | Iteration # | Exploration Time (Hour) | NPV (Million USD): Min. / Max. / Range | EUI (kBtu/ft²/yr): Min. / Max. / Range | SPC: Min. / Max. / Range
SA01 8 2 125.0 220.9 95.9 44.3 68.7 24.4 61.4 69.4 8.0
SA02 12 3 n/a 61.03 n/a 55.86 n/a n/a n/a 63.8 n/a
SA03 7 n/a -263.1 228.0 491.0 53.7 59.4 5.7 6.4 77.3 70.9
SA07 3 2 -108.3 149.3 257.5 44.7 56.9 12.3 40.5 78.5 38.0
SA08 5 4 n/a n/a n/a n/a n/a n/a n/a n/a n/a
MIN. 3.0 2.0 -263.1 61.0 95.9 44.3 56.9 5.7 6.4 63.8 8.0
MAX. 12.0 4.0 125.0 228.0 491.0 55.9 68.7 24.4 61.4 78.5 70.9
AVERAGE. 7.0 2.8 -82.1 167.4 281.5 49.2 61.7 14.2 36.1 72.3 39.0
STDEV.P 3.0 0.8 159.5 60.5 162.2 4.7 5.0 7.7 22.7 6.0 25.7
* n/a denotes data missing from collected students’ results.
Similar to the analysis pertaining to AAC-1, in order to understand the EEPFD’s potential to assist
the exploration process, students’ manual exploration results are compared with the results
generated through the EEPFD process in 2.8 hours, corresponding to the average exploration time
that students reported. For this scenario, H.D.S. Beagle was able to generate each design
alternative within one minute; as a result, a total of 168 design alternatives, one per minute over
the same averaged 2.8 hours, were generated for comparison with the students' results. As indicated by the
comparison shown in Table 7-25, H.D.S. Beagle is capable of providing design alternatives with
only a superior EUI boundary, while the superior boundaries for the SPC and NPV are found in the
student generated solution space. When ranking the combined solution space of the students'
results and H.D.S. Beagle's results generated in 2.8 hours, only four design alternatives from the
168 solutions generated by H.D.S. Beagle are designated as Pareto solutions. These results did
not exclude the miscalculated results from students, but rather used the recalculated results for
this comparison. Assuming that no mistakes were made during the students' exploration process,
students were able to optimize their designs in terms of NPV and SPC faster than H.D.S. Beagle
for this scenario. However, H.D.S. Beagle still performed better in terms of finding the optimal
EUI performance. This result suggests that, despite the three objectives being equally important,
designers have their own preferences among the performance objectives. As observed in this
data set, when the relationship between design parameters and performance objectives is easily
discernible, it is easier for students to optimize the performance objectives in order of their
preference. While all three objective scores are accessible at the same time, and despite students'
expressed goals of lowering the EUI of their designs, the observed results of the manual
exploration process do not coincide with this stated design intent. Instead, energy performance
appears to be the last objective to be considered. As such, this process cannot be deemed to
provide a "designing-in energy performance" environment under these circumstances.
With further comparison against the solution spaces resulting from the 5-hour and 30-generation
runtimes, H.D.S. Beagle is able to generate design alternatives with improved NPV and EUI
performance, as listed in Table 7-25. Although H.D.S. Beagle is not able to generate the majority
of the Pareto solutions within the designated 2.8-hour runtime, it continues to generate a growing
number of Pareto solutions, which can dominate the solutions generated by students using the
manual exploration process.
AAC-2 Decision-making through EEPFD Pre-Generated Data
Only two students, SA07 and SA08, fully participated in this exercise. Similar to the previously
described AAC-1’s decision-making through EEPFD process, a questionnaire and dataset are given
to students to guide them through the process following these three major activities. First,
students are asked to select their design based on the given pre-generated data set and record
their decision-making strategy. Secondly, students are asked how they would improve their
selected design based on the given data set. Finally, students are asked to define new parametric
ranges for exploration. Table 7-26 and Figure 7-28 summarize the results pertaining to the first
two activities and compare the selected designs from the manual exploration process.
TABLE 7-25: PERFORMANCE COMPARISON OF THE SOLUTION SPACES GENERATED BY THE PEDAGOGICAL
EXPERIMENT III AAC-2 AND H.D.S. BEAGLE AFTER RUNTIMES OF 2.8 HOURS, 5 HOURS, AND 30
GENERATIONS. IMPROVEMENT IS MEASURED AGAINST THE INITIAL BASELINE OBJECTIVE SCORE.
PERCENT OF CONTRIBUTION TO THE PARETO SOLUTION POOL IS ALSO PROVIDED.
Columns: Pedagogical Experiment III AAC-2 | H.D.S. Beagle Run_201311021614: 2.8 HRS. | 5 HRS. | 30 GEN.
NPV MIN. -263.1 -264.4 -264.4 -264.4
(Million USD) MAX. 228.0 172.7 234.8 235.1
INITIAL: -82.6 IMPROVED 310.5 255.2 317.3 317.7
EUI MIN. 44.3 41.1 40.8 40.8
(kBtu/ft²/yr) MAX. 68.7 118.2 118.2 118.2
INITIAL: 80.9 IMPROVED 36.6 39.8 40.0 40.0
SPC MIN. 6.4 6.1 6.1 6.1
MAX. 78.5 77.8 84.6 84.6
INITIAL: 45.8 IMPROVED 47.5 32.0 38.9 38.9
Total
Pareto Solution No. (2.8 HRS.) 12 (67%) 5 (3%) 17
Solution No. 18 168 186
Pareto Solution No. (5 HRS.) 6 (33%) 52 (17%) 58
Solution No. 18 300 318
Pareto Solution No. (30 GEN) 6 (33%) 150 (31%) 156
Solution No. 18 490 508
TABLE 7-26: PERFORMANCE COMPARISON OF THE SELECTED DESIGNS FROM THE MANUAL EXPLORATION
PROCESS AND SUBSEQUENT DESIGNS SELECTED FROM THE EEPFD PRE-GENERATED DATA.
IMPROVEMENTS IN THE OBJECTIVE PERFORMANCES THROUGH EEPFD PRE-GENERATED DATA ARE
HIGHLIGHTED. PARETO RANKING IS BASED ON THE SOLUTION POOL CONSISTING OF ONLY THE 6
SELECTED DESIGNS.
Columns: ID | Approach (1 = manual exploration; 2 = selection from EEPFD pre-generated dataset; 3 = improvement from EEPFD pre-generated dataset) | NPV (Million USD) | EUI (kBtu/ft²/yr) | SPC | Decision-making priority during the decision-making process via EEPFD pre-generated data | Violation of site constraints (Y/N) | Rank
SA07 | 1 | 149.2 | 44.7 | 78.5 | n/a | N | 1
SA07 | 2 | 101.9 | 43.1 | 79.7 | EUI+SPC>NPV | N | 1
SA07 | 3 | 125.5 | 44.6 | 83.4 | SPC>EUI>NPV | N | 1
SA08 | 1 | n/a | n/a | n/a | n/a | n/a | n/a
SA08 | 2 | 91.1 | 43.6 | 82.3 | SPC | N | 1
SA08 | 3 | 105.4 | 49.7 | 84.2 | SPC | N | 1
FIGURE 7-28: PERFORMANCE COMPARISON OF THE SOLUTION SPACE PROVIDED THROUGH EEPFD AND THE
SELECTED DESIGNS OF SA07 AND SA08 VIA THREE DIFFERENT APPROACHES: (1) MANUAL
EXPLORATION; (2) SELECTION BASED ON THE GIVEN EEPFD PRE-GENERATED DATASET; AND (3)
IMPROVED DESIGN BASED ON THE GIVEN EEPFD PRE-GENERATED DATASET.
When the results from all three approaches are ranked, no solutions that clearly dominate the
rest are identified, with the exception of SA08’s last approach, which is dominated by the other
solutions. However, despite SA08’s final improved design being dominated by the others in the
solution space pool, SA08 was successful in improving the design selected from the EEPFD
provided data set based on the EEPFD provided data. This observation is consistent with
observations made during the students’ AAC-1 EEPFD exploration process. When students utilized
the EEPFD pre-generated dataset, they were able to improve their design that has a dominating
performance to that of the previous design. This is not observed during their manual exploration
process based on the fact that their final selections are not usually the last they have explored.
Another observation of their recorded manual exploration process is that the students are not
always able to improve their designs towards their overall indicated design goal. These results
also demonstrate that, when given an EEPFD pre-generated data set, students are able to improve
their design towards their intended goal, while considering the EUI, SPC, and NPV scores
simultaneously. In addition, students are able to further their stated design intent of reducing the
EUI of their design by using the EEPFD pre-generated data set as opposed to the other two
approaches.
It is observed that, while students were able to improve upon their selected design, the provided
EEPFD data set did not include the design alternatives with the superior SPC objective scores that
were identified by students during the manual exploration process. This indicates a potential
limitation in the solution space provided for selection: the improvement is bound by the given
dataset. While the efficiency, inclusivity, and optimization of
the algorithm used by H.D.S. Beagle is not the focus of this research, because the provided
solution space did not include the design alternatives with comparable SPC results, the students
were unable to improve their selected design beyond the design alternative generated through
their manual exploration process. As a result, the performance of their selected design from the
given dataset could not clearly dominate their manually defined design. This shortcoming may in
part be addressed through a future ability of H.D.S. Beagle to give weight to specific objective
scores over others in reflection of designer intent or individual preference.
In the last session of this activity, the students were required to set up new exploration ranges on
the presumption that they would be given another chance to explore through EEPFD. The two
available responses were subsequently run through H.D.S. Beagle for up to six generations with
the same GA settings as used previously. Table 7-27 summarizes the resulting performance of the
solution spaces in terms of the performance boundaries in NPV, EUI, and SPC, with Rank 1 solution
numbers when ranked against the cumulative available design
alternatives. Figure 7-29 provides a distribution comparison of these students’ solution spaces
based on their new exploration settings. In this part of the experiment, students were not limited
to their exploration ranges, as the task was to further explore broader ranges of parametric
settings. Student SA07 employed a similar approach to the one he utilized in AAC-1. This student
indicated that he focused solely on exploring the potential degrees of the dramatic geometric
variations and their relative performance. As a result, his exploration was dedicated to exploring
the geometric parameters, as opposed to energy-related parameters, through the provided new
ranges. For this reason, SA07’s resulting solution space has a limited performance in EUI when
compared with the other two solution spaces, but a wider performance range in NPV and SPC.
Student SA08, in this case, identified a limitation in the previously provided ranges, which resulted
in the limited performance boundary of the SPC. As a result, SA08 indicated a focus on expanding
the exploration range in order to achieve a higher SPC score. Based on these results, as shown in
Table 7-27, the solution space generated through Student SA08's new exploration settings does
provide an increased SPC score in comparison with the other two.
When ranking the three solution spaces in tandem, based on the three objective functions, SA07's
solution space has the highest Pareto solution rate, whereby 32 out of the 55 identified Pareto
solutions are from SA07's solution space. However, when ranking is based on EUI and SPC, none
of SA07's solutions are designated as Pareto solutions. This implies that improving the NPV score
was the dominant objective of SA07's exploration, with the EUI and SPC scores disregarded. This
can also be observed from Figure 7-29. For this part of the experiment, the students' new
exploration ranges generate more Pareto solutions when compared with the original solution
space. If the comparison is based on the performance boundaries, the students' solution spaces
are superior in SPC and NPV, but not EUI. On the other hand, if the comparison is based on the
number of generated Pareto solutions, both students' new solution spaces are superior to the
original. However, similar to the prior discussion, the success of the feedback cannot currently be
determined due to the lack of records regarding the intent of the new exploration ranges. This
part of the metrics should be further developed, measured, and recorded in future experiments,
allowing further evaluation of the effectiveness of the feedback for the next round of explorations.
TABLE 7-27: STUDENTS’ PROPOSED NEW EXPLORATION RANGES OF AAC-2 AND THE RESULTING SOLUTION
SPACE PERFORMANCE BOUNDARIES AFTER 6 GENERATIONS. EXPLORATION RANGES EXCEEDING THE
ORIGINALLY PROVIDED RANGES HAVE BEEN HIGHLIGHTED.
Students’ new exploration parameters and variation ranges
Parameter Name Unit Initial Original SA07 SA08
VoidOffsetWidth ft 65.6 [13.1,72.2] Fix [13.1,26.2]
RetailMidWidth ft 82.0 [16.4,75.5] [3.3,49.2] [16.4,75.5]
OfficeMidWidth ft 82.0 [16.4,75.5] [3.3,59.1] [16.4,114.8]
HotelMidWidth ft 16.4 [14.8,39.4] [3.3,26.2] [16.4,39.4]
OfficeStartFr# 1 [1,3] Fix [0,3]
RetailFr# 2 [1,3] [1,8] [1,3]
OfficeFr# 2 [1,3] [1,8] [1,5]
HotelFr# 7 [1,10] [1,30] [1,10]
Glazing% .4 [.1,.82] Fix [.1,.82]
Shade Depth ft 1 [0,3.6] Fix [0,4]
Performance boundaries of the solution spaces generated through 10 generations (approx. 2.8 hours)
Initial Original SA07 SA08
NPV -82.6 Min. -264.4 -286.5 -262.8
Million USD Max. 172.7 596.9 318.1
EUI 80.9 Min. 41.1 56.2 43.4
kBtu/ft²/yr Max. 118.2 424.1 113.4
SPC 45.8 Min. 6.1 -7.7 6.3
Max. 77.8 90.5 95.0
Total
Solution no. 510 170 170 170
Pareto no.
NPV,EUI,SPC 55 3 32 20
NPV+EUI 13 3 5 5
NPV+SPC 30 n/a 23 7
EUI+SPC 10 3 n/a 7
FIGURE 7-29: COMPARISON OF THE DISTRIBUTION OF THE SOLUTION SPACES GENERATED FROM EEPFD BASED
ON EACH STUDENT’S NEW EXPLORATION RANGES FOR THE AAC-2 SCENARIO.
7.4.4.3 AAC-3 SCENARIO RESULTS & COMPARISONS
In this activity, students are asked to define their own initial design and exploration ranges based
on the same parametric model as previously used in AAC-2. In addition, they are allowed to assign
different rules to their parametric models in terms of each parameter’s properties, i.e., whether
it is a driving, driven, or fixed parameter in the new model. Overall, students are instructed to
retain eight driving parameters and the two original energy-related parameters, as in AAC-2, but
are allowed to assign new variation ranges to the energy-related parameters. Six students took
part in this exercise. After setting up their new parametric models and variation ranges, the
students spent an average of 2.3 hours exploring seven iterations of their AAC-3 designs. Each
student’s modification can be found in Appendix F.3. In order to understand the impact of EEPFD
on the outcome of this experiment, the students’ newly established initial models and variation
ranges are run through H.D.S. Beagle and compared to the solution spaces of each student’s
manual exploration process.
Since H.D.S. Beagle is currently unable to handle breaks in the geometry, the robustness of the
model needs to remain intact during the entire GA run. However, as this requirement was not
emphasized during the experiment instruction, only one student's model and ranges fulfill it. In
order for Beagle to run the rest of the students' models, adjusted variation ranges are assigned
for each student's model based on their originally set ranges and their actually explored ranges.
If the originally set range can maintain the integrity of the model, the individual range is left intact.
If the provided range is observed to result in a break in the geometry, the range of their actual
exploration is used instead. The updated ranges and each model's
evaluation can be found in Appendix F.3. In this part of the exercise, a questionnaire, D3-Q2.pdf,
is given to guide the students through the steps. The original design of the exercise aims to ensure
the comparability of their models; thus, students are specifically instructed to have eight driving
design parameters and two identical energy-related parameters. However, the analysis of this
activity revealed that a majority of the students did not follow these instructions. Table 7-28
summarizes the evaluation of both the collected AAC-3 models and the integrity of the
subsequent manual exploration process. Four evaluation criteria are used to check students’
settings and models. First, it is determined whether the student adhered to the instruction to
have eight design parameters and two energy-related parameters. Second, it is verified whether
the students consistently assigned their parametric values to all three masses for each explored
design alternative. Next, it is verified whether students correctly assigned corresponding spaces
to each of the masses for each of their explored design alternatives. Lastly, the robustness of the
variation ranges is determined. The summary of each student’s evaluation can be found in Table
7-28. From the information presented, it can be determined that none of the students was able
to successfully fulfill all four criteria. Thus, their explored alternatives and the three calculated
objective scores are not directly comparable to either their previous results or those generated
through H.D.S. Beagle. This questionable comparability of the students' generated results requires
allowances and considerations to be made during the analysis pertaining to this activity.
TABLE 7-28: SUMMARY OF THE EVALUATION OF STUDENTS’ AAC-3 MODELS.
Criterion (SA01 / SA02 / SA03 / SA04 / SA07 / SA08) | Summary
Explore 8 design and 2 energy parameters: Y / N / N / Y / Y / Y | Y: 4 (67%); N: 2 (33%)
Three masses have the same parametric settings: N / Y / N / N / N / Y | Y: 2 (33%); N: 4 (67%)
Space types are assigned correctly: Y / Y / N / N / N / Y | Y: 3 (50%); N: 3 (50%)
Exploration ranges are acceptable: N / N / N / Y / N / N | Y: 1 (17%); N: 5 (83%)
As the majority of the students’ manual exploration calculations were inaccurate due to human
errors, their decision-making processes based on those miscalculated results cannot be considered
valid or used for further analysis. However, based on each student's documented parametric
values for each design alternative, a new and valid solution space could be recalculated for each
student that is comparable to the solution space generated by H.D.S. Beagle. For comparison
purposes, the students’ new initial models and their adjusted exploration ranges were run
through H.D.S. Beagle for 2.3 hours, which was the average recorded exploration time of the
students' manual exploration process. This 2.3-hour runtime allowed for up to eight generations
based on the GA settings listed in Table 7-29.
TABLE 7-29: DESIGN REQUIREMENTS, PROJECT SIZE, AND THE GA SETTINGS FOR EEPFD EXPLORATION OF
STUDENTS’ AAC-3 MODELS.
Project Size 274,479 ft²
Project Requirement
Retail Area Requirement: 59,201 ft²
Office Area Requirement: 107,639 ft²
Hotel Area Requirement: 107,639 ft²
GA Settings
Initial Population Size 10
Population size 16
Selection Size 20
Maximum Iteration 50
Crossover Ratio 0.5
Mutation Ratio 0
Analyzed Generation No. 8
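To make these settings concrete, the sketch below shows a generic genetic-algorithm loop parameterized the same way. It is only an illustration of how such settings drive an exploration run, not H.D.S. Beagle's implementation; the gene encoding, fitness function, and truncation selection are placeholder assumptions.

```python
import random

# Generic GA loop illustrating the role of the settings in Table 7-29.
# This is a sketch, not H.D.S. Beagle; fitness and genes are placeholders.
INITIAL_POPULATION_SIZE = 10
POPULATION_SIZE = 16
SELECTION_SIZE = 20        # truncation below caps at the pool size when smaller
MAX_ITERATIONS = 50
CROSSOVER_RATIO = 0.5
MUTATION_RATIO = 0.0       # mutation disabled, as in the experiment settings
N_GENERATIONS = 8          # generations analyzed within the 2.3-hour window

def random_genes(n=10):
    return [random.random() for _ in range(n)]

def fitness(genes):
    # Placeholder single score; the actual workflow scores NPV, EUI, and SPC.
    return sum(genes)

def crossover(a, b):
    return [ga if random.random() < CROSSOVER_RATIO else gb
            for ga, gb in zip(a, b)]

def mutate(genes):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATIO else g
            for g in genes]

population = [random_genes() for _ in range(INITIAL_POPULATION_SIZE)]
for generation in range(N_GENERATIONS):
    # Keep the best candidates as the mating pool (truncation selection here;
    # the selection scheme actually used by Beagle is not detailed in the text).
    pool = sorted(population, key=fitness, reverse=True)[:SELECTION_SIZE]
    offspring = []
    while len(offspring) < POPULATION_SIZE:
        a, b = random.sample(pool, 2)
        offspring.append(mutate(crossover(a, b)))
    population = offspring

print(round(fitness(max(population, key=fitness)), 3))
```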
The results of this section of the exercise are summarized in Table 7-30. Table 7-30 presents each
solution space’s performance in EUI, NPV, and SPC for the following three categories:
1. Each student’s initial AAC-3 model (INITIAL)
2. Manually explored Student Generated Solution Space (SGSS)
3. Beagle Generated Solution Space (BGSS) in 2.3 hours based on each student’s new design
settings and the adjusted exploration ranges when necessary.
TABLE 7-30: PEDAGOGICAL EXPERIMENT III AAC-3 PERFORMANCE COMPARISON OF STUDENT-GENERATED
SOLUTION SPACE VS. H.D.S. BEAGLE-GENERATED SOLUTION SPACE IN 2.3 HOURS.
Columns: INITIAL | SGSS | BGSS (2.3 HRS) | IMAGE
Original (AAC-2)
Solution no. 19 138
NPV (Million USD) -82.6 (-263.1)-228.0 (-264.4)-154.3
EUI (kBtu/ft²/yr) 80.9 44.3-68.7 41.1-118.2
SPC 45.8 6.4-93.3 6.1-77.7
SA01
Solution no. 7 138
NPV (Million USD) 206.0 206.0-335.2 (-188.6)-460.3
EUI (kBtu/ft²/yr) 57.7 46.4-58.3 46.3-91.9
SPC 84.0 70.1-84.0 23.0-89.5
SA02
Solution Space no. 12 138
NPV (Million USD) (-60.4) (-60.4)-121.9 (-211.2)-94.9
EUI (kBtu/ft²/yr) 75.1 50.4-99.8 46.3-136.2
SPC 48.0 48.0-59.9 17.7-68.0
SA03
Solution no. 7 138
NPV (Million USD) 183.5 183.5-289.4 52.0-357.7
EUI (kBtu/ft²/yr) 53.3 47.2-53.7 42.8-71.7
SPC 81.0 79.5-90.5 22.0-89.2
SA04
Solution no. 7 138
NPV (Million USD) 130.3 179.9-217.3 (-15)-241.7
EUI (kBtu/ft²/yr) 58.1 61.7-67.2 43.8-81.8
SPC 85.0 87.3-90.3 56.9-98.0
SA07
Solution no. 3 138
NPV (Million USD) (-47.0) (-95.4)-(-46.9) (-136.4)-101.2
EUI (kBtu/ft²/yr) 73.3 47.8-116.4 46.2-118.7
SPC 55.6 44.7-55.6 31.9-62.5
SA08
Solution no. 4 138
NPV (Million USD) (-19.5) (-19.5)-323.3 (-260.2)-613.2
EUI (kBtu/ft²/yr) 60.1 48.3-59.1 41.1-118.6
SPC 57.0 57.7-94.4 7.1-83.1
In the previous pedagogical experiment, Pedagogical Experiment I Part B, the impact of varying
initial designs for the same design problem on solution space performance was explored. The
geometrically diverse initial models resulted in equally diverse performance among the generated
solution spaces. In this experiment, however, the students began with essentially the same
geometry, from which each student explored according to his or her preferences. Therefore, while
Pedagogical Experiment I Part B demonstrated significant variance in the performance of the
solution space based on the geometric starting point, Pedagogical Experiment III AAC-3
demonstrates the impact of designer-driven explorations of the same geometry. This highlights
the influence that how designers choose to explore their design has on the performance of their
solution space.
While the geometry components of the design problem were identical, as the students
formulated their design problem based on their individually defined design rules, each student
began the exploration with an initial model that was similar, but not identical, to the others. The
results collected from the six students indicate that, aside from the direct impact of the design
geometry, the design exploration logic also plays a significant role in the performance of the
solution space.
In order to facilitate comparison, the solution spaces of all six students—including their initial
model, all design alternatives defined during their manual exploration process, and the design
alternatives generated by Beagle based on students’ defined parametric ranges—were combined
and Pareto ranked together based on all three objectives scores. A summary of these rankings is
provided in Table 7-31. The diversity of the performance of the initial models, despite their
geometric similarity, can be observed in SA01's initial model (ranked 6th) and SA02's initial model
(ranked 33rd). Table 7-31 also shows that, through both the manual and EEPFD exploration
processes, higher performing solutions were identified by all students. SA02, for example,
progressed from having a design alternative with a ranking of 33 to one with a ranking of 14. SA07
also produced a design alternative with a ranking of 13, significantly improving on the original
ranking of 29. Further indications of improvement can be observed, in that through the Student
Generated Solution Space (SGSS), two of the six students were able to generate Pareto 1
designated design alternatives. Through EEPFD, four of the six students were able to identify
Pareto 1 designated design alternatives. From these results, it is demonstrated that, through both
manual exploration and the EEPFD exploration process, students were able to identify higher
performing solutions based on the three-objective Pareto ranking.
These results further indicate that, when the initial designs possess a higher ranking, a better
performing solution space can be obtained in the manual exploration process. For example, the
initial designs of students SA01 and SA03 ranked as the top two among all the initial designs.
During their subsequent manual exploration processes, the solution spaces generated by these
two students were the only ones having Pareto rank solutions. This is reasonable, as these
students had a better starting point on which to improve. Thus, they reached a better ending
point with their manual exploration process, resulting in higher-ranking performances than the
other students. However, if the initial design has a lower performance, and in this case a
corresponding lower ranking, the manual exploration process can only generate design
alternatives within the performance boundaries of the initial design. This is in contrast to the
outcome of utilizing EEPFD, where the exploration process is able to help identify higher ranking
design alternatives even when the initial design has a poorer starting performance. For example,
by using EEPFD, the originally provided design, whose initial model has a Pareto ranking of 35 within the combined solution space, yielded a Pareto rank 1 design alternative. In addition, although student SA08’s initial design ranked only 27th, the EEPFD-generated solution space identified Pareto solutions that could not be found through the manual exploration process. However, it is observed that the
higher-ranking initial design does affect the percentage of the resulting Pareto solutions
generated. In this case, the BGSS of students SA01 and SA03 still have the highest Pareto solution rates
compared to the others. Despite the fact that EEPFD provides the opportunity for identifying
higher ranking design alternatives, the performance of the solution spaces is still bound by the
designer’s set exploration ranges and their problem formulation methods. Based on the results
of this experiment, it is suggested that the manual exploration process is limited by the initial
design, as opposed to EEPFD’s process, which can identify a broader range of results that can
represent the full performance potential of a design problem formulation. This further supports
the suggestion presented in Section 5.6 that, due to the nature of the early stage design process,
when different design approaches are formed and need to be rapidly evaluated, the performance
boundaries determined by EEPFD for a solution space may have more immediate relevance than
individual design alternatives. However, the use of EEPFD to identify design concepts, as opposed to merely exploring a given design, requires further research.
TABLE 7-31: RANKING COMPARISON OF THE SOLUTION SPACE OF AAC-3 INCLUDING STUDENTS’ INITIAL DESIGNS,
SGSS, AND BGSS.
           Initial Design   SGSS Pareto Solution   Highest Ranked       BGSS Pareto Solution   Highest Ranked
           Ranking          no./Solution no.       Solution from SGSS   no./Solution no.       Solution from BGSS
Original   35               0/19                   3                    1/138                  1
SA01       6                4/7                    1                    48/138                 1
SA02       33               0/12                   14                   0/138                  14
SA03       7                3/7                    1                    30/138                 1
SA04       11               0/7                    4                    14/138                 1
SA07       29               0/3                    19                   0/138                  13
SA08       27               0/4                    3                    14/138                 1
7.4.4.4 AAC-4 SCENARIO RESULTS & COMPARISONS
The last portion of the experiment increases the exploration complexity from AAC-2 by enabling
students to explore glazing area ratios and shade depths independently for the north- and south-
facing walls, while prior explorations only allowed uniform application of these energy parameters
to all walls. This change increases potential variations, with the assumption that different façade
configurations can help achieve better EUI performance during the optimization process.
However, this modification increases the exploration complexity for the human users, as additional steps are required to explore these additional variables. As a result, this experiment aims to observe whether EEPFD is able to further facilitate design exploration when compared to the human-explored benchmarks. Unfortunately, at this last stage of the
experiment, only two students were available for participation in this exercise. While the sample
size is acknowledged as a possible limitation regarding the validity of observations made from the
data, an attempt is still made to determine whether previous observations could be reinforced with the new sample.
In this exercise, the two students explored 10 design alternatives within two hours, with the
questionnaire, D4-Q1.pdf, used to gather responses for this exercise. To the question “If given
more time and iterations, could you improve the performance of the design?” one student
responded positively and the other negatively. The student who indicated that additional time and iterations would not improve the design cited the provided design range as too limited to improve the SPC. When asked to describe their
experience, both students expressed increased ease with the process, despite the increased
complexity of AAC-4’s exploration process, which they attributed to their prior experience with
AAC-2. SA07 further expressed that, with this prior experience, he would be able to successfully
improve the EUI performance of the design. It should be noted that, when compared to previously
collected results for AAC-2, no obvious improvement in the EUI is observed. Table 7-32 and Figure
7-30 summarize the combined manually explored solution space of both students and compare
this solution space with the one generated by H.D.S Beagle in two hours.
Based on these results, it is evident that H.D.S. Beagle is able to generate better performing
solutions in all three objectives when compared to the solution space generated by the students’
manual exploration process. Unlike the previous exploration comparisons of AAC-1 and AAC-2,
H.D.S. Beagle is able to both generate more Pareto solutions and identify better performance
boundaries in the provided 2-hour time limit. It should be noted that this result is generated by
students who have prior exploration experience through AAC-2, which was based on the same
design and exploration range as AAC-4, but without individual surface variation. In this
experiment, H.D.S. Beagle is able to generate a solution space with superior performance
boundaries and more Pareto solutions despite the apparent advantage of the students. It would
be expected that, with their prior experience with the design, the students would already be
familiar with parametric combinations, resulting in design alternatives with superior performance.
This is in direct contrast to previous observations, where students were able to outperform H.D.S.
Beagle in generated SPC and NPV scores. These results are attributed to the increased complexity
of the design exploration process, which impeded the students’ progress in improving their design.
This is consistent with previous observations regarding a decrease in the ability of students to
identify better-fit solutions when faced with increasing design complexity.
TABLE 7-32: PERFORMANCE COMPARISON OF THE SOLUTION SPACES GENERATED BY THE PEDAGOGICAL
EXPERIMENT III AAC-4 AND THE BEAGLE AFTER A COMPLETED RUNTIME OF 2 HOURS.
IMPROVEMENT IS MEASURED AGAINST THE INITIAL BASELINE OBJECTIVE SCORE. PERCENTAGE
CONTRIBUTION TO THE PARETO SOLUTION POOL IS ALSO PROVIDED.
                                   Pedagogical          H.D.S. Beagle
                                   Experiment III       Run_20131115_0756
                                   AAC-4, 2 hrs.        2 hrs. (7 gen.)      Total
NPV (Million USD)      MIN.        2.3                  -108.9
INITIAL: -52.1         MAX.        131.1                188.8
                       IMPROVED    183.1                240.8
EUI (kBtu/ft²/yr)      MIN.        50.7                 46.6
INITIAL: 76.1          MAX.        69.8                 108.7
                       IMPROVED    25.5                 29.6
SPC                    MIN.        60.8                 37.8
INITIAL: 51.7          MAX.        84.6                 91.6
                       IMPROVED    32.9                 39.9
Pareto Solution No. (2 hrs.)       0 (0%)               7 (5.7%)             7
Solution No.                       10                   122                  132
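The “IMPROVED” rows above are simply the distance between the best value reached and the initial baseline score, taken in the favorable direction of each objective. A minimal sketch of that arithmetic follows; small discrepancies against the table (e.g., 240.9 vs. 240.8 for the Beagle NPV) stem from rounding of the displayed scores.

```python
def improvement(best: float, initial: float, maximize: bool) -> float:
    """How far the best achieved value moved from the initial baseline
    in the favorable direction (positive means better)."""
    return best - initial if maximize else initial - best

# H.D.S. Beagle run, values as displayed in Table 7-32:
for label, best, initial, maximize in [
    ("NPV", 188.8, -52.1, True),   # table: 240.8
    ("EUI", 46.6, 76.1, False),    # table: 29.6
    ("SPC", 91.6, 51.7, True),     # table: 39.9
]:
    print(label, round(improvement(best, initial, maximize), 1))
```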
FIGURE 7-30: PEDAGOGICAL EXPERIMENT III AAC-4 SOLUTION SPACE COMPARISON: STUDENTS’ MANUALLY
EXPLORED SOLUTION SPACE VS. H.D.S. BEAGLE-GENERATED SOLUTION SPACE WITHIN 2-HOUR TIME LIMIT.
7.4.5 SUMMARY OF THE PEDAGOGICAL WORKSHOP CASE-BASED EXPERIMENT
This set of experiments fulfills six previously outlined objectives. The first objective, to confirm observations made during prior experiments by expanding the sample data available for analysis through repetition of Pedagogical Experiment I Part A, is met through the exploration of AAC-1. Overall, when comparing solution spaces generated by the manual benchmark process versus solution spaces generated by H.D.S. Beagle, the results of manual exploration of AAC-1 are consistent with the
observations made during the Pedagogical Experiment I Part A. For both sample groups, students’
manual exploration process generated higher SPC scores and showed a tendency towards
optimizing the SPC for the same scenario. However, when considering all three objective functions
equally, H.D.S. Beagle consistently generated a higher rate of Pareto solutions. In addition, it was
observed that the utilization of H.D.S. Beagle negated the observed human error rate of up to
28.6% during the manual exploration process for the AAC-1 design scenario.
The second objective of this experiment set aimed to expand the research data to include a new
design scenario outside of the scenario used for Pedagogical Experiment I Part A. This objective
was met through the exploration of the new design scenario used in AAC-2. This manual
exploration process, identical to the one used in AAC-1, provided observed trends consistent with
prior experiments. However, based on the collected student responses for AAC-2, the impact of
the geometric parameters in direct relation to the SPC and NPV values is easier to grasp than in
the AAC-1 scenario. As a result, the tendency towards optimizing the SPC and NPV scores by
students is more obvious. Due to the increased complexity of the exploration process for AAC-2, a higher human error rate of up to 58.5% was also observed. When
comparing the solution space manually generated by students to the solution space generated by
H.D.S. Beagle, the student-generated solution space had more Pareto solutions due to their
superior SPC and NPV scores. However, the best EUI design alternatives were still generated by
Beagle. Consistently, H.D.S. Beagle demonstrates the ability to continuously generate an
increasing number of Pareto solutions for this scenario.
The third objective of this experiment set aimed to expand experiment variables to allow students
to set up their own initial models and exploration ranges in order to observe the impact of these
varying initial models, explored parameters, and parametric ranges, as designed in the AAC-3
activity. Based on these results, each student’s AAC-3 model represented a unique problem
formulation approach despite the similarity of the geometric components. These results indicated that
students’ manual exploration solution spaces are limited by both their starting point and their
variation ranges. This is in contrast to EEPFD, which is limited only by the variation ranges made
available. In this exercise, students focused on exploring their own model and design based on
their set rules. The results achieved by all the students consistently demonstrated that the use of
EEPFD is able to find a broader range of solutions with higher performances when compared to
the manual exploration process.
The fourth objective, to increase the potential complexity of design problems by expanding the
experiment to include individual surface exploration as part of the exploration process, was met
as part of the design in the AAC-4 exercise. These results indicate that EEPFD demonstrates further
effectiveness when compared to the manual exploration process of students who have prior
exploration experience. This is consistent with previous observations that, when faced with
increased complexity, students demonstrate a decrease in their ability to generate better-fit
solutions. These observations reinforce the concept of EEPFD’s increasing ability to facilitate the
design exploration process with increasing design complexity. However, an expanded sample size is needed to confirm these observations.
The fifth objective, to gauge the effectiveness of the feedback generated by EEPFD, was met
through an analysis of the responses to questionnaires collected for AAC-1 and AAC-2 regarding
students’ use of the provided EEPFD pre-generated dataset in their design exploration process.
All students provided positive responses regarding the ability of the EEPFD pre-generated data
set to support their design decision-making. In addition, this effectiveness is also demonstrated
through their recorded data, which indicated that students were able to identify their final designs
based on their described design goals. Further improvement was also observed when students
used the given dataset to improve their design.
Finally, the last objective of this experiment was to gauge the effectiveness of the feedback
generated by EEPFD through observations made regarding the student-provided settings for
subsequent GA runs based on the results received through EEPFD. While this part of the
experiment was designed and student-provided ranges were collected, students were not asked
to provide the intent of their new exploration ranges. Thus, without the exploration intent, the
success or improvement of the new solution space cannot be accurately determined at this time.
Overall, these findings confirm that the use of EEPFD is able to provide better energy performance
feedback and improved guidance when improving energy performance is the intended goal.
However, there are several limitations and concerns regarding this experiment set. The first
concern pertains to the sufficiency and representativeness of the sample pool and size. All study
participants were students who have already demonstrated an interest in design computation. As
a result, their computer proficiency might have biased the sample relative to the user group they are expected to represent. The second concern pertains to the consistency of the
instruction. While AAC-1 is meant to extend the data set of the prior Pedagogical Experiment I
Part A, the instruction students received and the process they were expected to follow differed from those of the earlier experiment. As a result, these differences could impact the collected results.
The third concern pertains to the consistency of the content of provided instructions. The
students indicated that, had they better understood the formulas and calculation of the three objective scores, their results might have been different. This
information was provided to participants in Pedagogical Experiment I Part A, but could not be
confirmed for AAC-1. The next concern stems from the limited sample size available for analysis
during this workshop, which had only eight participants to begin with and only two at the end.
This sample is deemed insufficient for identifying any trends or outliers. Finally, the influence of
accumulative experience is not accounted for. During students’ participation in these experiments,
they became familiar with the process and gradually understood the concepts of the mechanism
and the impact of their decisions on the resulting design. As a result, their prior experience might have provided an advantage, which was reflected in the speed and knowledge documented during the later design activities.
Irrespective of these limitations, based on the survey results and the decrease in workshop attendance, it was evident that students were not motivated to participate in the tedious manual design exploration process. This observation further reinforces
the need for an automated exploration approach to facilitate designers’ design exploration
process. While the students indicated that the EEPFD pre-provided information was useful in
assisting them in deciding on a better performing design, difficulties were observed regarding the
use of the currently provided “interface”. Currently, the EEPFD-generated data is manually
organized by the author. The users browse, filter, and organize the data through built-in
functionalities within the Excel platform. It is noted that the study participants needed more
instructions regarding use of this platform and the manner in which they could use the data to
assist their decision-making. Although a handout explaining the meaning of the data was given to all participants as part of this experiment, it proved insufficient for users to understand how to apply the data. Moreover, the provided interface still requires several manual
manipulations to facilitate users’ decision-making process. These are known issues and are
expected to be addressed through future improvements of the prototype tool, providing a better user interface and interactive guidance.
7.5 SUMMARY OF THE PEDAGOGICAL CASE-BASED EXPERIMENTS
This chapter described the three pedagogical case-based experiments, conducted to address the
following two research questions with the aim of understanding the impact of EEPFD during the
design process:
- What is the impact of the proposed framework on the early stage decision process?
- How do EEPFD and the provided energy performance feedback support designers’
decision-making?
In order to observe the impact of EEPFD on the early stage decision process, a benchmark process
based on EEPFD, excluding the automation component, was established and used as a baseline against which the EEPFD process was compared. When comparing feedback time, analysis speed, and the Pareto
solutions rate of the generated solution pool, the solution pool generated through EEPFD
demonstrated superior results compared to the solution pool generated through students’ manual
exploration process, with further improved results available with extended runtimes. During
these experiments, the use of EEPFD also demonstrated the ability to negate the observed human
error rate of up to 50%. In addition, students were able to utilize EEPFD-generated results to
identify higher performing design alternatives based on their intended exploration goals. This
observation does not apply to their manual exploration process, especially when improving
energy performance is designated as their design exploration intent. Regarding the usability of
EEPFD, this determination was made through the observations of (1) whether designers/students
can formulate their design problem into a compatible parametric model, (2) their personal
responses regarding their ability to translate their design intent into a parametric model, as gathered through interviews and questionnaires, and (3) the time necessary for students to learn to execute
H.D.S. Beagle with their parametric models. According to the experiment results, students were
able to formulate their design problem within a 1.5-hour lecture, given a previous learning period
of a minimum of six months and their familiarity with the working design platform. More than
80% of the student participants expressed that their parametric model reflected their design
intent. However, less than half expressed confidence that the full extent of their design intent
could be expressed parametrically. When given a design problem with increased geometric and
programming complexity, students with a maximum of three months’ experience with the
parameterization platform and parametric modeling demonstrated difficulties in translating their
design intent and required up to a month to fully formulate their design problem to match their
original design intent. However, this issue could potentially be mitigated with additional
experience with both the parametric design method and improved familiarity with the platform used.
Lastly, the usability was measured as the time necessary for students to learn and execute the
prototype tool. Based on this premise, students required an average of four hours of lectures and
hands-on experience to execute H.D.S. Beagle and understand how to utilize the data generated
through EEPFD.
In order to observe how the provided feedback supported the students’ decision-making, a series
of questionnaires and interviews were conducted during these experiments. Positive feedback
was received when students were asked if the data provided by EEPFD was relevant to their design
decision-making. In addition, when improved energy performance is a student-designated design
exploration goal, students were consistently able to identify design alternatives with improved
energy performance through EEPFD. However, the participants did not consistently exhibit the
ability to identify design alternatives with improved energy performance through their manual
exploration process. This is in part attributed to students demonstrating a tendency towards first
optimizing the SPC due to their ability to easily grasp the impact of geometric variation on the SPC
score. As a result, even when improved energy performance was a designated design exploration
goal and the feedback was available, this student inclination marginalized the influence of the
energy performance feedback on their decision-making during the manual exploration process.
This inclination also resulted in the solution pool from the students’ manual exploration process
generating superior results in both SPC and NPV relative to the EEPFD-generated solution pool.
However, it is expected that, when faced with increased project complexity, students’ ability to
gauge the direct relationship between geometric variation and the impact on the SPC and NPV
scores would be reduced. Therefore, with both increased project complexity and computing
power, it is expected that EEPFD will be able to provide a solution space with superior
performance in all three objectives over the manual exploration process. However, this is a
subject for future research.
The concerns regarding the sample pool and size of these experiments notwithstanding, overall,
these pedagogical case-based experiments demonstrated that, when students utilize EEPFD, they
are able to include the energy performance feedback in their considerations and improve the
energy performance during their design exploration process. Furthermore, these experiments
demonstrate that students who do not possess energy domain knowledge are able to observe the
impact on energy performance during their exploration process through EEPFD. They are also able
to understand the concept of EEPFD and formulate their design problem through the lectures and
instruction provided during these pedagogical experiments. These observations and responses
provide the basis of initial confirmation for the value and usability of EEPFD, when employed by
designers during the early stages of the design process. It is the prediction of this research, based
on these results, that the teaching of MDO and EEPFD in a pedagogical setting can be used as a
means of equipping novice designers with no prior energy domain knowledge with the ability to
include energy performance feedback in their design exploration process and introduce them to
the impact design decisions have on energy performance. It is also the prediction of this research
that MDO, as defined by this research, can be included in a pedagogical setting as a new design
method by which to enhance the design exploration process.
CHAPTER 8 CONCLUSION + FUTURE WORK
The overall aim of the research presented in the previous chapters was to propose a framework
that can provide a “designing-in performance” environment, where designers can integrate
energy performance feedback to support their decision-making process during the early stages of
design. Through the literature review, this research developed the functional requirements of the
desired “designing-in performance” environment and identified MDO as having the greatest
potential for bridging the gap between the inherently complex design and energy simulation
domains. Further investigation of current MDO precedents revealed that evaluation of the MDO
approach to support energy performance feedback for designers during the early stages of the
design process was absent. This was attributed to a deficiency in the usability and flexibility of
prior application platforms. Consequently, this research proposed a designer-centric MDO
framework, followed by a series of experiments to evaluate its ability to provide such a “designing-in performance” environment. The explicit contributions
of this research to the body of knowledge in this field are outlined in the following sections.
8.1 CONTRIBUTIONS TO THE BODY OF KNOWLEDGE
8.1.1 CONTRIBUTION 1: FUNCTIONAL REQUIREMENTS OF “DESIGNING-IN PERFORMANCE” FOR EARLY
STAGE DESIGN AND ENERGY PERFORMANCE
Through an extensive literature review, this research determined that, despite the
acknowledgement that decisions made during the early design process have a significant impact
on the design’s energy performance, energy performance feedback is rarely available at this
design stage. Therefore, this research isolated the necessary functional requirements of a
“designing-in performance” environment that would enable the inclusion of energy performance
during the early stages of design. These requirements are the ability to rapidly generate design
alternatives across varying degrees of geometric complexity, evaluate design alternatives, provide
tradeoff analysis for competing criteria, and have a searching method that can identify design
alternatives with better-fit performance. Upon further analysis, a MDO framework that included
four essential components—parameterization, platform integration, automation, and a GA-based
multi-objective optimization algorithm—was identified to have the greatest potential for fulfilling
the identified “designing-in performance” requirements.
8.1.2 CONTRIBUTION 2: EVOLUTIONARY ENERGY PERFORMANCE FEEDBACK FOR DESIGN (EEPFD)
EEPFD is a novel MDO framework developed as a part of this research, which enables the
combination of complex geometric form exploration with energy performance feedback for early
stage design, thereby addressing a previously observed gap in precedents. Spatial and financial
performance were also included as competing objectives, as they are acknowledged as common
considerations during the early stages of the design process. Through the development of EEPFD,
the four previously identified criteria for providing a “designing-in performance” environment for
designers were met. In other words, this research has shown that EEPFD is able to (1) rapidly generate design alternatives, (2) simultaneously evaluate these alternatives, (3) intelligently identify and Pareto-rank the alternatives with better-fit performance, and (4) provide a tradeoff study of all generated results as the context for design decisions.
In addition, earlier concerns regarding designer usability in precedents were addressed
through EEPFD’s unique encoding method to identify “genes” that enable problem definition
through typical parametric design processes within an industry standard design platform.
Therefore, EEPFD enables designers to engage an automated GA-based multi-objective
optimization algorithm, which systematically explores, analyzes, and ranks best-fit design
alternatives among competing objectives within any familiar designer-oriented parametric
modeling environment.
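As a loose illustration of this “gene” idea (the names, types, and ranges below are hypothetical, not H.D.S. Beagle’s actual data structures), each designer-exposed parameter and its exploration range can be treated as a gene, and a design alternative as one assignment of values within those ranges:

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Gene:
    """One designer-exposed parametric variable and its exploration range."""
    name: str
    low: float
    high: float

# Hypothetical genes; in practice the designer defines them in the
# parametric model (e.g., the facade variables explored in AAC-4).
genome: List[Gene] = [
    Gene("south_glazing_ratio", 0.2, 0.8),
    Gene("south_shade_depth_ft", 0.0, 4.0),
    Gene("north_glazing_ratio", 0.2, 0.8),
]

def random_alternative(genome: List[Gene]) -> Dict[str, float]:
    """One design alternative: a sampled value for every gene."""
    return {g.name: random.uniform(g.low, g.high) for g in genome}

print(random_alternative(genome))
```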
Through the discussed experimental runs, EEPFD was able to successfully demonstrate the ability
to adapt to a wide spectrum of design scenarios, while providing a solution space with an
improved performance, as defined by this research for each. Therefore, this research determined
that EEPFD can be considered a valid approach eligible for further process evaluation.
8.1.3 CONTRIBUTION 3: PROCESS EVALUATION OF EEPFD
One of the gaps identified by this research during the analysis of the precedents was the distinct
lack of case studies where, after a proposed framework was developed, experiments were
conducted in order to assess the impact of the new framework on the design process. In particular,
no experimental work of this nature was reported where designers served as research
participants. In contrast, as a part of this research, a series of pedagogical-based experiments and
a practice-based experiment were designed and implemented, allowing the impact of EEPFD on
the design process, when used by designers, to be evaluated. For the purpose of process
evaluation, measurement metrics and a method were developed for evaluating “designing-in performance” environments, as well as the activities these environments require designers to incorporate into the design process.
Through the practice-based case experiment, EEPFD was observed, alongside the conventional
approaches of using in-house energy analysis and an MEP engineer consultant, to include energy
performance feedback as a part of the design process for a net ZEB school design. While these
three approaches are not strictly considered comparable, the observed trend demonstrated that
EEPFD provided the most potential in enabling energy performance feedback to influence
designers’ decision-making. This is in part due to the tool interoperability issues encountered
through the in-house analysis, which led to design cycle latency in the required manual
redundancy of modeling to acquire energy performance feedback. The level of detail that had to be provided to the MEP engineer, and the level of detail received in the feedback, proved more suitable for generating final construction documents than for supporting form exploration during
early stage design. As a result, neither of the two conventional approaches was able to provide
relevant feedback in a timely manner to enable a “designing-in performance” environment for
the early stages of the design process.
Pedagogical Experiment I demonstrated that, because EEPFD’s encoding method identifies “genes” through standard parametric design techniques, students were able, with minimal instruction, to prepare a parametric design compatible with exploration through EEPFD. While issues remain
regarding the ability to successfully translate design intent into a parametric model, this may be
mitigated through increased exposure to parametric modeling design approaches. Furthermore,
the performance comparison between EEPFD and manually generated design alternatives
illustrated EEPFD’s ability to provide higher-performing design alternatives within a given time
limit. This was particularly evident when further improved results were obtained when the
allocated time was extended. Given that time typically dominates early design exploration, it can
be extrapolated that the reduction in computation time necessary to generate desired results
would further acclimate the framework to the early stage design process. This provides the basis
of initial confirmation for the utility of optimization techniques during the early stage design
process.
Pedagogical Experiment II is the first documented attempt to integrate MDO into the design
process through a designer user, as opposed to prior case studies with engineers or research team
members as the primary users. As such, this case study provided initial observations regarding the
impact of MDO on the early stage design process when implemented by a designer. The observed
design process in this experiment demonstrated that EEPFD was successful in supporting
informed decision-making, despite the volatile subjective nature of the design process. By
providing a context in which design alternatives can be evaluated, EEPFD allowed the participating
designers to organize their priorities based on individual preferences or project requirements. In
this experiment, despite the dominance of aesthetic preference as the determining factor for the
final design, an improvement in all three objective scores compared with those of the initial design was observed. Aside from aesthetic exploration, the designers indicated that
EEPFD provided an opportunity to learn through previously inaccessible performance feedback
about relationships between design elements and their impact on the resulting design
performance. Therefore, in this experiment, EEPFD succeeded in providing a “designing-in
performance” environment, where design with an improved performance was achieved.
Furthermore, these experiments demonstrate that even students with no prior energy domain
knowledge are able to observe the impact on energy performance during their exploration
process through EEPFD. In Pedagogical Experiment III, students who designated improved energy
performance as a design goal were able to consistently identify design alternatives with improved
energy performance through EEPFD. However, students did not consistently exhibit this ability to
identify design alternatives with improved energy performance through their manual exploration
process. These observations provide the basis for initial confirmation of the value and usability of
EEPFD for designers during the early stages of the design process.
It is acknowledged that MDO application is not restricted to including energy performance
feedback. However, among the design objectives, only energy performance required the use of
EEPFD to influence design decision-making, as demonstrated in Pedagogical Experiment III.
Therefore, although EEPFD is potentially capable of including other performance criteria, the
major contribution of the framework is in its ability to demonstrate that MDO can successfully
provide energy performance feedback that influences design decision-making during the early
stages of the design process.
8.2 PRACTICAL IMPLICATIONS
Considering the radically different inherent nature of early stage design in the architectural field—
which consists of subjective and objective elements, time constraints, level of uncertainty
regarding components, and unique conditions of each design problem—the findings of the
research indicate that the best practice of applying MDO to early stage design may differ from the
more common method of seeking a mathematically defined convergence “best-fit” solution set.
It was observed during the hypothetical case-based experiments that the optimal performance
boundaries for a design could be obtained after only a few generations of GA runs. After these
boundaries are established, the Pareto curve is more densely populated during the subsequent
generations. This finding implies that these boundaries can be used as the context in which any
individual design alternative can be evaluated, while providing the performance potential of the
design concept.
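A hedged sketch of how such boundaries can be tracked follows; the per-generation front data structure is assumed here, not drawn from H.D.S. Beagle. The idea is to compute the objective-space extent of the Pareto front after each generation and watch it stabilize while the front itself continues to densify.

```python
from typing import List, Tuple

def front_bounds(front: List[Tuple[float, float, float]]) -> List[Tuple[float, float]]:
    """Objective-space bounding box of a Pareto front: per-objective
    (min, max) over all points (NPV, EUI, SPC)."""
    return [(min(p[k] for p in front), max(p[k] for p in front))
            for k in range(3)]

# fronts_by_generation: one Pareto front per GA generation (assumed available)
# for gen, front in enumerate(fronts_by_generation, start=1):
#     print(f"generation {gen}: bounds = {front_bounds(front)}")
```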
When provided identical design requirements and energy parametric settings, but with
significantly varying conceptual designs, a wide range in resulting performance boundaries was
observed. This implies a direct relationship between the initial conceptual design with its set
variations and the resulting performance boundaries outlining the potential performance levels of
generated design iterations. While the application of MDO to other fields may be with the intent
of optimizing a single design, early conceptual architectural design demands diversity. Therefore,
the ability of EEPFD to rapidly determine the performance potential of multiple competing
conceptual designs for the same design requirements may be more applicable than pursuing a
single optimized solution space.
In addition, this research demonstrated that, as parametric design problems increase in their
complexity, so does the relevancy of using EEPFD. As the pool of potential design solutions
increased exponentially, EEPFD was capable of providing a sampling means that was beyond
conventional manual approaches. This, in turn, resulted in an improvement in the performance
of the generated solution space for geometrically complex design problems, which was
consistently observed during the experiments.
Furthermore, one element that defies accounting through objective optimization is that of
aesthetic preference, which differs widely between individuals. As EEPFD possesses no aesthetic preference, it equally possesses no aesthetic prejudice. While it may spend time analyzing
solutions that will ultimately be dismissed by the designer, it may reveal Pareto optimal solutions
that could be potentially overlooked by the designer. The result is a much broader design solution
pool with overall improved multi-objective performance levels that can enable more informed
design decision-making, inclusive of a more expansive simulated aesthetic and formal range.
Moreover, EEPFD possesses no bias towards any one of the three design objectives, which is in
contrast to the trends observed through the pedagogical and practice-based experiments. In
these experiments, it was consistently observed that only through EEPFD could energy performance be included on an equal footing with the other design objectives, which received optimization priority in the manual exploration processes due to human bias.
Finally, through the pedagogical experiments, students were able to understand the concept of
EEPFD and could formulate their design problem assisted by the lectures and instruction.
Therefore, this research indicates that the teaching of MDO and EEPFD in a pedagogical setting
can be used effectively as a means of equipping novice designers with no prior energy domain
knowledge with the ability to include energy performance feedback in their design exploration
process. The research also indicates that EEPFD can be used to subsequently introduce students
to the impact their decisions have on the energy performance of their designs. Thus, based on the
findings of this research, it is recommended that EEPFD be included in a pedagogical setting as a
new design method by which to enhance the design exploration process.
8.3 RESEARCH LIMITATIONS
As previously discussed, limitations inherent in the selected platforms used by this research at the
time the experiments were conducted were experienced during the research process. These
limitations included, but were not limited to:
1. Level of detail availability through Revit’s massing capabilities
2. Limits to geometric complexity from utilized platforms
3. Experienced runtime speed
4. Analysis accuracy dependence on platform’s simulation engines
As H.D.S. Beagle is still a prototype, the following technological limitations were also experienced
by this research:
1. Automated GA continuation
2. Lack of automatic 2D/3D data visualization requiring manual intervention
Currently, EEPFD is limited in scope regarding the inclusion of performance considerations. For
example, natural daylighting is included neither among the objectives nor as a contribution to the EUI calculation. However, if EEPFD can be expanded to include these additional performance
considerations, then EEPFD’s rapidly provided feedback could support design decision-making in
pursuit of net ZEB design.
During the pedagogical experiments, additional limits were encountered, both technologically
with the use of parametric modeling and within the quality of the pedagogical sample pool.
Pertaining to the use of parametric design, while the experiments indicated that familiarization
with EEPFD does not require an extended learning period, a prior familiarity with parametric
design concepts and processes is a necessary prerequisite. Therefore, the pedagogical experiment
outcomes were likely affected by participants’ undocumented prior experience with parametric
modeling that may have influenced students’ design intent. Finally, the limited sample pool for
the pedagogical experiments is also a recognized limitation of this research, as the participants
were drawn from the student population attending an optional computational environmental
class. As a result, there is the potential for bias among the student sample unaccounted for within
this data set. Additional bias was also observed in the pedagogical experiment results, due to the
calculation steps for each objective score leading to implied single objective optimization, versus
the true MDO approach explored through EEPFD. Unaccounted for variables influencing the
collected data are also possible for Pedagogical Experiment III, as it was not conducted under the
direct supervision of the author. Finally, the instruction provided for the three pedagogical
experiments was not consistent, which may have influenced the collected data.
8.4 FUTURE WORK
Currently, EEPFD is developed within the scope of this research. However, the following items
have been identified as potentially capable of improving the design process, and would thus
benefit from further development.
1. Improved User Interface/ Human Computer Interaction
2. Inclusion of Sensitivity Analysis with generated results
3. Ability to generate design recommendations based on sensitivity analysis
4. Ability to integrate a Multi Criteria Decision Making strategy with the UI
Other future works include empirically measuring the impact of the user-driven GA settings
available through EEPFD on the Problem, Process, and Product measurements. The actual impact
of these GA settings on the overall solution pool is yet to be fully explored and quantified. In
addition, the optimal settings that would provide the most efficient solution pool have yet to be
identified. Possible improvements in the algorithm, or potential use of a weighted algorithm, are
also in need of exploration.
Finally, future research is needed to obtain and explore more empirically defined data regarding
the application of EEPFD to the design process within a design studio setting and outside of
academia, as this would help evaluate the impact of EEPFD on the overall design process. These
studies are also needed to verify and validate the proposed best practice methods outlined by
this research.
BIBLIOGRAPHY
Abdollahi, Gh, and M. Meratizaman. 2011. "Multi-objective approach in thermoenvironomic
optimization of a small-scale distributed CCHP system with risk analysis." Energy and
Buildings no. 43 (11):3144-3153. doi: 10.1016/j.enbuild.2011.08.010.
Adamski, Mariusz. 2007. "Optimization of the form of a building on an oval base." Building and
Environment no. 42 (4):1632-1643. doi: 10.1016/j.buildenv.2006.02.004.
AIAA. 1991. Current state of the art on multidisciplinary design optimization (MDO). Washington
D.C., USA: American Institute of Aeronautics and Astronautics.
Aish, Robert, and Andrew Marsh. 2011. An integrated approach to algorithmic design and
environmental analysis. Paper read at SimAUD 2011, 4-7 April 2011, at Boston, MA,
USA.
Aish, Robert, and Robert Woodbury. 2005. "Multi-level interaction in parametric design." In
Smart Graphics, edited by Andreas Butz, Brian Fisher, Antonio Krüger and Patrick Olivier,
924-924. Springer Berlin / Heidelberg.
Akin, Ömer. 1984. "An exploration of the design process." In Developments in design
methodology, edited by Nigel Cross, 189-208. New York: John Wiley.
Akin, Ömer. 2001. "Variants in design cognition." In Design knowing and learning: Cognition in
design education, edited by Charles M. Eastman, W. Michael McCracken and Wendy C.
Newstetter, 105-124. Oxford: Elsevier Science.
Al-Homoud, Mohammad S. 1997. "Optimum thermal design of office buildings." International
Journal of Energy Research no. 21 (10):941-957. doi: 10.1002/(SICI)1099-
114X(199708)21:10<941::AID-ER302>3.0.CO;2-Y.
Al-Homoud, Mohammad S. 2005. "A systematic approach for the thermal design optimization of
building envelopes." Journal of Building Physics no. 29 (2):95-119. doi:
10.1177/1744259105056267.
Al-Homoud, Mohammad S. 2009. "Envelope Thermal Design Optimization of Buildings with
Intermittent Occupancy." Journal of Building Physics no. 33 (1):65-82. doi:
10.1177/1744259109102799.
Al-Homoud, Mohammad Saad. 2001. "Computer-aided building energy analysis techniques."
Building and Environment no. 36 (4):421-433. doi: 10.1016/S0360-1323(00)00026-3.
Anderson, Ren, Craig Christensen, and Scott Horowitz. 2006. Program design analysis using
BEopt (Building Energy Optimization) software: Defining a technology pathway leading
to new homes with zero peak cooling demand. Paper read at 2006 ACEEE Summer Study
on Energy Efficiency in Buildings, 13-18 August 2006, at Pacific Grove, California.
Architecture 2030. 2011. Energy - Buildings consume more energy than any other sector 2011
[cited 18 April 18 2012 2011]. Available from
http://architecture2030.org/the_problem/problem_energy.
229
Asadi, Ehsan, Manuel Gameiro da Silva, Carlos Henggeler Antunes, and Luís Dias. 2012a. "Multi-
objective optimization for building retrofit strategies: A model and an application."
Energy and Buildings no. 44:81-87. doi: 10.1016/j.enbuild.2011.10.016.
Asadi, Ehsan, Manuel Gameiro da Silva, Carlos Henggeler Antunes, and Luís Dias. 2012b. Multi-
objective optimization model for building retrofit strategies. Paper read at SimBuild
2012, 1-3 August 2012, at Wisconsin, USA.
Asadi, Ehsan, Manuel Gameiro da Silva, Carlos Henggeler Antunes, and Luís Dias. 2012c. "A
multi-objective optimization model for building retrofit strategies using TRNSYS
simulations, GenOpt and MATLAB." Building and Environment no. 56:370-378. doi:
10.1016/j.buildenv.2012.04.005.
Asiedu, Y., Robert W. Besant, and P. Gu. 2000. "HVAC duct system design using genetic
algorithms." HVAC&R Research no. 6 (2).
Atman, Cynthia J., Monica E. Cardella, Jennifer Turns, and Robin Adams. 2005. "Comparing
freshman and senior engineering design processes: an in-depth follow-up study." Design
Studies no. 26 (4):325-357. doi: 10.1016/j.destud.2004.09.005.
Attia, Shady. 2011. State of the art of existing early design simulation tools for net zero energy
buildings: A comparison of ten tools. Architecture et climat at Université catholique de
Louvain.
Attia, Shady. 2012. A tool for design decision making: Zero energy residential buildings in hot
humind climates, Architecture et climat, Université catholique de Louvain, Université
catholique de Louvain.
Attia, Shady, Liliana Beltrán, André De Herde, and Jan Hensen. 2009. "Architect friendly": A
comparison of ten different building performance simulation tools. Paper read at
Building Simulation 2009, 27-30 July 2009, at Glasgow, Scotland.
Attia, Shady G., Mohamed Hamdy, Mina Samaan, André De Herde, and Jan L.M. Hensen. 2011.
Towards strategic use of BPS tools in Egypt. Paper read at Building Simulation 2011, 14-
16 November 2011, at Sydney, Australia.
Attia, Shady G., and André De Herde. 2011. Early design simulation tools for net zero energy
buildings: A comparison of ten tools. Paper read at Building Simulation 2011, 14-16
November 2011, at Sydney, Australia.
Attia, Shady, Elisabeth Gratia, André De Herde, and Jan L. M. Hensen. 2012. "Simulation-based
decision support tool for early stages of zero-energy building design." Energy and
Buildings no. 49:2-15. doi: 10.1016/j.enbuild.2012.01.028.
Attia, Shady, Mohamed Hamdy, William O’Brien, and Salvatore Carlucci. 2013. "Assessing gaps
and needs for integrating building performance optimization tools in net zero energy
buildings design." Energy and Buildings no. 60:110-124. doi:
10.1016/j.enbuild.2013.01.016.
Attia, Shady, Jan L. M. Hensen, Liliana Beltrán, and André De Herde. 2012. "Selection criteria for
building performance simulation tools: contrasting architects' and engineers' needs."
230
Journal of Building Performance Simulation no. 5 (3):155-169. doi:
10.1080/19401493.2010.549573.
Augenbroe, Godfried. 1992. "Integrated building performance evaluation in the early design
stages." Building and Environment no. 27 (2):149-161. doi: 10.1016/0360-
1323(92)90019-L.
Augenbroe, Godfried. 2002. "Trends in building simulation." Building and Environment no. 37 (8-
9):891-902. doi: 10.1016/S0360-1323(02)00041-0.
Augenbroe, Godfried, Pieter de Wilde, Hyeun Jun Moon, and Ali Malkawi. 2004. "An
interoperability workbench for design analysis integration." Energy and Buildings no. 36
(8):737-748. doi: 10.1016/j.enbuild.2004.01.049.
Augenbroe, Godfried, and Jan Hensen. 2004. "Simulation for better building design." Building
and Environment no. 39 (8):875-877. doi: 10.1016/j.buildenv.2004.04.001.
Autodesk. 2012. Autodesk® WikiHelp - Best practices for conceptual energy analysis. Autodesk, 7
Mar 2012 2012a [cited 30 July 2012 2012]. Available from
http://wikihelp.autodesk.com/Revit/enu/2013/Help.
Autodesk. 2013. Autodesk® WikiHelp - Reference for conceptual energy analysis. Autodesk
2012b [cited 25 February 2013 2013]. Available from
http://wikihelp.autodesk.com/Revit/enu/2013/Help/00001-Revit_He0/3251-
Referenc3251/3319-Referenc3319.
Autodesk. 2013. Autodesk® WikiHelp - Conceptual energy analysis: Energy settings. Autodesk, 3
May 2011 2012c [cited 25 February 2013 2013]. Available from
http://wikihelp.autodesk.com/Revit/enu/2013/Help/00001-Revit_He0/2489-
Analyze_2489/2515-Conceptu2515/2519-Energy_S2519/2522-Energy_M2522.
Autodesk. 2013. Design optioneering: Variation, exploration, correlation. Autodesk 2013 [cited 7
January 2013 2013]. Available from
http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=19055278.
Bäck, Thomas. 1996. Evolutionary algorithms in theory and practice : Evolution strategies,
evolutionary programming, genetic algorithms. New York: Oxford University Press.
Baker, James Edward. 1987. Reducing bias and inefficiency in the selection algorithm. Paper
read at the Second International Conference on Genetic Algorithms on Genetic
algorithms and their application, 28-31 July 1987, at Cambridge, Massachusetts, United
States.
Bambardekar, Suhas, and Ute Poerschke. 2009. The architect as performer of energy simulation
the early design stage. Paper read at Building Simulation 2009, 27-30 July 2009, at
Glasgow, Scotland.
Bazjanac, Vladimir. 2008. IFC BIM-based methodology for semi-automated building energy
performance simulation. Lawrence Berkeley National Laboratory: Lawrence Berkeley
National Laboratory.
231
Beightler, Charles S., Don T. Phillips, and Douglass J. Wilde. 1979. Foundations of Optimization. 2
ed: Prentice Hall.
Beyer, Hans-Georg, and Hans-Paul Schwefel. 2002. "Evolution strategies - A comprehensive
introduction." Natural Computing no. 1 (1):3-52. doi: 10.1023/a:1015059928466.
Bichiou, Youssef, and Moncef Krarti. 2011. "Optimization of envelope and HVAC systems
selection for residential buildings." Energy and Buildings no. 43 (12):3373-3382. doi:
10.1016/j.enbuild.2011.08.031.
Bogenstätter, Ulrich. 2000. "Prediction and optimization of life-cycle costs in early design."
Building Research & Information no. 28 (5-6):376-386. doi: 10.1080/096132100418528.
Bouchlaghem, N. 2000. "Optimising the design of building envelopes for thermal performance."
Automation in Construction no. 10 (1):101-112. doi: 10.1016/S0926-5805(99)00043-6.
Bouchlaghem, N. M., and K. M. Letherman. 1990. "Numerical optimization applied to the
thermal design of buildings." Building and Environment no. 25 (2):117-124. doi:
10.1016/0360-1323(90)90023-K.
Broadbent, Geoffrey 2000. Design in Architecture: Architecture and the Human Sciences. 2 ed:
David Fulton Publishers.
Brownlee, Alexander, and Jonathan Wright. 2012. Solution analysis in multi-objective
optimization. Paper read at BSO12, 10-11 September 2012, at Loughborough, UK.
Buchanan, Richard. 1992. "Wicked problems in design thinking." Design Issues no. 8 (2):5-21.
doi: 10.2307/1511637.
Bucking, Scott, Andreas Athienitis, Radu Zmeureanu, William O’Brien, and Matt Doiron. 2010.
Design optimization methodology for a near net zero energy demonstration home.
Paper read at EuroSun 2010, 28 September - 1 October 2010, at Graz, Astria.
buildingSMART. 2013. Model - Industry Foundation Classes (IFC). buildingSMART 2013 [cited 12
March 2013 2013]. Available from http://www.buildingsmart.org/standards/ifc.
Bukhari, Fakhri A. 2011. A hierarchical evolutionary algorithmic design (HEAD) system for
generating and evolving building design models. PhD thesis, Built Environment and
Engineering, School of Design, Queensland University of Technology.
Bukhari, Fakhri, John H. Frazer, and Robin Drogemuller. 2010. Evolutionary algorithms for
sustainable building design. Paper read at The 2nd International Conference on
Sustainable Architecture and Urban Development, 12-14 July 2010, at Amman, Jordan.
Burhenne, Sebastian, Olga Tsvetkova, Dirk Jacob, Gregor P. Henze, and Andreas Wagner. 2013.
"Uncertainty quantification for combined building performance and cost-benefit
analyses." Building and Environment no. 62:143-154. doi:
10.1016/j.buildenv.2013.01.013.
Burry, Mark , and Zolna Murray. 1997a. Architectural Design Based on Parametric Variation and
Associative Geometry. In 15th eCAADe Conference. Vienna (Austria)
232
Burry, Mark, and Zolna Murray. 1997b. Computer aided architectural design using parametric
variation and associative geometry. Paper read at 15th eCAADe Conference: Challenges
of the Future, 17-20 September, 1997, at Vienna, Austria.
Caldas, Luisa. 2002a. Evolving three-dimensional architecture form: An application to low-
energy design. Paper read at Artificial Intelligence in Design '02.
Caldas, Luisa. 2011. Generation of energy-efficient patio houses: Combining GENE_ARCH and a
Marrakesh Medina shape grammar. Paper read at 2011 AAAI Spring Symposium Series:
Artificial Intelligence and Sustainable Design, 21–23 March 2011, at Palo Alto, CA, USA.
Caldas, Luisa G. 2005. Three-dimensional shape generation of low-energy architectural solutions
using Pareo genetic algorithms. Paper read at 23rd eCAADe Conference: Digital Design:
the quest for new paradigms, 21-24 September 2005, at Lisbon, Portugal.
Caldas, Luisa G., and Luis Santos. 2012. Generation of energy-efficient patio houses with
GENE_ARCH: Combining an evolutionary generative design system with a shape
grammar. Paper read at 30th eCAADe Conference: Digital Physicality | Physical
Digitality, 12-14 September 2012, at Czech Technical University in Prague, Faculty of
Architecture (Czech Republic).
Caldas, Luisa Gama. 2002b. A generative design system for low-energy architecture design.
Paper read at Cost C-12 Seminar, 19-20 April 2002, at Parque das Nações, Lisbon.
Caldas, Luisa Gama. 2006. "GENE_ARCH: An evolution-based generative design system for
sustainable architecture." In Intelligent Computing in Engineering and Architecture,
edited by Ian Smith, 109-118. Springer Berlin / Heidelberg.
Caldas, Luisa Gama. 2008. "Generation of energy-efficient architecture solutions applying
GENE_ARCH: An evolution-based generative design system." Advanced Engineering
Informatics no. 22 (1):59-70.
Caldas, Luisa Gama 2001. An evolution-based generative design system : using adaptation to
shape architectural form. Ph. D., Massachusetts Institute of Technology. Dept. of
Architecture., Massachusetts Institute of Technology.
Caldas, Luisa Gama, and Leslie K. Norford. 2002. "A design optimization tool based on a genetic
algorithm." Automation in Construction no. 11 (2):173-184.
Caldas, Luisa Gama, and Leslie K. Norford. 2003a. "Genetic algorithms for optimization of
building envelopes and the design and control of HVAC systems." Journal of Solar Energy
Engineering no. 125 (3):343-351. doi: 10.1007/s00163-008-0059-9.
Caldas, Luisa, and Leslie Norford. 2003b. "Shape generation using Pareto genetic algorithms:
Integrating conflicting design objectives in low-energy architecture." International
Journal of Architectural Computing no. 1 (4):503-515. doi:
10.1260/147807703773633509.
Caldas, Luisa, Leslie Norford, and Joao Rocha. 2003. "An evolutionary model for sustainable
design." Management of Environmental Quality no. 14 (2/3).
233
Caldas, Luisa, and Joao Rocha. 2001. A generative design system applied to Siza's School of
Architecture at Oporto. Paper read at CAADRIA 2001: The Sith Conference on
Computer-Aided Architectural Design Research in Asia, 19-21 April 2001, at Sydney.
Cassol, Fabiano, Paulo Smith Schneider, Francis H. R. França, and Antônio J. Silva Neto. 2011.
"Multi-objective optimization as a new approach to illumination design of interior
spaces." Building and Environment no. 46 (2):331-338. doi:
10.1016/j.buildenv.2010.07.028.
Chantrelle, Fanny Pernodet, Hicham Lahmidi, Werner Keilholz, Mohamed El Mankibi, and Pierre
Michel. 2011. "Development of a multicriteria tool for optimizing the renovation of
buildings." Applied Energy no. 88 (4):1386-1394. doi: 10.1016/j.apenergy.2010.10.002.
Charron, Rémi. 2007. Development of a genetic algorithm optimisation tool for the early stage
design of low and net-zero energy solar homes. Ph.D., Concordia University (Canada),
Canada.
Charron, Rémi, and Andreas Athienitis. 2006. The use of genetic algorithms for a net-zero energy
solar home design optimisation tool. Paper read at PLEA 2006, 6-8 September, 2006, at
Geneva, Switzerland.
Chen, Hong, Ryozo Ooka, and Shinsuke Kato. 2008. "Study on optimum design method for
pleasant outdoor thermal environment using genetic algorithms (GA) and coupled
simulation of convection, radiation and conduction." Building and Environment no. 43
(1):18-30. doi: 10.1016/j.buildenv.2006.11.039.
Cherry, Edith, and John Petronis. Architectural programming. National Institute of Building
Sciences, 22 April 2013 2009. Available from
http://www.wbdg.org/design/dd_archprogramming.php.
Choi, Joon-Ho, and Vivian Loftness. 2012. "Investigation of human body skin temperatures as a
bio-signal to indicate overall thermal sensations." Building and Environment no. 58
(0):258-269. doi: 10.1016/j.enbuild.2011.08.009.
Chong, Yih, Chun-Hsien Chen, and Kah Leong. 2009. "A heuristic-based approach to conceptual
design." Research in Engineering Design no. 20 (2):97-116. doi: 10.1007/s00163-008-
0059-9.
Chouchoulas, Orestes. 2003. Shape evolution: An algorithmic method for conceptual
architectural design combining shape grammars and genetic algorithms. Ph.D.,
University of Bath (United Kingdom), England.
Choudhary, Ruchi. 2004. A hierarchical optimization framework for simulation-based
architectural design. Ph.D. Dissertation, University of Michigan, United States --
Michigan.
Chow, T. T., G. Q. Zhang, Z. Lin, and C. L. Song. 2002. "Global optimization of absorption chiller
system by genetic algorithm and neural network." Energy and Buildings no. 34 (1):103-
109. doi: 10.1016/S0378-7788(01)00085-8.
Christensen, Craig, Greg Barker, and Scott Horowitz. 2004. A sequential search technique for
identifying optimal building designs on the path to zero net energy. Paper read at Solar
2004, 11-14 July 2004, at Portland, Oregon.
Clarke, J. A. 1999. Prospects for truly integrated building performance simulation. Paper read at
Building Simulation 1999, 13-15 September 1999, at Kyoto, Japan.
Clarke, J. A., and J. L. M. Hensen. 2000. Integrated simulation for building design: an example
state-of-the-art system. Paper read at CIT2000, at Reykjavik.
Clevenger, Caroline M., and John Haymaker. 2011. "Metrics to assess design guidance." Design
Studies no. 32 (5):431-456. doi: 10.1016/j.destud.2011.02.001.
Clevenger, Caroline Murrie, John Riker Haymaker, and Andrew Ehrich. 2013. "Design exploration
assessment methodology: Testing the guidance of design processes." Journal of
Engineering Design no. 24 (3):165-184. doi: 10.1080/09544828.2012.698256.
Coello Coello, Carlos A., and Gary B. Lamont, eds. 2004. Applications of multi-objective
evolutionary algorithms. Edited by Xin Yao, Advances in natural computation: World
Scientific Publishing Co. Pte. Ltd.
Coello Coello, Carlos A., Gary B. Lamont, and David A. Van Veldhuisen. 2007. Evolutionary
algorithms for solving multi-objective problems. Edited by David E. Goldberg and John R.
Koza. 2nd ed, Genetic and evolutionary computation series. New York: Springer.
Coley, David A., and Stefan Schukat. 2002. "Low-energy design: Combining computer-based
optimisation and human judgement." Building and Environment no. 37 (12):1241-1247.
doi: 10.1016/s0360-1323(01)00106-8.
Čongradac, Velimir, and Filip Kulić. 2012. "Recognition of the importance of using artificial
neural networks and genetic algorithms to optimize chiller operation." Energy and
Buildings no. 47:651-658. doi: 10.1016/j.enbuild.2012.01.007.
Crawley, D. B., L. K. Lawrie, F. C. Winkelmann, W. F. Buhl, A. E. Erdem, C. O. Pedersen, R. J.
Liesen, D. E. Fisher, R. K. Strand, and R. D. Taylor. 1997. What next for building energy
simulation - a glimpse of the future. Paper read at Building Simulation 1997, 8-10
September 1997, at Prague, Czech Republic.
Crawley, Drury B., Jon W. Hand, Michaël Kummert, and Brent T. Griffith. 2008. "Contrasting the
capabilities of building energy performance simulation programs." Building and
Environment no. 43 (4):661-673. doi: 10.1016/j.buildenv.2006.10.027.
Cross, Nigel, Henri Christiaans, and Kees Dorst. 1996. Analysing design activity. Chichester ; New
York: Wiley.
D'Cruz, Neville A., and Antony D. Radford. 1987. "A multicriteria model for building performance
and design." Building and Environment no. 22 (3):167-179. doi: 10.1016/0360-
1323(87)90005-9.
D'Cruz, Neville, Antony D. Radford, and John S. Gero. 1983. "A Pareto optimization problem
formulation for building performance and design." Engineering Optimization no. 7
(1):17-33. doi: 10.1080/03052158308960626.
Damski, José C., and John S. Gero. 1997. "An evolutionary approach to generating constraint-
based space layout topologies." In CAADFutures 97, edited by Richard Junge, 855-864.
Springer Netherlands.
de Weck, Olivier L. 2012. MIT strategic engineering: Multidisciplinary design optimization, 14
July 2011 [cited 9 May 2012]. Available from
http://strategic.mit.edu/optimization.php.
de Wilde, P., G. Augenbroe, and M. van der Voorden. 2001. A strategy to provide
computational support for the selection of energy saving building components. Paper
read at Building Simulation 2001, 13-15 August 2001, at Rio de Janeiro, Brazil.
de Wilde, Pieter Jacobus Cornelis Jan. 2004. Computational Support for the Selection of Energy
Saving Building Components. Ph.D. Dissertation, Architecture, Delft University of
Technology, the Netherlands.
Deb, Kalyanmoy. 2001. Multi-objective optimization using evolutionary algorithms. 1st ed,
Wiley-Interscience series in systems and optimization. Chichester ; New York: John Wiley
& Sons.
Diakaki, Christina, Evangelos Grigoroudis, Nikos Kabelis, Dionyssia Kolokotsa, Kostas Kalaitzakis,
and George Stavrakakis. 2010. "A multi-objective decision model for the improvement
of energy efficiency in buildings." Energy no. 35 (12):5483-5496. doi:
10.1016/j.energy.2010.05.012.
Diakaki, Christina, Evangelos Grigoroudis, and Dionyssia Kolokotsa. 2008. "Towards a multi-
objective optimization approach for improving energy efficiency in buildings." Energy
and Buildings no. 40 (9):1747-1754. doi: 10.1016/j.enbuild.2008.03.002.
Djuric, Natasa, Vojislav Novakovic, Johnny Holst, and Zoran Mitrovic. 2007. "Optimization of
energy consumption in buildings with hydronic heating systems considering thermal
comfort by use of computer-based tools." Energy and Buildings no. 39 (4):471-477. doi:
10.1016/j.enbuild.2006.08.009.
DOE. 2013. BESTEST (Building Energy Simulation TEST). U.S. Department of Energy (DOE) 2011a
[cited 28 February 2013]. Available from
http://apps1.eere.energy.gov/buildings/tools_directory/software.cfm/ID=85/pagename
=alpha_list.
DOE. 2013. Building Energy Software Tools Directory - Autodesk Green Building Studio 2011b
[cited 22 February 2013]. Available from
http://apps1.eere.energy.gov/buildings/tools_directory/software.cfm/ID=440/pagenam
e_submenu=/pagename_menu=/pagename=alpha_list.
DOE. 2013. Building energy software tools directory. U.S. Department of Energy (DOE), 13
February 2013 [cited 3 March 2013]. Available from
http://apps1.eere.energy.gov/buildings/tools_directory/.
Drogemuller, Robin M., John Crawford, and Stephen Egan. 2004. Linking early design decisions
across multiple disciplines. Paper read at Conference on Product and Process Modelling
in the Building and Construction Industry: ECPPM 2004, 8–10 September 2004, at
Istanbul, Turkey.
Eastman, C., Jae-min Lee, Yeon-suk Jeong, and Jin-kook Lee. 2009. "Automatic rule-based
checking of building designs." Automation in Construction no. 18 (8):1011-1033. doi:
10.1016/j.autcon.2009.07.002.
Eastman, Charles M. 1968. Explorations of the cognitive processes in design. Pittsburgh, PA:
Department of Computer Science Report, Carnegie Mellon University.
Eastman, Chuck, Paul Teicholz, Rafael Sacks, and Kathleen Liston. 2011. BIM Handbook: A Guide
to Building Information Modeling for Owners, Managers, Designers, Engineers and
Contractors. 2 ed: Wiley.
Ellis, Peter G., Paul A. Torcellini, and Drury B. Crawley. 2008. Energy design plugin: An EnergyPlus
plugin for Sketchup. Paper read at SimBuild 2008, 30 July - 1 August 2008, at Berkeley,
California, USA.
Emmerich, Michael T. M., Christina J. Hopfe, Robert Marijt, Jan L. M. Hensen, Christian Struck,
and Paul A. L. Stoelinga. 2008. Evaluating optimization methodologies for future
integration in building performance tools. Paper read at 8th International Conference on
Adaptive Computing in Design and Manufacture (ACDM), 29 April - 1 May 2008, at Bristol.
Evins, Ralph. 2013. "A review of computational optimisation methods applied to sustainable
building design." Renewable and Sustainable Energy Reviews no. 22:230-245. doi:
10.1016/j.rser.2013.02.004.
Evins, Ralph, Philip Pointer, and Stuart Burges. 2012. Multi-objective optimisation of a modular
building for different climate types. Paper read at BSO12, 10-11 September 2012, at
Loughborough, UK.
Evins, Ralph, Philip Pointer, and Ravi Vaidyanathan. 2011a. Multi-objective optimization of the
configuration and control of a double-skin facade. Paper read at Building Simulation
2011, 14-16 November 2011, at Sydney, Australia.
Evins, Ralph, Philip Pointer, and Ravi Vaidyanathan. 2011b. Optimisation for CHP and CCHP
decision-making. Paper read at Building Simulation 2011, 14-16 November 2011, at
Sydney, Australia.
Evins, Ralph, Philip Pointer, Ravi Vaidyanathan, and Stuart Burgess. 2012. "A case study
exploring regulated energy use in domestic buildings using design-of-experiments and
multi-objective optimisation." Building and Environment no. 54 (0):126-136. doi:
10.1016/j.buildenv.2012.02.012.
Fenton, Norman, and Wei Wang. 2006. "Risk and confidence analysis for fuzzy multicriteria
decision making." Knowledge-Based Systems no. 19 (6):430-437. doi:
10.1016/j.knosys.2006.03.002.
Fesanghary, M., S. Asadi, and Zong Woo Geem. 2012. "Design of low-emission and energy-
efficient residential buildings using a multi-objective optimization algorithm." Building
and Environment no. 49:245-250. doi: 10.1016/j.buildenv.2011.09.030.
Flager, Forest, David Jason Gerber, and Ben Kallman. 2014. "Measuring the impact of scale and
coupling on solution quality for building design problems." Design Studies (accepted for publication).
Flager, Forest, and John Haymaker. 2007. A comparison of multidisciplinary design, analysis and
optimization processes in the building construction and aerospace industries. Paper
read at 24th W78 Conference on Bringing ITC knowledge to work, 26-29 June 2007, at
Maribor, Slovenia.
Flager, Forest, Benjamin Welle, Prasun Bansal, Grant Soremekun, and John Haymaker. 2009.
"Multidisciplinary process integration and design optimization of a classroom building."
ITcon no. 14 (38):595-612.
Fong, K. F., V. I. Hanby, and T. T. Chow. 2003. "Optimization of MVAC systems for energy
management by evolutionary algorithm." Facilities no. 21 (10):223-232. doi:
10.1108/02632770310493599.
Fong, K. F., V. I. Hanby, and T. T. Chow. 2006. "HVAC system optimization for energy
management by evolutionary programming." Energy and Buildings no. 38 (3):220-231.
doi: 10.1016/j.enbuild.2005.05.008.
Fonseca, Carlos M., and Peter J. Fleming. 1993. Genetic algorithms for multiobjective
optimization: Formulation, discussion and generalization. Paper read at The Fifth
International Conference on Genetic Algorithms, 17-21 July 1993, at San Mateo, CA,
USA.
Frazer, John. 1995. An evolutionary architecture. London: Architectural Association.
Gagne, Jaime, and Marilyne Andersen. 2012. "A generative facade design method based on
daylighting performance goals." Journal of Building Performance Simulation no. 5
(3):141-154. doi: 10.1080/19401493.2010.549572.
Gerber, D. J. 2009. The Parametric Affect: Computation, Innovation and Models for Design
Exploration in Contemporary Architectural Practice, Design and Technology Report
Series. Cambridge, MA: Harvard Design School.
Gerber, David Jason. 2007. Parametric practices: Models for design exploration in architecture.
Dissertation, Architecture, Harvard Graduate School of Design, Cambridge, MA.
Gerber, David Jason, and Forest Flager. 2011. Teaching design optioneering: A method for
multidisciplinary design optimization. Paper read at 2011 ASCE International Workshop
on Computing in Civil Engineering, 19 June 2011.
Gero, John S. 1975. "Architectural optimization - A review." Engineering Optimization no. 1
(3):189-199. doi: 10.1080/03052157508960586.
Gero, John S., Neville D'Cruz, and Antony D. Radford. 1983. "Energy in context: A multicriteria
model for building design." Building and Environment no. 18 (3):99-107. doi:
10.1016/0360-1323(83)90001-X.
Gero, John S., and Vladimir A. Kazakov. 1998. "Evolving design genes in space layout planning
problems." Artificial Intelligence in Engineering no. 12 (3):163-176. doi: 10.1016/S0954-
1810(97)00022-8.
Gero, John S., and Thomas McNeill. 1998. "An approach to the analysis of design protocols."
Design Studies no. 19 (1):21-61. doi: 10.1016/s0142-694x(97)00015-x.
Gero, John S., and Antony D. Radford. 1978. "A dynamic programming approach to the optimum
lighting problem." Engineering Optimization no. 3 (2):71-82. doi:
10.1080/03052157808902379.
Geyer, Philipp. 2009. "Component-oriented decomposition for multidisciplinary design
optimization in building design." Advanced Engineering Informatics no. 23 (1):12-31. doi:
10.1016/j.aei.2008.06.008.
Ghiaus, Christian, and Noel Jabbour. 2012. "Optimization of multifunction multi-source solar
systems by design of experiments." Solar Energy no. 86 (1):593-607. doi:
10.1016/j.solener.2011.11.002.
Abdollahi, Gholamhossein, and Hoseyn Sayyaadi. 2013. "Application of the multi-objective
optimization and risk analysis for the sizing of a residential small-scale CCHP system."
Energy and Buildings no. 60:330-344. doi: 10.1016/j.enbuild.2013.01.026.
Glymph, James, Dennis Shelden, Cristiano Ceccato, Judith Mussel, and Hans Schober. 2004. "A
parametric strategy for free-form glass structures using quadrilateral planar facets."
Automation in Construction no. 13 (2):187-202. doi: 10.1016/j.autcon.2003.09.008.
Goldberg, David E. 1989. Genetic algorithms in search, optimization, and machine learning.
Reading, Mass.: Addison-Wesley Pub. Co.
Grierson, D. E., and S. Khajehpour. 2002. "Method for conceptual design applied to office
buildings." Journal of Computing in Civil Engineering no. 16 (2):83-103. doi:
10.1061/(ASCE)0887-3801(2002)16:2(83).
Grierson, D. E., and W. H. Pak. 1993. "Optimal sizing, geometrical and topological design using a
genetic algorithm." Structural and Multidisciplinary Optimization no. 6 (3):151-159. doi:
10.1007/bf01743506.
Grierson, Donald E. 2008. "Pareto multi-criteria decision making." Advanced Engineering
Informatics no. 22 (3):371-384. doi: 10.1016/j.aei.2008.03.001.
Grobman, Yasha Jacob, Abraham Yezioro, and I. Guedi Capeluto. 2008. Building form generation
based on multiple performance envelopes. Paper read at PLEA 2008, 22-24 October 2008, at
Dublin, Ireland.
Grobman, Yasha Jacob, Abraham Yezioro, and Isaac Guedi Capeluto. 2009. "Computer-based
form generation in architectural design - A critical review." International Journal of
Architectural Computing no. 7 (4):535-554. doi: 10.1260/1478-0771.7.4.535.
Hamdy, Mohamed, Ala Hasan, and Kai Siren. 2009. Combination of Optimization algorithms for a
multi-objective building design problem. Paper read at Building Simulation 2009, 27-30
July 2009, at Glasgow, Scotland.
Hamdy, Mohamed, Ala Hasan, and Kai Siren. 2010. "Optimum design of a house and its HVAC
systems using simulation-based optimisation." International Journal of Low-Carbon
Technologies no. 5 (3):120-124. doi: 10.1093/ijlct/ctq010.
Hamdy, Mohamed, Ala Hasan, and Kai Siren. 2011a. "Applying a multi-objective optimization
approach for design of low-emission cost-effective dwellings." Building and Environment
no. 46 (1):109-123. doi: 10.1016/j.buildenv.2010.07.006.
Hamdy, Mohamed, Ala Hasan, and Kai Siren. 2011b. "Impact of adaptive thermal comfort
criteria on building energy use and cooling equipment size using a multi-objective
optimization scheme." Energy and Buildings no. 43 (9):2055-2067. doi:
10.1016/j.enbuild.2011.04.006.
Hamdy, Mohamed, Ala Hasan, and Kai Siren. 2013. "A multi-stage optimization method for cost-
optimal and nearly-zero-energy building solutions in line with the EPBD-recast 2010."
Energy and Buildings no. 56:189-203. doi: 10.1016/j.enbuild.2012.08.023.
Hamdy, Mohamed, Matti Palonen, and Ala Hasan. 2012. Implementation of Pareto-Archive
NSGA-II algorithms to a nearly-zero-energy building optimisation problem. Paper read at
BSO12, 10-11 September 2012, at Loughborough, UK.
Hauglustaine, Jean-Marie, and Sleiman Azar. 2001. Interactive tool aiding to optimise the
building envelope during the sketch design. Paper read at Building Simulation 2001,
13-15 August 2001, at Rio de Janeiro, Brazil.
Haupt, Randy L., and Sue Ellen Haupt. 2004. Practical genetic algorithms. 2nd ed. Hoboken, N.J.:
Wiley-Interscience.
Hayter, Sheila J., Paul A. Torcellini, Richard B. Hayter, and Ron Judkoff. 2001. The energy design
process for designing and constructing high-performance buildings. Paper read at Clima
2000/Napoli 2001 World Congress, 15-18 September 2001, at Naples, Italy.
Hensen, Jan. 1994. Energy related design decisions deserve simulation approach. Paper read at
1994 DDSS, 15-19 August 1994, at Vaals, the Netherlands.
Hensen, Jan, Ery Djunaedy, Marija Radošević, and Azzedine Yahiaoui. 2004. Building
performance simulation for better design: Some issues and solutions. Paper read at
PLEA 2004, 19-22 September 2004, at Eindhoven, The Netherlands.
Hensen, Jan L. M. 2004. Towards more effective use of building performance simulation in
design. Paper read at 2004 DDSS, 2-5 July 2004, at St. Michielsgestel, The Netherlands.
Hensen, Jan L. M., and Roberto Lamberts. 2011. "Introduction to building performance
simulation." In Building Performance Simulation for Design and Operation, edited by Jan
L. M. Hensen and Roberto Lamberts, 1-14. New York, NY, USA: Spon Press.
Henze, Gregor P., Clemens Felsmann, and Gottfried Knabe. 2004. "Evaluation of optimal control
for active and passive building thermal storage." International Journal of Thermal
Sciences no. 43 (2):173-183. doi: 10.1016/j.ijthermalsci.2003.06.001.
Hesselgren, Lars, Renos Charitou, and Stylianos Dritsas. 2007. "The Bishopsgate Tower case
study." International Journal of Architectural Computing no. 5 (1):62-81.
Hirsch, James J. 2013. DOE-2: Building Energy Use and Cost Analysis Tool. James J. Hirsch 2009
[cited 28 February 2013]. Available from http://doe2.com/DOE2/index.html.
Hoes, P., M. Trcka, J. L. M. Hensen, and B. Hoekstra Bonnema. 2011. Optimizing building designs
using a robustness indicator with respect to user behavior. Paper read at Building
Simulation 2011, 14-16 November 2011, at Sydney, Australia.
Holland, John H. 1992. Adaptation in natural and artificial systems: An introductory analysis with
applications to biology, control, and artificial intelligence. Cambridge, MA: A Bradford Book, MIT Press.
Holst, Johnny N. 2003. Using whole building simulation models and optimizing procedures to
optimize building envelope design with respect to energy consumption and indoor
environment. Paper read at Building Simulation 2003, 11-14 August 2003, at
Eindhoven, Netherlands.
Holzer, Dominik. 2010. "Optioneering in collaborative design practice." International Journal of
Architectural Computing no. 8 (2):165-182. doi: 10.1260/1478-0771.8.2.165.
Holzer, Dominik, and Steven Downing. 2010. "Optioneering: A new basis for engagement
between architects and their collaborators." Architectural Design no. 80 (4):60-63. doi:
10.1002/ad.1107.
Holzer, Dominik, Yamin Tengono, and Steven Downing. 2007. "Developing a framework for
linking design intelligence from multiple professions in the AEC industry." In Computer-
Aided Architectural Design Futures (CAADFutures) 2007, edited by Andy Dong, Andrew
Vande Moere and John S. Gero, 303-316. Springer Netherlands.
Hopfe, Christina J., Christian Struck, Gülsu Ulukavak Harputlugil, Jan Hensen, and Pieter De
Wilde. 2005. Exploration of the use of building performance simulation for conceptual
design. Paper read at IBPSA-NVL, 20 October 2005, at Delft, The Netherlands.
Hopfe, Christina Johanna. 2009. Uncertainty and sensitivity analysis in building performance
simulation for decision support and design optimization. Ph.D. Dissertation,
Eindhoven: Technische Universiteit Eindhoven.
Huang, Hong, Shinsuke Kato, and Rui Hu. 2012. "Optimum design for indoor humidity by
coupling Genetic Algorithm with transient simulation based on contribution ratio of
indoor humidity and climate analysis." Energy and Buildings no. 47:208-216. doi:
10.1016/j.enbuild.2011.11.040.
Huang, W., and H. N. Lam. 1997. "Using genetic algorithms to optimize controller parameters for
HVAC systems." Energy and Buildings no. 26 (3):277-282. doi: 10.1016/S0378-
7788(97)00008-X.
Jadid, Mansour N., and Mohammad M. Idrees. 2007. "Cost estimation of structural skeleton
using an interactive automation algorithm: A conceptual approach." Automation in
Construction no. 16 (6):797-805. doi: 10.1016/j.autcon.2007.02.007.
Janssen, Patrick, Cihat Basol, and Kian Wee Chen. 2011. Evolutionary developmental design for
non-programmers. Paper read at 29th eCAADe Conference: Respecting Fragile Places,
21-24 September 2011, at University of Ljubljana, Faculty of Architecture (Slovenia).
Janssen, Patrick, John Frazer, and Ming-Xi Tang. 2005. "A framework for generating and evolving
building designs." International Journal of Architectural Computing no. 3 (4):449-470.
doi: 10.1260/147807705777781112.
Janssen, Patrick H. T. 2009. An evolutionary system for design exploration. Paper read at Joining
Languages, Cultures and Visions: CAADFutures 2009, 17-19 June 2009, at Montréal.
Janssen, Patrick Hubert Theodoor. 2004. A design method and computational architecture for
generating and evolving building designs. Dissertation, School of Design, Hong Kong
Polytechnic University, Hong Kong.
Jedrzejuk, Hanna, and Wojciech Marks. 2002. "Optimization of shape and functional structure of
buildings as well as heat source utilization. Basic theory." Building and Environment no.
37 (12):1379-1383. doi: 10.1016/s0360-1323(01)00101-9.
Jin, Qian, and Mauro Overend. 2012. Facade renovation for a public building based on a whole-
life value approach. Paper read at BSO12, 10-11 September 2012, at Loughborough, UK.
Jo, Jun H., and John S. Gero. 1998. "Space layout planning using an evolutionary approach."
Artificial Intelligence in Engineering no. 12 (3):149-162. doi: 10.1016/S0954-
1810(97)00037-X.
Johnson, Colin G., and Juan Jesús Romero Cardalda. 2002. "Genetic algorithms in visual art and
music." Leonardo no. 35 (2):175-184. doi: 10.1162/00240940252940559.
Kalay, Yehuda E. 1998. "P3: Computational environment to support design collaboration."
Automation in Construction no. 8 (1):37-48. doi: 10.1016/S0926-5805(98)00064-8.
Kalay, Yehuda E. 1999. "Performance-based design." Automation in Construction no. 8 (4):395-
409. doi: 10.1016/s0926-5805(98)00086-7.
Kämpf, Jérôme Henri, and Darren Robinson. 2009. Optimisation of urban energy demand using
an evolutionary algorithm. Paper read at Building Simulation 2009, 27-30 July 2009, at
Glasgow, Scotland.
Kämpf, Jérôme Henri, Marylène Montavon, Josep Bunyesc, Raffaele Bolliger, and Darren
Robinson. 2010. "Optimisation of buildings’ solar irradiation availability." Solar Energy
no. 84 (4):596-603. doi: 10.1016/j.solener.2009.07.013.
Kämpf, Jérôme Henri, and Darren Robinson. 2009. "A hybrid CMA-ES and HDE optimisation
algorithm with application to solar energy potential." Applied Soft Computing no. 9
(2):738-745. doi: 10.1016/j.asoc.2008.09.009.
Kato, Shinsuke, and Jeong Hoe Lee. 2004. Optimization of hybrid air-conditioning system with
natural ventilation by GA and CFD. Paper read at AIVC 25th conference, September
2004, at Prague.
Kayo, Genku, and Ryozo Ooka. 2009. Application multi-objective genetic algorithm for optimal
design method of distributed energy system. Paper read at Building Simulation 2009,
27-30 July 2009, at Glasgow, Scotland.
Kayo, Genku, and Ryozo Ooka. 2010. "Building energy system optimizations with utilization of
waste heat from cogenerations by means of genetic algorithm." Energy and Buildings
no. 42 (7):985-991. doi: 10.1016/j.enbuild.2010.01.010.
Keough, Ian, and David Benjamin. 2010. Multi-objective optimization in architectural design.
Paper read at SimAUD 2010, 12-15 April 2010, at Orlando, FL, USA.
Khemlani, Lachmi. 1995. "GENWIN: A generative computer tool for window design in energy-
conscious architecture." Building and Environment no. 30 (1):73-81. doi: 10.1016/0360-
1323(94)e0027-o.
Kicinger, Rafal, Tomasz Arciszewski, and Kenneth De Jong. 2005. "Emergent designer: An
integrated research and design support tool based on models of complex systems."
ITcon no. 10:329-347.
Kicinger, Rafal, Tomasz Arciszewski, and Kenneth De Jong. 2005. "Evolutionary computation and
structural design: A survey of the state-of-the-art." Computers & Structures no. 83 (23-
24):1943-1978. doi: 10.1016/j.compstruc.2005.03.002.
Kilian, Axel. 2006. Design exploration through bidirectional modeling of constraints. Ph.D.
Dissertation, Department of Architecture, Massachusetts Institute of Technology.
Kolokotsa, D., C. Diakaki, E. Grigoroudis, G. Stavrakakis, and K. Kalaitzakis. 2009. "Decision
support methodologies on the energy efficiency and energy management in buildings."
Advances in Building Energy Research no. 3 (1):121-146. doi: 10.3763/aber.2009.0305.
Kolokotsa, D., G. S. Stavrakakis, K. Kalaitzakis, and D. Agoris. 2002. "Genetic algorithms
optimized fuzzy controller for the indoor environmental management in buildings
implemented using PLC and local operating networks." Engineering Applications of
Artificial Intelligence no. 15 (5):417-428. doi: 10.1016/S0952-1976(02)00090-8.
Kusiak, Andrew, Guanglin Xu, and Fan Tang. 2011. "Optimization of an HVAC system with a
strength multi-objective particle-swarm algorithm." Energy no. 36 (10):5935-5943. doi:
10.1016/j.energy.2011.08.024.
Laiserin, Jerry. 2008. Digital environments for early design: Form-making versus form finding.
Paper read at First International Conference on Critical Digital: What Matter(s)?, 18-19
April 2008, at Cambridge, MA, USA.
Lam, K. P., N. H. Wong, A. Mahdavi, K. K. Chan, Z. Kang, and S. Gupta. 2004. "SEMPER-II: an
internet-based multi-domain building performance simulation environment for early
design support." Automation in Construction no. 13 (5):651-663. doi:
10.1016/j.autcon.2003.12.003.
Lam, Khee Poh, Yi Chun Huang, and Chaoqin Zhai. 2004. Energy modeling tools assessment for
early design phase. Pittsburgh, PA: CMU Center for Building Performance and
Diagnostics.
Lam, Khee Poh, Nyuk Hien Wong, and Feriadi Henry. 1999. A study of the use of performance-
based simulation tools for building design and evaluation in Singapore. Paper read at
Building Simulation 1999, 13-15 September 1999, at Kyoto, Japan.
LBNL. 2013. DOE-2. Lawrence Berkeley National Laboratory, January 2008 [cited 28
February 2013]. Available from
http://gundog.lbl.gov/dirsoft/d2whatis.html#Validation%20of%20DOE-2.
Lee, Jeong Hoe. 2007. "Optimization of indoor climate conditioning with passive and active
methods using GA and CFD." Building and Environment no. 42 (9):3333-3340. doi:
10.1016/j.buildenv.2006.08.029.
Lee, Kuei-Peng, and Te-Ang Cheng. 2012. "A simulation–optimization approach for energy
efficiency of chilled water system." Energy and Buildings no. 54:290-296. doi:
10.1016/j.enbuild.2012.06.028.
Lee, Wen-Shing, Yi-Ting Chen, and Yucheng Kao. 2011. "Optimal chiller loading by differential
evolution algorithm for reducing energy consumption." Energy and Buildings no. 43 (2–
3):599-604. doi: 10.1016/j.enbuild.2010.10.028.
Li, Hongwei, Razi Nalim, and P. A. Haldi. 2006. "Thermal-economic optimization of a distributed
multi-generation energy system—A case study of Beijing." Applied Thermal Engineering
no. 26 (7):709-719. doi: 10.1016/j.applthermaleng.2005.09.005.
Liu, Y. C., A. Chakrabarti, and T. Bligh. 2003. "Towards an ‘ideal’ approach for concept
generation." Design Studies no. 24 (4):341-355. doi: 10.1016/s0142-694x(03)00003-6.
Loonen, Roel C.G.M., Marija Trcka, and Jan L.M. Hensen. 2011. Exploring the potential of
climate adaptive building shells. Paper read at Building Simulation 2011, 14-16
November 2011, at Sydney, Australia.
Lu, Lu, Wenjian Cai, Lihua Xie, Shujiang Li, and Yeng Chai Soh. 2005. "HVAC system
optimization—in-building section." Energy and Buildings no. 37 (1):11-22. doi:
10.1016/j.enbuild.2003.12.007.
Magnier, Laurent. 2008. Multiobjective optimization of building design using artificial neural
network and multiobjective evolutionary algorithms. Master's thesis, Concordia University.
Magnier, Laurent, and Fariborz Haghighat. 2010. "Multiobjective optimization of building design
using TRNSYS simulations, genetic algorithm, and artificial neural network." Building and
Environment no. 45 (3):739-746. doi: 10.1016/j.buildenv.2009.08.016.
Mahdavi, A. 2001. Distributed multi-disciplinary building performance computing. Paper read at
Eighth Europia International Conference, at Delft, the Netherlands.
Mahdavi, Ardeshir. 1999. "A comprehensive computational environment for performance based
reasoning in building design and evaluation." Automation in Construction no. 8 (4):427-
435. doi: 10.1016/s0926-5805(98)00089-2.
Mahdavi, Ardeshir, Silvana Feurer, Alexander Redlein, and Georg Suter. 2003. An inquiry into the
building performance simulation tools usage by architects in Austria. Paper read at
Building Simulation 2003, 11-14 August 2003, at Eindhoven, Netherlands.
Mahdavi, Ardeshir, and Prechaya Mahattanatawe. 2003. Enclosure systems design and control
support via dynamic simulation-assisted optimization. Paper read at Building Simulation
2003, 11-14 August 2003, at Eindhoven, Netherlands.
Maher, Mary Lou, and Josiah Poon. 1996. "Modeling design exploration as co-evolution."
Microcomputers in Civil Engineering no. 11:195-209. doi: 10.1111/j.1467-
8667.1996.tb00323.x.
Manzan, Marco, and Francesco Pinto. 2009. Genetic optimization of external shading devices.
Paper read at Building Simulation 2009, 27-30 July 2009, at Glasgow, Scotland.
Marks, Wojciech. 1997. "Multicriteria optimisation of shape of energy-saving buildings." Building
and Environment no. 32 (4):331-339. doi: 10.1016/S0360-1323(96)00065-0.
Mela, Kristo, Teemu Tiainen, and Markku Heinisuo. 2012. "Comparative study of multiple
criteria decision making methods for building design." Advanced Engineering Informatics
no. 26 (4):716-726. doi: 10.1016/j.aei.2012.03.001.
Menges, Achim. 2011. Integrative design computation: Integrating material behaviour and
robotic manufacturing processes in computational design for performative wood
constructions. Paper read at ACADIA 2011 - Integration through Computation, 13-16
October 2011, at Banff, Alberta.
Merriam-Webster. 2013. Feedback 2013a [cited 26 September 2013]. Available from
http://www.merriam-webster.com/dictionary/feedback.
Merriam-Webster. 2013. Framework 2013b [cited 11 November 2013]. Available from
http://www.merriam-webster.com/dictionary/framework.
Merriam-Webster. 2013. Optimization 2013c [cited 23 September 2013]. Available from
http://www.merriam-webster.com/dictionary/optimization.
Miller, Brad L., and David E. Goldberg. 1995. "Genetic algorithms, tournament selection, and the
effects of noise." Complex Systems no. 9:193-212.
Mitchell, Melanie. 1998. An introduction to genetic algorithms. Cambridge, MA: MIT Press.
Morbitzer, Christoph Andreas. 2003. Towards the integration of simulation into the building
design process. Ph.D. Dissertation, Department of Mechanical Engineering, University of Strathclyde,
Glasgow, Scotland.
Morbitzer, Christoph, Paul Strachan, Jim Webster, Brian Spires, and David Cafferty. 2001.
Integration of building simulation into the design process of an architectural practice.
Paper read at Building Simulation 2001, 13-15 August 2001, at Rio de Janeiro, Brazil.
Mossolly, M., K. Ghali, and N. Ghaddar. 2009. "Optimal control strategy for a multi-zone air
conditioning system using a genetic algorithm." Energy no. 34 (1):58-66. doi:
10.1016/j.energy.2008.10.001.
Mourshed, Monjur, Inoka Manthilake, and Jonathan Wright. 2009. Automated space layout
planning for environmental sustainability. Paper read at SASBE 2009, 15-19 June 2009,
at Delft, The Netherlands.
Mourshed, Monjur Masum, Denis Kelliher, and Marcus Keane. 2003. ArDOT: A tool to optimise
environmental design of buildings. Paper read at Building Simulation 2003, 11-14
August 2003, at Eindhoven, Netherlands.
Nakahara, N., and J. L. M. Hensen, eds. 1999. Proceedings of Building Simulation ’99 - Sixth IBPSA
Conference. Kyoto: IBPSA.
Narahara, Taro, and Kostas Terzidis. 2006a. Multiple-constraint genetic algorithm in housing
design. Paper read at ACADIA 2006 - Synthetic Landscapes, 12-15
October 2006, at Louisville, Kentucky.
Narahara, Taro, and Kostas Terzidis. 2006b. Optimal distribution of architecture programs with
multiple-constraint genetic algorithm. Paper read at SIGraDi 2006, 21-23 November
2006, at Santiago, Chile.
Nassif, N., S. Kajl, and R. Sabourin. 2004a. Evolutionary algorithms for multi-objective
optimization in HVAC system control strategy. Paper read at the IEEE Annual Meeting of
the North American Fuzzy Information Processing Society (NAFIPS '04), June 2004.
Nassif, N., S. Kajl, and R. Sabourin. 2004b. "Two-objective on-line optimization of supervisory
control strategy." Building Services Engineering Research & Technology no. 25 (3):241-
251. doi: 10.1191/0143624404bt105oa.
Nassif, Nabil, Stanislaw Kajl, and Robert Sabourin. 2005. "Optimization of HVAC Control System
Strategy Using Two-Objective Genetic Algorithm." HVAC&R Research no. 11 (3):459-486.
Nederveen, Sander van, Reza Beheshti, and Wim Gielingh. 2010. "Modelling concepts for BIM."
In Handbook of Research on Building Information Modeling and Construction
Informatics: Concepts and Technologies, edited by Jason Underwood and Umit Isikdag.
IGI Global.
Newman, Damien. 2006. The Design Process Simplified. Design Sojourn.
Nielsen, Toke Rammer. 2002. Optimization of buildings with respect to energy and indoor
environment. Ph.D. Dissertation, Department of Civil Engineering, Technical University of
Denmark.
Nunamaker, Jay F., Jr., Minder Chen, and Titus D. M. Purdin. 1990. "Systems development in
information systems research." Journal of Management Information Systems no. 7
(3):89-106.
O’Sullivan, D. T. J., M. M. Keane, D. Kelliher, and R. J. Hitchcock. 2004. "Improving building
operation by tracking performance metrics throughout the building lifecycle (BLC)."
Energy and Buildings no. 36 (11):1075-1090. doi: 10.1016/j.enbuild.2004.03.003.
Obanye, Ike. 2006. Integrating building energy simulation into the architectural design process.
Paper read at BEECON 2006, 12-13 September 2006, at London.
Ooka, Ryozo, and Kazuhiko Komamura. 2009. "Optimal design method for building energy
systems using genetic algorithms." Building and Environment no. 44 (7):1538-1544. doi:
10.1016/j.buildenv.2008.07.006.
Opricovic, Serafim, and Gwo-Hshiung Tzeng. 2007. "Extended VIKOR method in comparison with
outranking methods." European Journal of Operational Research no. 178 (2):514-529.
doi: 10.1016/j.ejor.2006.01.020.
Ouarghi, Ramzi, and Moncef Krarti. 2006. "Building shape optimization using neural network and
genetic algorithm approach." ASHRAE Transactions no. 112 (1):484-494.
Oxman, Rivka. 2008. "Performance-based design: Current practices and research issues."
International Journal of Architectural Computing no. 6 (1):1-17. doi:
10.1260/147807708784640090.
Palonen, Matti, Ala Hasan, and Kai Siren. 2009. A genetic algorithm for optimization of building
envelope and HVAC system parameters. Paper read at Building Simulation 2009, 27-30
July 2009, at Glasgow, Scotland.
Pantelic, Jovan, Benny Raphael, and Kwok Wai Tham. 2012. "A preference driven multi-criteria
optimization tool for HVAC design and operation." Energy and Buildings no. 55:118-126.
doi: 10.1016/j.enbuild.2012.04.021.
Papamichael, K., J. LaPorta, and H. Chauvet. 1997. "Building design advisor: Automated
integration of multiple simulation tools." Automation in Construction no. 6 (4):341-352.
doi: 10.1016/S0926-5805(97)00043-5.
Park, Cheol-Soo, Godfried Augenbroe, and Tahar Messadi. 2003. Daylighting optimization in
smart facade systems. Paper read at Building Simulation 2003, 11-14 August 2003, at
Eindhoven, Netherlands.
Park, Cheol-Soo, Godfried Augenbroe, Nader Sadegh, Mate Thitisawat, and Tahar Messadi.
2004. "Real-time optimization of a double-skin façade based on lumped modeling and
occupant preference." Building and Environment no. 39 (8):939-948. doi:
10.1016/j.buildenv.2004.01.018.
Parmee, Ian C. 2005. Human-centric intelligent systems for design exploration and knowledge
discovery. Paper read at 2005 ASCE International Conference on Computing in Civil
Engineering, 12 July 2005, at Cancun, Mexico.
Parmee, Ian C., Johnson A. R. Abraham, and Azahar Machwe. 2008. "User-centric evolutionary
computing: Melding human and machine capability to satisfy multiple criteria." In
Multiobjective Problem Solving from Nature, edited by Joshua Knowles, David Corne,
Kalyanmoy Deb and Deva Raj Chair, 263-283. Springer Berlin Heidelberg.
Pedrini, Aldomar. 2003. Integration of low energy strategies to the early stages of design process
of office buildings in warm climates. Ph.D. Dissertation, School of Geography, Planning
and Architecture, The University of Queensland.
Pedrini, Aldomar, and Steven Szokolay. 2005. The architects approach to the project of energy
efficient office buildings in warm climate and the importance of design methods. Paper
read at Building Simulation 2005, 15-18 August 2005, at Montréal, Canada.
Peippo, K., P. D. Lund, and E. Vartiainen. 1999. "Multivariate optimization of design trade-offs
for solar low energy buildings." Energy and Buildings no. 29 (2):189-205. doi:
10.1016/S0378-7788(98)00055-3.
Pernodet, Fanny, Hicham Lahmidi, and Pierre Michel. 2009. Use of genetic algorithms for
multicriteria optimization of building refurbishment. Paper read at Building Simulation
2009, 27-30 July 2009, at Glasgow, Scotland.
Pilgrim, M., N. Bouchlaghem, D. Loveday, and M. Holmes. 2003. "Towards the efficient use of
simulation in building performance analysis: a user survey." Building Services
Engineering Research & Technology no. 24 (3):149-162. doi:
10.1191/0143624403bt068oa.
Plume, Jim, and John Mitchell. 2007. "Collaborative design using a shared IFC building model-
learning from experience." Automation in Construction no. 16 (1):28-36. doi:
10.1016/j.autcon.2005.10.003.
Pohl, J., L. Myers, and A. J. Chapman. 1990. "ICADS: An intelligent computer-aided design
environment." ASHRAE Transactions no. 96 (2):473-480.
Poloni, Carlo, and Valentino Pediroda. 1997. "GA coupled with computationally expensive
simulations: tools to improve efficiency." In Genetic algorithms and evolution strategy in
engineering and computer science: recent advances and industrial applications, edited
by D. Quagliarella, Jacques Périaux, C. Poloni and Gerhard Winter, 225-243. West
Sussex, England: John Wiley & Sons.
Pountney, Christopher. 2012. Better carbon saving: Using a genetic algorithm to optimise
building carbon reductions. Paper read at BSO12, 10-11 September 2012, at
Loughborough, UK.
Radford, A. D., and J. S. Gero. 1979. "On the design of windows." Environment and Planning B:
Planning and Design no. 6 (1):41-45. doi: 10.1068/b060041.
Radford, Antony D., and John S. Gero. 1980. "Tradeoff diagrams for the integrated design of the
physical environment in buildings." Building and Environment no. 15 (1):3-15. doi:
10.1016/0360-1323(80)90024-4.
Rapone, Gianluca, and Onorio Saro. 2012. "Optimisation of curtain wall façades for office
buildings by means of PSO algorithm." Energy and Buildings no. 45:189-196. doi:
10.1016/j.enbuild.2011.11.003.
Reinhart, Christoph, and Annegret Fitz. 2006. "Findings from a survey on the current use of
daylight simulations in building design." Energy and Buildings no. 38 (7):824-835. doi:
10.1016/j.enbuild.2006.03.012.
Riether, Gernot, and Tom Butler. 2008. Simulation space: A new design environment for
architects. Paper read at 26th eCAADe Conference: Computation: The New Realm of
Architectural Design, 17-20 September 2008, at Antwerpen, Belgium.
Robinson, D. 1996. "Energy model usage in building design: A qualitative assessment." Building
Services Engineering Research & Technology no. 17 (2):89-95. doi:
10.1177/014362449601700207.
Rolvink, Anke, Roel van de Straat, and Jeroen Coenders. 2010. "Parametric structural design and
beyond." International Journal of Architectural Computing no. 8 (3):319-336.
Romero, David, José Rincón, and Nastia Almao. 2001. Optimization of the thermal behavior of
tropical buildings. Paper read at Building Simulation 2001, 13-15 August 2001, at Rio de
Janeiro, Brazil.
Rüdenauer, Kai, and Philipp Dohmen. 2007. Heuristic methods in architectural design
optimization: Monte Rosa shelter: Digital optimization and construction system design.
Paper read at 25th eCAADe Conference: Predicting the Future, 26-29 September 2007,
at Frankfurt am Main, Germany.
Rudolph, Günter. 1994. "Convergence analysis of canonical genetic algorithms." IEEE
Transactions on Neural Networks no. 5 (1):96-101. doi: 10.1109/72.265964.
Ruiz Pardo, Álvaro, Francisco José Sánchez de la Flor, Christian Suárez Soria, and José Luis Molina
Félix. 2005. Building envelope optimization. Paper read at 12èmes Journées
Internationales de Thermique, 15-17 November 2005, at Tangier, Morocco.
Sahu, M., B. Bhattacharjee, and S. C. Kaushik. 2012. "Thermal design of air-conditioned building
for tropical climate using admittance method and genetic algorithm." Energy and
Buildings no. 53:1-6. doi: 10.1016/j.enbuild.2012.06.003.
Salminen, Markku, Matti Palonen, and Kai Sirén. 2012. Combined energy simulation and multi-
criteria optimisation of a LEED-certified building. Paper read at BSO12, 10-11 September
2012, at Loughborough, UK.
Sanguinetti, Paola, Marcelo Bernal, Maher El-Khaldi, and Matthew Erwin. 2010. Real-time design
feedback: Coupling performance-knowledge with design. Paper read at SimAUD 2010,
12-15 April 2010, at Orlando, FL, USA.
Schlueter, Arno, and Frank Thesseling. 2009. "Building information model based energy/exergy
performance assessment in early design stages." Automation in Construction no. 18
(2):153-163. doi: 10.1016/j.autcon.2008.07.003.
Schoch, Martin, Chakguy Prakasvudhisarn, and Apichat Praditsmanont. 2011. "Building-volume
designs with optimal life-cycle costs." International Journal of Architectural Computing
no. 9 (1):55-76.
Shah, Jami J. , and Martti Mäntylä. 1995a. Parametric and Feature-Based CAD/CAM: Concepts,
Techniques, and Applications. 1st. ed: John Wiley and Sons.
Shah, Jami J., and Martti Mäntylä. 1995b. Parametric and feature-based CAD/CAM: Concepts,
techniques, and applications. 1 ed. New York, NY, USA: Wiley-Interscience.
Shaneb, O. A., P. C. Taylor, and G. Coates. 2012. "Optimal online operation of residential μCHP
systems using linear programming." Energy and Buildings no. 44:17-25. doi:
10.1016/j.enbuild.2011.10.003.
Shea, Kristina, Robert Aish, and Marina Gourtovaia. 2005. "Towards integrated performance-
driven generative design tools." Automation in Construction no. 14 (2):253-264. doi:
10.1016/j.autcon.2004.07.002.
Shea, Kristina, Andrew Sedgwick, and Giulio Antonuntto. 2006. "Multicriteria optimization of
paneled building envelopes using ant colony optimization." In Intelligent Computing in
Engineering and Architecture, edited by Ian Smith, 627-636. Springer Berlin / Heidelberg.
Shi, Xing. 2011. "Design optimization of insulation usage and space conditioning load using
energy simulation and genetic algorithm." Energy no. 36 (3):1659-1667. doi:
10.1016/j.energy.2010.12.064.
Simon, Herbert A. 1973. "The structure of ill structured problems." Artificial Intelligence no. 4 (3-
4):181-201.
Simpson, Timothy W., David Rosen, Janet K. Allen, and Farrokh Mistree. 1996. Metrics for
assessing design freedom and information certainty in the early stages of design. Paper
read at The 1996 ASME Design Engineering Technical Conference and Computer in
Engineering Conference, 18-22 August 1996, at Irvine, California.
Sivanandam, S. N., and S. N. Deepa. 2008. Introduction to genetic algorithms. Berlin,
Heidelberg: Springer-Verlag.
Smith, Lillian, Kyle Bernhardt, and Matthew Jezyk. 2011. Automated energy model creation for
conceptual design. Paper read at SimAUD 2011, 4-7 April 2011, at Boston, MA, USA.
Sobieszczanski-Sobieski, Jaroslaw. 1993. Multidisciplinary design optimization: An emerging new
engineering discipline. Hampton, Virginia: Langley Research Center, NASA.
Song, Y. H., C. S. Chou, and T. J. Stonham. 1999. "Combined heat and power economic dispatch
by improved ant colony search algorithm." Electric Power Systems Research no. 52
(2):115-121. doi: 10.1016/S0378-7796(99)00011-5.
Spittler, J., and J. L. M. Hensen, eds. 1997. Proceedings of Building Simulation '97 - Fifth IBPSA
Conference. Prague: IBPSA.
Stanescu, Magdalena, Stanislaw Kajl, and Louis Lamarche. 2012. Evolutionary algorithm with
three different permutation options used for preliminary HVAC system design. Paper
read at BSO12, 10-11 September 2012, at Loughborough, UK.
Stoyell, J. L., G. Kane, P. W. Norman, and I. Ritchey. 2001. "Analyzing design activities which
affect the life-cycle environmental performance of large made-to-order products."
Design Studies no. 22 (1):67-86. doi: 10.1016/s0142-694x(00)00013-2.
Struck, Christian, Pieter J. C. J. de Wilde, Christina J. Hopfe, and Jan L. M. Hensen. 2009. "An
investigation of the option space in conceptual building design for advanced building
simulation." Advanced Engineering Informatics no. 23 (4):386-395. doi:
10.1016/j.aei.2009.06.004.
Struck, Christian, Jan Hensen, and Petr Kotek. 2009. "On the application of uncertainty and
sensitivity analysis with abstract building performance simulation tools." Journal of
Building Physics no. 33 (1):5-27. doi: 10.1177/1744259109103345.
Stump, Gary M., Mike Yukish, Timothy W. Simpson, and John J. O'Hara. 2004. Trade space
exploration of satellite datasets using a design by shopping paradigm. Paper read at
2004 IEEE Aerospace Conference, 6-13 March 2004, at Big Sky, MT, USA.
Suga, Kentaro, Shinsuke Kato, and Kyosuke Hiyama. 2010. "Structural analysis of Pareto-optimal
solution sets for multi-objective optimization: An application to outer window design
problems using multiple objective genetic algorithms." Building and Environment no. 45
(5):1144-1152. doi: 10.1016/j.buildenv.2009.10.021.
Swift Lee Office. Global Holcim Awards finalist 2012: Zero net energy school building, Los
Angeles, CA, USA 2012 [cited 31 January 2013]. Available from
http://www.holcimfoundation.org/Portals/1/docs/A12/finalists/A12GLfiUSca/A12GLfiU
Sca-posterhigh.pdf.
Tanaka, Yoichi, Yoshito Umeda, Tomoyuki Hiroyasu, and Mitsunori Miki. 2007. Optimal design of
combined heat and power system using a genetic algorithm. Paper read at ISETS07,
23-25 November 2007, at Nagoya, Japan.
The European Union. 2012. "Directive 2012/27/EU of the European Parliament and of the
Council of 25 October 2012." Official Journal of the European Union no. 55. doi:
10.3000/19770677.L_2012.315.eng.
Torcellini, Paul, Shanti Pless, Michael Deru, and Drury Crawley. 2006. Zero energy buildings: A
critical look at the definition. Paper read at 2006 ACEEE Summer Study on Energy
Efficiency in Buildings - Less is More: En Route to Zero Energy Buildings, 16-18 August
2006, at Pacific Grove, CA.
Torres, Santiago L., and Yuzo Sakamoto. 2007. Facade design optimization for daylighting with a
simple genetic algorithm. Paper read at Building Simulation 2007, 3-6 September 2007,
at Beijing, China.
Toth, Bianca, Flora Salim, Jane Burry, John H. Frazer, Robin Drogemuller, and Mark Burry. 2011.
"Energy-oriented design tools for collaboration in the cloud." International Journal of
Architectural Computing no. 9 (4):339-360. doi: 10.1260/1478-0771.9.4.339.
Trebilcock, Maureen, Brian Ford, and Robin Wilson. 2006. Integration of sustainability in the
design process of contemporary architectural practice. Paper read at PLEA 2006, 6-8
September 2006, at Geneva, Switzerland.
Tuhus-Dubrow, Daniel, and Moncef Krarti. 2010. "Genetic-algorithm based approach to optimize
building envelope design for residential buildings." Building and Environment no. 45
(7):1574-1581. doi: 10.1016/j.buildenv.2010.01.005.
Turrin, Michela, Peter von Buelow, and Rudi Stouffs. 2011. "Design explorations of performance
driven geometry in architectural design using parametric modeling and genetic
algorithms." Advanced Engineering Informatics no. 25 (4):656-675. doi:
10.1016/j.aei.2011.07.009.
Underwood, Jason, and Umit Isikdag, eds. 2009. Handbook of Research on Building Information
Modeling and Construction Informatics: Concepts and Technologies. Information Science
Publishing.
USGBC. 2003. Building momentum: National trends and prospects for high-performance green
buildings. Washington, DC, USA: U.S. Green Building Council (USGBC).
Vasebi, A., M. Fesanghary, and S. M. T. Bathaee. 2007. "Combined heat and power economic
dispatch by harmony search algorithm." International Journal of Electrical Power &
Energy Systems no. 29 (10):713-719. doi: 10.1016/j.ijepes.2007.06.006.
Verbeeck, Griet, and Hugo Hens. 2007. "Life cycle optimization of extremely low energy
dwellings." Journal of Building Physics no. 31 (2):143-177. doi:
10.1177/1744259107079880.
Vetterli, Jan, and Michael Benz. 2012. "Cost-optimal design of an ice-storage cooling system
using mixed-integer linear programming techniques under various electricity tariff
schemes." Energy and Buildings no. 49:226-234. doi: 10.1016/j.enbuild.2012.02.012.
Wang, Shengwei, and Xinqiao Jin. 2000. "Model-based optimal control of VAV air-conditioning
system using genetic algorithm." Building and Environment no. 35 (6):471-487. doi:
10.1016/S0360-1323(99)00032-3.
Wang, Weimin, Hugues Rivard, and Radu Zmeureanu. 2006. "Floor shape optimization for green
building design." Advanced Engineering Informatics no. 20 (4):363-378. doi:
10.1016/j.aei.2006.07.001.
Wang, Weimin, Hugues Rivard, and Radu G. Zmeureanu. 2003. Optimizing building design with
respect to life-cycle environmental impacts. Paper read at Building Simulation 2003, 11-
14 August 2003, at Eindhoven, Netherlands.
Wang, Weimin, Radu Zmeureanu, and Hugues Rivard. 2005. "Applying multi-objective genetic
algorithms in green building design optimization." Building and Environment no. 40
(11):1512-1525. doi: 10.1016/j.buildenv.2004.11.017.
Welle, Benjamin, John Haymaker, Martin Fischer, and Vladimir Bazjanac. 2012. CAD-centric
attribution methodology for multidisciplinary optimization (CAMMO): Enabling
designers to efficiently formulate and evaluate large design spaces. CIFE, Stanford
University.
Welle, Benjamin, John Haymaker, and Zack Rogers. 2011. "ThermalOpt: A methodology for
automated BIM-based multidisciplinary thermal simulation for use in optimization
environments." Building Simulation no. 4 (4):293-313. doi: 10.1007/s12273-011-0052-5.
Welle, Benjamin, Zack Rogers, and Martin Fischer. 2012. "BIM-centric daylight profiler for
simulation (BDP4SIM): A methodology for automated product model decomposition and
recomposition for climate-based daylighting simulation." Building and Environment no.
58 (0):114-134. doi: 10.1016/j.buildenv.2012.06.021.
Wetter, Michael, and Elijah Polak. 2005. "Building design optimization using a convergent
pattern search algorithm with adaptive precision simulations." Energy and Buildings no.
37 (6):603-612. doi: 10.1016/j.enbuild.2004.09.005.
Wetter, Michael, and Jonathan Wright. 2003. Comparison of a generalized pattern search and a
genetic algorithm optimization method. Paper read at Building Simulation 2003, 11-14
August 2003, at Eindhoven, Netherlands.
Wetter, Michael, and Jonathan Wright. 2004. "A comparison of deterministic and probabilistic
optimization algorithms for nonsmooth simulation-based optimization." Building and
Environment no. 39 (8):989-999. doi: 10.1016/j.buildenv.2004.01.022.
Weytjens, Lieve, Shady Attia, Griet Verbeeck, and André De Herde. 2011. "The ‘architect-
friendliness’ of six building performance simulation tools: A comparative study."
International Journal of Sustainable Building Technology and Urban Development no. 2
(3):237-244. doi: 10.5390/susb.2011.2.3.237.
Weytjens, Lieve, and Griet Verbeeck. 2010. Towards 'architect-friendly' energy evaluation tools.
Paper read at SimAUD 2010, 12-15 April 2010, at Orlando, FL, USA.
Wright, J. A. 1996. "HVAC optimisation studies: Sizing by genetic algorithm." Building Services
Engineering Research & Technology no. 17 (1):7-14. doi: 10.1177/014362449601700102.
Wright, Jonathan, and Monjur Mourshed. 2009. Geometric optimization of fenestration. Paper
read at Building Simulation 2009, 27-30 July 2009, at Glasgow, Scotland.
Wright, Jonathan A., Heather A. Loosemore, and Raziyeh Farmani. 2002. "Optimization of
building thermal design and control by multi-criterion genetic algorithm." Energy and
Buildings no. 34 (9):959-972. doi: 10.1016/S0378-7788(02)00071-3.
Wright, Jonathan, and Ali Alajmi. 2005. The robustness of genetic algorithms in solving
unconstrained building optimization problems. Paper read at Building Simulation 2005,
15-18 August 2005, at Montréal, Canada.
Wright, Jonathan, and Raziyeh Farmani. 2001. The simultaneous optimization of building fabric
construction, HVAC system size, and the plant control strategy. Paper read at Building
Simulation 2001, 13-15 August 2001, at Rio de Janeiro, Brazil.
Wright, Jonathan, and Heather Loosemore. 2001. The multi-criterion optimization of building
thermal design and control. Paper read at Building Simulation 2001, 13-15 August 2001,
at Rio de Janeiro, Brazil.
Wright, Jonathan, and Yi Zhang. 2005. An "ageing" operator and its use in the highly constrained
topological optimization of HVAC system design.
Wright, Jonathan, Yi Zhang, Plamen Angelov, Victor Hanby, and Richard Buswell. 2008.
"Evolutionary synthesis of HVAC system configurations: Algorithm development (RP-
1049)." HVAC&R Research no. 14 (1):33-55.
Xie, Y. M., P. Felicetti, J. W. Tang, and M. C. Burry. 2005. "Form finding for complex structures
using evolutionary structural optimization method." Design Studies no. 26 (1):55-72. doi:
10.1016/j.destud.2004.04.001.
Yang, Fan, and Dino Bouchlaghem. 2010. "Genetic algorithm-based multiobjective optimization
for building design." Architectural Engineering and Design Management no. 6 (1):68-82.
doi: 10.3763/aedm.2008.0077.
Yessios, Christ I. 2013. Are we forgetting design? AECbytes, 24 November 2004 [cited 29
September 2013]. Available from
http://www.aecbytes.com/viewpoint/2004/issue_10.html.
Yi, Yun Kyu, and Ali M. Malkawi. 2009. "Optimizing building form for energy performance based
on hierarchical geometry relation." Automation in Construction no. 18 (6):825-833. doi:
10.1016/j.autcon.2009.03.006.
Yi, Yun Kyu, and Ali M. Malkawi. 2012. "Site-specific optimal energy form generation based on
hierarchical geometry relation." Automation in Construction no. 26:77-91. doi:
10.1016/j.autcon.2012.05.004.
Yildiz, Yusuf, Koray Korkmaz, Türkan Göksal Özbalta, and Zeynep Durmus Arsan. 2012. "An
approach for developing sensitive design parameter guidelines to reduce the energy
requirements of low-rise apartment buildings." Applied Energy no. 93:337-347. doi:
10.1016/j.apenergy.2011.12.048.
Zemella, Giovanni, Davide De March, Matteo Borrotti, and Irene Poli. 2011. "Optimised design of
energy efficient building façades via Evolutionary Neural Networks." Energy and
Buildings no. 43 (12):3297-3302. doi: 10.1016/j.enbuild.2011.10.006.
Zhou, G., P. Ihm, M. Krarti, S. Liu, and G.P. Henze. 2003. Integration of an internal optimization
module within EnergyPlus. Paper read at Building Simulation 2003, 11-14 August 2003,
at Eindhoven, Netherlands.
Znouda, Essia, Nadia Ghrab-Morcos, and Atidel Hadj-Alouane. 2007. "Optimization of
Mediterranean building design using genetic algorithms." Energy and Buildings no. 39
(2):148-153. doi: 10.1016/j.enbuild.2005.11.015.
APPENDIX A SUMMARY OF SELECTED WORKS ON APPLYING
OPTIMIZATION IN BUILDING DESIGN
TABLE A - 1: SUMMARY OF SELECTED WORKS ON APPLYING OPTIMIZATION IN BUILDING DESIGN FOR
PERFORMANCE FEEDBACK.
ABBREVIATION KEY
OPTIMIZING SYSTEM: B: whole building; CHP: combined heat and power system; E: envelope (no room, including roof); F: façade (no
room, including window); H: HVAC system; L: lighting; R: typical room/zone; S: structure; SP: floor/space layout (no room or zone);
U: urban/outdoor
OBJECTIVES: A: area; AC: annual cost; BH: building height; C: capital cost; CC: construction cost; CL: cooling load; CS: control strategy;
D: daylight; DEC: daily energy cost; DHW: domestic hot water; DS: density; E: whole building energy consumption; EC: energy cost;
EE: exergetic efficiency; ES: exterior space; FC: façade construction; G: glare; H: humidity; HC: heating cost; HL: heating load;
HS: HVAC system size; IAQ: indoor air quality; IC: indoor comfort; IR: income revenue; L: lighting; LL: light level; LP: lighting power;
LC: layout cost; LCC: life cycle cost; LCEI: life cycle environmental impact; LCI: life cycle inventory; MS: mean squared error; O:
overshoot; OC: operation cost; OT: outdoor thermal; P: pollution; RC: retrofit cost; RE: renewable energy; RH: relative humidity; SA:
shape adjacency; SC: shape connection; SG: solar gain; SO: shape overlapping; SR: solar radiance; SS: structural system; SW:
structure weight; T: time; TC: thermal comfort; TL: thermal load; VC: visual comfort; VF: ventilation flow.
OPTIMIZATION ALGORITHM: AC: ant colony algorithm; ANN: artificial neural network; ATC: analytic target cascade; BF: brute force;
DoE: design of experiment; DP: dynamic programming; DS: direct search; ES: evolutionary system; ESO: evolutionary structural
optimization; GA: genetic algorithm (including evolutionary algorithm); HC: hill-climbing; HJ: Hooke and Jeeves; MACO: multi-criteria
ant colony optimization; MOGA: multi-objective genetic algorithm; NN: neural network; NS-DP: non-series dynamic programming;
PODP: Pareto optimal dynamic programming; PS: particle swarm; SA: simulated annealing; SGA: segregated genetic algorithm
VARIABLES: B: building form; C: constructions; CD: construction date; CT: control; D: distance; DHW: domestic hot water; G: glazing; H:
HVAC system; I: interaction; L: lighting info; M: cost; NV: natural ventilation; O: occupancy; OS: operational/control strategies; P:
position; R: renewables; RS: renovation strategies; S: sizing (except overall building form); SP: set-points; ST: structure system; T:
topology; TR: tree; UR: utility rate
DESIGN PHASE: ED: early design stage; DD: design development; OM: operation maintenance
ACTOR: A: architect/designer; AS: architectural science; AE: architectural engineering; BS: building science/building engineer; CE: civil
engineer; CS: computer science; EE: electrical engineering; ENE: environmental engineering; ME: mechanical engineer; PE: physics
scientist/engineer; SE: structural engineer
EXPLORED GEOMETRY: B: building form; E: envelope (including roof); F: façade (including window); R: room; S: structural form/frame;
SP: space layout/topology; U: urban layout
* MCDM (multi-criteria decision making)
NO. | REFERENCES | PROJECT | MO (Y/N/WS) | OPTIMIZING SYSTEM | OBJECTIVES | OPTIMIZATION ALGORITHM | OPTIMIZATION TOOL | VARIABLES | DESIGN PHASE | ACTOR | FOR DESIGNER? | EXPLORED GEOMETRY
1. Gero and Radford
(1978)
N L C DP ? L, P, S ? AS ? E, R
2. Radford and Gero
(1979)
N F C, E DP ? S ? AS ? F
3. D'Cruz, Radford,
and Gero (1983)
Y B TL, D,
A, C
PODP ? B, C, G,
S
? AS ? B, E
(Box)
4. D'Cruz and Radford
(1987)
Y* B TL, D,
A, C
PODP FORTRAN B, C, G,
S
ED AS Y B (Box)
5. Bouchlaghem and
Letherman (1990)
N B TC DS (Simplex;
non-random
complex)
? C, G, S ? BS Y F
6. Grierson and Pak
(1993)
N S SW GA ? S, T ? CE N S
7. Khemlani (1995) N R E BF C (GENWIN) C, L, S, T ? A Y F
8. Wright (1996) N H C GA ? H, S ? CE N N/A
9. Al-Homoud (1997) N B
E DS ? B, C, G,
S
ED AE ? B, F
(Box)
10. Damski and Gero
(1997)
Y SP
SA, SC,
SO
GA (ES) ? S, T ES AS Y SP
11. Huang and Lam
(1997)
N H O, T,
MS
GA ? CT OM ME N N/A
12. Marks (1997) Y E C, HC GA (Nonlinear) CAMOS B, G, S ? BS Y B
13. Gero and Kazakov
(1998)
N SP
LC GA ? D, I, T ES AS Y SP
14. Jo and Gero (1998) N SP LC GA ? D, I, T ES AS Y SP
15. Peippo, Lund, and
Vartiainen (1999)
Y B AC, E DS (HJ) ? B, G, H,
L, R
? PE N B, F
(Box)
16. Song, Chou, and
Stonham (1999)
N CHP OC AC ? OS OM EE N N/A
17. Asiedu, Besant, and Gu (2000) N H LCC GA (SGA) ? OS, S, UR N/A ME N N/A
18. Bouchlaghem
(2000)
N B, F TC DS (Simplex;
non-random
complex)
? C, G, S ? BS Y F
19. Wang and Jin
(2000)
WS H OC GA ? SP FM BS N N/A
20. Hauglustaine and
Azar (2001)
WS* B
H, L,
CC, OC
GA Allegro©
Common Lisp
(AMCE)
B, F ED PE N SP
21. Caldas (2001) Y B E, HE, L GA Generative
System
B, S ? A Y B, F
22. Caldas and Rocha
(2001)
Y B D, E GA Generative
System
B, S ? A Y F
23. Romero, Rincóny,
and Almao (2001)
N E TC DoE, NN, SA, GA ? C ? BS ? E
24. Wright and
Loosemore (2001)
Y R DEC, TC GA (MOGA) ? C, CT, H ? CE N N/A
25. Wright and
Farmani (2001)
N R EC (CS,
HS, FC)
GA ? C, H,
OS, S,
SP
? CE N N/A
26. Caldas (2002a) Y B D, E GA GS B, S, T ? A Y B
27. Caldas (2002b) Y B D, E GA GS B, S, T ? A Y B
28. Caldas and Norford
(2002)
N B E GA GS C, G, S ? A Y F
29. Chow et al. (2002) N H OC GA, NN ? SP OM ENE N N/A
30. Coley and Schukat
(2002)
N B E GA + Human ? C ? PE N N/A
31. Grierson and
Khajehpour (2002)
Y B
C, IR,
OC
GA ? B, C, G,
S, ST
ED CE ? B (box)
32. Jedrzejuk and Marks (2002) Y B CC, HL, P GA ? B, C, G, S, H ? PE ? B (box)
33. Kolokotsa et al. (2002) WS B C (IC, E) GA ? CT, H, SP OM EE N N/A
34. Nielsen (2002) N B, R LCC, D,
E, IC
DS, SA Matlab B, C, G,
S
ED CE Y B
35. Wright, Loosemore,
and Farmani (2002)
Y R EC, TC GA (MOGA) ? H, SP,
OS
OM CE N N/A
36. Caldas and Norford
(2003a)
Y B C, E GA ? B, C, G,
H, S, OS
? A Y B, F
37. Caldas and Norford
(2003b)
Y B L, HL GA GS B, C, G,
S
? A Y B, F
38. Caldas, Norford,
and Rocha (2003)
Y B, F E, L, HL GA GS B, F, S ED A Y B, F
39. Chouchoulas (2003) Y E BH, DS,
ES
GA ? B, S ED AE Y E
40. Fong, Hanby, and
Chow (2003)
N H E GA (EA) Matlab CT, H,
SP
OM BS N N/A
41. Holst (2003) WS B E, IC DS (HJ) GenOpt C, G, S ? BE ? F
42. Mahdavi and
Mahattanatawe
(2003)
N F VC, E DS (HC), SA ? G, S ? BE ? F
43. Mourshed, Kelliher,
and Keane (2003)
ArDOT
Y B ? DoE, GA (PS) C in VisualDOC ? ? CE Y ?
44. Park, Augenbroe,
and Messadi (2003)
WS F D DS (Nonlinear) ? G, S ? A ? F
45. Wang, Rivard, and
Zmeureanu (2003)
WS E LCC,
LCEI
GA ? B, C, G,
S
? CE F
46. Wetter and Wright
(2003)
N B EC DS(HJ) vs. GA GenOpt G, S, H,
SP
? ENE,
CE
? F
47. Zhou et al. (2003) N B E DS (Nelder-Mead) ? SP OM AE N N/A
48. Choudhary (2004) N H TC ATC Matlab G, H, S,
SP
? A ? F
49. Christensen,
Barker, and
Horowitz (2004)
Y B AC, EC DS BEopt C, G, H ? BS,
PE
N N/A
50. Drogemuller,
Crawford, and Egan
(2004)
N* B N/A N/A N/A D, H, P,
ST
ED BS N R, S,
SP
51. Henze, Felsmann,
and Knabe (2004)
N H EC DP GenOpt H, SP, S ? AE N N/A
52. Kato and Lee
(2004)
N H EC GA ? H, OT, S ? ME N N/A
53. Nassif, Kajl, and
Sabourin (2004b)
Y H E, TC GA (NSGA-II) ? SP, H,
OS
? ME N N/A
54. Nassif, Kajl, and
Sabourin (2004a)
Y H E, TC GA (NSGA-II) ? SP, H,
OS
? ME N N/A
55. Park et al. (2004) WS F E, TC,
VC
DS (Nonlinear) Matlab H, P, S OM AE,
ME
? F
56. Wetter and Wright
(2004)
N H EC DS (CS, HJ), PSO,
PSO-HJ, GA, DS
GenOpt G, S, SP ? CE,
ENE
N F
57. Al-Homoud (2005) N E TC DS (Nelder-Mead) ENEROPT C, G, OS, P ES AE ? N/A
58. Caldas (2005) Y B E, D GA GDS B, T ? A Y B
59. Kicinger,
Arciszewski, and De
Jong (2005)
N S SS, CC EA Emergent
Designer (Java)
ST ? SE N S
60. Lu et al. (2005) N H E GA ? CT, H ? EE N N/A
61. Nassif, Kajl, and
Sabourin (2005)
Y H E, TC GA (NSGA-II) ? CT, H ? ME N N/A
62. Ruiz Pardo et al.
(2005)
N B IR, EC,
T
? GenOpt C, G, M ? BS,
PE
? F
63. Wang, Zmeureanu,
and Rivard (2005)
Y B LCC,
LCEI
GA ? B, C, P ? CE ? B
64. Wetter and Polak
(2005)
N B E DS (CS, HJ) ? G, S ? EE N F
65. Wright and Alajmi
(2005)
N B E GA ? G, S, SP ? CE N F
66. Wright and Zhang
(2005)
N H E GA ? H ? CE N N/A
67. Xie et al. (2005) N S SS ESO ? ST ? A,
CE
N B
68. Anderson,
Christensen, and
Horowitz (2006)
N B C, E, RE DS BEopt C, G, H ? PE N N/A
69. Caldas (2006) Y B C, E, L GA GENE_ARCH B, C, G,
S
? A Y B
70. Fong, Hanby, and
Chow (2006)
N H E EP ? H, SP ? BS N N/A
71. Li, Nalim, and Haldi
(2006)
N CHP C GA (Matlab
GAOT)
? H, SP ? ME N N/A
72. Narahara and
Terzidis (2006a)
N B CC, SR,
VC
GA ? B, S ? A Y B
73. Narahara and
Terzidis (2006b)
N B CC, SR,
VC
GA ? B, S ? A Y B
74. Ouarghi and Krarti
(2006)
N B EC, CC NN-GA ? B, C, G,
T
? ME N B
75. Shea, Sedgwick,
and Antonuntto
(2006)
Y E L, SR MACO ? G ? A Y N/A
76. Wang, Rivard, and Zmeureanu (2006) Y B LCC, LCEI GA ? B, C, S, ST ? CE ? B
77. Adamski (2007) N E
HL, CC ? ? B, S ? PE ? B
78. Charron (2007) N B C, E GA GenOpt B, C, H,
S
ES CE ? N/A
79. Djuric et al. (2007) N B C, TC GA GenOpt C, H ? ME,
PE
N N/A
80. Lee (2007) N HVAC TC GA ? H, NV ? ME N N/A
81. Rüdenauer and
Dohmen (2007)
N S C, SW GA ? ST ? A Y S
82. Tanaka et al. (2007) N CHP E GA ? H, OS ? CS,
PE
N N/A
83. Torres and
Sakamoto (2007)
N F D GA ? C, G, S ? A Y F
84. Vasebi, Fesanghary,
and Bathaee (2007)
N CHP OC Harmony Search ? OS ? EE N N/A
85. Verbeeck and Hens
(2007)
Y B E, LCC,
LCI
GA Matlab C, NV, R ? A ? N/A
86. Znouda, Ghrab-
Morcos, and Hadj-
Alouane (2007)
N B C, E GA (Simple) ? C, G ? PE ? B (box)
87. Caldas (2008) Y B C, E GA GENE_ARCH C, S, T ? A Y B
88. Chen, Ooka, and
Kato (2008)
N U OT GA ? B, TR ? BS ? U
89. Diakaki,
Grigoroudis, and
Kolokotsa (2008)
Y E CC, E DS ? C ? BS,
PE
? F
90. Emmerich et al.
(2008)
Y B EC, TC NSGA-II, HJ Topgui B, G, H,
L,S, NV
DD PE ? B
91. Grobman, Yezioro,
and Capeluto
(2008)
GenPOD
N E E, SR,
SS, VC
? GenPOD B, S
? A Y B
92. Magnier (2008) Y B E, TC GA, ANN, EA
(GAINN)
Matlab, GenOpt C, G, S ? CE ? F
93. Wright et al. (2008) N HVAC E GA ? H, OS ? ME N N/A
94. Al-Homoud (2009) N B E, TC DS (Nelder-Mead) ? C, G ? AE ? F
95. Flager et al. (2009)
PIDO
Y B, S
CC, LCC MDO-GA
PHX
ModelCenter
B, G, S,
ST
? CE ? B (box)
96. Geyer (2009) Y S C, E MOGA ? C, S ? AE Y B, S
97. Hamdy, Hasan, and
Siren (2009)
Y B C, E GA Matlab C, S, H ? ME ? N/A
98. Hopfe (2009) Y B E, C, TC GA (NSGA-II) Topgui B, C, G,
S
DD A ? N/A
99. Janssen (2009) Y B E EA EDDE B, G, S ? A Y B
100. Kämpf and Darren (2009) N B HL, CL CMA-EA, HDE ? C, G ? BS ? U
101. Kämpf and
Robinson (2009)
N B E CMA-EA, HDE ? C, G ? BS ? U
102. Kayo and Ooka
(2009)
Y CHP E, C GA ? H ? BS N N/A
103. Manzan and Pinto
(2009)
N E E GA (MOGA-II) modeFRONTIER Shading ? PE ? F
104. Mossolly, Ghali,
and Ghaddar
(2009)
Y H TC, IAQ GA ? CT, SP ? ME N N/A
105. Mourshed,
Manthilake, and
Wright (2009)
Y B E, SA GA ArDOT P, S, T ? A,
CE
? SP
106. Ooka and Komamura (2009) N CHP CO2, E GA (MIGA) ? H, S, OS ? BS ? N/A
107. Palonen, Hasan,
and Siren (2009)
N E E, LCC GA (NSGA-II) ? C, H ? ME N F
108. Pernodet, Lahmidi,
and Michel (2009)
Y E E, C GA
(GenetikSolver)
? C, L OM PE N N
109. Wright and
Mourshed (2009)
N B E GA ? T ? CE N F
110. Yi and Malkawi
(2009)
N B E, TC GA ? B, C, S ? A Y B
111. Bucking et al.
(2010)
N B E EA Clojure, Matlab CT OM CE,
ENE
? N/A
112. Bukhari, Frazer,
and Drogemuller
(2010)
? B E, TC EA, GA ? ? ? A Y B
113. Diakaki et al. (2010) Y* B C, E, CO2 ? ? C, H ? ENE N N/A
114. Hamdy, Hasan, and Siren (2010) Y H C, CO2 GA IDA-ICE 3.0 C, G, H, S ? PE N N/A
115. Kämpf et al. (2010) Y B SR CMA-EA, HDE ? E ? BS ? B, U
116. Kayo and Ooka
(2010)
N H E GA ? H, SP ? A N N/A
117. Keough and
Benjamin (2010)
Y S SS, SW GA (MDO) CatBot (C#) -
ModeFRONTIER
SS ? A,
SE
Y ST
118. Magnier and
Haghighat (2010)
Y B E, TC GA, ANN GenOpt H, SP ? CE N N/A
119. Suga, Kato, and
Hiyama (2010)
Y F C, E, L,
TC
MOGA ? C, G, S ? BS Y S
120. Tuhus-Dubrow and
Krarti (2010)
N B E, LCC
GA ? B, C ? AE Y B
121. Yang and
Bouchlaghem
(2010)
Y N/A N/A GA ? N/A ? CE N N/A
122. Abdollahi and
Meratizaman
(2011)
Y CHP
C, EE GA ? H, S ? ME N N/A
123. Bichiou and Krarti (2011) N B LCC GA, PSO, Brute force ? B, C, H, SP ? AE, CE N B
124. Caldas (2011)
N B E, SC GA GENE_ARCH B, C, S ? A Y B, SP
125. Cassol et al. (2011) Y L LL, LP Heuristic (GEO) ? L, P ? ME N N/A
126. Chantrelle et al.
(2011)
Y* B CC, E,
TC
GA (NSGA-II) ? CT, C OM BS ? N/A
127. Evins, Pointer, and
Vaidyanathan
(2011a)
Y E E GA (NSGA-II) ? G, SP ? PE N F
128. Evins, Pointer, and
Vaidyanathan
(2011b)
Y CHP CC, E,
OC
GA (NSGA-II) ? H, CD ? PE N N/A
129. Hamdy, Hasan, and Siren (2011a) Y B CO2, CC GA (NSGA-II) ? C, G ? BS N N/A
130. Hamdy, Hasan, and
Siren (2011b)
Y B E, C, TC GA (NSGA-II) ? C, G BS N N/A
131. Hoes et al. (2011) Y* H E, C, TC GA (NSGA-II) ? H, SP ? PE N N/A
132. Janssen, Basol, and
Chen (2011)
Y B D, E EA ? G, C, S ? A Y E
133. Kusiak, Xu, and
Tang (2011)
Y
H EC, TC PS ? H, SP ? ME N N/A
134. Lee, Chen, and Kao
(2011)
N H TC EA, GA, PS ? H, SP ? ME N N/A
135. Loonen, Trcka, and
Hensen (2011)
Y B EC, TC GA (NSGA-II) modeFRONTIER C, G ? BS ? F
136. Schoch,
Prakasvudhisarn,
and Praditsmanont
(2011)
N B LCC ? ? B, S ? A Y B
137. Shi (2011) Y R HL, CL GA modeFRONTIER C, H ? A ? N/A
138. Turrin, von Buelow,
and Stouffs (2011)
N E D, SR, S GA ParaGen ST, T ? A Y B, E
139. Welle, Haymaker,
and Rogers (2011)
Y B D, E, TC GA PHX
ModelCenter
ThermalOpt
B, C, G ? CE Y B
140. Zemella et al. (2011) Y R CO2, E NN ? C, G ? PE N F
141. Asadi et al. (2012a) Y B RC, EC DS (Tchebycheff) GenOpt,
MATLAB
C, G, R OM ME N N/A
142. Asadi et al. (2012b) Y B RC, EC DS (Tchebycheff) GenOpt,
MATLAB
C, G, R OM ME N N/A
143. Asadi et al. (2012c) Y B RC, EC DS (Tchebycheff) GenOpt,
MATLAB
C, G, R OM ME N N/A
144. Brownlee and
Wright (2012)
Y N/A N/A N/A N/A C, OS ? CE N N/A
145. Caldas and Santos
(2012)
Y B E, SC GA GENE_ARCH B, S ? A Y SP, U
146. Čongradac and
Kulić (2012)
N H E GA Matlab H, ST ? BS N N/A
147. Evins et al. (2012) Y B E GA (NAGA-II) ? T, C, R ? PE N N/A
148. Evins, Pointer, and
Burges (2012)
Y R CC, E GA (NSGA-II) ? T, C, R ? PE N N/A
149. Fesanghary, Asadi, and Geem (2012) Y B LCC, CO2 Harmony Search ? C ? CE, ME N N/A
150. Gagne and
Andersen (2012)
Y E D, G GA (Micro-GA) ? C, G ? BS ? F
151. Ghiaus and Jabbour (2012) WS B CC, OC Design of Experiment ? H, P ? ME N N/A
152. Hamdy, Palonen,
and Hasan (2012)
Y B E, LCC GA (NSGA-II
variant)
GenOpt C, G, R ? PE N N/A
153. Huang, Kato, and
Hu (2012)
Y R H, CC GA (Simple) ? H, P ? PE N N/A
154. Jin and Overend (2012) Y B C, CO2, TC GA (NSGA-II) ? C, RS OM PE N N/A
155. Lee and Cheng
(2012)
N H E PSO, DS (HJ) ? H, SP ? ME N N/A
156. Pountney (2012) Y B CC, CO2 GA (SPEA2) ? C, L, OS ? AE N N/A
157. Rapone and Saro (2012) N B CO2 PSO ? C, S ? EE, ME N E
158. Sahu,
Bhattacharjee, and
Kaushik (2012)
N E E GA ? C, H ? CE N N/A
159. Salminen, Palonen,
and Sirén (2012)
Y B E, CC GA (PA-NSGA-II) IDA-ICE C, L, OS ? PE N N/A
160. Shaneb, Taylor, and
Coates (2012)
N CHP OC DS (Linear) ? OS ? CS N N/A
161. Stanescu, Kajl, and
Lamarche (2012)
N B E GA ? H ? PE N N/A
162. Vetterli and Benz (2012) N R C DS (Mixed integer linear) ? H, S ? ME N N/A
163. Welle et al. (2012) Y B E, D ? PHX
ModelCenter
CAMMO
C, G, S ? CE N F
164. Welle, Rogers, and
Fischer (2012)
Y R E, D ? PHX
ModelCenter
BPF4SIM
C, G, S ? CE N F
165. Gholamhossein
Abdollahi (2013)
Y CHP CC, EE,
OC
MOEA ? H ? ME N N/A
166. Hamdy, Hasan, and
Siren (2013)
N B E, LCC GA ? B, C, H,
S, R
? PE N N/A
APPENDIX B EXPERIMENT FRAMEWORK + QUESTIONNAIRE
B.1 DESIGN PROFESSION CASE-BASED EXPERIMENT FRAMEWORK
B.2 DESIGN PROFESSION CASE-BASED EXPERIMENT QUESTIONNAIRE
APPENDIX C H.D.S. BEAGLE ASSUMPTIONS + TEMPLATE
C.1 CEA BUILDING TYPE OPTIONS AND LOAD PROPERTIES
TABLE C - 1: CEA BUILDING TYPE OPTIONS AND RELATED ENERGY LOAD PROPERTIES AS AVAILABLE THROUGH
REVIT’S CEA AT THE TIME OF THE RESEARCH. DATA IS REORGANIZED FROM SOURCE PROVIDED BY
AUTODESK’S CEA REFERENCE (AUTODESK 2012B).
EAM_BuildingType_id | EAM_SpaceType_id | gbXML-Building-Definition | People Per 100 Sq Meter | People Activity Level | People Sensible Heat Gain (W/person) | People Latent Gain (W/person) | People Sensible Heat Gain (Btuh/person) | People Latent Gain (Btuh/person)
Unknown
1 62 AutomotiveFacility 15 Standing, light work; walking 73 59 250 200
2 56 ConventionCenter 25 Standing, light work; walking 73 59 250 200
3 31 Courthouse 3.5 Moderately active office work 73 59 250 200
4 37 DiningBarLoungeOrLeisure 35 Sedentary office work 81 81 275 275
5 34 DiningCafeteriaFastFood 50 Sedentary office work 81 81 275 275
6 36 DiningFamily 35 Sedentary office work 81 81 275 275
7 76 Dormitory 10 Standing, light work; walking 73 59 250 200
8 54 ExerciseCenter 10 Walking 3 mph; light machine work 110 183 375 625
9 105 FireStation 15 Standing, light work; walking 73 59 250 200
10 55 Gymnasium 33.5 Standing, light work; walking 73 59 250 200
11 53 HospitalOrHealthcare 10 Standing, light work; walking 73 59 250 200
12 78 Hotel 2.5 Standing, light work; walking 73 59 250 200
13 107 Library 10 Standing, light work; walking 73 59 250 200
14 64 Manufacturing 2.5 Walking 3 mph; light machine work 110 183 375 625
15 77 Motel 2.5 Standing, light work; walking 73 59 250 200
16 12 MotionPictureTheatre 75 Standing, light work; walking 73 59 250 200
17 76 MultiFamily 2.5 Standing, light work; walking 73 59 250 200
18 65 Museum 33.5 Standing, light work; walking 73 59 250 200
19 94 Office 3.5 Standing, light work; walking 73 59 250 200
20 98 ParkingGarage 2.5 Standing, light work; walking 73 59 250 200
21 24 Penitentiary 10 Standing, light work; walking 73 59 250 200
22 13 PerformingArtsTheater 75 Standing, light work; walking 73 59 250 200
23 105 PoliceStation 15 Standing, light work; walking 73 59 250 200
24 116 PostOffice 3.5 Standing, light work; walking 73 59 250 200
25 124 ReligiousBuilding 75 Standing, light work; walking 73 59 250 200
26 121 Retail 10 Standing, light work; walking 73 59 250 200
27 22 SchoolOrUniversity 25 Standing, light work; walking 73 59 250 200
34 76 SingleFamily 0.945 Standing, light work; walking 73 59 250 200
28 30 SportsArena 75 Standing, light work; walking 73 59 250 200
29 31 TownHall 33.5 Standing, light work; walking 73 59 250 200
30 4 Transportation 50 Standing, light work; walking 73 59 250 200
31 88 Warehouse 5 Standing, light work; walking 73 59 250 200
32 123 Workshop 10 Light bench work 81 139 275 475
EAM_BuildingType_id | gbXML-Building-Definition | Lighting Load Density (W/sqMeter) | Lighting Load Density (W/SqFt) | Power Load Density (W/sqMeter) | Power Load Density (W/SqFt) | Electrical Equipment Radiant Percentage | Carpet (Y/N) | Infiltration Flow (ACH) | Infiltration (CFM/sq ft) | ConditionType: Heated (H), Cooled (C), Vented (V) | OA L/S person | OA Flow Per Area (M3/hr/M2) | Unoccupied Cooling Set Point
Unknown 82
1 AutomotiveFacility 9.69 0.90 16.14 1.50 0.5 N 0.25 0.038 H N/A 27.4 82
2 ConventionCenter 12.91 1.20 15.06 1.40 0.5 Y 0.10 0.038 H&C 8 3.7 82
3 Courthouse 12.91 1.20 15.06 1.40 0.5 Y 0.10 0.038 H&C 8 3.7 82
4 DiningBarLoungeOrLeisure 13.99 1.30 16.14 1.50 0.5 N 0.25 0.038 H&C 15 3.7 82
5 DiningCafeteriaFastFood 15.06 1.40 19.37 1.80 0.5 N 0.25 0.038 H&C 10 3.7 82
6 DiningFamily 17.22 1.60 20.44 1.90 0.5 Y 0.10 0.038 H&C 10 3.7 82
7 Dormitory 10.76 1.00 16.14 1.50 0.5 Y 0.25 0.038 H&C 7.5 3.7 82
8 ExerciseCenter 10.76 1.00 15.06 1.40 0.5 Y 0.25 0.038 H&C 10.5 3.7 82
9 FireStation 10.76 1.00 13.99 1.30 0.5 N 0.10 0.038 H 8 3.7 82
10 Gymnasium 11.84 1.10 18.29 1.70 0.5 N 0.10 0.038 H 13 3.7 82
11 HospitalOrHealthcare 12.91 1.20 17.22 1.60 0.5 N 0.10 0.038 H&C 10 3.7 82
12 Hotel 10.76 1.00 18.29 1.70 0.5 Y 0.10 0.038 H&C 9 3.7 82
13 Library 13.99 1.30 16.14 1.50 0.5 Y 0.10 0.038 H&C 8 3.7 82
14 Manufacturing 13.99 1.30 23.67 2.20 0.5 N 0.10 0.038 H 8 3.7 82
15 Motel 10.76 1.00 21.52 2.00 0.5 Y 0.25 0.038 H&C 9 3.7 82
16 MotionPictureTheatre 12.91 1.20 17.22 1.60 0.5 Y 0.10 0.038 H&C 9 3.7 82
17 MultiFamily 7.53 0.70 10.76 1.00 0.5 Y 0.25 0.038 H&C 7.5 3.7 82
18 Museum 11.84 1.10 17.22 1.60 0.5 Y 0.10 0.038 H&C 8 3.7 82
19 Office 10.76 1.00 13.99 1.30 0.3 Y 0.10 0.038 H&C 10 3.7 82
20 ParkingGarage 3.23 0.30 3.23 0.30 0.5 N 5.00 0.038 V N/A 27.4 82
21 Penitentiary 10.76 1.00 12.91 1.20 0.5 N 0.25 0.038 H 9 3.7 82
22 PerformingArtsTheater 17.22 1.60 16.14 1.50 0.5 Y 0.25 0.038 H&C 9 3.7 82
23 PoliceStation 10.76 1.00 13.99 1.30 0.3 Y 0.10 0.038 H&C 8 3.7 82
24 PostOffice 11.84 1.10 17.22 1.60 0.5 N 0.10 0.038 H&C 8 3.7 82
25 ReligiousBuilding 13.99 1.30 23.67 2.20 0.5 N 0.10 0.038 H&C 8 3.7 82
26 Retail 16.14 1.50 20.44 1.90 0.5 N 0.10 0.038 H&C 7.75 3.7 82
27 SchoolOrUniversity 12.91 1.20 16.14 1.50 0.5 N 0.25 0.038 H 8 3.7 82
34 SingleFamily 10.76 1.00 10.76 1.00 0.5 Y 0.50 0.038 H&C 7.5 3.7 82
28 SportsArena 11.84 1.10 16.14 1.50 0.5 N 0.10 0.038 H&C 8 3.7 82
29 TownHall 11.84 1.10 15.06 1.40 0.5 Y 0.10 0.038 H&C 8 3.7 82
30 Transportation 10.76 1.00 12.91 1.20 0.5 N 0.10 0.038 H&C 8 3.7 82
31 Warehouse 8.61 0.80 12.91 1.20 0.5 N 0.10 0.038 H 6 0.9 82
32 Workshop 15.06 1.40 18.29 1.70 0.5 N 0.10 0.038 H&C 8 3.7 82
C.2 CEA SPACE TYPE OPTIONS AND LOAD PROPERTIES
TABLE C - 2: CEA SPACE TYPE OPTIONS AND RELATED ENERGY LOAD PROPERTIES AS AVAILABLE THROUGH
REVIT’S CEA AT THE TIME OF THE RESEARCH. DATA IS REORGANIZED FROM SOURCE PROVIDED BY
AUTODESK’S CEA REFERENCE (AUTODESK 2012B).
EAM_SpaceType_id | gbXML-Space-Definitions | People Per 100 Sq Meter | People Sensible Heat Gain (W/person) | People Latent Gain (W/person) | People Sensible Heat Gain IP (Btuh/person) | People Latent Heat Gain IP (Btuh/person) | Lighting Load Density (W/sqMeter) | Lighting Load Density (W/SqFt) | Power Load Density (W/sqMeter) | Power Load Density (W/SqFt) | Electrical Equipment Radiant Percentage | Carpet (Y/N)
1 ActiveStorage 3 73 59 250 200 8.61 0.80 3.2 0.30 0.5 N
2 ActiveStorageHospitalOrHealthcare 3 73 59 250 200 9.69 0.90 3.2 0.30 0.5 N
3 AirOrTrainOrBusBaggageArea 100 73 59 250 200 10.76 1.00 10.8 1.00 0.5 N
4 AirportConcourse 100 73 59 250 200 6.46 0.60 10.8 1.00 0.5 Y
5 AtriumEachAdditionalFloor 33 73 59 250 200 2.15 0.20 5.8 0.54 0.5 N
6 AtriumFirstThreeFloors 33 73 59 250 200 6.46 0.60 5.8 0.54 0.5 N
7 AudienceOrSeatingAreaPenitentiary 150 66 31 225 105 7.53 0.70 5.8 0.54 0.5 N
8 AudienceOrSeatingAreaExerciseCenter 150 66 31 225 105 3.23 0.30 5.8 0.54 0.5 N
9 AudienceOrSeatingAreaGymnasium 150 66 31 225 105 4.31 0.40 5.8 0.54 0.5 N
10 AudienceOrSeatingAreaSportsArena 150 66 31 225 105 4.31 0.40 10.8 1.00 0.5 N
11 AudienceOrSeatingAreaConventionCenter 150 66 31 225 105 7.53 0.70 10.8 1.00 0.5 N
12 AudienceOrSeatingAreaMotionPictureTheatre 150 66 31 225 105 12.91 1.20 5.8 0.54 0.5 Y
13 AudienceOrSeatingAreaPerformingArtsTheatre 150 66 31 225 105 27.99 2.60 5.8 0.54 0.5 Y
14 AudienceOrSeatingAreaReligious 120 66 31 225 105 18.29 1.70 5.8 0.54 0.5 Y
15 AudienceOrSeatingAreaPoliceOrFireStations 50 66 31 225 105 10.00 0.93 10.8 1.00 0.5 Y
16 AudienceOrSeatingAreaCourtHouse 70 66 31 225 105 10.00 0.93 10.8 1.00 0.5 Y
17 AudienceOrSeatingAreaAuditorium 150 66 31 225 105 10.00 0.93 10.8 1.00 0.5 N
18 BankCustomerArea 30 66 31 225 105 16.00 1.49 16.1 1.50 0.5 Y
19 BankingActivityAreaOffice 5 73 59 250 200 16.14 1.50 16.1 1.50 0.3 Y
20 BarberAndBeautyParlor 25 73 59 250 200 10.76 1.00 21.5 2.00 0.5 N
21 CardFileAndCataloguingLibrary 10 73 59 250 200 11.84 1.10 16.1 1.50 0.5 Y
22 ClassroomOrLectureOrTrainingPenitentiary 65 73 59 250 200 13.99 1.30 5.8 0.54 0.5 N
23 ClassroomOrLectureOrTraining 65 73 59 250 200 15.06 1.40 10.8 1.00 0.5 N
24 ComfinementCellsPenitentiary 25 73 59 250 200 10.00 0.93 10.8 1.00 0.5 N
25 ComfinementCellsCourtHouse 25 73 59 250 200 9.69 0.90 16.1 1.50 0.5 N
26 ConferenceMeetingOrMultipurpose 50 73 59 250 200 13.99 1.30 10.8 1.00 0.5 Y
27 CorridorOrTransition 10 73 59 250 200 5.38 0.50 3.2 0.30 0.5 Y
28 CorridorOrTransitionManufacturingFacility 10 73 59 250 200 5.38 0.50 3.2 0.30 0.5 N
29 CorridorsWithPatientWaitingExamHospitalOrHealthcare 10 73 59 250 200 10.76 1.00 3.2 0.30 0.5 N
30 CourtSportsAreaSportsArena 30 73 59 250 200 24.76 2.30 10.8 1.00 0.5 N
31 CourtroomCourtHouse 70 73 59 250 200 20.45 1.90 16.1 1.50 0.5 Y
32 DepartmentStoreSalesAreaRetail 15 73 59 250 200 23.00 2.14 10.8 1.00 0.5 N
33 DetailedManufacturingFacility 10 73 59 250 200 22.60 2.10 10.8 1.00 0.5 N
34 DiningArea 70 81 81 275 275 9.69 0.90 5.8 0.54 0.5 Y
35 DiningAreaHotel 70 81 81 275 275 13.99 1.30 5.8 0.54 0.5 Y
36 DiningAreaFamilyDining 70 81 81 275 275 22.60 2.10 5.8 0.54 0.5 Y
37 DiningAreaLoungeOrLeisureDining 70 81 81 275 275 15.06 1.40 5.8 0.54 0.5 N
38 DiningAreaMotel 70 81 81 275 275 12.91 1.20 5.8 0.54 0.5 N
39 DiningAreaTransportation 100 81 81 275 275 10.00 0.93 5.8 0.54 0.5 N
40 DiningAreaPenitentiary 100 81 81 275 275 13.99 1.30 5.8 0.54 0.5 N
41 DiningAreaCivilServices 100 81 81 275 275 10.00 0.93 5.8 0.54 0.5 N
42 DormitoryBedroom 10 73 45 250 155 12.00 1.11 5.8 0.54 0.5 Y
43 DormitoryStudyHall 10 73 45 250 155 12.00 1.11 5.8 0.54 0.5 Y
44 DressingOrLockerOrFittingRoomGymnasium 20 73 59 250 200 6.46 0.60 5.8 0.54 0.5 N
45 DressingOrLockerOrFittingRoomCourtHouse 20 73 59 250 200 6.00 0.56 5.8 0.54 0.5 N
46 DressingOrLockerOrFittingRoomPerformingArtsTheatre 20 73 59 250 200 6.00 0.56 5.8 0.54 0.5 N
47 DressingOrLockerOrFittingRoomAuditorium 20 73 59 250 200 6.00 0.56 5.8 0.54 0.5 N
48 DressingOrLockerOrFittingRoomExerciseCenter 20 73 59 250 200 6.00 0.56 5.8 0.54 0.5 N
49 ElectricalOrMechanical 3 73 59 250 200 16.14 1.50 3.2 0.30 0.5 N
50 ElevatorLobbies 10 73 59 250 200 14.00 1.30 2.2 0.20 0.5 Y
51 EmergencyHospitalOrHealthcare 20 73 59 250 200 29.06 2.70 16.1 1.50 0.5 N
52 EquipmentRoomManufacturingFacility 3 73 59 250 200 12.91 1.20 10.8 1.00 0.5 N
53 ExamOrTreatmentHospitalOrHealthcare 20 73 59 250 200 16.14 1.50 16.1 1.50 0.5 N
54 ExcerciseAreaExerciseCenter 30 208 319 710 1090 10.00 0.93 5.8 0.54 0.5 Y
55 ExcerciseAreaGymnasium 30 208 319 710 1090 9.69 0.90 5.8 0.54 0.5 N
56 ExhibitSpaceConventionCenter 50 73 59 250 200 13.99 1.30 16.1 1.50 0.5 Y
57 FellowshipHallReligiousBuildings 120 66 31 225 105 9.69 0.90 5.8 0.54 0.5 N
58 FineMaterialWarehouse 10 73 59 250 200 15.06 1.40 3.2 0.30 0.5 N
59 FineMerchandiseSalesAreaRetail 15 73 59 250 200 23.00 2.14 10.8 1.00 0.5 Y
60 FireStationEngineRoomPoliceOrFireStation 10 73 59 250 200 8.61 0.80 16.1 1.50 0.5 N
61 FoodPreparation 20 73 59 250 200 12.91 1.20 16.1 1.50 0.5 N
62 GarageServiceOrRepairAutomotiveFacility 10 73 59 250 200 7.53 0.70 10.8 1.00 0.5 N
63 GeneralHighBayManufacturingFacility 10 73 59 250 200 18.29 1.70 10.8 1.00 0.5 N
64 GeneralLowBayManufacturingFacility 10 73 59 250 200 12.91 1.20 10.8 1.00 0.5 N
65 GeneralExhibitionMuseum 40 73 59 250 200 10.76 1.00 16.1 1.50 0.5 Y
66 HospitalNurseryHospitalOrHealthcare 10 73 59 250 200 6.46 0.60 16.1 1.50 0.5 N
67 HospitalOrMedicalSuppliesHospitalOrHealthcare 3 73 59 250 200 15.06 1.40 3.2 0.30 0.5 N
68 HospitalOrRadiologyHospitalOrHealthcare 20 73 59 250 200 4.31 0.40 16.1 1.50 0.5 N
69 HotelOrConferenceCenterConferenceOrMeeting 50 66 31 225 105 14.00 1.30 10.8 1.00 0.5 Y
70 InactiveStorage 3 73 59 250 200 3.23 0.30 3.2 0.30 0.5 N
71 JudgesChambersCourtHouse 50 73 59 250 200 13.99 1.30 10.8 1.00 0.5 Y
72 LaboratoryOffice 5 73 59 250 200 15.06 1.40 16.1 1.50 0.5 N
73 LaundryIroningAndSorting 20 81 139 275 475 6.00 0.56 32.3 3.00 0.5 N
74 LaundryWashingHospitalOrHealthcare 10 81 139 275 475 6.46 0.60 3.2 0.30 0.5 N
75 LibraryAudioVisualLibraryAudioVisual 25 73 59 250 200 13.99 1.30 16.1 1.50 0.5 Y
76 LivingQuartersDormitory 10 73 45 250 155 11.84 1.10 5.8 0.54 0.5 Y
77 LivingQuartersMotel 10 73 45 250 155 12.00 1.11 5.8 0.54 0.5 Y
78 LivingQuartersHotel 10 73 45 250 155 11.84 1.10 5.8 0.54 0.5 Y
79 Lobby 150 73 59 250 200 13.99 1.30 5.8 0.54 0.5 Y
80 LobbyReligiousBuildings 150 73 59 250 200 14.00 1.30 5.8 0.54 0.5 Y
81 LobbyMotionPictureTheatre 150 73 59 250 200 11.84 1.10 5.8 0.54 0.5 Y
82 LobbyAuditorium 150 73 59 250 200 14.00 1.30 5.8 0.54 0.5 Y
83 LobbyPerformingArtsTheatre 150 73 59 250 200 35.52 3.30 5.8 0.54 0.5 Y
84 LobbyPostOffice 150 73 59 250 200 14.00 1.30 5.8 0.54 0.5 N
85 LobbyHotel 30 73 59 250 200 11.84 1.10 5.8 0.54 0.5 Y
86 LoungeOrRecreation 25 73 59 250 200 13.00 1.21 5.8 0.54 0.5 Y
87 MallConcourseSalesAreaRetail 40 73 59 250 200 18.29 1.70 10.8 1.00 0.5 N
88 MassMerchandisingSalesAreaRetail 15 73 59 250 200 12.00 1.11 1.1 0.10 0.5 N
89 MediumOrBulkyMaterialWarehouse 10 73 59 250 200 9.69 0.90 3.2 0.30 0.5 N
90 MerchandisingSalesAreaRetail 15 73 59 250 200 23.00 2.14 10.8 1.00 0.5 N
91 MuseumAndGalleryStorage 5 73 59 250 200 9.00 0.84 2.2 0.20 0.5 N
92 NurseStationHospitalOrHealthcare 10 73 59 250 200 10.76 1.00 16.1 1.50 0.3 Y
93 OfficeEnclosed 5 73 59 250 200 11.84 1.10 16.1 1.50 0.3 Y
94 OfficeOpenPlan 5 73 59 250 200 11.84 1.10 16.1 1.50 0.3 Y
95 OfficeCommonActivityAreasInactiveStorage 3 73 59 250 200 3.00 0.28 2.2 0.20 0.3 N
96 OperatingRoomHospitalOrHealthcare 20 73 59 250 200 23.67 2.20 16.1 1.50 0.5 N
97 OtherTelevisedPlayingAreaSportsArena 150 73 59 250 200 15.06 1.40 10.8 1.00 0.5 N
98 ParkingAreaAttendantOnlyParkingGarage 5 73 59 250 200 2.00 0.19 3.2 0.30 0.5 N
99 ParkingAreaPedestrianParkingGarage 5 73 59 250 200 2.15 0.20 3.2 0.30 0.5 N
100 PatientRoomHospitalOrHealthcare 10 73 59 250 200 7.53 0.70 16.1 1.50 0.5 Y
101 PersonalServicesSalesAreaRetail 15 73 59 250 200 23.00 2.14 10.8 1.00 0.5 Y
102 PharmacyHospitalOrHealthcare 10 73 59 250 200 12.91 1.20 16.1 1.50 0.5 Y
103 PhysicalTherapyHospitalOrHealthcare 20 73 59 250 200 9.69 0.90 16.1 1.50 0.5 N
104 PlayingAreaGymnasium 30 208 319 710 1090 15.06 1.40 5.8 0.54 0.5 N
Plenum 0 0 0 0 0 0.00 0.00 0.0 0.00 0.5 N
105 PoliceStationLaboratoryPoliceOrFireStations 25 73 59 250 200 15.00 1.39 10.8 1.00 0.5 N
106 PublicAndStaffLoungeHospitalOrHealthcare 25 73 59 250 200 9.00 0.84 10.8 1.00 0.5 Y
107 ReadingAreaLibrary 10 73 59 250 200 12.91 1.20 16.1 1.50 0.5 Y
108 ReceptionOrWaitingTransportation 100 73 59 250 200 11.84 1.10 5.8 0.54 0.5 N
109 ReceptionOrWaitingMotel 30 73 59 250 200 11.84 1.10 5.8 0.54 0.5 Y
110 ReceptionOrWaitingHotel 30 73 59 250 200 11.84 1.10 5.8 0.54 0.5 Y
111 RecoveryHospitalOrHealthcare 20 73 59 250 200 8.61 0.80 16.1 1.50 0.5 N
112 RestorationMuseum 40 73 59 250 200 18.29 1.70 16.1 1.50 0.5 N
113 Restrooms 10 73 59 250 200 9.69 0.90 3.2 0.30 0.5 N
114 RingSportsAreaSportsArena 70 73 59 250 200 29.06 2.70 10.8 1.00 0.5 N
115 SleepingQuartersPoliceOrFireStation 20 73 31 250 105 3.23 0.30 16.1 1.50 0.5 Y
116 SortingAreaPostOffice 5 73 59 250 200 12.91 1.20 10.8 1.00 0.5 N
117 SpecialtyStoreSalesAreaRetail 15 73 59 250 200 23.00 2.14 10.8 1.00 0.5 N
118 StacksLibrary 10 73 59 250 200 18.29 1.70 16.1 1.50 0.5 Y
119 StairsInactive 10 73 59 250 200 6.00 0.56 2.2 0.20 0.5 N
120 Stairway 10 73 59 250 200 6.46 0.60 3.2 0.30 0.5 N
121 SupermarketSalesAreaRetail 8 73 59 250 200 22.60 2.10 10.8 1.00 0.5 N
122 TerminalTicketCounterTransportation 100 73 59 250 200 16.14 1.50 5.8 0.54 0.5 Y
123 WorkshopWorkshop 20 73 59 250 200 20.45 1.90 10.8 1.00 0.5 N
124 WorshipPulpitChoirReligious 120 73 59 250 200 25.83 2.40 5.8 0.54 0.5 Y
C.3 CEA CONCEPTUAL CONSTRUCTION OPTIONS AND THERMAL PROPERTIES
TABLE C - 3: CEA CONCEPTUAL CONSTRUCTION OPTIONS AND THERMAL PROPERTIES AS AVAILABLE THROUGH
REVIT’S CEA AT THE TIME OF RESEARCH. THE DATA IS REORGANIZED FROM SOURCE PROVIDED BY
AUTODESK’S CEA ENERGY SETTING REFERENCE (AUTODESK 2012C).
Construction Type(s) | Conceptual Construction | R-Value (IP: ft²·hr·°F/Btu) | R-Value (SI: m²·K/W) | Unit Density (IP: lbm/ft²) | Unit Density (SI: kg/m²) | Heat Capacity (IP: Btu/(ft²·°F)) | Heat Capacity (SI: J/(m²·K))
Mass Exterior Wall, Mass Exterior Wall - Underground, Mass Interior Wall
Lightweight Construction – High Insulation 25.4 4.47 28.53 139.54 4.788 0.234
Lightweight Construction – Typical Cold Climate Insulation 17.3 3.05 31.89 155.97 4.372 0.214
Lightweight Construction – Typical Mild Climate Insulation 9.8 1.73 37.65 184.15 3.947 0.193
Lightweight Construction – Low Insulation 7.81 1.38 61.64 301.46 4.001 0.196
Lightweight Construction – No Insulation/Interior 2.81 0.49 93.14 455.55 3.947 0.193
High Mass Construction – High Insulation 16.55 2.91 101.34 495.67 22.804 1.116
High Mass Construction – Typical Cold Climate Insulation 14.65 2.58 97.56 477.14 22.095 1.081
High Mass Construction – Typical Mild Climate Insulation 10.85 1.91 104.82 512.67 22.063 1.080
High Mass Construction – No Insulation/Interior 1.35 0.24 136.86 669.35 21.963 1.075
Mass Roof
High Insulation - Cool Roof 31.99 5.63 22.13 108.25 3.223 0.158
High Insulation - Dark Roof 31.99 5.63 22.13 108.25 3.223 0.158
Typical Insulation - Cool Roof 21.99 3.87 14.93 73.04 2.518 0.123
Typical Insulation - Dark Roof 21.99 3.87 14.93 73.04 2.518 0.123
Low Insulation - Cool Roof 11.99 2.11 20.77 101.56 2.233 0.109
Low Insulation - Dark Roof 11.99 2.11 20.77 101.56 2.233 0.109
No Insulation - Dark Roof 1.99 0.35 46.58 227.82 1.948 0.095
Mass Floor, Mass Floor - Slab
Lightweight Construction – High Insulation 29.41 5.18 4.81 23.54 1.417 0.069
Lightweight Construction – Typical Insulation 20.83 3.67 6.77 33.13 1.381 0.068
Lightweight Construction – Low Insulation 14.08 2.48 9.93 48.57 1.355 0.066
Lightweight Construction – No Insulation/Interior 4.2 0.74 32.00 156.51 0.660 0.032
High Mass Construction – Frigid Climate Slab Insulation 16.14 2.84 123.27 602.93 24.581 1.203
High Mass Construction – Cold Climate Slab Insulation 11.14 1.96 123.27 602.93 24.581 1.203
High Mass Construction – No Insulation 6.14 1.08 123.27 602.93 24.581 1.203
Construction Type | Conceptual Construction | U-Value (IP: Btu/(ft²·°F·h)) | U-Value (SI: W/(m²·°C)) | Solar Heat Gain Coefficient (SHGC, unitless) | Visible Transmittance (Tvis, unitless)
Mass Glazing, Mass Skylight
Single Pane Clear – No coating 1.09 6.18 0.81 0.880
Single Pane – Tinted 1.11 6.32 0.71 0.610
Single Pane – Reflective 0.89 5.06 0.28 0.130
Double Pane Clear – No coating 0.56 3.17 0.69 0.780
Double Pane - Tinted 0.57 3.24 0.61 0.550
Double Pane - Reflective 0.42 2.40 0.19 0.100
Double Pane Clear – LowE Cold Climate, High SHGC 0.35 1.96 0.67 0.720
Double Pane Clear – LowE Hot Climate, Low SHGC 0.30 1.68 0.44 0.700
High Performance Double Pane Clear - High Performance, LowE, High Tvis, Low SHGC 0.29 1.63 0.27 0.643
Triple Pane - Clear, LowE Hot or Cold Climate 0.22 1.26 0.47 0.640
Quad Pane - Clear, LowE Hot or Cold Climate 0.12 0.66 0.45 0.620
C.4 H.D.S. BEAGLE EXCEL TEMPLATE SAMPLE
This section provides a sample of the generated Excel template for H.D.S. Beagle v20120903.
The corresponding parametric model is illustrated in Figure C - 1.
FIGURE C - 1: A SAMPLE PARAMETRIC MODEL FOR H.D.S. BEAGLE. DIAGRAM BY THE AUTHOR.
Sheet 1: Home Sheet, Figure C - 2. - This worksheet provides a brief introduction to the project, the project team members, and each sheet's functionality.
FIGURE C - 2: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 1: HOME SHEET.
Sheet 2: Geometry Parameter Worksheet (GeometryParam), Figure C - 3. - This worksheet is where users define the modifiable parameters' names and variation ranges; a sketch of how such a sheet could be read programmatically follows the figure. The parameter names can be copied and pasted from the shared parameter file (.txt).
FIGURE C - 3: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 2: GEOMETRYPARAM.
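The following Python sketch illustrates one possible way to read such a worksheet programmatically. It is illustrative only: the three-column layout (parameter name, minimum, maximum) beginning at row 2 is an assumption made for this example, not a description of H.D.S. Beagle's internal reader.

# A minimal sketch of reading the GeometryParam worksheet with openpyxl.
# The assumed layout (name, min, max from row 2 down) is hypothetical.
from openpyxl import load_workbook

def read_geometry_parameters(path):
    """Return {parameter_name: (low, high)} from the GeometryParam sheet."""
    workbook = load_workbook(path, data_only=True)  # read cached cell values
    sheet = workbook["GeometryParam"]
    ranges = {}
    for name, low, high in sheet.iter_rows(min_row=2, max_col=3, values_only=True):
        if name is None:
            continue  # skip blank rows
        ranges[str(name)] = (float(low), float(high))
    return ranges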
Sheet 3: Level Setting Worksheet (LevelSetting), Figure C - 4. - This worksheet is where users define the level setting of each mass geometry. So that H.D.S. Beagle 1.0 can both read these settings from the constraint file and update the geometry accordingly, the level category parameters must be set during the parameterization process.
FIGURE C - 4: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 3: LEVELSETTING.
Sheet 4: Project Constraints Worksheet (ProjectConstraints), Figure C - 5. - This worksheet is where users define the project constraint information. These settings allow H.D.S. Beagle to determine whether an offspring is valid; a sketch of this validity test follows the figure.
FIGURE C - 5: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 4: PROJECTCONSTRAINTS.
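As a hedged illustration of the kind of validity test these settings enable, the Python sketch below keeps a candidate design (offspring) only when every constrained metric falls inside its user-defined bounds. The metric and constraint names are hypothetical examples, not the template's actual fields.

# Illustrative only: an offspring is valid when each metric is in range.
def is_valid_offspring(metrics, constraints):
    """metrics, e.g. {"gross_floor_area": 4200.0};
    constraints, e.g. {"gross_floor_area": (3500.0, 4500.0)}."""
    for key, (low, high) in constraints.items():
        value = metrics.get(key)
        if value is None or not (low <= value <= high):
            return False  # missing or out-of-range metric -> invalid offspring
    return True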
Sheet 5: SPC Score Parameter Worksheet (SPCScoreParam), Figure C - 6. - This worksheet is where users input the parameters and values used to calculate the SPC score.
FIGURE C - 6: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 5: SPCSCOREPARAM.
Sheet 6: SPC Formula Worksheet (SPCFormula), Figure C - 7. - This worksheet defines the design score calculation formula; currently, H.D.S. Beagle can calculate the SPC only in this format. An illustrative sketch follows the figure.
FIGURE C - 7: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 6: SPCFORMULA.
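For illustration only, the Python sketch below assumes a normalized weighted-sum form in which each space's achieved-to-target area ratio is weighted and aggregated into a single score; the actual formula is the one fixed by the SPCFormula worksheet and may differ.

# A sketch of a weighted, normalized compliance score (assumed form).
def spc_score(achieved, targets, weights):
    """achieved/targets: {space: area}; weights: {space: importance}."""
    total_weight = sum(weights.values())
    score = 0.0
    for space, target in targets.items():
        if target <= 0:
            continue  # skip undefined targets
        ratio = min(achieved.get(space, 0.0) / target, 1.0)  # cap over-supply at 1
        score += weights.get(space, 0.0) * ratio
    return score / total_weight  # normalized to [0, 1]

# Example: 90% of the target office area and the full retail area, with
# offices weighted twice as heavily: (2*0.9 + 1*1.0)/3, roughly 0.97.
spc = spc_score({"office": 900.0, "retail": 500.0},
                {"office": 1000.0, "retail": 500.0},
                {"office": 2.0, "retail": 1.0})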
Sheet 7: Financial Parameter Worksheet (FinancialParam), Figure C - 8. - This worksheet is where users input the parameters related to the financial calculation.
FIGURE C - 8: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 7: FINANCIALPARAM.
Sheet 8: Financial Pro Forma Worksheet (FinancialProForma), Figure C - 9. - This worksheet defines the financial model H.D.S. Beagle currently uses to calculate the net present value (NPV); a sketch of the underlying discounting follows the figure.
FIGURE C - 9: SCREENSHOT OF H.D.S. BEAGLE’S EXCEL TEMPLATE – SHEET 8: FINANCIALPROFORMA.
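The sketch below shows the standard discounted-cash-flow form of NPV that such a pro forma ultimately reduces to; the cash-flow timing and discount-rate handling in the template itself may differ.

# Standard NPV: each period's cash flow discounted back to t = 0.
def net_present_value(rate, cash_flows):
    """cash_flows[0] is the initial (usually negative) outlay at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Example: a 1,000,000 outlay followed by five years of 260,000 net income,
# discounted at 8% per year (values are arbitrary illustrations).
npv = net_present_value(0.08, [-1_000_000] + [260_000] * 5)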
APPENDIX D SCENARIOS OF THE HYPOTHETICAL CASE-BASED EXPERIMENTS
D.1 SCENARIO 1
D.2 SCENARIO 2
D.3 SCENARIO 3
D.4 SCENARIO 4
D.5 SCENARIO 5
D.6 SCENARIO 6
D.7 SCENARIO 7
D.8 SCENARIO 8
D.9 SCENARIO 9
D.10 SCENARIO 10
D.11 SCENARIO 11
D.12 SCENARIO 12
APPENDIX E PEDAGOGICAL EXPERIMENT I
E.1 PEDAGOGICAL EXPERIMENT I PROPOSAL
E.2 PEDAGOGICAL EXPERIMENT I WORKSHEET DESIGN
E.3 PEDAGOGICAL EXPERIMENT I - PART B: STUDENTS’ PARAMETRIC MODEL DESIGN
Map of Student Recorded and Modified Data Summary: The following describes the graphic representation and the data provided in this section.

Student's Original Parametric Model Design:
- Student's geometric parameter settings and ranges
- 3D image of the parametric model
- Student's exploration: design alternatives 1, 2, 3, and 4

Modified Parametric Model by the Author:
- Modified geometric parameter settings and ranges. Modifications were made to keep the integrity of the student's design intent while ensuring compliance with the assignment requirements and enabling implementation with H.D.S. Beagle; modifications vary per student project as identified by the author.
- 3D image of the modified parametric model
- Selected design alternatives 1, 2, 3, and 4 from the Rank 1 pool of the 3-hour run time

Note: The GA settings of the Beagle were held consistent during exploration (see the sketch after this note): initial population size: 10; population size: 20; maximum iteration number: 50; crossover ratio: 60%; mutation ratio: 0; selection size: 20. The energy-related parameters and their variation ranges were likewise held consistent: Target Percent Glazing [0.2, 0.83]; Shade Depth [0.3, 3]; Target Percentage Skylight [0, 0.45].
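A minimal Python sketch of these settings as a configuration object follows; the dataclass and its field names are hypothetical, since H.D.S. Beagle's internal configuration structure is not reproduced in this appendix.

# Assumed configuration object mirroring the reported GA settings.
from dataclasses import dataclass

@dataclass
class GASettings:
    initial_population_size: int = 10
    population_size: int = 20
    max_iterations: int = 50
    crossover_ratio: float = 0.60  # 60% crossover
    mutation_ratio: float = 0.0    # mutation disabled in these runs
    selection_size: int = 20

settings = GASettings()  # the values held consistent across explorations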
E.4 PEDAGOGICAL EXPERIMENT I - PART B: STUDENTS’ PARAMETRIC MODEL DATA
TABLE E - 1: PEDAGOGICAL EXPERIMENT I – PART B – STUDENTS' ANSWERS & AUTHOR'S EVALUATION.
ID | Required Parameters: Parameter Name; Quality of the Parameter Settings | Customized Parameters: Parameter Name; Parameter Translation (Y/N); %; Quality of the Parameter Settings
ST01 FloorNumber 0 total height Y 70 0
SiteSetback 1 target % skylight Y 80 0
shade depth Y 60 0
ST02 FloorNumber 1 TopW Y 30 1
SiteSetback 1 TopL Y 30 1
Couryard Y 40 0.5
ST03 FloorNumber 0 Top Radius Y 60 0.5
SiteSetback 0.5 North Setback Y 5 0.5
South Setback Y 5 0.5
ST04 FloorNumber 1 angle Y 30 1
SiteSetback 1 radius Y 20 1
with Y 10 1
ST05 FloorNumber 1 TCourtyard Y * 1
SiteSetback 1 Courtyard Y * 1
RoofSize Y * 1
ST06 FloorNumber 1 R3 N N/A 1
SiteSetback 1 R4 N N/A 0
R5 N N/A 0
ST07 FloorNumber 1 Setback2 N 20 0.5
SiteSetback 1 Setback3 Y 40 0.5
Width2 Y 20 0.5
ST08 FloorNumber 0 Bottom Y 40 0
SiteSetback 0.5 rotation 1 Y 70 1
rotation 2 Y 80 1
ST09 FloorNumber 1 TopScale Y 100 1
SiteSetback 1 CornerAngle N N/A 1
AngleDepth N N/A 1
ST10 FloorNumber 0 Tilt Y * 1
SiteSetback 0.5 Stretch N N/A 1
Cantilever Y * 1
ST11 FloorNumber 1 target percent skylight/R2 Y 50 N/A
SiteSetback 0.5 topscale Y 70 0
midscale Y 80 0.5
ST12 FloorNumber 1 light well-width Y 100 1
SiteSetback 1 light well-legth Y 100 1
plaza height Y 50 1
ST13 FloorNumber 1 mid level length Y 25 1
SiteSetback 1 top level length Y 25 1
mass displacements Y 10 1
ST15 FloorNumber 1 Top Legnth Y 70 1
SiteSetback 0.5 Bottom Void Y 40 1
Top Void Y 30 1
ST16 FloorNumber 1 Lobby Level Y 100 1
SiteSetback 1 VoidR Y 100 1
VoidSetback Y 70 1
ST17 FloorNumber 1 keyfloor Y 33 1
SiteSetback 1 rotation Y 33 1
proportion Y 33 1
ST18 FloorNumber 1 OverhangEW Y 100 1
SiteSetback 1 OverhangNS Y 100 1
CourtI Y 70 0.5
ST19 FloorNumber 0.5 Chamfer Height Y 40 1
SiteSetback 1 Chamfer Angle Y 40 1
Building Height Y 20 1
ST20 FloorNumber 1 W1 N N/A 1
SiteSetback 0.5 W2 Y 75 1
Y axis Y 90 1
ST21 FloorNumber 1 Courtyard width Y * 0.5
SiteSetback 1 courtyard depth Y * 0.5
height Y * 0
ST22 FloorNumber 1 Low Angle Y 40 0.5
SiteSetback 1 Low Width Y 40 1
Hole Radius Y 20 1
ST23 FloorNumber 1 Scale Top Y 100 1
SiteSetback 1 ScaleCourtyard Y 66 1
ScaleMid N 33 1
ST24 FloorNumber 0 TopScale N 0 0
SiteSetback 1 Angle Y 100 0.5
MidScale Y 100 0.5
ST25 FloorNumber 1 building thickness1 Y * 0.5
SiteSetback 1 building thickness 2 Y * 0.5
building thickness 3 Y * 0.5
ST26 FloorNumber 0 HEIGHT Y 50 0.5
SiteSetback 0.5 COURTYARD Y 50 0.5
N/A N/A N/A N/A
TABLE E - 2: EVALUATION OF STUDENTS’ PARAMETRIC MODELS FOR PEDAGOGICAL EXPERIMENT I – PART B.
ID | Parameter Accuracy: Scale (Poor, Acceptable, Good) | Model Robustness: Comparable with H.D.S. Beagle Rules (Y/N); Variation Range (Y/N) | Driving Parameter Compliance (Y/N) | Accuracy of Calculated Data: Accuracy of Model Setup (Y/N); Accuracy of Calculations (Y/N) | Final Evaluation (A-D) | (Y/N)
ST01 Poor Y N N N N D N
ST02 Acceptable N N Y N Y B Y
ST03 Poor N N N Y N D N
ST04 Good Y N N Y Y B N
ST05 Acceptable N N N Y Y B N
ST06 Poor N N N Y Y C N
ST07 Poor N N Y N N D N
ST08 Poor N N N N N C N
ST09 Good Y N Y Y N B N
ST10 Poor Y N N N N D N
ST11 Poor Y N N N N D N
ST12 Good Y N Y Y Y B N
ST13 Good Y N Y Y Y B Y
ST15 Acceptable N N Y Y Y C N
ST16 Acceptable Y N Y N N B N
ST17 Good Y N Y Y Y B Y
ST18 Good Y N Y N N B N
ST19 Acceptable N N N Y Y B N
ST20 Good Y N Y Y Y B Y
ST21 Poor N N N N N C Y
ST22 Good Y N N Y Y B N
ST23 Good Y N Y Y Y B N
ST24 Acceptable N N Y Y N C N
ST25 Good Y N N N N B N
ST26 Poor N N N N N C N
Summary P: 9 (36%), A: 6 (24%), G: 10 (40%) | N: 11 (44%), Y: 14 (56%) | N: 25 (100%), Y: 0 (0%) | N: 13 (52%), Y: 12 (48%) | N: 11 (44%), Y: 14 (56%) | N: 13 (52%), Y: 12 (48%) | A: 0 (0%), B: 14 (56%), C: 6 (24%), D: 5 (20%) | N: 20 (80%), Y: 5 (20%)
E.5 PEDAGOGICAL EXPERIMENT I – RAW ANSWERS PROVIDED THROUGH THE FINAL QUESTIONNAIRE
Question 1: Do you think the parametric model can help you to explore more design iteration
more effectively? Why or why not?
ST01 Yes I think it can help, it is easy to work with
ST02 It is a quick way of getting through a couple different ideas and variations on the same basic
model. It would be helpful if more time was spent creating a larger amount of different
parameters and that the model overall was a bit more adjustable.
ST03 Yes. I think so. For now, in school, it may be less useful. But when we get to work, we need
some changeable data and make the best result to support our design. As an Architect, how to
deal with the number in design and our concept are the most important things.
ST04 I believe parametric model can contribute more design iteration. In parametric design you will
have to set up the inputs and simulation rules. In this way the computer can automatically
calculate the result. Sometimes the loop and multiple command hierarchy is so complicate
that we cannot anticipate the result. With parametric methods, we are capable to perform
effective designs.
ST05 Yes. As long as you set efficient parameters in the beginning, different variations of the model
can be made quickly and easily.
ST06 Yes, I was able to see much quicker how changing the parameters would affect my scores.
Although, I do not feel that I was able to set up the parameters that I wanted to be able to
achieve the design I wanted because of lack of knowledge/experience with Revit.
ST07 Depends on what your values are and what your skills with the software are. If you value
design more than performance, then maybe not. If you are not able to create complex,
adaptive geometries with the software, then also probably not. More time teaching the use of
the software is required. The timeframe is not adequate.
ST08 Yes, the control that you get over the design in any stage including design, tectonic, and
construction makes more room for a better and more developed system, where less effort is
put to redo some unnecessary changes, and automatically update the file based on a minor
effect. It also saves time, and provides the opportunity of design development.
ST09 The conceptual stage of design is the most important when the energy, financial, and design
factors are considered. For this reason, parametric modeling is very important because it
allows designers to make quick modifications and see how those changes influence to overall
building visually as well as performance wise. By changing elements of the parametric model, I
was able to improve upon each design and see my results instantaneously which influenced
my next design.
ST10 I think it is a very useful tool because it is very efficient in changing parameters and allowing
for quick changes. Instead of having to remake multiple parts of the model, simply clicking
one button can do tremendous things. Though it is difficult to learn at first, parametric
modeling is a huge help.
ST11 Yes, It is helpful to predict the most efficient overall form of the building within knowing the
number of the floors, glazing, etc.
ST12 Yes, because the parametric model allows you to explore various design iterations faster than
conventional method. Everything is interoperable with each other that different iterations can
be created with only one click. This will improve speed and accuracy.
ST13 In my opinion parametric model can help a great deal to perform design iterations. By defining
the parameters we have the ability to control how the building morphs and defines spaces.
Also sometimes by doing so one may come across design iterations that he may not be
thinking about. Other advantages of designing this way are timesaving, better idea of building
orientations, parameter specific changes, etc.
ST14 N/A
ST15 Yes I definitely think that parametric design allows a user to explore more design iterations
effectively. By quickly switching a conceptual mass properties one can instantly visualize the
changes, not only aesthetically, but also energy wise. One can spend an infinite amount of
time tweaking small numbers to achieve a certain score without having to redraw and start a
model from the beginning.
ST16 I think the parametric model can really help me explore my design iteration effectively. The
best part of Revit and not for grasshopper is that Revit can automatically generate building
wall, skylight schedule and other information we need to generate a building optimization
report. In this part, we can decide our building shape in preliminary design phases. With excel
files, we can actually balance the each factor to get a relatively optimist result.
ST17 Yes. Because I can get multiple design options just by changing parameters.
ST18 Yes, I strongly believe parametric model can help in exploring more design iterations more
effectively, because it is easier to bring about the design changes on computer using a
parametric modelling software such as Revit and then being able to see the changes that occur
as a result of the inputs, instantly.
ST19 Yes it can. I do not know much and did not the possibilities. But i am happy that i have been
taught - and I am learning this too.
Yes - this can help. When there is a complex geometry and when each building function
depends on bunch of other constraints, this method is beneficial to save time as well as to
explore different possibilities.
ST20 I believe it can be a useful tool when doing iteration runs. In my opinion, the flexibility of
changing the parameters of an existing model, rather than trying to model from scratch each
time can save a lot of valuable time for the designer. To add to that it allows making slight
changes in one or a couple of the parameters, when you have settled on a specific form to
determine which the best solution for the project is.
ST21 N/A
ST22 Absolutely, knowing how to set parametric greatly enhances our ability to create structures
and buildings quickly and then easily be able to adjust various parameters of that object in
order to see how modifying various dimensions of the building changes the object. At first,
the parametric were confusing and quite frankly they were just making me angry because it
seemed like no matter how careful I was I could not create the kind of design I was looking for.
However, after finally getting help from Eve I was able to truly grasp how to work with
parametric and use them to my benefit in order to build this object.
ST23 I think the process makes it very easy to manipulate parameters on the fly and see the results
relatively quickly. However, the going back and forth between and excel calculation sheet, the
actual model in Vasari/Revit, and uploading to green building studio is quite a hassle.
Furthermore, to have the process workflow going smoothly, the parameters need to be pretty
well created.
ST24 N/A
ST25 It can help in exploring iterations because once parameters are carefully (this is the key)
thought out, numerous iterations can easily be computed.
ST26 Yes, it definitely can, probably not in school though. We aren't forced to use sustainable
means and methods as much as we should, but this type of optimization is definitely helpful.
ST27 N/A
Question 2: Do you think the feedback results (design, financial and energy) can support your
decision making?
ST01 Yes it was helpful having those numbers and trying to change them was crucial.
ST02 Yes definitely while adjusting your program or building to match your program the feedback can
make this
whole process much easier. Instead of large moves you can adjust things by using little tweaks.
ST03 Yes. Maybe I'm not very familiar with that, I cannot support my design fully. But I still try to
make it better and better to get the higher design score.
ST04 Yes. All these aspects are crucial issues to be considered in decision making. After all, we need
the performance evaluation to judge whether certain decision we made is correct. As architects
we cannot limited our visions to ourselves. The feedback can give us other people's opinion in
order to reach objective.
ST05 Yes. From a sustainable standpoint, generating energy consumption statistics that easy is a
huge advantage with the initial design process.
ST06 Yes, I felt more confident with the decisions I was making when changing the parameters based
off the scores that I was getting.
ST07 Yes of course. This is fairly clear. Especially at the very early stages of design. Again, it is
information; the important part is how to apply it to a set of values.
ST08 Yes, they may change the design and energy intents. In terms of financial issues I would say
there is much more detail analysis required, and besides these items, there are several other
factors affecting.
ST09 Yes and no. It's easy to look at one number and say, the design is good, the financials are bad.
These results can help influence the next iteration's inputs. But, how those three numbers are
factored together to create one overall score is a bit more arbitrary it seems, so even if all three
values "look" good, the overall score
could be poor.
ST10 The feedback results can support feedback in the decision making process, though it would is
more useful to show a client or builder. As an architect, it is still important to pay attention to
processes that parametric modeling and the computer can still not account for.
ST11 Yes. It might not still be the best result we can get, but it helps to make better decision to be
more efficient in terms of energy saving and also financially.
ST12 Yes, I think it is important to have these various feedback results from early stage of designing.
That way, it would be easier to alter the design decisions. However, at the same time, I'm afraid
that too many feedback results might limit architect's freedom and design.
ST13 Yes, these results can definitely help decide on a particular design. This feedback can make the
design more logical, sell-able and environmentally responsive. Depending on the project
objectives such results can be very useful to explain the design to all the stakeholders in the
project.
ST14 N/A
ST15 Yes, if the design, financial and energy are somewhat accurate it will definitely help the decision
making process. One can tweak values around to achieve a certain design score in order to
optimize the capacity and design something that one would be more knowledgeable of how
close to the end result it would be. Architects don’t like uncertainty and if they can reduce it by
predicting how something reacts, more power to them.
ST16 I think so. But I think this is a little bit hard for use to compare different schemes and try to find
the optimist scheme.
I guess the better way to use this is make it suit for the specific requirement. For example, set a
line level for financial score, if the score exceed the line, just ignore the scheme. And then try to
find the greatest design score building and energy-efficiency building. I think it is already much
easier for us to compare the different scheme with simple score number than a lot of
characters. But there is no reason not to make it easier.
I really expect this function apply to some simulation software. In China, the simulation process
often likes this: First, must ensure the design is the same with the requirement (can't delete the
windows), add the wall insulation; second, add shading device; third, increase window
performance because good windows are really money costing. If we can apply the three factors
here, we can avoid the human comparison between different software, and the optimist
results. (Although normally, we just follow the preference of the construction company boss)
ST17 Yes. They can help me to evaluate the different options.
ST18 Yes, the feedback results tell us whether the design alteration is a viable one or not depending
on the EUI etc. If I am to build an energy efficient design, then I can model it run the energy
calculations in Revit (or other software) and see my ultimate EUI values. If they are too high
then I'll have to work out a way to bring them to a lower value. A common example of this
might be by reducing the amount of glazing on the structure.
ST19 Yes - as per my discussion with EVE i realize the importance of this process. It is a vital and even
time consuming analysis which can be extremely beneficial for large scale and complex projects.
It gives a quick guide to the architect, owner and the building analyst to find better and the best
possible solution.
ST20 I think they provide a solid base for further researching my decision process. It does take a lot
of runs to understand how they work and how to lower or raise your values, and therefore how
to improve the overall score. But I did try to avoid using the same strategies whenever I realized
that my decisions had a negative impact on the analysis.
ST21 N/A
ST22 Without a doubt, understanding the design, financial, and energy aspects of a building is hugely
useful in the decision-making process. Being able to see results without having to rebuild or
redesign an object is a drastically more efficient and time-saving means of design. Changing
various parameters of a building and seeing instant results of what those changes do is
incredibly effective in helping design the best building possible. Now I'll be able to design
buildings, try out my various ideas for what the building could be, and see the results of each of
my ideas.
ST23 The feedback is okay in that you can directly compare to the initial building and any
modifications you have made thus far. However, those three values (design, financial, and
energy) are not all equal to each other, and one may be more important than another for
whatever reason (client, etc.). It is hard to evaluate which design is better when there is no
weighting involved and when not all the options can be explored to the fullest.
ST24 N/A
ST25 Yes, definitely. Being able to compute the energy and financial outcomes of each iteration
allows the designer to put more thought into final outcomes by weighing all important design
factors at the same time rather than as the usual afterthought.
ST26 Yes, but a clearer explanation as to what this "information" means would be far more helpful! I
don't think throwing microeconomic theory at us really does it justice when the values we're
studying for efficiency are currently without meaning.
ST27 N/A
APPENDIX F PEDAGOGICAL EXPERIMENT III
F.1 PEDAGOGICAL EXPERIMENT III – SURVEY QUESTIONNAIRES
F.1.1 D1-Q1.pdf
F.1.2 D2-Q1.pdf
F.1.3 D2-Q2.pdf
F.1.4 D3-Q1.pdf
F.1.5 D3-Q2.pdf
F.1.6 D4-Q1.pdf
F.1.7 D4-Q2.pdf
F.2 PEDAGOGICAL EXPERIMENT III – HANDOUTS TO STUDENTS: D2-3_ST_DM_FROM_EEPFD.PDF
F.3 PEDAGOGICAL EXPERIMENT III – PARAMETER SETTINGS OF STUDENTS’ AAC-3 MODELS
APPENDIX G PUBLICATIONS GENERATED FROM THIS RESEARCH
Gerber, David Jason, and Shih-Hsin Eve Lin. 2012a. Designing-in performance through
parameterization, automation, and evolutionary algorithms: ‘H.D.S. BEAGLE 1.0’. Paper read at
CAADRIA 2012: Beyond Codes and Pixels, 25-28 April 2012, at Chennai, India.
Gerber, David Jason, and Shih-Hsin Eve Lin. 2012b. Synthesizing design performance: An
evolutionary approach to multidisciplinary design search. Paper read at ACADIA 2012 - Synthetic
Digital Ecologies, 18-21 October 2012, at San Francisco, California, USA.
Gerber, David Jason, and Shih-Hsin Eve Lin. 2013a. "Designing in complexity: Simulation,
integration, and multidisciplinary design optimization for architecture." Simulation. Published
online before print April 9, 2013. doi: 10.1177/0037549713482027.
Gerber, David Jason, and Shih-Hsin Eve Lin. 2013b. Geometric complexity & energy simulation:
Evolving performance driven architectural form. Paper read at CAADRIA 2013: Open Systems, 15-
18 May 2013, at Singapore.
Gerber, David Jason, Shih-Hsin Eve Lin, and Xinyue Ma. 2013. Designing in performance: A case
study of applying evolutionary energy performance feedback for design. Paper read at ACADIA
2013 - Adaptive Architecture, 24-27 October 2013, at Cambridge, Ontario, Canada.
Gerber, David Jason, Shih-Hsin Eve Lin, Bei Penny Pan, and Aslihan Senel Solmaz. 2012. Design
optioneering: Multi-disciplinary design optimization through parameterization, domain
integration and automation of a genetic algorithm. Paper read at SimAUD 2012, 26-30 March
2012, at Orlando, FL, USA.
Lin, Shih-Hsin Eve, and David Jason Gerber. 2013a. Designing-in performance: A case study of a
net zero energy school design. Paper read at PLEA 2013: Sustainable Architecture for a Renewable
Future, 10-12 September 2013, at Munich, Germany.
Lin, Shih-Hsin Eve, and David Jason Gerber. 2013b. Designing-in performance: A pedagogical
benchmark experiment of an application of multidisciplinary design optimization framework for
energy performance feedback. Paper read at 2013 eg-ice, 1-3 July 2013, at Vienna, Austria.
Lin, Shih-Hsin Eve, and David Jason Gerber. 2013c. Designing-in performance: Towards cloud
based simulation and multidisciplinary design solution space search. Paper read at SimAUD 2013,
7-10 April 2013, at San Diego, CA, USA.
Lin, Shih-Hsin Eve, and David Jason Gerber. 2013d. Evolutionary energy performance feedback for
design (EEPFD): Interaction and automation for a design exploration process framework. Paper
read at 31st eCAADe Conference: Computation and Performance, 18-20 September 2013, at Delft
University of Technology, The Netherlands.
Lin, Shih-Hsin Eve, and David Jason Gerber. 2013e. A pedagogical benchmark experiment for
application of multidisciplinary design optimization in early stage building design. Paper read at
2013 ASCE International Workshop on Computing in Civil Engineering, 23-25 June 2013, at Los
Angeles, California, USA.
Lin, Shih-Hsin Eve, and David Jason Gerber. 2014. "Designing-in performance: A framework for
evolutionary energy performance feedback in early stage design." Automation in Construction no.
38:59-73. doi: 10.1016/j.autcon.2013.10.007.
Lin, Shih-Hsin, and David Jason Gerber. 2013f. Designing-in performance: Evolutionary energy
performance feedback for early stage design. Paper read at Building Simulation 2013, 25-28
August 2013, at Chambéry, France.
Lin, Shih-Hsin, Karen Kensek, and Laura Haymond. 2010. Analytical building information modeling:
What is the gap between BIM and energy simulation tools’ performance feedback loops? Paper
read at Ecobuild 2010, 7-9 December 2010, at Washington, DC, USA.
Abstract
With the continued advancement of computational tools for building design, performance has gradually been allowed to claim a more prominent role as a driving force behind design decisions. However, only limited direct energy performance feedback is currently available to designers early in the design process, where such decision making has the highest potential impact on the overall design's energy performance. Therefore, this research aims to propose a design process framework that can provide designers with a "designing-in performance" environment, where design decisions can be influenced by energy performance feedback during the early stages of the design process.
In response to the overall aim of this research, the first objective is to identify the most potentially suitable method by investigating current and past efforts. An extensive literature review revealed that time constraints, interoperability issues between tools, and the need for expert domain knowledge are the primary obstacles faced by designers in exploring design alternatives with consideration for energy performance. Moreover, evidence suggests that the Multidisciplinary Design Optimization (MDO) methodology presents the most potential to overcome these obstacles. This determination stems from the aerospace and automobile industries' success in integrating multiple engineering domains through MDO to optimize and identify the best-fit design among various competing objectives during the design process. As a result, it is the position of this research that providing designers with a designer-oriented MDO framework during the early stages of design will allow energy performance feedback to influence their design decision-making based on their design goals, thereby resulting in higher-performing designs. However, applications of MDO in the building industry are still in their infancy, especially in relation to bridging energy performance and design form exploration during the early stages of the design process. As the applicability of this approach during the design process is yet to be fully explored, this task forms the second objective of this research.
More specifically, the second research objective is to identify the proposed framework characteristics that would assist designers during the early stage design process and enable a "designing-in performance" environment, and to validate the proposed framework against the identified criteria. To achieve this objective, this research first synthesizes the pertinent research findings in order to isolate the criteria for "designing-in performance" and identify the gaps in extant approaches that hindered their applicability to the design process. Based on these results, the research presents the theoretical structure of the proposed early stage designer-centered MDO framework, entitled Evolutionary Energy Performance Feedback for Design (EEPFD), which incorporates conceptual energy analysis and design exploration of complex geometry through an automated evolutionary search method. EEPFD is a semi-automated design exploration process, enabled by a customized genetic algorithm (GA)-based multi-objective optimization (MOO) approach, that provides energy performance feedback to assist design decision-making. In order to realize EEPFD for the purpose of validation and evaluation against the previously identified criteria, a prototype tool, H.D.S. Beagle, is developed to host the customized GA-based MOO algorithm. In H.D.S. Beagle, energy use intensity (EUI) is selected as the energy objective function; also included are spatial programming compliance (SPC) and a schematic net present value (NPV) calculation for consideration in performance tradeoff studies. A series of hypothetical cases is used to form the initial framework, as well as to obtain and evaluate the technology affordance of H.D.S. Beagle. These hypothetical cases are also used to assess whether EEPFD demonstrates the potential to meet the needs of early stage design, where rapid design exploration and accommodation of varying degrees of geometric complexity are needed. Based on these results, EEPFD can be considered suitable for further exploration in early stage design applications. Finally, the hypothetical cases are used to reaffirm the need for incorporating energy performance feedback during the early stages of the design process.
The last research objective is to evaluate the impact of the proposed framework, and of the availability of energy performance feedback, on the early stage design process. To achieve this objective, evaluation metrics are first established to provide the means and measurements by which to conduct process evaluation and comparative studies in both design profession and pedagogical case-based experiments. In the design profession case-based experiment, EEPFD is applied to a design competition open to professional design firms. In this case study, the chosen design firm utilizes three approaches in pursuing higher performance design: (1) collaboration with mechanical electrical and plumbing (MEP) consultants
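For readers unfamiliar with GA-based multi-objective optimization, the following is a minimal illustrative sketch, in Python, of the kind of Pareto-based tradeoff search the abstract describes: a population of parametric design candidates is evaluated against three competing objectives (minimize EUI, maximize SPC, maximize NPV), and non-dominated designs are carried forward as parents. This is not the H.D.S. Beagle implementation; the design parameters and the toy objective functions below are hypothetical stand-ins for the real parametric models and energy simulations.

# Illustrative sketch only: hypothetical parameters and toy objectives,
# not the actual H.D.S. Beagle code.
import random

POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 0.2

def random_design():
    # Hypothetical, normalized early-stage design parameters (0..1).
    return {"wwr": random.random(),      # window-to-wall ratio
            "depth": random.random(),    # floor-plate depth factor
            "shading": random.random()}  # shading-device factor

def evaluate(design):
    # Toy objective functions standing in for real simulation feedback:
    # EUI is minimized; SPC and NPV are maximized.
    eui = 100 + 80 * design["wwr"] - 30 * design["shading"]   # kWh/m2/yr (toy)
    spc = 1.0 - abs(design["depth"] - 0.6)                    # compliance, 0..1 (toy)
    npv = 50 * design["wwr"] - 20 * design["shading"] - 10 * design["depth"]  # toy $
    return eui, spc, npv

def dominates(a, b):
    # a dominates b if it is no worse on every objective and strictly
    # better on at least one (remember: lower EUI is better).
    no_worse = a[0] <= b[0] and a[1] >= b[1] and a[2] >= b[2]
    strictly_better = a[0] < b[0] or a[1] > b[1] or a[2] > b[2]
    return no_worse and strictly_better

def pareto_front(population):
    # Keep only designs whose objective tuple no other design dominates.
    scored = [(d, evaluate(d)) for d in population]
    return [d for d, s in scored
            if not any(dominates(s2, s) for _, s2 in scored if s2 is not s)]

def crossover(p1, p2):
    # Uniform crossover: each parameter is inherited from either parent.
    return {k: random.choice((p1[k], p2[k])) for k in p1}

def mutate(design):
    # Small random perturbation, clamped to the 0..1 parameter range.
    return {k: min(1.0, max(0.0, v + random.uniform(-0.1, 0.1)))
               if random.random() < MUTATION_RATE else v
            for k, v in design.items()}

population = [random_design() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    elite = pareto_front(population)  # non-dominated designs survive
    offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                 for _ in range(POP_SIZE - len(elite))]
    population = elite + offspring

# The surviving Pareto front is the tradeoff set handed back to the designer.
for d in pareto_front(population):
    print({k: round(v, 2) for k, v in d.items()},
          tuple(round(x, 1) for x in evaluate(d)))

Note the design choice this sketch illustrates: the search returns a Pareto front rather than a single "optimum," so the tradeoff among design, financial, and energy objectives remains a decision made by the designer, which is the stance EEPFD takes.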