SIMPLIFIED ACOUSTIC SIMULATION
RUTABAGA ACOUSTICS: A GRASSHOPPER PLUG-IN FOR RHINO

by
Maira Ahmad

A Thesis Presented to the
FACULTY OF THE USC SCHOOL OF ARCHITECTURE
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF BUILDING SCIENCE

August 2021

Copyright [2021]

ACKNOWLEDGEMENTS

I would like to thank the MBS faculty for their assistance with this thesis. I would particularly like to thank Professors Marc Schiler, Douglas Noble, and Karen Kensek for encouraging me to pursue this topic, and Jerry Christoff and Jon Swan for their support during my research.

COMMITTEE MEMBERS

Chair
Douglas E. Noble, Ph.D., FAIA
Associate Dean for Academic Affairs
School of Architecture
University of Southern California
dnoble@usc.edu

Second Committee Member
Karen M. Kensek, LEED AP BD+C
Professor of Practice in Architecture
School of Architecture
University of Southern California
kensek@usc.edu

Third Committee Member
Marc Schiler, FASES
Professor
School of Architecture
University of Southern California
marcs@usc.edu

CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
CHAPTER 1: INTRODUCTION AND CONTEXT
    1.1 OVERVIEW
        1.1.1 Basics of Sound Propagation
        1.1.2 Principles of Performance Acoustics
    1.2 ACOUSTIC SIMULATION
        1.2.1 Geometric Methods
        1.2.2 Wave-Based Method
        1.2.3 Acoustic Simulation Tools
    1.3 OUTDOOR ACOUSTIC SIMULATION
        1.3.1 Outdoor Sound Propagation
        1.3.2 Outdoor Acoustic Simulation Tools
    1.4 PROPOSED SOLUTIONS
        1.4.1 Preferred Simulation Techniques
    1.5 SUMMARY
CHAPTER 2: BACKGROUND AND LITERATURE REVIEW
    2.1 OVERVIEW
    2.2 ACOUSTIC SIMULATION
        2.2.1 Limitations of Commercial Proprietary Software
    2.3 OUTDOOR ACOUSTIC SIMULATION
        2.3.1 Outdoor Spaces Within an Urban Context
        2.3.2 Outdoor Performance Spaces
        2.3.3 Observations and Limitations
    2.4 PROPOSED SOLUTIONS
        2.4.1 Ray Tracing Equations for Acoustic Simulations
        2.4.2 Scripting Interfaces for Acoustics
        2.4.3 Plugins for Acoustic Simulation
        2.4.4 Pachyderm Acoustic Simulation
        2.4.5 Observations and Limitations
    2.5 SUMMARY
CHAPTER 3: METHODOLOGY
    3.1 OVERVIEW
    3.2 MODEL GEOMETRY
    3.3 PROGRAMMING ALGORITHM
        3.3.1 High Frequency Sound
        3.3.2 Waveguide Method for Low Frequency Sounds
    3.4 GRASSHOPPER COMPONENT
        3.4.1 Load Component into Grasshopper
        3.4.2 Component Build in Visual Studio
    3.5 ALGORITHM EXECUTION
        3.5.1 Implementation of the Image-Source Method
        3.5.2 Implementation of the Ray Tracing Method
        3.5.3 Implementation of the Waveguide Method
        3.5.4 Hybrid Methodology
    3.6 OUTPUT
        3.6.1 Auralisation
        3.6.2 Visualization
    3.7 APPLIED METHODOLOGY
    3.8 SUMMARY
CHAPTER 4: PLUG-IN DEVELOPMENT
    4.1 PROGRAMMING SCRIPT
        4.1.1 Scriptable Component Within Grasshopper
    4.2 C# SCRIPT COMPONENT LAYOUT
        4.2.1 References (Using)
        4.2.2 Utility Functions and Members
        4.2.3 Methods
        4.2.4 Classes
    4.3 COMPONENT OUTPUTS
        4.3.1 Visual Outputs
        4.3.2 Calculated Data Outputs
    4.4 SUMMARY
CHAPTER 5: DATA VALIDATION
    5.1 TESTING THE LOGIC
        5.1.1 Testing Geometric Logic and Accuracy
        5.1.2 Testing Mathematical Logic and Accuracy
        5.1.3 Validating SPL Calculations
        5.1.4 Validating Absorption Coefficients
        5.1.5 Considerations for Decibel Addition
    5.2 CASE STUDIES
        5.2.1 Indoor Case Study
        5.2.2 Outdoor Case Study
    5.3 SUMMARY
CHAPTER 6: CONCLUSIONS AND FUTURE WORK
    6.1 CONCLUSIONS
    6.2 METHODOLOGY ANALYSIS AND EVALUATION
        6.2.1 Low Frequency Sounds
        6.2.2 Visual Studio Component Build
        6.2.3 Convolution and Auralisation
    6.3 FUTURE WORK
        6.3.1 Functionality Improvements
        6.3.2 Applicability Improvements
    6.4 SUMMARY
REFERENCES
APPENDIX A: Plug-in C# Code
APPENDIX B: User Manual for Rutabaga Acoustics
ABSTRACT

Acoustic simulation software has been widely used by acousticians, engineers, and architects in the design of performance spaces (e.g., auditoriums, theaters, arenas). Primarily aimed at professionals, these simulation engines are often independent programs that are not integrated into 3D modeling software, requiring the model to be exported and imported between platforms. To allow a smooth workflow that incorporates acoustic simulation, designers have created plug-ins for 3D modeling software that visualize results within the model, making them easy to understand and update as the model is altered. Despite this, there is still a lack of acoustic simulation tools that are easy to use.

To integrate outdoor acoustic considerations seamlessly into the design workflow, a simulation tool was created for Rhino's Grasshopper. Rhino was chosen after comparing five 3D modeling programs on their operating system compatibility and plug-in architecture; it displayed the highest capacity for third-party plug-in integration and geometric model complexity, and the availability of Grasshopper contributed to its selection. Within Grasshopper, an acoustic simulation component called Rutabaga Acoustics was created. Rutabaga applies the image-source and ray tracing algorithms for the early and late reflections, respectively, of high frequency sounds. Together, these algorithms consider the texture of the surrounding surfaces, calculate for their varying absorption coefficients, and account for the energy lost to the open atmosphere, which makes Rutabaga applicable to outdoor spaces. The final calculations provide an SPL-time curve, a T60 value, and the collective Sound Power Level at the receiver.
Since providing legible results through the Grasshopper and Rhino interfaces was one of the main objectives of Rutabaga, the principal output of the plug-in development was a graphical visualization within the Rhino 3D model displaying the fall-off of propagated sound. To validate the results, Rutabaga was tested using the 3D models of two sites (an enclosed hallway and a central public piazza), and the simulated Sound Pressure Levels were compared to previous research results. A comparative analysis showed that Rutabaga visualizes propagated sound rays correctly and that the calculated decibel values differed negligibly.

KEY WORDS: Acoustic simulation, Grasshopper plug-in, exterior acoustics, Sound Pressure Level, high frequency sound, outdoor simulation.

Research Objectives
• Determine the shortcomings of existing acoustic simulation software
• Design an outdoor acoustic simulation plug-in component for Rhino Grasshopper
• Test the component's simulation on an outdoor site and validate the results

CHAPTER 1: INTRODUCTION AND CONTEXT

Chapter 1 introduces the concepts of acoustics and their simulation tools and methods. Some commonly used terms are defined, and the basics of sound paths and performance acoustics are illustrated and elaborated. Some currently utilized acoustic simulation methods are presented. The difference between indoor and outdoor acoustic simulation is highlighted, with an overview of the available simulation tools for outdoor spaces and how they are lacking in comparison to indoor simulation tools. The chapter concludes with proposed solutions to compensate for the lack of outdoor acoustic simulation tools, with emphasis on the integration of a plug-in within 3D modeling software.

1.1 OVERVIEW

Architectural acoustics is the science of achieving appropriate sound qualities within a building and is a branch of acoustical engineering (Rindel, 2000).
For indoor spaces, acoustics can be measured through the reflections and reverberations and the absorption coefficients of the enclosing surfaces. Normally, within an enclosed space, the pressure is homogeneous and continuous throughout the space and over time. Indoor spaces also tend to have flat surfaces that make calculations easier.

To measure the acoustic performance of a space, an impulse response can be graphed. An impulse response (IR) is measured by exciting the space with a very loud but short sound that contains all frequencies. An impulse (e.g. a balloon pop or a guitar string being plucked) emits a burst of energy into a space that fades quickly. This impulse is informative of the reverberant properties of a space (Soundassured, 2020). The impulse response technique is useful as it captures almost all the physical information of a linear system (Tachibana et al., 1991). It is predominantly used for indoor spaces. The data acquired through the IR technique includes the direct sound, the initial time delay gap, the early reflections, and the reverberant tail. An impulse response represents, graphically, the direct sound, early reflections, and decay rate of the amplitude over time (Figure 1.1, Table 1.1).

Figure 1.1: A theoretical room impulse response (Välimäki et al. 2012).

Table 1.1: Subjective vs Objective Parameters (Schiler, 2019)

For indoor acoustics, simulation engines require the geometry of a space and the sound source and receiver locations, along with the absorption coefficients of the surrounding surfaces (Honeycutt, 2012). These are processed to simulate an impulse response, which is then convolved to create auralisation outputs that allow the user to listen to a sound through headphones and perceive what it would sound like at different locations within their 3D model. In outdoor spaces, however, pressure and humidity levels fluctuate, and the presence of natural fixtures results in complicated reflections and reverberation times.
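The reverberant tail of a measured or simulated impulse response is commonly reduced to a single T60 figure by Schroeder backward integration. The following is an illustrative Python sketch of that standard procedure, not code from the thesis; the synthetic exponential decay stands in for a real room response.

```python
import numpy as np

def rt60_from_ir(ir, fs, db_start=-5.0, db_end=-25.0):
    """Estimate T60 from an impulse response by Schroeder backward
    integration: fit the -5 to -25 dB slope of the energy decay curve,
    then extrapolate that rate to a full 60 dB decay."""
    energy = np.asarray(ir, float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]        # energy remaining after each instant
    edc_db = 10.0 * np.log10(edc / edc[0])     # normalized to 0 dB at t = 0
    t = np.arange(len(energy)) / fs
    mask = (edc_db <= db_start) & (edc_db >= db_end)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate, dB per second
    return -60.0 / slope

# Synthetic exponential decay with a known T60 of 1.2 s (no noise floor):
# the amplitude envelope drops the level by 60 dB over 1.2 seconds.
fs = 8000
t = np.arange(int(1.2 * fs)) / fs
ir = np.exp(-(3.0 * np.log(10.0) / 1.2) * t)
t60_est = rt60_from_ir(ir, fs)
```

On this noise-free input the estimate recovers the 1.2 s decay almost exactly; real measurements require choosing the fit range above the noise floor.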
Given the complexity of acoustic simulation, two basic families of methods can be employed: wave-based methods and geometric methods (Wayverb, 2019). The geometric method of acoustic simulation considers sound propagation as rays traveling from the sound source to the receiver; the sound particles are often considered collectively in this method. The wave-based method considers sound traveling in waves and is capable of simulating low frequency sounds, which the geometric method fails to do. The basics of sound propagation and the principles of performance acoustics are introduced and discussed in this section.

1.1.1 Basics of Sound Propagation

Acoustics in architecture are intended to accommodate a good distribution of sound in large spaces and appropriate speech intelligibility and clarity, and to suppress noise to make spaces pleasant and productive. In the design of performance spaces, acoustics are required to enhance the performance by ensuring the sound is legible to the audience and outside noise is cancelled out to prevent distractions.

Sound, when transmitted, follows direct and indirect paths from the source to the receiver. The behavior of a sound wave can be understood through a better understanding of transmission, absorption, and reflection (Colette, 2015).

Transmission, in the context of sound, is the movement of sound energy from one place to another through a medium (e.g. air). Sound is conducted more effectively through solids than through air (The Soundry: The Physics of Sound, n.d.).

Absorption is the measure of the amount of energy removed from the sound wave as it passes through the thickness of a material. Any sound that is absorbed by a surface is not reflected towards the receiver (Shrivastava, 2018).

Reflection of sound waves is the reaction of the sound projected from a source when it bounces off a surface (Shrivastava, 2018).
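The reflection behavior just described is exactly what geometric simulation methods compute: an incident ray direction d reflects about the surface normal n as r = d - 2(d·n)n. A minimal illustrative sketch in Python (not code from Rutabaga, which is written in C#):

```python
import numpy as np

def reflect(direction, normal):
    """Specular reflection of a ray direction about a surface normal:
    r = d - 2 (d . n) n, with n normalized first."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    d = np.asarray(direction, float)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling downward at 45 degrees bounces off a horizontal floor
# (normal pointing up): the vertical component flips, the horizontal stays.
incoming = np.array([1.0, 0.0, -1.0])
outgoing = reflect(incoming, np.array([0.0, 0.0, 1.0]))
```

A ray tracer applies this reflection at every surface hit, scaling the carried energy by (1 - absorption coefficient) of the surface each time.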
Sound travels outward from the source as a spherical wavefront, which reflects off surfaces upon reaching them. This produces reflections and reverberations (Digital Sound and Music Chapter 4, n.d.). Reflections of sound waves off surfaces result in early reflections and late reflections, depending on the path. The buildup and overlapping of these reflections result in reverberation. A reverberation is created when a sound is reflected off one or more surfaces, resulting in numerous reflections; these build up and then decay as sound absorption occurs within the space. The reverberation time of a sound wave depends on the volume of the space, the available surface area, and the absorption coefficients of the surfaces (Larson Davis, 2019) (Figure 1.2).

Figure 1.2: Sound reflections

1.1.2 Principles of Performance Acoustics

Performance spaces can be categorized into arenas, theaters, amphitheaters, auditoriums, and concert halls. For these spaces, the configuration of the seating layout plays an important role since it defines where the sound source and receivers will be located (Guyer, 2014). The audience members' listening experience is affected by the reflections of sound from the surrounding surfaces. The design goals of a successful performance space include, but are not limited to, absorbing or cancelling out reflections that are excessively delayed and reinforcing those reflections that arrive at the receiver at roughly the same time as the direct sound.

Two characteristics that are important when evaluating a music venue are intimacy and aliveness. Intimacy, in acoustics, is how close the audience feels to the performer (Thompson, 2007). The audience feels more distant from the source of the music when the initial time delay is large. Initial time delay is defined as the time elapsed between perceiving the direct sound and the first noticeable reflection.
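The dependence of reverberation time on room volume, surface area, and absorption noted above is classically captured by Sabine's formula, T60 = 0.161 V / Σ(Sᵢαᵢ). A short Python sketch for illustration; the room dimensions and absorption coefficients below are hypothetical values, not data from the thesis:

```python
def sabine_t60(volume_m3, surfaces):
    """Sabine reverberation time: T60 = 0.161 * V / A, where A is the
    total absorption, the sum of (surface area * absorption coefficient)."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical 10 m x 8 m x 4 m hall: plaster walls, concrete floor,
# and an absorptive tile ceiling (illustrative coefficients).
surfaces = [
    (2 * (10 * 4) + 2 * (8 * 4), 0.04),  # walls, alpha = 0.04
    (10 * 8, 0.02),                      # floor, alpha = 0.02
    (10 * 8, 0.60),                      # ceiling, alpha = 0.60
]
t60 = sabine_t60(10 * 8 * 4, surfaces)   # roughly 0.93 s for this room
```

Swapping the ceiling coefficient shows the sensitivity directly: halving its absorption nearly doubles the computed reverberation time for this mostly hard room.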
The two factors that must be manipulated for successful acoustic design are initial time delay and reverberation time. Therefore, a good performance space manipulates the shape and size of the stage and audience area to redirect the sound reflections towards the listener as early as possible. ISO 3382 states that "While reverberation time continues to be regarded as a significant parameter, there is reasonable agreement that other types of measurements, such as relative sound pressure levels, early/late energy ratios, lateral energy fractions, interaural cross-correlation functions and background noise levels, are needed for a more complete evaluation of the acoustical quality of rooms." (ISO, 2012)

1.2 ACOUSTIC SIMULATION

Commonly used acoustic simulation methods are divided into two approaches: geometric acoustic (GA) methods and wave-based methods (Wayverb, 2019). These are further categorized by the algorithms that are applied (Table 1.2).

Table 1.2: An overview of different acoustic simulation methods, grouped by category.

1.2.1 Geometric Methods

Geometric methods consider sound paths as sound particles moving along sound rays. These methods are often applied to high frequencies and large structures. Geometric methods of acoustic simulation can be further divided into two categories, namely stochastic and deterministic.

1.2.1.1 Stochastic Methods

Stochastic methods function by randomly and repeatedly sampling the space, recording only the results which fulfill the specified criteria and discarding the rest. This speeds up the process and reduces the required computing power. To increase the accuracy, the user can simply increase the number of samples (Rindel, 2000). Stochastic methods can be executed through the ray tracing and beam tracing methods by using reflection paths.

Ray Tracing Method

The Ray Tracing method is classified as a geometric method for studying rays and their travel paths.
This method uses particles in large quantities that are emitted from a source in multiple directions. By tracing the paths of these "particles" as they bounce around a room or a space, the Ray Tracing method takes into account the energy lost at each interaction with the spatial geometry and calculates the energy remaining in the paths as they reach their final destination. It considers the source and receiver locations, the geometry, and the absorption coefficient of each surface. The main advantage of Ray Tracing over other methods is its accurate and realistic rendering of reflections, refractions, and shadows (Abi-Chahla, 2009).

For modeling sound propagation, Ray Tracing techniques are often the simplest and most intuitive methods. Ray Tracing is used to calculate the trajectories of the paths of the sound rays from the source to the receiver. Ray Tracing simplifies the acoustic calculations for a 3D space by considering sound particles to be travelling along sound rays (Figure 1.3).

Figure 1.3: Ray Tracing for Sound Propagation (Suomela, 2012)

The Ray Tracing method uses particles in large numbers, emitted from a source point, travelling in various directions. These particles lose energy upon every reflection, according to the surface's absorption coefficient. Using Snell's Law, each particle's new direction is determined. The receiver's position is critical, and an area is defined around it to catch the particles moving past or around it. The minimum number of rays is calculated with the formula

N ≥ 8πc²t²/A (Equation 1.1)

where
A = surface area
t = time travelled
c = speed of sound in air

A typical room normally contains many rays. To calculate the point response, the rays (containing the sound particles) are considered as circular cones with special density functions that compensate for any overlap.

Beam Tracing Method

The Beam Tracing method is another widely used acoustic computation.
It works similarly to the Ray Tracing method, but it repeatedly checks the traced beams throughout the scene or model. Each beam is checked for its intersection with scene polygons, ensuring subsequent processing of partially hidden polygons (Abi-Chahla, 2009). Upon intersection of the beam and the scene polygon, the beam is clipped to remove the shadow region. The Beam Tracing method then constructs sub-beams to represent reflections and refractions. It offers the freedom of constructing more sub-beams to specify different acoustic phenomena, so this method provides more flexibility in its input parameters and more accuracy in its calculations. It reduces the number of images that need to be created through the Image Source method, because each beam represents the region for which the corresponding virtual source is visible (Abi-Chahla, 2009). However, the complexity of the calculations of beams intersecting with scene polygons results in high computing power usage.

1.2.1.2 Deterministic Methods

Image-Source Method

The Image Source method employs the concept of inserting new sound sources at the location of reflections. These new sound sources have the same acoustical effect as the source, minus the absorbed amount. After this process, the reflections that reach the listener are summed up to build the acoustical impression at the listener (Figure 1.4, blue dot).

Figure 1.4: One reflection from the Image Source Method

The Image Source method is advantageous since it is extremely accurate; however, for it to work efficiently, the room must be a rectangular box. Therefore, when considered for acoustic simulations, the Image Source method can be employed for the early reflections.

1.2.2 Wave-Based Method

Wave models for sound propagation provide efficient methods for solving the wave equation, through processes like the Finite Element Method (FEM) and the Boundary Element Method (BEM).
The advantage that wave-based methods have over geometric methods is their consideration of diffraction and interference. Wave-based methods account for the ability of sound to bend around corners; a geometric method, or any form of ray tracing, does not bend rays around corners but instead reflects them about the surface normal at the point of incidence. Wave-based methods also consider the ability of sound to ignore small objects in its path, which then cast very little "shadow." This is because the lower frequencies have the longer wavelengths and, unlike light waves, these are often longer than objects at the building scale. That is why the wave-based method is better for low frequency (and thus long wavelength) sound. Wave-based methods can be further categorized into Element Methods and the Finite-Difference Time-Domain (FDTD) Method.

1.2.2.1 Element Methods

For element methods, the FEM and BEM can be implemented. The FEM uses iterative numerical methods for finding the natural resonances of a bounded enclosure. A grid of interconnected nodes is used to simulate the air pressure within an enclosed space. The connections between the nodes in the grid yield a set of simultaneous equations, in which each node affects those closest to it. This process can be used to calculate pressure values. Unfortunately, the FEM is limited to bounded spaces (Wayverb, 2019). The BEM, on the other hand, works similarly to the FEM, but instead of modeling nodes within an enclosed space, it models the grid of interconnected nodes on the surfaces, and therefore can be applied to unbounded spaces.

Figure 1.5: Dividing a space into voxels for wave-based calculations

1.2.2.2 FDTD Methods

The FDTD method shares certain qualities with the element methods, including the fact that computing power and expense increase with the complexity of the applied grid.
The method works by dividing the space into a regular grid and calculating certain parameters (such as pressure or particle velocity) at each grid node over time (Cizek, 2007). FDTD has a "parallel" nature, which means that each node on the grid can be updated without the need for an overall synchronization (Schneider, n.d.). This indicates that changes can be made without affecting the overall simulation time, while still providing correctly calculated results, which proves to be a major advantage (Savioja, 2010). The main disadvantage of the FDTD method is that it is often susceptible to numerical dispersion, in which waves travel at slightly different speeds depending on their frequency and direction. This results in a high computation load and often slows down the simulation (Savioja, 2010). There are methods to overcome this high computation load, such as using different mesh topologies, post-processing the output, and oversampling the mesh. FDTD is the preferred method within the wave-based category, due to its straightforward nature, parallelism, and intuitive behavior. Its ability to process pressure parameters makes it a significant option for acoustic simulation in outdoor spaces, which often have differential humidity in the environment.

1.2.3 Acoustic Simulation Tools

There is a large amount of software available for acoustic simulation, ranging from commercial software to free open-source programs. The commercial software often uses geometric methods, which, as concluded earlier, are not the most accurate. They often disregard low frequencies and do not take into consideration many of the air pressure and humidity parameters when calculating the IRs for spaces. Of the available acoustic tools, only a few utilize wave-based methods for their calculations. Predominantly, the available software is intended for indoor acoustic simulation (Table 1.3).
Table 1.3: Overview of available acoustic simulation software

As mentioned earlier, the commercial software available to acousticians and designers often produces complicated results that are indecipherable to students and designers who do not have extensive knowledge of acoustics. Commercially available software is also expensive, and small firms, individual architects, and students often cannot afford it. These limitations often deter users from taking acoustics into consideration during their initial design process. The free open-source plugins that are available are often more approachable and can be run in 3D modeling platforms that architects and students are already familiar with, though the results they provide can still be complicated. If made easily available and legible, with simple user interfaces, acoustic simulation through plug-ins could be more widely used by architects, designers and students and integrated into their workflow. However, such plug-ins might not be updated if the author or community of users loses interest.

1.3. OUTDOOR ACOUSTIC SIMULATION

When applying acoustic simulation techniques to predict sound propagation in an outdoor space, there are external factors that must be considered. For example, outdoor spaces tend to have rough irregular surfaces, vegetation, and varying weather conditions. Background noise from nearby roads can affect the acoustic performance of an outdoor space, along with wind speed, air humidity and pressure. These factors influence the propagation and reflection of sound. Thus, the question arises whether the acoustic simulation techniques discussed in Section 1.2 can be applied to outdoor environments to achieve accurate results.

1.3.1. Outdoor Sound Propagation

Outdoor sound propagation can be complicated by multiple factors, including refraction, atmospheric turbulence, differential humidity and irregular terrain.
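One of these complications, the interaction between a direct path and a ground-reflected path (discussed with Figure 1.6 in this section), can be made concrete with a short calculation: treating the ground as a mirror (an image source) gives the length of the reflected path, and from it the arrival delay and level difference relative to the direct path. This Python sketch is illustrative only; the ground absorption coefficient and the simple spherical-spreading loss model are assumed values, not data from the cited studies.

```python
import math

def ground_paths(src_h, rec_h, dist, alpha_ground=0.1, c=343.0):
    """Compare the direct path with the ground-reflected path by
    mirroring the source through the ground plane (image source).
    Heights and horizontal distance in metres."""
    direct = math.hypot(dist, src_h - rec_h)
    reflected = math.hypot(dist, src_h + rec_h)      # via the image source
    delay_ms = (reflected - direct) / c * 1000.0     # extra travel time
    # level of each path relative to the source at 1 m (spherical spreading);
    # the reflected path is further reduced by the ground's absorption
    l_direct = -20.0 * math.log10(direct)
    l_reflected = (-20.0 * math.log10(reflected)
                   + 10.0 * math.log10(1.0 - alpha_ground))
    return direct, reflected, delay_ms, l_direct, l_reflected
```

A hard (reflective) ground, with a low absorption coefficient, leaves the reflected path nearly as strong as the direct one, so the two can reinforce or cancel at the receiver depending on phase; a soft, absorbent ground weakens the reflected contribution.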
The scattering of sound rays by turbulence results in multiplied rays with shorter paths. Sound rays can also be warped by humidity differences in the atmosphere on site, resulting in curved paths that must be calculated and accounted for (Figure 1.6, path 3). Amplified scattering of sound waves occurs in outdoor performance spaces due to tall vegetation, rigid rock surfaces and soil. Since each surface has a different absorption coefficient, there are greater variations between early and late reflections. For example, the sound reflected off the ground (Figure 1.6, path 2) may arrive at the receiver either before or after the direct sound (Figure 1.6, path 1) and may either increase or decrease the sound level at the receiver. This depends on whether the ground is hard or soft, which dictates whether it is reflective or absorbent and affects the magnitude and phase of the reflected path (Figure 1.6).

Figure 1.6: Sound propagation paths outdoors (Brown, 2007)

Classical reverberation theory refers to a reference room volume and is clearly different from reverberation time behavior in an open-air performance space. In one case study, a listening test with monaural and binaural auralisation of an open-air space was conducted, and it was concluded that unroofed spaces affect the perceived reverberance during the decay process, as do the spatial characteristics arising from the unorganized distribution of reflections (Elena, 2018). The conventional RT in ISO 3382 only deals with the sound energy decay rate and is not suitable for evaluating reverberation in an unroofed space with irregular surfaces. Therefore, more care and insight are needed when adopting indoor acoustic measurement standards for open-air theaters.

1.3.2. Outdoor Acoustic Simulation Tools

Because outdoor acoustic simulation is more complicated than indoor acoustic simulation, it requires a more specific and accurate simulation technique.
To account for multiple irregular surfaces and low-frequency sound, outdoor acoustic simulation might benefit from a hybrid of geometric and wave-based simulation methods. Currently, this hybrid approach is not applied within any acoustic software except an open-source simulation engine called Wayverb (Wayverb, 2019). Wayverb, however, is designed for indoor spaces. At present, outdoor acoustic simulation is predominantly used for noise calculation in urban spaces, through software applications such as NoizCalc and EASE. However, NoizCalc simply uses the geographic location of the space, the layout, and the environmental factors for its calculation. Without utilizing simulation methods, NoizCalc cannot be considered an accurate predictor of outdoor acoustics. There is much work and improvement to be done in the field of outdoor acoustic simulation and its incorporation into the workflow of architects and designers.

1.4. PROPOSED SOLUTIONS

Outdoor acoustic simulation is slightly more complex than indoor acoustics due to the complexity of the surrounding surfaces and their rugged textures. More sound barriers exist outdoors, in nature, that disrupt the smooth propagation of sound rays, so multiple absorption coefficients and scattering coefficients must be considered. The lack of a ceiling or roof makes the calculations more complex: unless there are several nearly vertical surfaces, reverberation is much less likely, and such surfaces therefore become much more important. A 3D modeling program must be chosen that can handle complex geometry and has plug-in capabilities.

1.4.1. Preferred Simulation Techniques

A possible solution for outdoor acoustic simulation is a plug-in that can be integrated into a 3D modeling program. This would not only speed up the acoustic simulation workflow but also encourage the use of acoustic simulation in the early design process.
Using open-source programs to study and/or re-tool an existing acoustic simulation engine could make it easier to create a new tool. Grasshopper within Rhino allows the building of a custom component that can run scripts on the existing 3D model in Rhino, and Rhino allows easy integration of plug-ins for complex geometric models. The outputs of these scripts, where applicable, can be displayed within the 3D model. Grasshopper has parametric capabilities, supports third-party plug-ins, and can use .NET programming languages.

1.4.1.1. Grasshopper for Rhino

Grasshopper is a plug-in for the 3D modeling program Rhinoceros. It provides a visual interface for building algorithms and scripts that create and edit geometry within Rhino. The user interface of Grasshopper is easy to interpret (Figure 1.7).

Figure 1.7: Grasshopper User Interface (Modelab, 2015)

The component palettes (Figure 1.7, item 4) provide freedom for customization. Here, a custom component can be placed in its own folder, with a custom name, and can also be categorized according to its function. Being able to drag the component onto the canvas makes it easy to implement components and to use them in conjunction with others.

1.4.1.2. Custom Components in Visual Studio

Microsoft Visual Studio is an integrated development environment used to develop computer programs, as well as websites, web apps, web services and mobile apps (Lynda, 2020). Visual Studio allows easy development of plug-ins and apps; for this thesis, it is being used for its C# capabilities. Scripting components allow Grasshopper to bypass its limitation of not being able to run recursive functions. The Grasshopper extension within Visual Studio allows algorithms for Grasshopper to be easily built, edited and run (Figure 1.8). The syntax of Visual Studio (Visual Basic) is similar to RhinoScript.
Figure 1.8: Visual Studio Grasshopper Component Build (Vestartas, 2016)

1.4.1.3. C# Programming

C# is designed for the Common Language Infrastructure (CLI), which consists of the executable code and runtime environment that allow the use of various high-level languages on different computer platforms and architectures. It is a modern, general-purpose, object-oriented programming language (Samual, 2018). For Grasshopper within Rhino, C# allows more freedom and control over execution. Used to build a Grasshopper component, C# allows it to work with other nodes and components. For the large amounts of data collected from the 3D model and its surfaces, C# is a good choice of programming language because it processes large amounts of data at a faster rate, owing to how closely compiled C# code maps to processor instructions. C# has several further advantages: it is object-oriented and cross-platform, offers backward compatibility, and provides good integration and interoperability between different software platforms; for example, a C# script can be implemented with Rhino, Revit and Sketchup (Microsoft documentation, 2015).

1.5. SUMMARY

Acoustic simulation software plays a significant role in the design of performance spaces. During the initial design process of indoor performance spaces, it allows architects and engineers to predetermine the sound quality within a space by predicting sound reflection and scattering, the reverberation and absorption of the surfaces, and by designing the layout to provide listeners with a high-quality sound experience. Researchers use these predictive methods to create acoustics that promote the wellbeing and efficiency of humans (e.g. in offices and work environments) and to optimize the acoustic quality of performance spaces such as auditoriums, theaters and concert halls. Unfortunately, software that allows the simulation of accurate sound propagation is not widely available to architects, designers, and students due to its high cost.
The computing power required to run an acoustic simulation is high when accuracy is required. Using most simulation software requires extensive acoustics knowledge, and the results produced can be difficult to decipher. These limitations often deter architects and designers from seriously considering acoustics in their designs. To overcome these obstacles, some open-source plugins have been developed that allow students and architects to run acoustic simulations within their 3D modeling software, while producing comprehensible results and auralisation. These acoustic simulation tools are, however, limited to enclosed indoor spaces. For acoustic software to run simulations in outdoor spaces, higher processing power and additional time would be required. These limitations exist due to the high irregularity of surfaces and the large number of early and late reflections of sound. Outdoor spaces also have differential humidity and pressure parameters that need to be considered for accurate sound prediction. With the increasing utilization of outdoor performance spaces, especially considering the social distancing requirements of the COVID-19 era, designers need to be able to simulate acoustic propagation in outdoor spaces correctly and swiftly. By using the algorithms adopted by free, open-source programs, a plugin for Grasshopper is proposed that adapts room acoustic simulation techniques to outdoor spaces. Rather than proposing a new simulation method, this plugin implements an amalgamation of methods that collectively provide reasonably accurate results that are presented clearly.

CHAPTER 2: BACKGROUND AND LITERATURE REVIEW

Chapter 2 provides summaries of papers that review acoustic simulations. Due to the complexity and accuracy problems of acoustic simulation software, researchers, designers and programmers have delved into the algorithms employed by the available software.
The accuracy of their respective outputs in relation to their usage, and the transferability of room acoustics methods to outdoor performance spaces, are documented in the case studies below. Within these case studies, existing solutions for outdoor acoustic simulation are discussed. The chapter concludes with a consensus on the most appropriate methods and algorithms that can be applied to this thesis project.

2.1. OVERVIEW

Performance spaces have specific acoustic requirements, as opposed to other indoor spaces. To simulate the acoustic conditions within a space before it is built, designers employ acoustic simulation software such as ODEON, EASE, and CATT-Acoustics. These simulation engines have been documented and their inputs, algorithms and outputs described in detail. An analysis of this documentation indicates that they are primarily used for indoor, enclosed spaces. In some instances, architects have attempted to employ these simulation engines for purposes other than performance space design. In one scenario, ODEON has been used to calculate the noise within open spaces in a housing complex, and in another, ODEON and CATT-Acoustics have both been used to simulate the sound in an ancient open-air theatre. These experiments were conducted to test whether the concepts of indoor acoustics suffice for predicting sound propagation outdoors. In both case studies considered in this section, results from on-site recorded impulse responses differ from those simulated. Further research has also been conducted on the efforts made by programmers to build plug-ins for 3D modeling software, after identifying that the lack of integration between existing acoustic simulation software and 3D modeling software disrupts the design workflow. The research conducted on these plug-ins encompasses a variety of scripting languages, modeling software compatibilities, applied algorithms and levels of success.
An analysis of the available research indicates that the primary limitations of acoustic simulation are threefold:

1. There is a lack of acoustic simulation engines for outdoor spaces.
2. Existing acoustic simulation software is often not integrated within 3D modeling software, causing an interruption in the workflow of designers.
3. The results collected from acoustic simulation software are often complicated, alienating users who have minimal acoustics knowledge.

2.2. ACOUSTIC SIMULATION

Commonly used acoustic simulation software primarily employs geometric methods. ODEON, CATT-Acoustics and EASE are primarily utilized by acousticians, engineers and architects in their initial stages of performance space design. These performance spaces are indoor, enclosed spaces, simulated using ray tracing and occasionally the image-source method. For acoustic simulation in rooms and performance spaces, the inputs cover three basic types: the geometry, the sound file, and the source and receiver positions. The model geometry to be imported can be either a 2D or a 3D file, and some software allows the user to create the model within the application. The resultant geometry consists of an enclosed space with a minimum of six faces, increasing with the complexity of the space. Ray tracing hybrid methods use the absorption coefficients of the six surface materials to predict the acoustic quality within the model. For indoor acoustic simulation, these engines perform well. The outputs vary from impulse response graphs to visuals and auralisation. One of the shortcomings of these software programs is their lack of intuitive, seamless integration with 3D modeling software. Another is the complexity of the output results: besides auralisation, the average designer would require prior knowledge of acoustics to interpret the results produced by these engines.
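A prediction that underlies many of these engines is Sabine's reverberation-time formula, which needs only the room volume and the absorption of each surface. The Python sketch below shows the standard textbook form; the example room dimensions and absorption coefficients are invented for illustration, not taken from any of the reviewed software.

```python
def sabine_rt60(volume, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / A, where A is the total
    absorption, i.e. each surface's area times its absorption coefficient,
    summed over all surfaces. Volume in m^3, areas in m^2."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume / total_absorption

# a 10 m x 8 m x 3 m room: six surfaces as (area, absorption coefficient)
room = [(80.0, 0.3), (80.0, 0.3),   # floor and ceiling
        (30.0, 0.1), (30.0, 0.1),   # long walls
        (24.0, 0.1), (24.0, 0.1)]   # short walls
rt = sabine_rt60(10.0 * 8.0 * 3.0, room)   # roughly 0.66 s
```

Sabine's estimate assumes a diffuse field in an enclosed volume, which is exactly the assumption that breaks down in the unroofed spaces discussed later in this chapter.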
The following research studies the inputs, methodology and outputs of these simulation engines, with a focus on their limitations.

2.2.1. Limitations of Commercial Proprietary Software

The evolution of predictive acoustic software has been researched in detail to document the progress and inventiveness in acoustic simulation techniques. The uses of common acoustic simulation software, and the accuracy with which they measure sound quality in spaces, are documented. Many of the algorithms within existing simulation software revisit Sabine's equation for reverberation time, linking it to the paths followed by sound waves between their reflection and absorption within room boundaries, impulse responses, and the auralisation techniques used by modern computers (e.g. ray tracing, convolution reverberation, etc.). The three most commonly used acoustic simulation packages (ODEON, CATT-Acoustics and EASE) produce complicated results and primarily employ geometric methods of sound simulation (Honeycutt, 2014).

2.2.1.1. ODEON Room Acoustics Software

ODEON is primarily used in architectural acoustics, and due to its ability to simulate sound transmission through semi-transparent surfaces, it can also be utilized for evaluation of industrial noise and noise isolation. All versions of the software support ISO 3382. For its input, ODEON accepts 2D surfaces on a selected plane and can extrude them into 3D geometry directly recognized by ODEON. It can import model geometry from Sketchup, .3ds and .dxf files. The ODEON editor also allows users to build the geometric model within the software (Koutsouris et al. 2018). Other inputs include:

• Material for each surface, chosen from the Material List screen
• Source and receiver positions, using X, Y and Z coordinates
• Sound files (e.g.
normal speech, musical instruments)

Once the geometry has been entered along with the assigned materials and the source and listener positions, the model is ready for analysis. ODEON then calculates a variety of acoustical parameters, including reverberation time, EDT, decay curves, reflectograms, echo reflections, and visualizations and auralisation of the results. For the analyses, ODEON provides a Room Parameters List that can be modified (Figure 2.1); desired parameters can also be created.

Figure 2.1: Room Parameter List dialogue box, displaying the equations used for the parameters (Honeycutt, 2014)

ODEON's calculation methods are limited to three kinds: single-point response, multi-point response and grid response. Both single- and multi-point response calculations provide results for specific listener positions. The grid response calculation allows the user to define a grid, upon which mapping data is provided. These three calculations can be run separately or in batches. ODEON automatically incorporates scattering, according to the reflection-based scattering method. The simulation will always contain a reasonable degree of scattering, depending on the distance between the source and the scattering reflector. ODEON uses an algorithm called receiver-independent ray tracing (ray radiosity), which transmits rays all over the room; secondary sources are created at the reflection points of these rays (Honeycutt, 2014). The representations provided by ODEON's acoustic calculations include values of each energy parameter at each receiver, grid response and parameter mapping, ray path visualizations, and the progression of wave fronts through a room (Figure 2.2).

Figure 2.2: Visualizations of ODEON's calculations (Honeycutt, 2014)

2.2.1.2. CATT-Acoustics

Initially, Computer-Aided Theater Technique (CATT) focused on software for theater décor design and lighting. CATT-Lighting was released first in 1986, followed by CATT-Décor in 1987.
CATT-Acoustic followed in 1988. Nowadays it is used largely by acoustic engineers and for architectural acoustics (Honeycutt, 2014). For its input, CATT-Acoustic uses three text-file types:

• One or more files for the geometry (as .GEO files); alternatively, model geometry can be imported as a .cad, .dxf or Sketchup file
• One for the sources (as a .LOC file), containing information about the type, location, aim, delay, and gain of the sound sources
• One for the receiver positions (also as a .LOC file)

CATT-Acoustic was initially based on an image-source method, as opposed to CATT-Lighting's ray tracing method. However, with the evolution of computing capabilities, hybrid approaches were adopted for CATT-Acoustic to retain the geometrical accuracy of the image-source method and to incorporate the computational efficiency of the ray tracing method. In this hybrid approach, the image-source method is used for early reflections, and the algorithm then switches to ray tracing for late reflections (Figure 2.3).

Figure 2.3: Visualization of CATT-Acoustic's calculations (Honeycutt, 2014)

CATT-Acoustic v. 7.0 introduced Random Tail-Corrected cone tracing (RTC). This method compensates for the late reflection loss associated with cone tracing (Honeycutt, 2014). Later, in v. 8.0, CATT-Acoustic released a new analysis engine called TUCT to be used alongside RTC. TUCT was meant to address RTC's shortcomings: it excels in auralisation of large venues with high absorption, open spaces, and rooms with flutter echo problems. TUCT splits the emitted rays into multiple diffuse sub-rays that continue propagating. Though this kind of algorithm is time consuming, Dalenbäck presented a method that provides reasonable calculation times and also removes the need for post-processing (CATT TUCT Overview, n.d.).
With TUCT, the effects on the Speech Transmission Index (STI) of changing the background noise, overall level and EQ, and STI type can be studied interactively, including the effect on map statistics (Adrian James Acoustics, 2016).

2.2.1.3. EASE – Enhanced Acoustic Simulator for Engineers

The EASE software suite is primarily used in professional practice, designed for system designers and consultants. It provides realistic simulations of venue acoustics along with sound system performance assessment and verification. Model geometry can be imported as .dxf or Sketchup files, and 3D CAD models can also be built within the software. Materials can be selected from a material properties file provided with EASE, which contains over 700 materials; users can also create their own materials containing absorption and scattering coefficients. Loudspeaker data is available from loudspeaker manufacturers, already in the EASE format for easy use (Adrian James Acoustics, 2016). Other inputs include the following:

• Vertices
• Faces
• Sources
• Listener seats
• Audience areas
• Simple loudspeakers or multi-way loudspeakers (EASE 4.3. Tutorial, n.d.)

EASE v. 4.4 uses hybrid ray tracing techniques for accuracy and computational efficiency. Ray tracing tools within EASE can help identify which faces are involved in creating unwanted echoes (Honeycutt, 2014). By using the Sabine method or the Eyring method, EASE provides output information including effective surface area, room volume, mean sound path length and time, reverberation time (RT) and average absorption. There is a plethora of results provided within EASE (Figure 2.4), and these results are often difficult to decipher.

Figure 2.4: Visualization of EASE calculations (Honeycutt, 2014)

2.2.1.4. Observations and Limitations

In ray tracing, a sound wave strikes a surface, after which it is reflected and a certain amount of its energy is absorbed by the surface (Shrivastava, 2018).
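This reflect-and-absorb step is the core operation of an acoustic ray tracer and can be sketched in a few lines. The Python below is a generic illustration, not code from any of the packages reviewed here: the incoming direction is mirrored about the unit surface normal, and the ray's energy is reduced by the surface's absorption coefficient.

```python
def reflect_ray(direction, normal, energy, alpha):
    """Specular reflection: mirror the incoming direction vector about
    the unit surface normal, and absorb a fraction alpha of the ray's
    energy. Vectors are 3-element lists."""
    d = sum(direction[i] * normal[i] for i in range(3))   # dot product
    reflected = [direction[i] - 2.0 * d * normal[i] for i in range(3)]
    return reflected, energy * (1.0 - alpha)
```

A scattering coefficient would be layered on top of this purely specular step, blending the mirrored direction with a randomized one; that blending is what the diffuse sub-rays of engines like TUCT approximate.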
However, surface roughness and the scale of the surface must also be considered in order to calculate how much of the sound ray is scattered. Scattering data coupled with diffraction handling allows ray tracing methods to be more accurate, since scattered sound contributes to the reverberant field without creating echoes (Abi-Chahla, 2009). Pure ray tracing methods prove to be computationally efficient but less accurate geometrically. Yet ray tracing is preferred over the image-source method because some ray tracing variants can include diffuse reflections or scattering, resulting in more accurate acoustic predictions. Overall, software applications that use either ray tracing variants or hybrid methods of ray tracing and the image-source method are preferred over pure image-source methods (Honeycutt, 2014). In most acoustic simulation software, hybrid methods or ray tracing are employed. It can be concluded that the software applications listed in these articles are appropriate for room acoustics and spaces with regular surfaces (Honeycutt, 2014).

2.3. OUTDOOR ACOUSTIC SIMULATION

For acoustic simulation of outdoor spaces, where the top surface is absent and the space is open to the sky, indoor parameters do not suffice. Though outdoor acoustic simulation is mainly used for noise propagation and calculation in urban spaces, studies have been conducted to test whether applying indoor parameters to outdoor simulation techniques produces accurate results.

2.3.1. Outdoor Spaces Within an Urban Context

For urban projects, acoustic simulation is utilized to predict noise and calculate the acoustic comfort of residents. Experiments have been conducted to test whether acoustical parameters for rooms could be proper indicators of acoustic comfort in outdoor areas.
In one scenario, where the outdoor areas in question were the inner yards of a housing complex, the yards were tested for their acoustical properties, which were then compared to simulations based on indoor acoustic principles (Taghipour, 2020). The lack of congruence between the results indicates that indoor acoustic parameters do not suffice for outdoor simulation, due to environmental factors (Table 2.1).

Table 2.1: Some acoustic parameters used for common indoor spaces (Acoustic Bulletin, 2016)

Though initially designed for enclosed spaces, these parameters have previously been useful for music performance spaces, amphitheaters and urban spaces. Therefore, to test whether they would suffice for predicting acoustic comfort within partially bounded spaces, three laboratory experiments were conducted in virtual inner yards. The experiments' volunteers were subjected to simulated sounds from virtual inner yards of housing complexes. After the experiments, the volunteers filled out surveys that helped quantify the acoustic parameters. The inner yard used for reference was a simplified 3D model of an existing housing complex in Switzerland. The model was built in Sketchup and imported into the ODEON environment. One of ODEON's limitations is that it works with closed room models (Naylor, 1993). Therefore, the inner yard was modeled as a room without a ceiling, placed within a larger box with highly absorptive surfaces to represent a free field. ODEON's acoustic simulation method, using ray tracing and image sources, presents limitations regarding diffraction. Other computational limitations in ODEON include the absence of background and foreground sounds (both static and moving), omitted to reduce complexity (Taghipour, 2020). The results of this experiment also indicated that the nature of the sound source was significant in its perception as positive or negative.
For example, pleasant sounds such as water features and birds received a higher rating than unpleasant sounds like a basketball match (Taghipour, 2020). The parameters used in these experiments were not originally intended for acoustic scenarios in outdoor environments. The results showed that a number of classic room acoustic parameters were significant predictors of short-term acoustic comfort outdoors; these predictors included the sound source, individual room parameters, and the subject's random intercept. A few room acoustic parameters did not contribute to predicting acoustic comfort (Table 2.1), and other acoustic indicators that might operate more successfully for this purpose are not yet being considered. The results of the experiment should be used to define new parameters better suited to outdoor spaces (Taghipour, 2020).

2.3.2. Outdoor Performance Spaces

To test the applicability of predicted acoustical parameters to outdoor performance spaces, experiments have been conducted by researchers and programmers. Predictive acoustic software was applied to "unconventional outdoor environments," with the standardized software input parameters, to see if there is a need for optimization. The research tests the accuracy of frequently used predicted acoustical parameters, including reverberation time (T20), sound strength (G), and clarity (C80). The ancient Syracusae open-air theatre in Italy was used as a case study, and the acoustic simulation was conducted using two different acoustic software packages, due to the input variability of the absorption and scattering coefficients. The results were then compared to measured impulse responses (IRs) (Elena, 2018). The on-site measurements and field tests were conducted in unoccupied conditions, and the measurements were based on ISO 3382-1. Firecrackers were used as the sound source because of the high levels they create relative to the background noise.
The uncertainty caused by the input values for the scattering and sound absorption coefficients, s and αw, was calculated using ODEON and CATT-Acoustics, and the simulated and measured parameter values were compared to test accuracy. The results observed after computing the acoustics and comparing them to the on-site measured IRs indicated variability. This can be attributed to the different algorithms used to approximate the phenomena of scattering and absorption, and to the fact that ODEON and CATT-Acoustic work with geometric methods (Elena, 2018). The following results were found from the uncertainty analysis on the case study site of the ancient Syracusae open-air theatre: the simulated sound strength (G) was lower than the measured sound strength, and ODEON computed lower levels for both reverberation time (T20) and clarity (C80) as well. It was also observed that ODEON is more sensitive to variation in sound absorption than in sound scattering, while CATT-Acoustics behaves in the opposite way. It can be concluded that the simulation results do not match the measured G, T20 and C80, and parameters more suitable for the acoustical characterization of open-air theatres than those described in the ISO 3382-1 standard are the subject of continuing research (Bo, 2018).

2.3.3. Observations and Limitations

After reviewing research papers from recent years, a few conclusions can be drawn. When considering the most commonly used software for acoustic simulations (i.e. ODEON, EASE, CATT-Acoustics), it is evident that they do not provide a smooth workflow in the design process for architects, designers and students. These programs are not integrated within any 3D modeling software and therefore require an import/export of the model to and from the 3D modeling software. It has also been concluded that these programs primarily rely on ray tracing and image-source methods.
Ray tracing and image-source methods are appropriate for acoustic simulation in indoor spaces; however, outdoor spaces require more specificity, considering the external factors influencing their conditions.

2.4. PROPOSED SOLUTIONS

Due to the lack of outdoor acoustic simulation software, designers have resorted to creating their own plugins for 3D modeling software. In these plug-ins, the complexity of multiple surfaces is considered in their respective methodologies. Some take advantage of the geometrical acoustics (GA) method, while some utilize a hybrid of GA and wave-based methods. A few examples include Olive Tree Lab – Terrain, I-Simpa, the DISIA Project and Edge Diffraction. These engines have been developed for outdoor acoustic simulations, yet they are external engines, not integrated into 3D modeling software. Most of the software listed above is also not open source. One program that is open source and utilizes a hybrid methodology of geometric and wave-based techniques is Wayverb. Wayverb, however, has been designed for indoor acoustic simulation and is also not integrated into any 3D modeling software. To proceed with the development of a plug-in for outdoor acoustic simulation, the equations and methods employed by Wayverb will be incorporated into the final calculations, because Wayverb is the only acoustic simulation engine that uses both geometric and wave-based methodologies (Wayverb, 2019) and employs them in ways that do not require high computing power. A pre-existing algorithm can be incorporated into a new plug-in design, and open-source plug-ins can be customized and improved upon; their accessibility improves the workflow of their users through the use of scripting and plug-ins.

2.4.1. Ray Tracing Equations for Acoustic Simulations

For acoustic simulation, there are many considerations for correct calculation and execution of code.
These include not only the ray propagation from the source sphere, which needs to be divided equally to allow equal distribution of ray starting points, but also the correct calculations for the Sound Pressure Level and the reverberation time. To confirm the parameters and equations required, research papers regarding the implementation of the Ray Tracing equation in both commercial acoustic software and plug-in documentation were reviewed. For the purpose of replicating the equation, the case studies were compared for their applied algorithms to confirm synchronicity in the methodology.

2.4.1.1. Prediction of Sound Pressure and Intensity Fields in Rooms and Near Surfaces by Ray Tracing

In his research paper, Owen Mathew Cousins documents the theory behind sound pressure fields in acoustics and how to implement them through the ray tracing algorithm. For the calculation of SPL at the receiver, the initial Sound Power Level of the source must be divided by the number of rays being propagated (Cousins, 2008). The formula for calculating the Sound Power Level of each ray at its origin point is given by Equation 2.1.

(Equation 2.1)

Once this is calculated, the next step proposed within the case study is the calculation of the Sound Pressure Level being transferred by the ray until the receiver position (Equation 2.2) (Cousins, 2008).

(Equation 2.2)

Here, the ray's original energy level is measured in Watts and must then be converted back to its decibel logarithmic value. This calculation, however, does not take into account the number of intersections of each ray with the surrounding geometry or the Sound Power lost at each intersection.

2.4.1.2. Room Acoustics Modeling Using the Ray Tracing Method: Implementation and Evaluation

This research paper meticulously documents the steps that must be taken to successfully implement the Ray Tracing algorithm for acoustic simulation measurements.
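The core calculation shared by the reviewed implementations — the source's sound power split equally among the rays, attenuated by a factor of (1 − α) at each surface intersection, and finally converted back to decibels — can be sketched as follows. This is an illustrative Python sketch, not code from any of the cited papers; the function names and the 1 pW reference power are assumptions:

```python
import math

REF_POWER = 1e-12  # reference sound power (1 pW), the usual dB reference


def ray_power_watts(source_lw_db, num_rays):
    """Convert the source's Sound Power Level (dB) to Watts and divide
    it equally among the rays being propagated."""
    total_watts = 10 ** (source_lw_db / 10) * REF_POWER
    return total_watts / num_rays


def ray_level_db(watts, absorption_coeffs=()):
    """Attenuate a ray's power by (1 - alpha) at every intersection,
    then convert the remaining power back to its decibel value."""
    for alpha in absorption_coeffs:
        watts *= (1.0 - alpha)
    return 10 * math.log10(watts / REF_POWER)


# A 100 dB source split among 10,000 rays: each ray starts 40 dB lower.
per_ray = ray_power_watts(100.0, 10_000)
print(round(ray_level_db(per_ray), 1))               # 60.0
# Two bounces off surfaces with alpha = 0.5 cost a further ~6 dB.
print(round(ray_level_db(per_ray, [0.5, 0.5]), 1))   # 54.0
```

The first print corresponds to the simple division-and-conversion form; the second adds the per-intersection losses that only some of the reviewed formulas account for.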
The author, David Oliva Elorza, provides theoretical background information on the wave properties and sound parameters that are required for simulation calculations. The Sound Power Level (measured in decibels) should first be converted to Sound Power (measured in Watts) to simplify calculations (Elorza, 2005). After implementing the SPL calculation, presented in Equation 2.3, the value is then converted back to decibels (Elorza, 2005).

(Equation 2.3)

2.4.1.3. Acoustic Simulation of Building Spaces by Ray-Tracing Method: Prediction vs. Experimental Results

Another researcher not only provides algorithm equations for ray tracing calculations, but also provides an example where he has applied these equations to obtain values and graphs for the Sound Pressure Level at the receiver's position (Mahjood, 2008). He presents the equation for the optimum number of rays that would provide a viable number of rays intersecting with the receiver, the initial division formula for the source's Sound Power Level amongst this number of rays, and the final calculation for the SPL with considerations for the intersections with surrounding geometry (expressed as Equations 2.4, 2.5 and 2.6, respectively) (Mahjood, 2008).

(Equation 2.4)

(Equation 2.5)

(Equation 2.6)

2.4.1.4. Comparisons and Conclusions

After comparing the formulae and equations employed and documented by these researchers, the commonality between the SPL calculations is evident. There is synchronicity between the different equations presented, though some do not consider intersections while others do. For the purpose of a detailed calculation, Equations 2.3 and 2.6 present a similar formula that considers each intersection of each ray and accounts for the total Sound Pressure Level of each ray as it reaches the receiver.

2.4.2. Scripting Interfaces for Acoustics

Through experimentation and utilizing open-source plug-ins, it is possible to customize the acoustical requirements and add new parameters to a pre-existing plug-in.
To test the possibilities, Pachyderm was used as the original function and customized through IronPython. IronPython was used because it integrates Python scripting into .NET, which allows C# (a .NET programming language) and Python scripts to work with the same engine (Van Der Harten, 2000). The task was to calculate the change in ERE (Early Reflection Enhancement) as a new parameter, in order to calculate the sound reflections of multiple performers on stage; to do this, Pachyderm's script was altered. The new function created a mesh on every surface of the stage with evenly spaced vertices and placed a spherical receiver object for Ray Tracing at every vertex. The user is then prompted to input the positions of the multiple sources (performers). The plug-in then calculates reflections for each of the sources while considering the remaining ones as receivers. This allows for a simulation of the ERE on stage. Once the calculations are complete, the task remains to calculate the change in ERE. The evaluation of these results is time-consuming and requires high computing power depending on the number of vertices. Therefore, the new script collects the maximum and minimum values and finds the difference between those instead of going through them all. ΔERE can also be considered as another parameter when simulating sound propagation in outdoor performance spaces (Van Der Harten, 2000).

2.4.3. Plug-ins for Acoustic Simulation

3D modeling is an integral part of an architect's design process. It helps the architect visualize the interior and exterior of the spaces they design. Unfortunately, this only improves visual simulation, whereas acoustics are often disregarded.
Some research suggests the reason behind the lack of acoustic analysis in the design process is the lack of acoustic simulation software that is easily available and easily incorporated into the 3D modeling process. The designer must be able to employ acoustic simulation within his or her 3D modeling platform of choice, to maintain familiarity and continuity in the design process (Pelzer, 2014). Even though most acoustic simulation software allows users to import their 3D models in different formats (e.g. AutoCAD, 3ds Max, or SketchUp), the lack of interactivity between the 3D model and the acoustic software does not allow users to update the model simultaneously. Often, exporting and importing 3D models into the acoustic simulation software can be a time-consuming and tedious process (Pelzer, 2014). The output is not easily understandable by someone who does not have a strong background in acoustics. Some software allows for auralisation, though this is even more time-consuming in its calculations. The inability to seamlessly change the 3D model and observe the changes in acoustics is a limitation of most commercial acoustic simulation software. 3D modeling software can be sorted based on plug-in accessibility, clarifying the compatibility of programming languages and 3D modeling software (Table 2.2).

Table 2.2: 3D Modeling software and their plug-in accessibility (Pelzer, 2014)

A room acoustic simulation integrated into a CAD modeling software (SketchUp) allows for real-time acoustics processing within the SketchUp GUI (Pelzer, 2014). Important parameters are visualized directly in the model. The plug-in also allows for auralisation in real time. The resultant real-time visualization and audio feedback, coupled with the integrated control provided to the user, make the documented plug-in a “versatile tool that attracts many users”. The acoustic simulation integration into the 3D modeler allows for a smooth workflow.
Time and energy are saved by avoiding the import and export processes for external commercial software (Pelzer, 2014).

2.4.4. Pachyderm Acoustic Simulation

An acoustic simulation plug-in called Pachyderm has been designed for Rhino (Van Der Harten, 2013). The implementation of Pachyderm in a real-world acoustic analysis at Melbourne's Hamer Hall proved to be advantageous (Van Der Harten, 2013). During a renovation, it was proposed that the facets on the interior walls and ceilings of Hamer Hall were producing multiple diffuse reflections that distorted the sound from the stage. One of the main advantages of Pachyderm is its exposed source code, which other programmers can use to run custom simulations. This open-source approach allows for changes and customization using its native language C# or the IronPython interface. For Hamer Hall, Pachyderm demonstrated that the “harshness” and distortion in the hall was not being caused by specular reflections and diffraction. In order to prove this, Pachyderm was customized and supplemented with open-source Boundary Element Modeling (Van Der Harten, 2013) using MATLAB's Mathwork Toolkit. This provided acousticians with a visualization of the sound's scattering effect (Figure 2.5). It also accurately predicted the effect of absorptive treatments on the respective panels and facets within Hamer Hall.

Figure 2.5: Pachyderm and MATLAB visualization (Van Der Harten, 2013).

2.4.5. Observations and Limitations

An analysis of research papers pertaining to the use of open-source algorithms to customize and produce a plug-in for 3D modeling software demonstrates the ease of customizing an open-source algorithm, as in the example of Pachyderm for Rhino. The entire purpose of open-source coding is to encourage other programmers to “tweak” it according to their requirements and to improve upon it. Analysis tools should be more available to people, so they can make informed decisions in their work.
The availability of Pachyderm's source code provided an opportunity for the acousticians involved in Melbourne's Hamer Hall renovation to properly understand the distribution and scattering of sound within the hall. This case study demonstrated the benefits of the increased understanding provided by the detailed analysis made possible using open-source sound analysis techniques. Another observation is that the lack of real-time interactivity between acoustic simulation software and 3D modeling software is a hindrance. The integration of a simulation engine as a plug-in for 2D and/or 3D modeling software would accelerate the workflow of the average designer. A reliable solution for acoustic simulation within 3D modeling software can be developed by building off model plug-ins that incorporate both wave-based and geometric simulation methods.

2.5. SUMMARY

A review of the input, performance and output of existing indoor acoustic simulation engines, namely ODEON, CATT-Acoustic and EASE, shows that these simulation tools were primarily designed for commercial use by professional acousticians and engineers. This is evident through EASE's name (Enhanced Acoustic Simulator for Engineers). These engines are also expensive to purchase and are normally utilized by firms and universities. The trial versions offered have limited options. These software programs also lack real-time integration with 3D modeling software and require models to be imported and exported via plug-ins. This extra step interrupts the productivity of designers. It is also evident that the results provided by the software are complicated and require the user to have extensive knowledge of acoustics. Though productive for professionals and instructors, these programs lack the approachability that would encourage independent architects to incorporate acoustical considerations into their designs. Outdoor acoustic simulation is mostly used for noise simulation in urban spaces.
Unfortunately, the methodology applied to outdoor acoustic simulations is not far different from that applied to indoor spaces. This may suffice for noise prediction but is not accurate enough to predict the acoustics of outdoor performance spaces. For outdoor acoustic simulations to correctly predict the acoustics of performance spaces, external factors such as air pressure, temperature and humidity must be considered. For calculations of Sound Pressure Level at the receiver locations, a common equation is applied across most of the acoustic simulation platforms. This equation considers important acoustic parameters, including the total length of the rays and the absorption coefficients of the surfaces the rays intersect with. The development of acoustic simulation plug-ins within 3D modeling software encourages more designers to incorporate architectural acoustics into their projects. The intuitive link between the simulation results and the 3D model allows changes to be made and visualized automatically, without having to export the updated model once more. These plug-ins can also visualize and auralize the results within the 3D model, allowing for an easier understanding of the acoustic performance of the space.

CHAPTER 3: METHODOLOGY

Chapter 3 provides a detailed explanation of each step towards the development of an acoustic simulation custom component for Rhino's Grasshopper. The component is considered from two points of view: that of the user (Figure 3.1, pink ribbon) and that of the programmer (Figure 3.1, white squares).

Figure 3.1. Methodology Diagram with zoomed-in views.

The required 3D model format, the programming algorithm and formulae used, instructions on importing the component within Grasshopper, and the method of calculating resultant impulse responses by executing the programming algorithm and displaying them visually and aurally are explained in the following sections.
In order to overcome the shortcomings of existing acoustic simulation tools and their complications, a custom component has been chosen as an appropriate solution. This will improve the efficiency of the user's workflow. As a simulation tool for outdoor acoustics, the algorithm will consider both high and low frequency sounds, with increased allowance for sound rays, image sources, and the scattering and absorption coefficients of outdoor fixtures within the 3D model. The algorithm will also allow for outdoor environments and openness to the sky. To validate the results achieved through the simulation, a test site will be used for a field test, which will be compared to the simulated acoustic response of the same space as a Rhino 3D model.

3.1. OVERVIEW

The process of developing a plug-in must consider not only the efficiency of the programming algorithm, but also its effortless and seamless execution. For that purpose, the custom component designed for outdoor acoustic simulation must be easy for users to integrate into their workflow. Building the 3D model is kept simple and limited to Rhino's controls. The only requirements for the model are that the surfaces must be polysurfaces and that they must be assigned parameter values that assist in their absorption coefficient calculations. For programmers to understand the source code of this custom component, the different classes for the final programming algorithm are considered and combined for a final impulse response that covers all frequencies of sound. For users, the import of the component into Rhino's Grasshopper is the same as that of other components (e.g. Kangaroo). For those designing this custom component, Visual Studio's programming environment is utilized for setting up the component's class and category, and for the final build. The final component within the Grasshopper canvas will only require inputs.
After the calculations, it will display the outputs within the 3D model in Rhino. The program will execute a complicated sound propagation algorithm to produce a final impulse response. The directions of the rays will also be stored for the final visual outputs. These outputs will be limited to aural and visual. The ray paths will provide a guideline for the scattering diagram that visualizes the directions the sound travelled, and the auralisation function will take the impulse response and convolve it with a provided input sound signal to auralize what the sound will be perceived as at the receiver's position. The complete methodology is fairly simple for the user and rather complicated for the programmer.

3.2. MODEL GEOMETRY

The goal of a custom component within Grasshopper is to reduce the time spent preparing the 3D model for the simulation. And though the requirements for this are fewer than those of external plug-ins for import and export between platforms, some still exist. For the component's algorithm to select the surfaces of the 3D model, they must be polysurfaces. Rhino's texture mapping can be utilized for applying surface textures over each polysurface selected. These can then be selected in groups, and parameter values can be assigned to them. These parameters include absorption coefficients, scattering coefficients, reflectivity and roughness.

3.3. PROGRAMMING ALGORITHM

A programming algorithm is a computer procedure that tells the computer precisely what steps to take to solve a problem or reach a goal. The instructions and requirements are called inputs, while the results are called the outputs. To encompass sounds of both high and low frequency, a hybrid of geometric and wave-based methods is used. The geometric methods used are ray tracing and the image-source method, for late and early reflections respectively.
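The auralisation output described earlier relies on convolving the simulated impulse response with a dry input signal. A minimal direct-form convolution sketch is shown below; this is illustrative only (a real-time implementation would use FFT-based convolution on audio-rate signals):

```python
def convolve(signal, impulse_response):
    """Direct-form convolution of a dry input signal with a simulated
    impulse response: each input sample triggers a scaled, delayed
    copy of the impulse response."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out


# A unit impulse convolved with any IR returns the IR itself.
ir = [1.0, 0.5, 0.25]           # toy impulse response
print(convolve([1.0], ir))      # [1.0, 0.5, 0.25]
```

The same operation, applied to a recorded anechoic signal and the component's computed impulse response, produces the sound as it would be perceived at the receiver's position.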
For low frequency sounds, the waveguide method is used, which is a wave-based method (Figure 3.2).

Figure 3.2: Methods and their frequencies

3.3.1. High Frequency Sound

High frequency sounds have a frequency between 8 and 20 kHz. Methods often used to simulate high frequency sounds include the geometric methods, which can be further divided into ray tracing, beam tracing and the image-source method. For accurate sound prediction, and to allow dedicated processing time for each process, high frequency sounds are processed in two categories: early reflections and late reflections.

3.3.1.1. Image-source Method for Early Reflections

Early reflections are the sound rays that reach the receiver after the direct sound. These are usually predicted as rays leaving the source in all directions, reflecting off nearby surfaces and reaching the receiver (Figure 3.3). Due to the simplicity of the order of these reflections, the image-source method is employed for sound prediction. Not all energy is perfectly reflected at the surrounding surfaces; some is scattered and randomly diffused. The image-source method, though accurate, cannot compute such complicated reflections and is therefore utilized only for the simpler early reflections.

Figure 3.3. Sound ray reflections from source to receivers. (Acoustics: Putting it all together, n.d.)

In theory, the image-source method assumes rays are perfectly reflected at boundary surfaces. Requiring the source and receiver positions, along with the boundary surface, the image-source method mirrors the original source across the boundary surface. This new image-source represents a perfect reflection path. When reflected off a single boundary, this represents a first-order reflection. Rays reflected off multiple surfaces produce higher-order image-sources (Wayverb, 2019). The resultant image-source reflections must meet certain criteria in order to be recorded.
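The mirroring operation just described can be sketched as a reflection of the source point through the boundary plane. This is an illustrative helper, not the plug-in's code, and it assumes the boundary is given as a point on the plane plus a unit normal:

```python
def image_source(src, plane_point, plane_normal):
    """Mirror the source across a boundary plane (first-order image).

    `plane_normal` must be unit length; the image-source sits the same
    distance behind the plane as the source sits in front of it.
    """
    # Signed distance from the source to the plane along the normal.
    d = sum((s - p) * n for s, p, n in zip(src, plane_point, plane_normal))
    return tuple(s - 2 * d * n for s, n in zip(src, plane_normal))


# Source at (1, 2, 3) mirrored across the floor plane z = 0.
print(image_source((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
# (1.0, 2.0, -3.0)
```

A higher-order image-source is obtained by applying the same reflection repeatedly, once for each boundary surface in the reflection path.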
To ensure that there is a clear specular path from the source to the receiver, the line from the receiver to the image-source is checked to see if it intersects the boundary surface. If it does, it is considered a valid image-source. Usually, a large number of rays is emitted from the source, and at every intersection with a reflection point, the ray is checked to see if it is visible from the receiver. If it is, the validation check is conducted. There are three main advantages to this method:

1. The code is more likely to find all valid image-sources, since more paths are checked.
2. Vector-based scattering can be used, because the initial ray paths do not have to be specular.
3. All paths capable of intersecting the receiver are checked, which means the results will be more specific and accurate.

The main disadvantage is that a greater number of rays being accounted for results in a larger number of validity checks, which requires time and computing power. However, it is still faster than the traditional method of only using ray tracing. It is assumed that all the sources, both the original and the image-sources, emit the same impulse response at the same time. The sum of the signals from each source, after being delayed and attenuated appropriately, gives the total contribution of the image-sources. To process the image-source method into an impulse response, an open-source code has been utilized (Figure 3.4).

Figure 3.4: Screenshots of image-source function code (Tu, 2014).

3.3.1.2. Ray Tracing Method for Late Reflections

The rays that are originally propagated from the sources by the image-source method travel in all directions. Only those visible to the receiver are recorded. The exact proportion of intersecting rays is given by Equation 3.1 (Wayverb, 2016).
s / (4r²) (Equation 3.1)

s = constant area covered by the receiver
r = distance between the source and receiver

This formula clarifies that the proportion of rays intersecting the receiver is inversely proportional to the square of the distance between the source and the receiver. For calculations, the energy accounted for is proportional to the number of ray intersections that are documented. The accuracy of the output increases with the average number of rays detected per unit time, using the formula in Equation 3.2.

(Equation 3.2)

Here, N is the number of rays, c is the speed of sound, V is the room volume and r is the radius of the receiver sphere. For programming the code in C#, a simple ray tracing algorithm from an open-source code is utilized. This algorithm works by calculating the intersections of lines with the objects within the Rhino 3D model. Here, the start point of the line is defined as the viewer position and the end point is assumed to be located on the polysurface's plane. For each discrete point on the projection plane, an equation for the line is obtained, all intersections with all objects are calculated, and the intersection nearest to the viewer is selected. Once the surrounding surfaces are collected by the plug-in, each surface is attributed a reflection point. These points act as the end points of the 3D lines propagated from the sound source sphere. The 3D lines can be represented by Equation 3.3:

l = p + t*v (Equation 3.3)

l = line
p = point in R³
t = scalar
v = vector in R³

This can also be represented as Equation 3.4:

l(x,y,z) = p(x,y,z) + t * v(x,y,z) (Equation 3.4)

(px, py, pz) = the points lying on the 3D line
t = scalar parameter
(vx, vy, vz) = direction vector
The above equation can also be obtained from the definition that a line is defined by two points, so given two points P1 and P2 we have:

Point 1: P1(x1, y1, z1)
Point 2: P2(x2, y2, z2)
Vector v = P2 - P1 = (x2-x1, y2-y1, z2-z1) (Equation 3.5)

Replacing the vector v and the point p in the line equation l(x,y,z) = p(x,y,z) + t * v(x,y,z) gives the line equation (Equation 3.6), which ensures that every (x,y,z) satisfying it belongs to the line defined by the points P1 and P2:

L(x) = x1 + t*(x2-x1)
L(y) = y1 + t*(y2-y1)
L(z) = z1 + t*(z2-z1) (Equation 3.6)

For the spheres that will be positioned where the receiver is, the equation to be used is Equation 3.7:

r² = (x-cx)² + (y-cy)² + (z-cz)² (Equation 3.7)

r = radius of the sphere (receiver)
(cx, cy, cz) = center of the sphere (reflection point)

This equation ensures that all the (x,y,z) points lie on the surface of the sphere. The resultant intersection between the line and the sphere must be the set of (x,y,z) points that satisfy both equations. To determine whether a line intersects a sphere, and to find the intersection, the x, y and z values from the line equation (Equation 3.6) can be substituted into the sphere equation (Equation 3.7) directly. This gives Equation 3.8:

r² = (x1 + t*(x2-x1) - cx)² + (y1 + t*(y2-y1) - cy)² + (z1 + t*(z2-z1) - cz)² (Equation 3.8)

By replacing (x1, y1, z1) with (px, py, pz), the result is Equation 3.9:

(px - cx + t*vx)² + (py - cy + t*vy)² + (pz - cz + t*vz)² - r² = 0 (Equation 3.9)

This 2nd-degree equation provides 0, 1 or 2 solutions for t, where t represents the intersections, if there are any (Equation 3.10).
a*t² + b*t + c = 0

a = vx² + vy² + vz²
b = 2.0 * (px*vx + py*vy + pz*vz - vx*cx - vy*cy - vz*cz)
c = px² - 2*px*cx + cx² + py² - 2*py*cy + cy² + pz² - 2*pz*cz + cz² - r² (Equation 3.10)

Equation 3.10 can be represented in C# as:

double A = vx * vx + vy * vy + vz * vz;
double B = 2.0 * (px * vx + py * vy + pz * vz - vx * cx - vy * cy - vz * cz);
double C = px * px - 2 * px * cx + cx * cx + py * py - 2 * py * cy + cy * cy + pz * pz - 2 * pz * cz + cz * cz - radius * radius;
double D = B * B - 4 * A * C; // discriminant
double t = -1.0;
if (D >= 0)
{
    double t1 = (-B - System.Math.Sqrt(D)) / (2.0 * A);
    double t2 = (-B + System.Math.Sqrt(D)) / (2.0 * A);
    // t1 <= t2; choose the nearest intersection in front of the start point
    if (t1 >= 0) t = t1; else t = t2;
}

The source code utilized for this equation in C# is open source (Figure 3.5).

Figure 3.5: Screenshots of ray tracing function code (Almeida, 2007).

3.3.2. Waveguide Method for Low Frequency Sounds

Most existing simulation software programs employ geometric simulation methods; rarely do they employ wave-based methodology. However, wave-based methods of acoustic simulation, primarily the FDTD method, are a good choice for low frequency sound simulation. The process begins by dividing the space into a grid of 3D cuboid voxels. These can be categorized by the “mesh topology” of the grid (Figure 3.6).

Figure 3.6: Mesh Topology Options

For this plug-in design, a simple rectilinear mesh topology is chosen to reduce calculation time. The air is divided into equally spaced nodes, and each is assigned a pressure value. Each node's pressure is affected by the pressure of the node preceding it. Nodes are connected by rectilinear lines that represent delay lines. The algorithm of the plug-in component treats the low frequency sound wave as a string, where each string's total displacement is equal to the sum of the delay lines it comes in contact with. Here, the pressure is calculated.
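The node-pressure update just described can be illustrated in one dimension, where each interior node's next pressure depends on its neighbours' current pressures and its own previous pressure. This is a 1-D sketch of the standard rectilinear waveguide mesh update with fixed boundary nodes, not the plug-in's 3-D implementation:

```python
def waveguide_step(prev, curr):
    """One update of a 1-D rectilinear waveguide mesh (fixed ends).

    Each interior node's next pressure is the sum of its neighbours'
    current pressures minus its own previous pressure -- the standard
    rectilinear mesh update, shown in 1-D for brevity.
    """
    nxt = [0.0] * len(curr)
    for j in range(1, len(curr) - 1):
        nxt[j] = curr[j - 1] + curr[j + 1] - prev[j]
    return nxt


# A pressure impulse in the middle of the mesh spreads outwards,
# one node per time step.
prev = [0.0] * 5
curr = [0.0, 0.0, 1.0, 0.0, 0.0]
print(waveguide_step(prev, curr))  # [0.0, 1.0, 0.0, 1.0, 0.0]
```

In 3-D the same pattern applies, with each node averaging over its six rectilinear neighbours; the receiver node's pressure history over the time steps forms the low-frequency part of the impulse response.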
The final value is then used to calculate the total time elapsed and is then incorporated into the final impulse response calculation.

3.4. GRASSHOPPER COMPONENT

Grasshopper's scripting environment, with its easy user interface and simple nodes, makes it easier for users to create custom components. However, the standard procedure for making a custom component within Grasshopper is combining multiple nodes together so they work in the order assigned to them. This complicates the procedure and makes the program take longer to run. A more efficient way to program a custom component is through Visual Studio. Using the Grasshopper template within Visual Studio eases the component build process and allows the user to create a Grasshopper plug-in from scratch. Visual Studio allows the user to set the component class and category.

3.4.1. Load Component into Grasshopper

For a user running the plug-in within Grasshopper, there are very few prerequisites to be met. To install the plug-in, the folder first needs to be downloaded or copied onto the user's computer. After this, the user must copy the .gha file from within the folder into the Grasshopper components directory. This is usually (C:\Program Files\Rhinoceros 4.0\Plug-ins\Grasshopper\Components). This might cause issues if Grasshopper auto-updates. An alternative option is to create a separate directory. This is done by copying the directory on your hard disk, typing “GrasshopperDeveloperSettings” into Rhino's command line, and adding a search path to the plug-in's directory. The plug-in for outdoor acoustic simulation is divided into multiple component categories, each with a maximum of 3 components. The categories are Model, Parameters, Computation and Results (Figure 3.7). The aim is to keep it simple and easy to use. For the user, the categories can be used in order from left to right. The inputs within the Model category include Geometry, Source and Receiver.
These involve simple selections within the Rhino model. The second category, Parameters, allows the user to input certain parameters related to the surfaces and surrounding environment (i.e. absorption coefficient, scattering coefficient, and wind speed and air pressure for environmental factors). The Computation category's calculate button compiles all the data provided by the user and makes the calculations for the final output. The Results category is also simple, with an auralisation component and one for visualizing the rays and sound falloff within the 3D model in Rhino.

Figure 3.7: Plug-in Component Categories

3.4.2. Component Build in Visual Studio

The process of building a custom component in Visual Studio begins with the installation of the Grasshopper Assembly for V6, along with RhinoCommon's template for V6. The following steps demonstrate the procedure of setting up the workflow and the project for the build of a custom component for Grasshopper.

i. Set Up Workflow

There are two ways to set up the workflow before beginning the build of the component via Visual Studio. One option is to build automatically to a Grasshopper component folder. This can be done through Properties > Build Events > Edit > Post Build Event:

Copy "$(TargetPath)" "C:\Users\Josh\AppData\Roaming\Grasshopper\Libraries\<Name of Plugin>.gha"

Another option is to add a build folder to Rhino's search path. By typing “GrasshopperDeveloperSettings” into Rhino's command line, a user can add the output folder of their project to Grasshopper's search path. A keyboard shortcut can also be bound to this command, e.g. Ctrl + R.

ii. Visual Studio Project Set Up

Within Visual Studio, the first step is to open a New Project with the templates for Visual C#, preferably the Grasshopper Add-On template. Once this is open, Visual Studio provides a Grasshopper Assembly dialog box which requires the nickname, category and subcategory of the component.
These identify the name, tab and group under the tab, respectively.

iii. Create GH_Component Base Class

This step specifies to Visual Studio that the plug-in being created is for Grasshopper (Figure 3.8).

Figure 3.8: GH_Component class

iv. Public Constructor

Providing a public constructor without any arguments gives Visual Studio a record of the name, description, category and subcategory of the plug-in within Grasshopper (Figure 3.9).

Figure 3.9: Public Constructor

v. Register Inputs

Using the pManager object inside the overridden implementation of the RegisterInputParams method allows the user to add the inputs that the custom component will require (Figure 3.10).

Figure 3.10: Registering Inputs

vi. Register Outputs

The outputs of the custom component do not have default values and must have the correct access type assigned to them. Outputs can be registered with the pManager object inside the overridden implementation of RegisterOutputParams (Figure 3.11).

Figure 3.11: Register Outputs

vii. Component Logic

Using the DA object to access inputs and outputs, component logic can be implemented inside the overridden implementation of the SolveInstance method (Figure 3.12).

Figure 3.12: Collecting data and performing calculations.

viii. Functionality Implementation

The functionality of the component (i.e. the way it processes the data added to it) can then be implemented (Figure 3.13).

Figure 3.13: Functionality

ix. Component Icon

The icon of the component can be personalized by overriding the Icon method (Figure 3.14).

Figure 3.14: Icon Override

https://gist.github.com/parkerjgit/cf0374309a120437dffc9ed644c52dc9

3.5. ALGORITHM EXECUTION

The methodology for the image-source, ray tracing and waveguide methods discussed in 3.3
will be executed by the plug-in component in the Component Logic (Figure 3.12), whereas the actual implementation of the formulas and their hybrid combination will be implemented within the functionality part of the program algorithm. Though the three processes have different origins, inputs and outputs, they are interconnected by common calculations. The implementation of these three simulation methods is discussed below, with a detailed explanation of how they can be combined to produce a final impulse response.
3.5.1. Implementation of the Image-Source Method
For the image-source method to be implemented, the algorithm first collects the scene and forms a bounding box. This bounding box is then divided into cuboid voxels. This collection of voxels reduces the time taken for calculation and accelerates the process (Wayverb, 2016). After this step, the algorithm collects the source position and "fires" rays in all directions from it. If these rays intersect with the scene, they are recorded. The ones that do not are discarded from the record. The rays are then checked to see their interactions (if any) with the cuboid voxels within the scene. The voxels intersected by the rays are then checked to see if they are visible from the receiver position. Those that are visible are recorded, whereas the rest are discarded (Wayverb, 2016). At this point, the record contains voxels which were intersected and are visible from the receiver's position. Sometimes ray and reflection paths overlap and are repeated. The duplicates are removed. This is done by condensing the information per ray into a tree of valid paths. A validity test is then conducted to see if these rays form valid image-sources. The final record contains those rays that form valid image-sources. For the paths of these rays, the algorithm then runs a calculation for the pressure and the propagation delay.
The propagation delay is calculated by dividing the distance between the source and receiver by the speed of sound. The pressure for the image-sources is calculated by collecting the reflectances of the intermediate surfaces intersecting with it (Wayverb, 2016). The final contribution of a single image source with intermediate surfaces m1, m2 … mn can be written as (Equation 3.11). The sum of all contributions is then used for the final impulse response (Wayverb, 2016).
(Equation 3.11)
Z0 = Acoustic impedance of air
c = Speed of sound
dm1m2…mn = distance between receiver and image-source
Rmi = reflectance of surface i
3.5.2. Implementation of the Ray Tracing Method
For ray tracing, the initial process of the image-source method (i.e. the voxel division of the space) is the same. The difference here is that the rays "fired" uniformly from the source position are assigned energy values. The intersections of each ray with surface geometry are recorded, along with further information including the ID of the surface voxel, whether the surface voxel is visible from the receiver's position and the position of the ray. These parameters are used for the final energy calculations and for the visualization of the sound. At each intersection, a new secondary source is created. The source is represented as a hemisphere. From this new secondary source, a new ray direction is calculated using the scattering coefficient value from the inputs. The new direction is determined by (Equation 3.12). This mimics the real-world behavior of rough surfaces, which cause some energy to be randomly diffused in non-specular directions during reflection of the wave-front (Wayverb, 2016).
(Equation 3.12)
s = Scattering coefficient
Once the new ray direction is determined, the energy carried by the original ray is decreased, depending on the absorption coefficient of the intersected voxel cuboid.
If the surface has an absorption coefficient of α in a particular band, then the energy in that band is multiplied by (1 − α) to find the outgoing energy. This process is repeated, using the incoming energy and absorption coefficient for each band, to find outgoing energies in all bands. The new ray, with the computed outgoing energies and vector-scattered direction, is now traced (Wayverb, 2016).
3.5.3. Implementation of the Waveguide Method
For the waveguide method to be implemented, the process begins in the same way as for image-source and ray tracing, where the space is considered as a bounding box and then divided into 3D cuboid voxels. The difference in the waveguide method is that certain voxels might be along the boundary line, and these are often ignored by ray tracing and image-source. Waveguide's method of determining whether a voxel is a boundary node or not is through the mesh topology of each node, which is further divided into three categories: 1D, 2D and 3D (Wayverb, 2016). Once it is decided whether the voxel is a boundary node or not, delay lines are created between the nodes. Along with this, each material within the scene has an index field associated with it. Each delay line is linked and paired with an index field value (Wayverb, 2016). The final step is to find which filter coefficients should be linked to which filter delay line. For 1D boundaries, the process is as follows: find the closest triangle to the node; find the material index of that triangle; get the node's filter data entry; set the coefficient index field to be equal to the closest triangle's material index. For 2D boundaries, adjacent 1D boundary nodes are checked, and their filter coefficient indices are used, which saves running further closest-triangle tests. For 3D boundaries, adjacent 1D and 2D nodes are checked (Wayverb, 2016).
(Equation 3.13)
3.5.4. Hybrid Methodology
Figure 3.15: Hybrid Methodology
3.6.
OUTPUT
To increase the integration of acoustic simulation into the design workflow, the goal of this plug-in is to provide results that are easy to interpret by those lacking prior acoustics knowledge. To accommodate these criteria, only two outputs are provided: auralisation and visualization. The auralisation function requires a .wav file that it then simulates from the receiver's point of view. The visualization of sound rays and their reflections makes it easy for users to see the direction of propagated rays. This function also allows users to move their source and receiver and see new ray paths in real-time.
3.6.1. Auralisation
For the auralisation function of the plug-in, the ray tracing method's calculations are utilized. The ray tracing results in a set of histograms which describe the energy decay of each frequency band. These histograms are not directly appropriate for auralisation and must be processed further. To convert these histograms into audio-rate impulse responses, the process followed by Wayverb (Wayverb, 2019) is replicated. This requires the decay tail to be synthesized with an adjustment to the gain using the histogram envelopes (Figure 3.16).
Figure 3.16: Generating an audio-rate signal from multi-band ray tracing energy histograms at a low sampling rate (Wayverb, 2019).
3.6.2. Visualization
The image-source and ray tracing methods share the first three steps of their methodology, i.e. converting the space into a bounding box, dividing it into voxels, "firing" rays from the source and only recording those that both intersect with a voxel and are visible from the receiver's position. The rays that are recorded at this point are converted into a 3D element (i.e. by using the start point and end point of the rays, a 3D cylinder is built). These are then assigned specific colors based on their delay and proximity to the receiver or source.
The color scheme chosen for the visuals will also be minimal and aesthetically clear.
3.7. APPLIED METHODOLOGY
After trial and error, it was concluded that Visual Studio was not a viable option for the development of this plug-in component. This is due to the complications of building and executing the code at every change, which is a tedious process that involves starting up Rhino and Grasshopper every time. Visual Studio also does not currently have a Grasshopper v6 Template that is compatible with Rhino 7. Due to time constraints and to avoid dependencies on other plug-in components, auralisation was not addressed in this plug-in. The deviations from the original methodology can be noticed when comparing the original methodology diagram (Figure 3.1) to the methodology diagram compiled at the end of the plug-in development (Figure 3.17).
Figure 3.17. Updated methodology diagram with zoomed-in views
The final Acoustic Simulation plug-in, labelled Rutabaga Acoustics, is a single component that is limited to 10 input values. These include the sound source sphere, the receiver sphere, the starting source Sound Power Level, the number of rays to be propagated, the surrounding geometry and the absorption coefficients of the surrounding geometry. Rutabaga Acoustics' outputs include 3D ray visualization, arrival times of each ray and the resultant Sound Pressure Level of each ray. These values can be exported into a Microsoft Excel sheet that plots the data into graphs. The component is documented in detail within chapter 4.
3.8. SUMMARY
This chapter provides details on the methodology employed by the plug-in for outdoor acoustic simulations. Instructions are provided both for the user of the plug-in and for the programmer wishing either to customize or to replicate the simulation techniques. The prerequisites for the plug-in are simple and only require a 3D model with its surfaces as Breps.
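The per-ray arrival times mentioned above follow directly from path length and the speed of sound. The following fragment is an illustrative Python sketch (the plug-in itself is written in C#), assuming the conventional speed of sound of 343 m/s:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed value at roughly 20 degrees C

def arrival_time_ms(segment_lengths):
    """Arrival time of a ray at the receiver, in milliseconds.

    segment_lengths: the lengths (m) of the segments the ray is
    broken into at each intersection with the surrounding geometry.
    """
    total_path = sum(segment_lengths)
    return total_path / SPEED_OF_SOUND * 1000.0

# a direct 34.3 m ray arrives after 100 ms
print(arrival_time_ms([34.3]))
# a two-bounce path of 10 m + 7 m + 20 m arrives later
print(arrival_time_ms([10.0, 7.0, 20.0]))
```

The same segment list also yields the "number of segments" output, since each intersection with the surrounding geometry adds one segment.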
High frequency sounds are prioritized by most acoustic simulation tools. To simulate early reflections, the image-source method is employed, and the ray tracing method is used for late reflections. The proposed plug-in design uses open-source code for both the ray tracing and image-source methods and then customizes it accordingly. A detailed study of all these processes makes it clear that using this plug-in is considerably easier than programming it. The ease of use of the plug-in is enhanced by the minimal aesthetic of the category's icons in Grasshopper's Component Palette. There were changes from the original methodology planned. These included foregoing auralisation at this stage of development due to time constraints, foregoing waveguide calculations since such low-frequency sounds would be ineffective within large outdoor spaces, and writing and executing the code within Grasshopper's C# component as opposed to Visual Studio's programming environment. The resultant plug-in development methodology incorporated calculations for ray tracing, image-source and the final calculations at the receiver. 3D visualization of sound rays was also achieved within the Rhino 3D model.
4. CHAPTER 4: PLUG-IN DEVELOPMENT
Following the methodology and development processes explained in chapter 3, this chapter documents the execution of the plug-in development code and the deviations from the initial methodology that altered the outcomes. Chapter 4 breaks down the layout of the code within Grasshopper's C# component and details the classes, methods and functions written to execute the acoustic simulation component design. The thought process behind each defined class is also explained to provide a better understanding of the plug-in development methodology.
4.1.
PROGRAMMING SCRIPT
Though Visual Studio's Grasshopper v6 template provides a simple environment for coding in C#, the latest update from Rhino (released December 8th, 2020) is not compatible with the existing template version. Another avenue that was explored for writing the plug-in's code was following the template format and manually entering the data (as opposed to sample code being provided with the template). This format is divided into a few steps: defining the code as a class with a specific namespace, defining the inputs of the component, defining the outputs of the component, and finally the "Solve_Instance" section which uses the input values and executes functions and methods to output from the component. Visual Studio allows for easy customization of the component, such as applying a particular icon image (.ico format) for the component. Within the scripting environment of Visual Studio, the built component cannot be checked to see if it is functioning. However, through the "Ctrl + F5" command, after the appropriate settings are specified within the project properties, Visual Studio prompts Rhino to load. The user then must open Grasshopper, find the plug-in within the categories and test it. A third avenue was explored for coding and testing the plug-in. This was the chosen method for developing the acoustic simulation plug-in. Within Grasshopper, the customizable C# component is a viable solution for writing code and being able to test it simultaneously. The layout of the code is different from the one required by Visual Studio, where instead of implementing the inputs and outputs through the code, the node itself can be customized.
4.1.1. Scriptable Component Within Grasshopper
Grasshopper's visual scripting environment allows for parametric design with real-time alterations within Rhino's 3D model. A customizable C# .NET scriptable component allows Grasshopper users to implement their own code using the Rhino API and Grasshopper's SDK.
Within this component, users can add new input and output parameters with customized names and units. These parameters are automatically loaded into the component code to be used within the classes and methods implemented (Red arrow, Figure 4.1). The component includes a Play button that allows the user to test their written lines of code at will, and issues warnings for lines that have errors (Green circle, Figure 4.1).
Figure 4.1: Automatically loaded inputs and outputs.
For the acoustic simulation component design, the inputs and outputs maintain simplicity and ease of understanding. For first-time users to easily utilize this component, the inputs are limited to a maximum of ten (Figure 4.2).
Figure 4.2. Acoustic Simulation Component Inputs.
There is one component that is for ray visualization and data calculation. This component, labelled "Rutabaga Acoustics", returns outputs for ray visualization within the Rhino 3D model. The output visualization from this component includes the direct sound ray, early reflections, intersection points with surrounding geometry, hemispheres created for secondary ray propagation, secondary rays, and intersection points at the receiver sphere.
Figure 4.3. Rays propagated within geometry in Rhino model (shown in green)
The component also outputs the total time elapsed from the moment the sound is propagated until the rays run out of geometry to reflect off.
4.2. C# SCRIPT COMPONENT LAYOUT
The Acoustic Simulation component can be double-clicked to access the underlying code. The following section of this chapter documents the code for this plug-in, in the order required by the component (Figure 4.4).
Figure 4.4. C# Component Layout
4.2.1. References (Using)
Before writing the code, the component must be provided with a list of instructions. The "using" statements at the beginning inform the component of the references it must collect databases from.
This allows the script to access the Rhino API (Application Programming Interface) and the Grasshopper SDK (Software Development Kit), along with Rhino and Grasshopper geometry. The Acoustic Simulation component must contain several references (Figure 4.5).
Figure 4.5. References within the Acoustic Simulation Component.
Each of these references allows the component code to access significant classes and assemblies. An example of this is the Rhino.Geometry reference. Here, Rhino's geometry is referenced in order to collect the Breps (Boundary representations) for the sound source, surrounding geometry, and receiver within the model. It also allows access to Rhino's geometry for the output polyline rays.
4.2.2. Utility Functions and Members
Within the public class "Script_Instance : GH_ScriptInstance", the component is instructed to follow certain instructions assigned through the Utility Functions and Members regions within the script. These are autogenerated by the component and are not accessible to the user. The component automatically updates certain instructions according to the written code (Figure 4.6).
Figure 4.6. Utility functions and Members within the Acoustic Simulation component code.
4.2.3. Methods
The following overview of the Acoustic Simulation component's methods provides step-by-step instructions (Figure 4.7). The figure shows the method divided into 19 sections for elaborate explanation. In order, section 1 (1, Figure 4.7) is the list of variables to be used within the script. These variables include the starting points of the rays, the primary rays, secondary rays, scattering hemisphere center points, and a list of the hemispheres. Section 2 (2, Figure 4.7) checks the absorption coefficients of the geometry. If the absorption coefficient is less than or equal to 0, then the maximum number of bounces will occur. Otherwise, the number of bounces specified will be executed.
Section 3 (3, Figure 4.7) starts the timer which will calculate the total time elapsed until the rays find no more surfaces to bounce off. This timer is in milliseconds. Section 4 (4, Figure 4.7) generates starting points on the sound source sphere. These are dispersed along the surface of the sphere. Section 5 (5, Figure 4.7) initializes the list of objects for the rays. Initializing specifies the factors to be considered in the execution of a particular step. For example, the start points, the ray length (specified by the user) and the source sphere are factors to be considered when executing the creation of the rays class. Section 6 (6, Figure 4.7) calculates the first reflections of the primary rays, and section 7 (7, Figure 4.7) creates hemispheres at the first intersection points. Section 7 also covers the creation of points along the surface of the hemispheres. Section 8 (8, Figure 4.7) calculates the second reflections for the primary rays, and section 9 (9, Figure 4.7) calculates reflections for secondary rays. Here, the secondary rays are those emitted from the hemispheres. After this, section 10 (10, Figure 4.7) checks for primary ray intersections with the receiver sphere and records the points collected. Section 11 (11, Figure 4.7) checks for secondary rays' intersections with the receiver sphere and records those points as well. Section 12 (12, Figure 4.7) creates polylines for the primary rays. These are for visualization purposes. Similarly, section 13 (13, Figure 4.7) creates polylines for secondary rays, which are also used for visualization purposes. Section 14 (14, Figure 4.7) calculates the number of valid secondary rays being produced from the primary rays. Sections 15 (15, Figure 4.7), 16 (16, Figure 4.7) and 17 (17, Figure 4.7) are related to the decibel value records and calculations. Here, the decibel values of the rays are collected at their intersection points and then at the receiver sphere.
The same process is applied to the primary rays and the secondary rays. This data is then output from the component for graphical representation. After this, section 18 (18, Figure 4.7) is a list of the output variables and their Grasshopper output terminology. Section 19 (19, Figure 4.7) stops the timer, which records the total time elapsed and outputs it from the component.
Figure 4.7. Methods to be carried out in order by the Sound Ray Visualization component.
4.2.4. Classes
In the previous methods section, the code contains instructions on the steps to follow to run the plug-in component. Unlike Python, C# cannot simply be told which tasks to execute; it must also be instructed on how to perform these tasks, using the correct syntax. The custom classes defined in the C# component are detailed and access the references mentioned at the beginning of the code. The equations applied within the classes are as follows: 1. Initially, the starting Sound Power Level (in decibels) is converted to its Sound Power (in Watts) using the equation: 10^((A2-120)/10) 2. Once this calculation is complete, the source's Sound Power is divided by the number of rays (Equation 2.5). 3. Upon getting the value for E0, the following equation is implemented (Equation 2.6). 4. The final step is to convert the value back to its decibel value. This is executed via the following formula: 10*LOG(C2) + 120 These classes do not have to be in order. They can reference each other and use the calculated "return" values within the next class. Within the Acoustic Simulation component code, each class is presented and described in detail (Figures 4.9 – 4.17), and the overall view of the classes is as follows (Figure 4.8):
Figure 4.8. Class definitions within the Rutabaga component. See other images for legible text.
These classes are definitions of the methods expressed in the method section above.
The first step is to equate the results from the methods with the outputs from the classes. This is done by declaring the variables produced from the classes (Figure 4.9).
Figure 4.9. Declaring variables and public classes.
After this, the surface of the sound source sphere is collected, and the variables are declared for the Rays class. These include the 3D points on the surface of the sphere, the ray length (as an input) and the surface itself. Another consideration for the rays is the direction, and this is determined by the surface normal at the points. These directions are saved as moveDir, which can be referenced in later classes. To determine the complexity of the script, this class also takes into account the number of bounces as an input (Figure 4.10).
Figure 4.10. Class definition for Rays, and the variables it considers.
The next class to be defined is the constructor for Breps. This considers the direction of the propagated rays and selects the closest point from the ray's direction. These closest points align with the surrounding geometry Breps (Figure 4.11).
Figure 4.11. Constructor for Breps class definition.
Once the rays have been defined and their closest Breps identified, the first reflections of the rays are computed. The rays are currently being propagated infinitely and must be checked for intersections. If there is no intersection point, the rays continue forward. If they do intersect with Breps, the intersection points are recorded. The normal to the Brep surface is then collected and the incoming primary rays are mirrored along it. The intersection points are also recorded as variables for use within the hemisphere class. This class calculates and returns the first reflections of the primary rays. At the end of the calculation and reflection recordings, the absorption coefficient for the intersection geometry is checked and the calculation is performed for the decibel values at this reflection point (Figure 4.12).
Figure 4.12.
Calculate first reflection class.
Once the first reflections have been computed, the second reflections must be calculated (Figure 4.13). This class uses the inputs for the number of bounces, the rays being reflected, the intersection points of the new rays with any surrounding geometry and the normal to the surface of this geometry. The second reflections follow a process similar to that of the first reflections. Here, however, the script checks to see if 2 is the maximum number of bounces. If so, it will no longer calculate any further reflections. As in the first reflection calculation class, the absorption coefficients of the intersected geometry are also recorded for the second reflections. At this point, the new decibel value of the ray is calculated and recorded.
Figure 4.13. Calculate Second Reflection class within the Rutabaga component.
Classes do not have to be written in the order they must be executed. A class that is utilized early on in the plug-in can be written towards the end. Here, the class is called "PopulateSphere". This class collects the base sphere and the input for the number of rays to be propagated. It collects the input for the sound source sphere and locates its centroid. Once this is done, the code generates new points on the surface of the sphere by adding points in the X, Y and Z directions from the centroid to the end of the radius. The "PopulateSphere" class returns populated points, which are considered as the start points of the primary rays (Figure 4.14).
Figure 4.14. Populate Sphere class within the component.
Another class, defined later in the script, is for creating hemispheres. It has already been established that the hemispheres must be created at the intersection points. Here, the hemisphere constructor class is defined. These hemispheres are to propagate the secondary rays.
It uses the intersection points of the initial rays as the center points for the new hemispheres, the radius defined by the user in the inputs, and the number of rays to be propagated from the hemisphere. It returns hemispheres and names them hemisphereGeo. This is used in other classes for reference.
Figure 4.15. Class definition to create hemispheres and populate them with points.
A class is created to check intersections with the receiver sphere (Figure 4.16). This class converts the receiver sphere to a Brep and considers the primary rays for their intersections with the receiver sphere. If there are intersections, the rays are halted, and these intersection points are recorded and output from the component.
Figure 4.16. Checking intersections with the receiver sphere.
For the calculations of the direct rays of sound, the distance is computed between the source and the receiver. This is calculated through a class definition. The class collects the centroids of the source sphere and the receiver sphere and calculates the distance between the two points. This "double" value is output from the component as the distance (Figure 4.17).
Figure 4.17. Direct rays distance calculations.
4.3. COMPONENT OUTPUTS
From this component labelled "Rutabaga Acoustics", the output data includes the total elapsed time in milliseconds and the number of intersections between the rays and the surrounding geometry, as well as between the rays and the receiver. The distance between the source and the receiver is also calculated. The component outputs both visuals and calculated data, presented both within the Grasshopper file and within an optional Excel file. Graphical results are also plotted both within the Rhino file and within the optional Excel file.
The equations used within the plug-in component include the conversion from Sound Power Level (in decibels) to Sound Power (in Watts), followed by the division of the source Sound Power amongst the number of rays propagated, which is then processed to calculate the resultant Sound Pressure Level of each ray at the receiver and is finally converted back to its decibel value.
4.3.1. Visual Outputs
The component's visualization makes it easier not only to comprehend how the rays are travelling throughout the space, but also for the developer to identify the correct execution of the code, such as the speed of the ray travelling through time at a particular distance. This assists with validation of the calculations. After executing the code within the component, the output visuals are limited to only those rays that intersect with the receiver. This is to reduce the load on computation and the time taken to process the images. The component's visual outputs are the following: the primary rays are output as polylines, and hemispheres are created upon the first bounce of these rays. The secondary rays are then propagated from these hemispheres and represent the scattering of sound from surfaces. These hemispheres created at the first bounce are output as Breps. These can be visually altered by the user within the 3D model using Grasshopper's Custom Visual component and the Swatch component. Visually within Rhino's geometry, rays are displayed as polylines. The rays propagated from the source are output separately from the scattering rays that emerge from the bounces. The sound source sphere input by the user displays points across it (Figure 4.18).
Figure 4.18. Source Sphere with points and rays propagating from it.
The initially propagated rays can be visualized without the scattered rays to prevent visual crowding of the scene.
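The level chain described in this section can be sketched compactly. The following Python fragment (illustrative only; the plug-in implements this chain in C#) mirrors the Excel-style formulas 10^((A2-120)/10) and 10*LOG(C2)+120 quoted in this chapter. The distance-spreading step standing in for Equation 2.6 is an assumed inverse-square (1/4πd²) term, since that equation is not reproduced in this excerpt:

```python
import math

def swl_db_to_watts(lw_db):
    """Sound Power Level (dB re 1 pW) -> Sound Power (W): 10^((Lw - 120)/10)."""
    return 10.0 ** ((lw_db - 120.0) / 10.0)

def watts_to_db(w):
    """Inverse conversion back to a decibel value: 10 * log10(W) + 120."""
    return 10.0 * math.log10(w) + 120.0

def ray_level_at_receiver(source_swl_db, n_rays, distance_m):
    """Per-ray level chain: dB -> Watts -> per-ray share -> spread -> dB.

    The 1/(4*pi*d^2) spherical-spreading term is an assumption used
    here in place of Equation 2.6, which is not quoted in the text.
    """
    w = swl_db_to_watts(source_swl_db)   # source power in Watts
    e0 = w / n_rays                      # power shared across the rays (Eq. 2.5)
    intensity = e0 / (4.0 * math.pi * distance_m ** 2)  # assumed spreading
    return watts_to_db(intensity)        # resultant level in dB

# round trip: 100 dB -> 0.01 W -> 100 dB
print(watts_to_db(swl_db_to_watts(100.0)))
```

Rays that reach the receiver over longer paths therefore report lower levels, which is the trend visible in the SPL-versus-time graphs exported to Excel.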
In an example, the ray propagation is tested within a simple 3D model containing one wall, a floor and a ceiling with open spaces (Figure 4.19).
Figure 4.19. Initial propagated rays visualized within Rhino's 3D model.
This can be viewed from a perspective view (Figure 4.19) and from top and side views (Figure 4.20).
Figure 4.20. Top view of space displaying the propagated primary rays.
The component contains a Timer slider (measured in milliseconds) that allows the user to slide through the time taken for the rays to reach the receiver from the source, and to view the rays' locations at that time (Figure 4.21).
Figure 4.21. Timer visualization in milliseconds.
4.3.2. Calculated Data Outputs
Besides visualization, the Rutabaga Acoustics component provides Sound Pressure Level calculations of the rays intersecting with the receiver. This calculated data is output alongside a list of the times at which the rays reached the receiver. Details of the rays are also provided, including their length and number of segments (Figure 4.22). The segments indicate the number of intersections, where the rays are broken up into segments at each intersection with the surrounding geometry.
Figure 4.22. Calculated Outputs
Both the Time of Arrival at Receiver and the SPL at Receiver lists are output into an Excel file which is prepared to assign these columns of data into a graphical representation (Figure 4.23).
Figure 4.23. Graphical representation of data within Microsoft Excel.
With a limited list of outputs and graphs, this component makes it easy for first-time users to understand the functionality of the plug-in and for people unfamiliar with acoustics to understand the paths pursued by propagated sound rays within a space. A simplified instruction manual is provided with the plug-in that also provides sample graphs for what "good" acoustics within a space should resemble.
4.4.
SUMMARY
The Rutabaga plug-in contains one component labelled "Rutabaga Acoustics." This component acts as a sound ray visualizer and is the main component that runs the visualization and displays the rays travelling from the source to the receiver, with any reflections off the surrounding geometry. This component also calculates the sound attenuation over the distance the rays travel and returns the decibel values of the rays at their start points, at their midpoints and where they intersect with the receiver. The Rutabaga Acoustics component, built within Grasshopper's C# component, was executed within the Grasshopper canvas to test the script at every step and avoid errors. The simplicity of the component's inputs makes it easy for the user to understand how to use it. These inputs include:
• Source Sphere
• Receiver Sphere
• Surrounding Geometry
• Absorption coefficients of this geometry
• Number of rays to be propagated
• Number of bounces
• Sound source amplitude (measured in decibels)
Another input slider is included, labelled "Timer", which allows the user to select the time at which they want to view the rays within the geometry. This timer is measured in milliseconds. By adding sliders for the number of bounces and the time elapsed, the limits are clearly defined, avoiding confusion for the user. These inputs are then used to process and visualize the outcomes. Step-by-step instructions are provided in the script that instruct the component to execute the steps in the order they are entered. These steps are located within the methods section of the component code. To instruct C# on how to perform these tasks, classes must be defined. The Rutabaga Acoustics component contains nineteen classes, ranging from point creation to ray propagation and hemisphere construction for secondary rays.
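The core geometric operations these classes perform, mirroring a ray across a surface normal, scattering it after a bounce, and attenuating its per-band energy, can be sketched outside of Grasshopper. This is an illustrative Python version of the C# logic; the random-mix weighting in scattered_direction follows the scattering-coefficient blend described for Equation 3.12 and is an assumption, not the plug-in's exact code:

```python
import math
import random

def reflect(d, n):
    """Mirror incoming direction d across unit surface normal n
    (the specular bounce applied to the primary rays)."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def scattered_direction(specular, s, rng=random):
    """Blend the specular direction with a random one, weighted by the
    scattering coefficient s (assumed weighting; random direction drawn
    from a cube for simplicity rather than a true uniform hemisphere)."""
    r = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    mixed = [s * ri + (1.0 - s) * di for ri, di in zip(r, specular)]
    norm = math.sqrt(sum(c * c for c in mixed)) or 1.0
    return tuple(c / norm for c in mixed)

def attenuate(energy_bands, absorption_bands):
    """Per-band energy update at a reflection: E_out = E_in * (1 - alpha)."""
    return [e * (1.0 - a) for e, a in zip(energy_bands, absorption_bands)]

# a ray travelling straight down bounces off a floor with normal +Z
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))   # (0.0, 0.0, 1.0)
print(attenuate([1.0, 1.0], [0.1, 0.5]))            # [0.9, 0.5]
```

With s = 0 the scattered direction collapses to the pure specular bounce, and with s = 1 it becomes fully random, matching the intended role of the scattering coefficient.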
The outputs of this component include the following:
• Hemispheres
• Center points of the hemispheres
• The distance between the source and the receiver
• The primary rays as polylines
• The secondary rays as polylines
• The intersection points between the primary rays and the receiver
• The intersection points between the secondary rays and the receiver
• The total time elapsed until the rays reach the receiver
• The total Sound Pressure Level of each ray at the receiver
• The number of intersections of each ray
• The total length of each ray
• The number of segments each ray is broken into (at each intersection with surrounding geometry)
• T60 value
These outputs can be adjusted by the user to customize the visualizations of the rays and hemispheres within the model. The component also outputs the decibel values of the propagated rays at their origin points, at their intersection points with geometry and at their final intersections with the receiver. These values are also exported as lists to Microsoft Excel for graphical representation. This Excel file uses the information to plot a graph of SPL (dB) against Time (milliseconds). 5. CHAPTER 5: DATA VALIDATION Before the Rutabaga Acoustics plug-in could be used to simulate outdoor acoustics correctly, the logic, calculations and visualization had to be tested. Chapter 5 documents the calibration tests conducted to test the correctness and accuracy of the code and the results it produced. Two types of tests (interior and exterior) were conducted. Initially, the plug-in was used to test the acoustics of an enclosed space.
This enclosed space was altered to test two different scenarios: one simply to test the functionality specifications of the plug-in, and one space designed to clearly test the mathematical accuracy of the calculations with regard to the speed of sound, the absorption coefficients and the resultant energy at the receiver in terms of the SPL (Sound Pressure Level). Secondly, for real-world scenarios, two experiments were conducted to test the validity of the plug-in's outputs. Sound rays cannot be visualized due to their nature; therefore, the sound ray visualization output could not be a factor in these experiments. Since the outputs of the experiment must be comparable to the results from the plug-in, the results from the experiment were restricted to energy-time curves, RT60 values and sound pressure levels over time. These two experiments can be considered as two different case studies, one conducted at an indoor site and one at an outdoor site. The two sites vary in size and geometry. The varying landscape, surface materials and sound barriers present at these sites also provide diverse results, which assists in testing the precision and "correctness" of the plug-in's results. To test the results thoroughly, three different acoustic tests were conducted at each site. Field tests conducted on site corroborated the data collected from simulating the acoustics within a 3D recreation of the sites within Rhino. All aspects of these experiments, including the documentation and 3D recreation of the sites, the field tests conducted by researchers on site and the comparison of results, are reported within this chapter. 5.1. TESTING THE LOGIC Though the plug-in is designed for acoustic simulation, it must meet other logical and mathematical criteria before it can be tested for its acoustic calculation accuracy. The tests were conducted using simple box geometry with two spheres to represent a source and a receiver, built within Rhino.
This section documents the different "logic tests" conducted using the plug-in. 5.1.1. Testing Geometric Logic and Accuracy In the first scenario, the source and receiver spheres, both 0.5 meters in diameter, were placed 10 m apart within an enclosed box. The enclosure was intended to check the complexity of reflections produced by the plug-in (Figure 5.1). Figure 5.1. Plan and 3D view of the model geometry to test the plug-in. The source and receiver, along with the surrounding geometry and its respective absorption coefficient values, were entered into the component inputs (Figure 5.2). Figure 5.2. Component inputs selected and entered into the Grasshopper component (pictured to the right). Following this step, the starting ray count was set to 100 (Figure 5.3). Figure 5.3. Starting ray count set to 100 (within the Grasshopper component pictured to the right). At this point, the preview was turned on for the component to observe the resultant rays produced. The preview within Rhino displayed a scattering of primary rays, and hemispheres upon their first bounce that propagated secondary rays (Figure 5.4). Figure 5.4. Ray visualization within Rhino geometry (viewed within Rhino, pictured on the left, with the Grasshopper component pictured on the right). The hemispheres produced upon the first bounces can be visually customized (blue hemispheres, Figure 5.5). Figure 5.5. Scattering hemispheres visible within the Rhino model (pictured to the right) with the Grasshopper component pictured on the left. At the receiver, the intersection points were checked. With a sphere diameter of 0.5 m and 100 propagated rays with 3 bounces, there were 15 intersection points recorded at the receiver (Figure 5.6). Figure 5.6. Intersection points recorded at the receiver (visible within Rhino, pictured on the left). The steps so far prove that the plug-in collects geometry and visualizes the rays propagated and scattered.
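The intersection counting exercised by these tests can be illustrated with a simple ray-sphere test. The following is an illustrative Python sketch, not the plug-in's c# code; the source/receiver positions and ray count are assumptions chosen only to show the relationship between receiver size and hit count:

```python
import math, random

def hits_sphere(origin, direction, center, radius):
    """True if a ray from origin along direction intersects the sphere."""
    L = [c - o for c, o in zip(center, origin)]   # origin -> sphere center
    norm = math.sqrt(sum(d * d for d in direction))
    d = [di / norm for di in direction]           # normalized direction
    tca = sum(li * di for li, di in zip(L, d))    # projection onto the ray
    if tca < 0.0:
        return False                              # sphere is behind the ray
    d2 = sum(li * li for li in L) - tca * tca     # squared closest distance
    return d2 <= radius * radius

def count_hits(radius, n_rays=20000, seed=1):
    """Fire n_rays in uniformly random directions; count receiver hits."""
    random.seed(seed)                             # same ray set for every radius
    source, receiver = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
    hits = 0
    for _ in range(n_rays):
        v = [random.gauss(0.0, 1.0) for _ in range(3)]  # random direction
        if hits_sphere(source, v, receiver, radius):
            hits += 1
    return hits

# With the same ray set, a larger receiver records more intersections:
assert count_hits(1.0) > count_hits(0.5)
```

This mirrors the behaviour verified in the next test: enlarging the receiver sphere increases the number of recorded intersections, because the same rays subtend a larger target.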
The following steps were conducted to test whether the plug-in could compute higher values and whether they would correspond to the intended logic. The first test checks whether increasing the radius of the receiver sphere results in more intersections with rays. The outputs indicated that doubling the receiver diameter increased the intersection points from 15 to 47 (Figure 5.7). This is important because it indicates that the plug-in follows the intended logic relating receiver size and intersections. Figure 5.7. 47 intersection points with 1 m diameter receiver sphere (previewed from the Grasshopper component pictured to the right). Returning the receiver sphere diameter to its original value of 0.5 m, the next test was conducted to see whether increasing the number of starting rays and their bounces from the source would increase the number of intersections. This proved to be correct: the intersection points increased to 25 when the starting ray count was set to 180 and the bounces increased to 7. The intersection points at the receiver were also more equally distributed along the surface facing the source, curving around the sphere. This indicates that the number of bounces increases the number of directions from which the rays intersect with the receiver (Figure 5.8). This is important because it further demonstrates that the plug-in follows the logic intended by the code. Figure 5.8. Increasing starting rays and bounces increased intersections to 25 (pictured in the sliders within Grasshopper, to the right). The final test conducted was to observe whether increasing the distance between the source and receiver would decrease the number of intersections. The starting ray count was reduced to its original 100 and the bounces to their original value of 3. The only variables changed were the placement of the receiver sphere further away and the elongation of the enclosing box.
As compared to when the receiver sphere was only 10 m away and there were 15 intersections, doubling the distance resulted in 5 intersections (Figure 5.9). This is important because the plug-in demonstrates an understanding of the relationship between the source-receiver distance and the number of intersections affected by that distance. Figure 5.9. Doubling the distance decreased intersections to 5 (previewed from the Grasshopper component on the right). The results from these tests indicate that the plug-in works as designed for showing the propagation, intersections and geometry. This is important because it shows that the plug-in passes the preliminary validation test, allowing it to be further tested for its mathematical calculations. 5.1.2. Testing Mathematical Logic and Accuracy In the second scenario, the enclosed space was enlarged to dimensions corresponding to the speed of sound in meters (Figure 5.10). Figure 5.10. Source and receiver placed 343 m apart from each other and from the surrounding surfaces. The value of 343 meters was chosen because the speed of sound is 343 meters per second. By placing the source 343 meters away from the receiver and the surrounding surfaces, it was easy to visualize one ray at a time to check whether it was travelling at the correct speed. This test could be conducted by propagating and viewing only the direct ray from the source to the receiver. At a distance of 343 meters, the direct ray (according to calculations) should be at its midpoint at the 500-millisecond mark (Figure 5.11). Figure 5.11. Direct ray reaching the halfway point at 500 milliseconds (visible within Rhino, pictured on the right). Similarly, at the 1000-millisecond mark, which is equivalent to 1 second, the direct ray should reach the receiver and intersect with it (Figure 5.12). This proved to be successful within the Rhino 3D model. Figure 5.12.
Direct ray intersecting with the receiver at 1 second (pictured within Rhino, to the right). Once this was determined to be correct, the next test was to check whether the same result was achieved with a first-order reflection ray. A ray that bounces off one surface before intersecting the receiver should reach it at approximately the 2000-millisecond mark because it takes a longer route. This validation test also proved to be successful (Figure 5.13). Figure 5.13. First-order reflection reaching the receiver at 2000 milliseconds (pictured in the model to the right, with the timer slider set to 2000 on the left within Grasshopper). The formula applied here is the same as the initial one used to determine the time at which the ray should reach the receiver: distance = speed × time. 5.1.3. Validating SPL Calculations Once it was determined that the ray propagation was working according to the time calculations, the next step was to determine that the Sound Pressure Level (measured in decibels) was correctly calculated by the plug-in according to the distance travelled by the ray. This was executed by comparing the graph of time against amplitude to a graph presented within an acoustic analysis case study. Manual calculations were done using the formula (Equation 5.1):
B2 = B1 + 20·log10(R1/R2) (Equation 5.1)
where B1 = 100 dB, R1 = 0.1 and R2 = the length of the ray segment. The value for R1 was set at 0.1 because there is no "per meter" formula for decibel calculations; decibels are ratios that reduce logarithmically. For the direct ray, the manual calculation was done simply by measuring the length of the ray between the source and receiver, using that value for R2 and comparing the decibel value to the plug-in result. Here, it was concluded that the value calculated by the plug-in was correct when compared to the manual calculation. Figure 5.14.
Comparison between plug-in result and manual calculation of the decibel value. 5.1.4. Validating Absorption Coefficients Direct decibel calculations, where the ray does not intersect with anything besides the receiver, are simple. Complications arise when the surrounding geometry must be considered and the sound absorbed by this geometry must be calculated and accounted for. It must be remembered that decibel calculations are logarithmic; the sound absorbed by surfaces therefore cannot simply be subtracted. For the surrounding geometry, the user must input the corresponding absorption coefficients. These values are considered within the following formula for sound absorption (Equation 5.2):
Loss of energy at surface = 10·log10(1 − x) (Equation 5.2)
where x = the absorption coefficient of the surface. For the plug-in to execute this formula, each ray is broken down into its constituent segments. Each segment is then checked for an intersection with a surface. If there is an intersection, the absorption coefficient of that surface is recorded and applied to the formula (Equation 5.2). The calculated answer is always a negative value, which is then added to the decibel value calculated from Equation 5.1. The calculation executed by the formula was checked manually within an Excel file and then compared to the plug-in's calculated result (Figure 5.15). The values match. Figure 5.15. Comparison between the value calculated by Excel and the value calculated by the plug-in. 5.1.5. Considerations for Decibel Addition When sound is modeled as rays that travel from the source to the receiver, there are often scenarios where two or more rays reach the receiver at the same time. For these rays, the addition of their respective decibel values was tested. The Precedence Effect was used to justify the addition: it holds that sounds reaching the receiver within 5 milliseconds of one another are perceived as one (Zurek, 1987).
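Before such additions are considered, each individual ray's level at the receiver follows from Equation 5.1 (distance attenuation) and Equation 5.2 (surface absorption) together. The following is an illustrative Python sketch (the plug-in itself is written in c#; the 10 m ray length and α = 0.5 surface are hypothetical example values):

```python
import math

def spl_at_distance(r2_m, b1_db=100.0, r1_m=0.1):
    """Equation 5.1: B2 = B1 + 20*log10(R1/R2), with B1 referenced at R1."""
    return b1_db + 20.0 * math.log10(r1_m / r2_m)

def absorption_loss_db(alpha):
    """Equation 5.2: (negative) dB change at a surface with absorption alpha."""
    return 10.0 * math.log10(1.0 - alpha)

def spl_at_receiver(ray_length_m, alphas):
    """Distance attenuation plus one correction per surface intersection."""
    return spl_at_distance(ray_length_m) + sum(absorption_loss_db(a) for a in alphas)

# A direct 10 m ray: 100 + 20*log10(0.1/10) = 60 dB; one bounce off a
# surface with alpha = 0.5 removes a further ~3 dB:
assert abs(spl_at_distance(10.0) - 60.0) < 1e-6
assert abs(spl_at_receiver(10.0, [0.5]) - 56.9897) < 1e-3
```

Because both corrections live in the logarithmic domain, they add as dB terms; it is only when rays must be *combined* that a conversion to linear power becomes necessary, as the next subsection describes.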
To execute this within the plug-in, sounds arriving within 5 milliseconds of each other were added together. Since decibel values are logarithmic, the values of the rays are converted to Sound Power values (watts) and then added. However, this first implementation of the addition was not well calibrated. The plug-in was simply performing successive subtraction along the decibel value list and adding those values that had a difference of less than 5 milliseconds between them. As a result, the plug-in was only counting roughly 3 or 4 values that were within 5 milliseconds of each other, whereas in some scenarios there were 12 values all within 5 milliseconds of each other that the plug-in was not considering. When displayed as a graph, the result differed little from the original graph produced, indicating that the addition was not being performed correctly. Figure 5.16. Comparison between original graph and graph after addition of decibel values. A better method of calculating the decibel values while considering their addition was to group the values together through a conceptual "binning" of values. This method was temporarily and unofficially referred to as a "sliding binner". Though complicated to execute, the concept was rational. The algorithm was to check each recorded arrival time and consider every value before it. For example, the first record of time in a list was considered as a standalone, whereas the next time record was considered alongside the one preceding it, and the third time record was considered alongside both values preceding it. This was repeated until the final value, which considered all time values preceding it. Once these groups were made, all the values in each group were subtracted from the final value in that group. Only those within 5 milliseconds of the last value were recorded and their indices output as a list.
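Under the 5-millisecond window just described, the grouping and the power-domain addition can be sketched as follows. This is an illustrative Python sketch with hypothetical arrival times, not the plug-in's c# code, and it assumes the arrival times are sorted in ascending order:

```python
import math

WINDOW_MS = 5.0  # Precedence Effect window

def db_sum(levels_db):
    """Add decibel values by summing their linear power equivalents."""
    total_power = sum(10.0 ** (l / 10.0) for l in levels_db)
    return 10.0 * math.log10(total_power)

def sliding_bin(times_ms, levels_db):
    """For each arrival, combine it with every earlier arrival within 5 ms.

    times_ms must be sorted ascending; each output entry is the combined
    level of the arrivals falling inside the window ending at that time.
    """
    combined = []
    for i, t in enumerate(times_ms):
        group = [levels_db[j] for j in range(i + 1)
                 if t - times_ms[j] <= WINDOW_MS]
        combined.append(db_sum(group))
    return combined

# Two equal 60 dB arrivals 2 ms apart combine to ~63 dB at the later arrival,
# since doubling the power adds 10*log10(2) ≈ 3.01 dB:
out = sliding_bin([10.0, 12.0], [60.0, 60.0])
assert abs(out[0] - 60.0) < 1e-9
assert abs(out[1] - 63.0103) < 1e-3
```

The key point the sketch captures is that decibels are never added directly: each group is converted to linear power, summed, and converted back.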
These indices were then used to collect the corresponding decibel values from the SPL list. The decibel values grouped together were then added by converting them to Sound Power, summing their values and converting them back to Sound Pressure Level. A visual explanation of this "sliding binner" method makes it easier to comprehend (Figure 5.17). Figure 5.17. "Sliding binner" diagram. The graph produced was closer to those presented in acoustic studies, displaying the Sound Pressure Level against time (Figure 5.18). The resultant values, however, were not correct when compared with research and case studies. Figure 5.18. Graph produced from the "sliding binner" experiment. 5.2. CASE STUDIES To accurately validate the results produced by the plug-in, two case studies were chosen that provided correct formulae for ray tracing and sound absorption. These case studies also provided schematics for their respective sites to test the acoustics, as well as a graph of Sound Pressure Level against time. 5.2.1. Indoor Case Study To validate the plug-in's results for an indoor space, a case study was chosen that documents the acoustic simulation of an indoor hallway. This case study was documented in a research paper that provided formulae and directions for calculations. Mahjoob's research paper titled "Acoustic simulation of building spaces by ray-tracing method: Prediction vs. experimental results" documents the acoustic study of a hallway and provides calculations for simulated sound rays and SPL. The paper also provides equations to determine the optimum number of rays to be propagated from the source. This is done by calculating the volume of the space and the volume of the receiver sphere (Equation 5.3).
(Equation 5.3) Though this formula is used in further calculations, the number provided by dividing (the volume of a large space × 10) by the volume of the receiver sphere (which is often a small sphere within the model) is too large for the plug-in to compute. To accommodate this, a predetermined number of rays is provided for the plug-in. This "number of rays" value is provided for small, medium and large spaces, with ranges provided for their volumes.
Table 5.1. Proposed Number of Rays
Size     Cubic Meters                No. of Rays
Small    5,000 – 1,000,000           1,000
Medium   1,000,000 – 40,000,000      3,500
Large    40,000,000 – 90,000,000     7,000
The paper proposes the initial division of the sound power level of the sound source (Equation 5.4) (Mahjoob, 2009). (Equation 5.4) Here, the Lw value denotes the original sound power level of the source (measured in decibels). This formula was applied within the plug-in to corroborate the data collected from the results with the data provided through this research paper's calculations. Figure 5.19. Testing formula proposed by case study within the plug-in. Further calculations incorporate the previously recorded equations and use them to calculate the total SPL at the receiver (Equations 5.5, 5.6, 5.7) (Mahjoob, 2009). (Equation 5.5) (Equation 5.6) (Equation 5.7) Here, the intersections and loss of energy are calculated, and the value is expressed in watts. Equation 5.7 converts the values from watts to decibels. These equations were implemented within the plug-in (Figure 5.20). Figure 5.20. Implementing proposed SPL calculations within plug-in. The formula provided (Mahjoob, 2009) for decibel calculation at the receiver was executed within the plug-in on a 3D recreation of the hallway design, and the resultant graph was compared to the one presented within the case study.
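If Equation 5.4 is read as an equal split of the source's linear power among the N propagated rays (a hedged reading; a common ray-tracing convention), each ray then starts at Lw − 10·log10(N) in decibel terms. A short Python sketch under that assumption:

```python
import math

def per_ray_level_db(lw_db, n_rays):
    """Starting level of each ray if the source power is split over n_rays.

    Assumes an equal division of the linear sound power, which in the
    decibel domain subtracts 10*log10(n_rays) from the source level Lw.
    """
    return lw_db - 10.0 * math.log10(n_rays)

# A 106 dB source split over the 1,000 rays proposed for a small space
# gives each ray a starting level of 106 - 10*log10(1000) = 76 dB:
assert abs(per_ray_level_db(106.0, 1000) - 76.0) < 1e-9
```

This division is what keeps the summed energy of all ray arrivals consistent with the original source power, regardless of how many rays are propagated.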
The results were comparable and correctly executed by the plug-in within the Rhino 3D model, which was designed according to the diagram of the hallway's schematics (Figure 5.21). Figure 5.21. Hallway schematics vs Rhino model (case study schematics marked by the red line, Rhino 3D model recreation marked by the yellow line). 5.2.1.1. Results Within the case study research paper, a resultant SPL-Time graph is provided for comparison. This graph is a result of the author's simulation within the hallway using an initial sound power level from the source of 106 dB (Figure 5.22). Figure 5.22. SPL-Time graph from case study (Mahjoob, 2009). Using 106 dB as the starting sound power level of the source, the same conditions (including the provided absorption coefficients) and the hallway 3D model, Rutabaga Acoustics was used to calculate and provide a graph comparable to Mahjoob's resultant graph. Figure 5.23. Rutabaga Acoustics' SPL-Time graph (red line) plotted on top of the SPL-Time graph from the case study (Mahjoob, 2009). The plug-in's calculated graph was judged to be similar to the research paper's, therefore validating the results of the plug-in (Figure 5.23). 5.2.2. Outdoor Case Study In 2019, Buro Happold conducted an acoustic analysis of the St. Giles Piazza in London, UK. This location was chosen because it encloses a public space with shops and cafes. It is frequently visited and is located near a high-traffic zone, with traffic passing through and around it (Figure 5.24) (Crippa et al., 2019). Figure 5.24. St. Giles Piazza (Google Images). The openings from the enclosed space result in high levels of noise pollution. Buro Happold was tasked with providing sound absorption techniques and materials to minimize the traffic noise within the space, to accommodate the visitors and residents of the buildings surrounding the piazza. 5.2.2.1. Simulation Study of the St.
Giles Piazza Before proceeding to alter the building facades, an acoustic analysis was to be conducted. Due to the height of the buildings, it would have been infeasible to alter the entire building façade without first understanding where the noise reflections were occurring. In order to study the acoustic performance of the space, CATT-Acoustics was employed, and a 3D model of the space was created within AutoCAD. To import this model into CATT-Acoustics, a plug-in was downloaded that sped up the import and export process of the 3D model (Figure 5.25). Figure 5.25. St. Giles Piazza 3D model (Crippa et al., 2019). Four receivers were placed within the enclosed space (Figure 5.26), with sound sources placed around the outside to simulate traffic noise. Figure 5.26. Receiver positions within the space (Crippa et al., 2019). The average "loudness" of the noise within the Piazza was estimated to be 88 decibels, and this value was used in the calculations performed by CATT-Acoustics. The software is not equipped to handle outdoor spaces that are not enclosed; therefore, a ceiling was designed above the Piazza to simulate the sky, with an absorption coefficient of 100% (Crippa et al., 2019). To test a variety of possible acoustic noise control measures, 5 different scenarios were simulated in which the sound absorption of the surrounding surfaces was altered to test the effects it would have on the receiver positions (Figure 5.27). Figure 5.27. Sound Pressure Level results at different receiver positions from the CATT-Acoustics simulation of the St. Giles Piazza. Conducting simulations for 4 receiver positions in 5 different absorption scenarios (where the surrounding geometry's absorption coefficient is set at 0%, 15%, 30%, 60% and 100%), this graph displays the collected SPL values for each receiver in each scenario. 5.2.2.2. Rutabaga Acoustics Simulation of the St.
Giles Piazza To simulate the acoustics of the site within Rhino, the model geometry first had to be created within Rhino. The dimensions were gathered from Google Earth and from 3D models readily available on the Internet, and the 3D model was built with the receiver locations set to match those used by Buro Happold (Figure 5.28). Figure 5.28. 3D recreation of the St. Giles Piazza within Rhino. With the 3D model ready and the receiver locations placed, Grasshopper was launched and the Rutabaga Acoustics plug-in opened (Figure 5.29). Figure 5.29. Rutabaga Acoustics plug-in within Grasshopper. Here, the inputs for the source, receiver, surrounding geometry and its respective absorption coefficients were entered. The starting Source Power Level of 88 dB was also entered using the slider. Receiver #1 was tested first. The rays propagated from the sound source intersected with the receiver sphere, and the plug-in then performed its SPL calculations using these rays (Figure 5.30). Figure 5.30. Rays intersecting with Receiver #1 within the model. Though multiple values for numerous rays were gathered, to compare these results with those presented within the case study, the SPL values were converted to their power values (measured in watts), added together, averaged over the number of rays and then converted back to SPL values (measured in decibels). This value was then used. Each of the 5 scenarios was tested on each of the receivers. The results are presented in Table 5.2 and Figure 5.31, with comparisons to the results produced by CATT-Acoustics. Table 5.2. SPL results for 4 receiver positions, in 5 absorption scenarios. Figure 5.31. Graphical representation of simulation results from CATT-Acoustics (red outline) and Rutabaga Acoustics (yellow outline). The results collected from the Rutabaga Acoustics plug-in can be compared more simply to the CATT-Acoustics results if divided into respective receiver graphs (Tables 5.3–5.6). Table 5.3.
Rutabaga and CATT result comparison of SPL at Receiver 1 in all 5 scenarios. Table 5.4. Rutabaga and CATT result comparison of SPL at Receiver 2 in all 5 scenarios. Table 5.5. Rutabaga and CATT result comparison of SPL at Receiver 3 in all 5 scenarios, marked as red due to high deviations from case study values. Table 5.6. Rutabaga and CATT result comparison of SPL at Receiver 4 in all 5 scenarios. 5.2.2.3. Conclusions and Discussions It has been stated by acousticians that acoustic simulations should allow a ±3 dB margin of error (Elorza, 2005). The differences between the SPL values at the receivers in the two simulations could be attributed to this margin of error; however, that does not explain why Receiver #3 received the least sound via the Rutabaga plug-in, whereas Receiver #2 received the least in the CATT-Acoustics simulation. This could be because CATT-Acoustics can process multiple sound sources at once, whereas Rutabaga Acoustics is still only able to process one at a time. On site, at the St. Giles Piazza, there is an egress corridor near the location of Receiver #3, through which traffic flows constantly. When the simulation was conducted within Rutabaga, the source position in Figure 5.28 was maintained, therefore ignoring the simultaneous noise from the other source. Rutabaga Acoustics' simulation is still in the process of being optimized. After conducting tests and research on different equations used for ray tracing and image-source validity checks, the most commonly used formula was adopted (Equation 5.8). (Equation 5.8) Despite the logic applied to the calculations, the difference in results can also be attributed to the lack of proper coordinates and size requirements for the receiver spheres. Though Rutabaga Acoustics is still in its early design phase, it can successfully perform ray simulations, SPL calculations and graphical representation. To be a competitor to commercial simulation software, however, it still has a long way to go. 5.3.
SUMMARY Chapter 5 documents the validation processes executed to test the accuracy of the Rutabaga Acoustics plug-in. The first steps taken were to check whether the functionality of the component was correct. This was done by changing the parameters and testing the ray propagation within an enclosed space. The parameters that were changed included the size of the receiver sphere, the number of rays and bounces, and the distance between the source and the receiver. The effect these changes had on the number of intersections was noted. The next validation test was to confirm whether the rays were being propagated at the speed of sound. This was done by building geometry with the source and receiver 343 meters apart, since the speed of sound is 343 meters per second. The direct sound ray's end point was checked at the 1000-millisecond mark. The plug-in output displayed the ray reaching the receiver at exactly this time, corroborating the fact that the speed of sound was being followed. Further validation checks were conducted by comparing the Rutabaga Acoustics plug-in results with existing case studies documented in research papers. These case studies included an indoor hallway and the St. Giles Piazza in London. The geometry of both these sites was modeled within Rhino and the Rutabaga Acoustics simulation run on these models to collect the SPL values at the receiver. For the indoor enclosed hallway, the resultant SPL-Time graph was accurately recreated by Rutabaga Acoustics and matched the graph presented by Mohammad Mahjoob in his paper "Acoustic simulation of building spaces by ray-tracing method: Prediction vs. experimental results" (Mahjoob, 2009). For the site of the St. Giles Piazza, Buro Happold had conducted a research study on-site in 2019, running a CATT-Acoustics simulation on a 3D recreation of the space. The research resulted in SPL values at 4 receivers spread out within the model.
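The speed-of-sound validation summarized above reduces to distance = speed × time; a quick Python sketch of the expected figures from that test (the 686 m path is the first-order reflection route from the earlier check):

```python
# Wavefront position at a given Timer value, assuming sound at 343 m/s.
SPEED_OF_SOUND = 343.0  # m/s

def distance_travelled_m(timer_ms):
    """Distance covered by the wavefront after timer_ms milliseconds."""
    return SPEED_OF_SOUND * timer_ms / 1000.0

assert distance_travelled_m(500) == 171.5    # midpoint of the 343 m direct ray
assert distance_travelled_m(1000) == 343.0   # direct ray reaches the receiver
assert distance_travelled_m(2000) == 686.0   # a 686 m first-order reflection path
```

Any ray's arrival time therefore follows directly from its total polyline length, which is what the Timer slider steps through.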
To validate the results of Rutabaga Acoustics, the site of the St. Giles Piazza was built within Rhino, and the source and receivers were placed approximately where the authors of "Building Facades and Soundscapes" had placed theirs. The results collected displayed slight discrepancies; however, these were due to the inaccuracy of the receiver locations and Rutabaga Acoustics' inability to process more than one source at a time. 6. CHAPTER 6: CONCLUSIONS AND FUTURE WORK During the initial research stage of this project, acoustic simulation software was investigated, and it was deduced that commercial acoustic simulation software is used primarily by design firms. There is a shortage of free acoustic simulation plug-ins and add-ons for independent designers and students. Commercial simulation software is often expensive, and the trial versions provide limited functionality. It also requires 3D models to be imported from and exported to 3D modeling software, which interrupts the workflow since the import and export process takes time. Among free acoustic simulation software and plug-ins, there are a few that provide accurate results and allow synchronicity between the acoustic parameters and the 3D model. However, these simulation engines rarely work for outdoor spaces because they can only compute flat surfaces. The input requirements for acoustic simulation plug-ins are often very specific. An example of this is Pachyderm for Rhino's Grasshopper. One limitation noted within Pachyderm is the complexity of its component variety: there are a total of 36 components, each tailored to accommodate a specific input type. For a user with no prior acoustic knowledge, simply looking to understand the acoustical properties of a designed space, Pachyderm is an overwhelming solution. An acoustic simulation plug-in for Rhino's Grasshopper was therefore created that simulates sound propagation in outdoor spaces with complicated surfaces.
Other criteria for the proposed plug-in included practical inputs and results that were easy to decipher. After the processes of research, plug-in development and data validation, multiple conclusions were reached. Chapter 6 documents a detailed analysis of the initial planned methodology, the deliverables, the shortcomings of the final product, its potential and the future work that must be done. 6.1. CONCLUSIONS Besides providing correct and simplified outdoor acoustic simulation results that can be easily interpreted, the initial purpose of this plug-in design was to be easily integrated into the designer's workflow. After its development and testing within Rhino's Grasshopper, using models of varying complexity, it was concluded that the functionality of the Rutabaga plug-in is easy to understand, simple to run and interpret, and an easy extra step toward improving the acoustic understanding of a space. With a maximum run-time of 1 minute and 45 seconds with maximized input values and a complicated 3D model, it does not consume much computing power. For outdoor spaces, the plug-in correctly collects the surrounding geometry and its absorption coefficients. This collected data is then used to calculate the resultant energy (in dB) at the receiver position and to visualize the propagated rays and their reflections. However, it will crash if the number of rays input is very large, for example 20,000. To test the accuracy of the resultant SPL at the receiver and the distribution of rays within the model, two case studies were conducted and compared to pre-existing research conducted on these sites. To test the extremes, the first site chosen was an enclosed hallway; simple geometry like this produces a clear, accurate SPL-Time graph. For more complicated geometry, an outdoor site was chosen: the St. Giles Piazza had recently been documented by Buro Happold, and their research informed the validity test run for Rutabaga Acoustics.
The original research case study placed four receivers within the site. These receiver positions were replicated for the validity test, and Rutabaga's resultant SPL values were recorded and compared with those presented by Buro Happold's study. The results indicated that Rutabaga Acoustics' calculations were very close to those produced by CATT-Acoustic. There were a few discrepancies at one location, where Rutabaga Acoustics was not considering another sound source. Rutabaga Acoustics' inability to process more than one sound source proved to be problematic and resulted in inaccurate SPL values. For those receiver positions affected by a single sound source, however, the values were correct.

The intended simplicity of the plug-in nevertheless stunted the potential of the outputs. An example of this is the calculation for sound propagation in outdoor spaces, which should include inputs for environmental conditions. To limit the inputs and provide ease-of-use, the environmental factors were ignored. Though the plug-in considers sound scattering and diffusion, accounting for wind, humidity and air pressure would provide greater accuracy in the results.

Other limitations of the Rutabaga acoustic plug-in include the limited complexity of the output values. These values include the reverberation times, the energy-time curve and the individual dB values of rays at their intersections with both the surrounding surfaces and the receiver. Acoustics, however, is a complicated field involving the study of the frequency response, mid/high frequency decay and reverb decay of sounds within spaces. The Rutabaga plug-in does not yet calculate these values.

The deviation from the planned methodology also derailed the development process and complicated the scripting layout.
Visual Studio's inability to simultaneously display results within the plug-in, and its lack of an up-to-date, Rhino 7-compliant Grasshopper template, lengthened the time taken to develop the code. The difference between the format of Visual Studio's simplified template and traditional C# was an adjustment that had not been accounted for.

Overall, the Rutabaga plug-in proved to be a fairly successful design that fulfills half of its intended requirements. If the design is advanced and the simplicity disregarded, the plug-in can meet its full potential as an outdoor acoustic simulation engine that is easily incorporated into the initial design phases of performance spaces, homes, parks, public spaces and urban design. In its current stage, the plug-in meets the minimum requirements of an acoustic simulator that provides simple results.

6.2. METHODOLOGY ANALYSIS AND EVALUATION

During the initial planning of the plug-in design, the methodology defined for its execution was meticulously planned out. The steps within the methodology planning were the following:

1. Build model geometry
2. Perform geometric methods for the high frequency sound simulations
3. Perform wave-based methods for low frequency sound simulations (not done)
4. Build the components of the plug-in within Visual Studio (not done)
5. Execute the algorithms and perform convolution and impulse response calculations
6. Output visualization and auralisation (advanced visualization and auralisation not done)
7. Perform testing and validation of the results

Upon careful consideration, a few of the steps planned in the methodology were abandoned. Step three, which involved implementing wave-based methods for low frequency simulations, was not utilized. This was a deliberate decision, discussed with professional acousticians, who suggested that the low frequency sounds were not audible enough to be considered in the final calculations.
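Step two, the geometric method, reduces each ray bounce to a specular reflection: the incoming direction is mirrored across the surface normal at the hit point, which is what the Transform.Mirror call in the Appendix A code performs. A minimal Python sketch of that mirror operation (helper names are illustrative; the plug-in itself is written in C#):

```python
def dot(a, b):
    """Dot product of two 3D vectors given as tuples."""
    return sum(ai * bi for ai, bi in zip(a, b))

def reflect(direction, normal):
    """Mirror a ray direction across a unit surface normal: d' = d - 2(d.n)n."""
    d_n = dot(direction, normal)
    return tuple(di - 2.0 * d_n * ni for di, ni in zip(direction, normal))

# A ray travelling down at 45 degrees onto a horizontal floor (normal pointing up)
# leaves at 45 degrees upward, with its horizontal component unchanged:
print(reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (1.0, 0.0, 1.0)
```

This purely geometric bounce is what makes ray tracing valid for high frequencies, where wavelengths are small relative to the reflecting surfaces, and is why step three would have needed a different, wave-based formulation.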
In step four, a different approach was taken: the components were constructed within Grasshopper instead of Visual Studio. The calculation of convolution within the simulation was abandoned as well due to time constraints, along with the auralisation output. Though some of the planned results were not calculated, the remaining deviations from the plan produced results of equal, if not better, quality. The altered methodology that was adopted saved time but sacrificed significant outputs from the plug-in.

6.2.1. Low Frequency Sounds

When considering the inclusion of low frequency sound simulation using wave-based methods, the effectiveness of this approach was outweighed by the reasoning supporting its exclusion. Low frequency sounds, especially in outdoor spaces, would go largely unnoticed. Within an outdoor space, where site sizes are often measured in acres, low frequency sounds often do not travel far. Sounds within this category are measured at 500 Hz or lower, and though this falls within the range of human audibility, it is closer to the lower spectrum of the conversational range of human hearing, and therefore closer to the threshold of hearing (Figure 6.2). This could be added at a later time.

Figure 6.2. Human auditory limits.

6.2.2. Visual Studio Component Build

The decision to forgo the component build within Visual Studio was a sound one. Many reasons contributed to the decision to build the components using Grasshopper's C# component. The first hindrance in using Visual Studio was the unavailability of an up-to-date Grasshopper template compliant with Rhino 7. Another limitation faced during the process of scripting was the inability of Visual Studio to simultaneously display within Rhino any changes made by the code.
Upon making any changes within Visual Studio, the user has to "build" the code, wait for Visual Studio to launch Rhino, manually launch Grasshopper and load the components into the canvas. This process is very time consuming when the code needs to be constantly checked against the results it is producing. This plug-in design involved a significant amount of trial and error, and therefore a more convenient solution was chosen. The C# component within Grasshopper allows the user to directly add custom inputs, outputs and code to compute algorithms ranging from simple additions to complicated parametric solutions. Using traditional C#, as opposed to Visual Studio's simplified template, was not an easy transition and proved to be the most time-consuming process in the plug-in development. However, one main advantage of using the Grasshopper custom component is the real-time visualization within the Rhino 3D model. When constructing many rays, with bounces displayed using hemispheres that emanate more rays, real-time visualization is essential for catching errors. Avoiding Visual Studio's programming environment was therefore beneficial: frequently checking the calculation and visualization results within Rhino and Grasshopper would have been far more time consuming, because Visual Studio must "build" and open Rhino every time a change in the code is to be executed.

This deviation from the original plan to build the plug-in within Visual Studio resulted in a significant change in the final output. Visual Studio's template allows the user to specify the category and subcategory of the components designed, meaning that a new, customizable tab is constructed within Grasshopper. When the component is instead customized within Grasshopper's canvas (the method implemented by Rutabaga's design), the final product is a series of Grasshopper components, linked together to create a "Grasshopper definition".
This definition, unfortunately, is not in a customized tab within Grasshopper and must be distributed as a Grasshopper file. For the purposes of this project, the output format is still functional, as the plug-in must be provided with an Excel file for its output. A folder under the name "Rutabaga Acoustics" is therefore the medium for distribution. This folder contains the .gh file (containing the plug-in components), the instructions document, the output Excel file and some reference material for interpreting the outputs of the plug-in.

6.2.3. Convolution and Auralisation

Auralisation within a Grasshopper plug-in requires multiple calculation and input criteria to be met before it can be executed. After checking the intersections with the receiver and recording the energy collected at the receiver over a certain time period (measured in milliseconds), the impulse response generated from the results must dictate the convolution, which in turn collects and alters the sound input by the user. Within Grasshopper, this is a complicated procedure. Though it is possible to run the auralisation simulation within Grasshopper, the author was unable to incorporate the auralisation function due to time constraints. Another factor which led to the lack of auralisation was the complicated nature of importing .wav files into the Grasshopper canvas. To be executed simply and in limited time, the Rutabaga Acoustics plug-in would have had dependencies on other plug-ins to import the sound. These external plug-ins include Pachyderm and Mosquito. Considering that the ethos of the Rutabaga Acoustics plug-in was to simplify the simulation of acoustics, dependence on another plug-in was not a viable option. However, this could be considered future work if desired.

6.3. FUTURE WORK

Though the Rutabaga plug-in has the potential to be effective not only in simulating outdoor acoustics but also in many other applications, its functionality must first be improved.
These improvements range from increasing the source capacity to calculating more complicated results.

6.3.1. Functionality Improvements

In real scenarios, there are many other environmental conditions that can be considered, including traffic, neighboring houses, pets, wind speeds, rain, and naturally existing sources of sound, e.g. waterfalls or a beach with strong waves. The Rutabaga Acoustics plug-in will consider these input parameters in the future. Currently, the Rutabaga plug-in has a ray propagation limit of 10,000, and it can only simulate sound from one source traveling towards one receiver. To accurately simulate real-world situations, the plug-in must be able to process more than one source and provide customization of the sounds produced from these sources. The source geometry must also be flexible: the user should be able to create a wall of sound to simulate a nearby freeway or a waterfall. The sound rays projected from the source(s) must be configured to cancel each other out where the situation demands it, if the simulation is to be an accurate depiction of real sound waves. This is difficult, because the cancellation would vary with frequency and would thus require multiple iterations. Though this can be complicated and make the script convoluted, simpler methods can be employed. These include creating a different component for every source, with one extra component that calculates the interaction of the sources' respective sounds with each other.

To correct the omissions made during the initial development of the plug-in, auralisation must be incorporated into the outputs. This can be executed through a separate auralisation component that collects the data from the main ray visualizer component (including the impulse response), convolves the input sound accordingly and outputs it through a stereo output. The user should be able to use headphones and hear what their space sounds like with sound sources placed at differing locations.
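When multiple sources are eventually supported, their contributions at a receiver cannot simply be added in decibels. For incoherent sources (ignoring the frequency-dependent phase cancellation discussed above, as energy-based ray methods typically do), levels are summed on an energy basis. A brief Python sketch of this rule (the function name is illustrative; the plug-in itself is written in C#):

```python
import math

def combine_spl(levels_db):
    """Energy (incoherent) sum of SPLs: L = 10*log10(sum(10^(Li/10)))."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

# Two equal 60 dB sources raise the level by about 3 dB, not to 120 dB:
print(round(combine_spl([60.0, 60.0]), 1))  # 63.0
```

An extra component performing this summation over the per-source receiver levels would be one simple way to realize the "one component per source" approach described above.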
The plug-in should also have an add-on that is functional within Rhino, allowing the user to input absorption coefficients within Rhino by selecting the surface geometry and adding the parameter values in a debug window to the side.

The outputs produced must be more sophisticated. Currently, simple calculations of the decibel values of individual rays, the T60 value and the total SPL at the receiver are all the plug-in is capable of calculating. However, acoustic simulations run by advanced software produce complicated calculation results in the form of graphs, lists and diagrams. The values depicted in these include frequency response, reverberation, impulse response, sound pressure level, and frequency mapped against amplitude and reverberation times. The goal is to improve Rutabaga's calculations and outputs to provide more complex results.

6.3.2. Applicability Improvements

The functionality of this plug-in can also be extended to other 3D modeling software, including SketchUp and Revit. Within Revit, the plug-in can be used more effectively with BIM. Using the plug-in can allow designers to simulate which building elevations will be most affected by any surrounding noise pollution. The rays produced by the plug-in can be checked to see which windows and building surfaces they intersect with, which in turn can inform the designer of the most affected windows, doors and balconies. The results can allow the designer to change the surface materials and placement. Performing these tasks during the initial design phase can improve the noise conditions within the buildings and save capital that would otherwise be invested in sound-damping fixtures installed on-site.

The Rutabaga Acoustics plug-in, once it is able to process many sound sources, could be used to simulate performance acoustics in outdoor spaces. These include amphitheaters and stadiums, where multiple musicians and performers are on stage.
The functionality of the "sound wall" could be used to simulate crowds along the sides of stadiums.

6.4. SUMMARY

The Rutabaga Acoustics plug-in for Rhino's Grasshopper is currently at a very basic level of complexity. Priority was given to the visualization within Rhino and the total loudness at the receiver. These functions assist with the final calculation of the amplitude decibel levels and their changes over time.

The initially planned methodology for the plug-in development was altered due to complications and time constraints. These alterations included abandoning the utilization of Visual Studio and opting for the Grasshopper C# customizable component instead. This decision assisted greatly in the development of the plug-in because it allows the programmer to simultaneously see the visual changes in the geometry that their code dictates. Another alteration to the initial methodology was the decision to forgo wave-based simulation methods, since wave-based methods primarily deal with low frequency sounds. The Rutabaga plug-in is designed for outdoor acoustic simulation, and low frequency sounds often do not contribute to the total energy collected at the receiver in large spaces.

Auralisation was intended to be a significant output from the plug-in. However, this process could only be executed after the main calculations were computed. Poor time management resulted in delayed calculation results, and therefore the auralisation component could not be developed in time.

Though some complex calculations could not be performed by the Rutabaga Acoustics plug-in, the final deliverable has the potential to be utilized for many purposes. If the frequencies are computed and the source geometry is made more versatile, the plug-in could become an important acoustics solution for independent designers and students who do not have licenses for commercial acoustics software.
It would no longer be a simplified set of components and would require complex input parameter values. This would require the user to have some prior knowledge of acoustics and would provide a complete acoustical analysis of a 3D outdoor space.

REFERENCES

"4.1.1 Acoustics." Digital Sound & Music, digitalsoundandmusic.com/chapters/ch4/.

Abi-Chahla, Fedy. "When Will Ray Tracing Replace Rasterization?" Tom's Hardware, 22 July 2009.

Adrian James Acoustics. "Common Pitfalls in Computer Modelling of Room Acoustics." Institute of Acoustics, 2016.

"AFMG." EASE, ease.afmg.eu/index.php/documents.html.

B, Phil. "Reverberation Time Calculator And Definition." Accessed November 18, 2020. https://www.soundassured.com/blogs/blog/reverberation-time-calculator-and-definition.

Bill Wagner. "Interoperability - C# Programming Guide."

Bo, Elena. "The Accuracy of Predicted Acoustical Parameters in Ancient Open-Air Theatres: A Case Study in Syracusae." 2018.

CATT TUCT Overview, www.catt.se/TUCT/TUCToverview.html.

Colette, Tony, et al. "Early Reflections 101." Acoustic Frontiers, 28 Sept. 2015.

David Oliva Elorza. "Room Acoustics Modeling Using the Ray Tracing Method: Implementation and Evaluation." 2005.

"Digital Waveguide Mesh." Wayverb. Accessed November 18, 2020. https://reuk.github.io/wayverb/waveguide.html.

Guyer, Paul. "An Introduction to Architectural Design: Theatres and Concert Halls." Volume 2 (2014): 19-21.

Honeycutt, Richard. "Predictive Acoustics and Acoustical Modeling Software." July 2014.

Honeycutt, Richard. "Reverberation Time: RT10, RT15, RT20, RT30, RT60???" SynAudCon (2012): 5-7.

Holger Rindel, Jens. "Computer Simulation Techniques for Acoustical Design of Rooms." Thesis, Technical University of Denmark, Vol. 23, 1995.

Institute of Acoustics, Acoustic Bulletin Vol. 41 No. 4, 2016.

ISO, www.iso.org/obp/ui/.

Lab, Mode.
"The Grasshopper Primer Third Edition." Accessed November 18, 2020.

Larson Davis. "DNA Software." Larson Davis Literature (2019): 6-8.

M. Cizek and J. Rozman. "Acoustic Wave Equation Simulation Using FDTD." 2007 17th International Conference Radioelektronika, Brno, Czech Republic, 2007, pp. 1-4, doi:10.1109/RADIOELEK.2007.371457.

Mahjood, Mohammad. "Acoustic simulation of building spaces by ray-tracing method: Prediction vs. experimental results." 2008.

Naylor, G.M. "ODEON—Another Hybrid Room Acoustical Model." Applied Acoustics, vol. 38, no. 2-4, 1993, pp. 131-143, doi:10.1016/0003-682x(93)90047-a.

Pelzer, Sonke. "Interactive Real-Time Simulation and Auralization for Modifiable Rooms." Sage Journals, Volume 21, Issue 1 (2014).

"Putting It All Together." Acoustics, www.acoustics.com.ph/reverberation.html.

Rindel, J. "The Use of Computer Modeling in Room Acoustics." 2000.

Ritscher, Walt. "Visual Studio 2019 Essential Training." Lynda.com, August 2, 2019. https://www.lynda.com/Visual-Studio-tutorials/Visual-Studio-2019-Essential-Training/2808543-2.html.

Sam, Samual. C# Language Advantages and Applications, 2018, www.tutorialspoint.com/Chash-Language-advantages-and-applications.

Savioja, Lauri. "Real-Time 3D Finite-Difference-Time-Domain Simulation of Low and Mid-Frequency Room Acoustics." The 13th Int. Conference on Digital Audio Effects (DAFx-10), 2010.

Schneider, John B. Understanding the FDTD Method, eecs.wsu.edu/~schneidj/ufdtd/.

Shrivastava, Anshuman. "Sound Absorption: Plastic Properties and Testing." Introduction to Plastics Engineering, 2018.

Taghipour, Armin. "Room Acoustical Parameters as Predictors of Acoustic Comfort in Outdoor Spaces of Housing Complexes." Frontiers in Psychology, March 4, 2020.
The Soundry: The Physics of Sound, www.schoolnet.org.za/PILAfrica/en/webs/19537/physics4.html#:~:text=Sound%20travels%20fastest%20through%20solids,through%20steel%20than%20through%20air.

Thompson, Eric, et al. "Acoustics in Rock and Pop Music Halls." Conference Presentation, Audio Engineering Society Convention, May 1, 2007.

Crippa, Tommaso, Gareth Davies, et al. "Façade Engineering and Soundscapes." 2019.

Ueno, Kanako. "A Consideration on Acoustic Properties on Concert-Hall Stages." Building Acoustics, Volume 18, Issue 3-4 (2011): 221-235.

Van Der Harten, Arthur. "Pachyderm Acoustical Simulation: Towards Open-Source Sound Analysis." 2000.

Zurek, P.M. (1987). The Precedence Effect. In: Yost, W.A., Gourevitch, G. (eds) Directional Hearing. Proceedings in Life Sciences. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-4738-8_4.

APPENDIX A

Plug-in C# code

using System;
using System.Collections;
using System.Collections.Generic;
using Rhino;
using Rhino.Geometry;
using Grasshopper;
using Grasshopper.Kernel;
using Grasshopper.Kernel.Data;
using Grasshopper.Kernel.Types;

/// <summary>
/// This class will be instantiated on demand by the Script component.
/// </summary>
public class Script_Instance : GH_ScriptInstance
{
  #region Utility functions
  /// <summary>Print a String to the [Out] Parameter of the Script component.</summary>
  /// <param name="text">String to print.</param>
  private void Print(string text) { /* Implementation hidden. */ }

  /// <summary>Print a formatted String to the [Out] Parameter of the Script component.</summary>
  /// <param name="format">String format.</param>
  /// <param name="args">Formatting parameters.</param>
  private void Print(string format, params object[] args) { /* Implementation hidden. */ }

  /// <summary>Print useful information about an object instance to the [Out] Parameter of the Script component.</summary>
  /// <param name="obj">Object instance to parse.</param>
  private void Reflect(object obj) { /* Implementation hidden. */ }

  /// <summary>Print the signatures of all the overloads of a specific method to the [Out] Parameter of the Script component.</summary>
  /// <param name="obj">Object instance to parse.</param>
  private void Reflect(object obj, string method_name) { /* Implementation hidden. */ }
  #endregion

  #region Members
  /// <summary>Gets the current Rhino document.</summary>
  private readonly RhinoDoc RhinoDocument;
  /// <summary>Gets the Grasshopper document that owns this script.</summary>
  private readonly GH_Document GrasshopperDocument;
  /// <summary>Gets the Grasshopper script component that owns this script.</summary>
  private readonly IGH_Component Component;
  /// <summary>
  /// Gets the current iteration count. The first call to RunScript() is associated with Iteration==0.
  /// Any subsequent call within the same solution will increment the Iteration count.
  /// </summary>
  private readonly int Iteration;
  #endregion

  /// <summary>
  /// This procedure contains the user code. Input parameters are provided as regular arguments,
  /// Output parameters as ref arguments. You don't have to assign output parameters,
  /// they will have a default value.
  /// </summary>
  private void RunScript(Surface sourceSphere, Surface receiverSphere, double hemisphereSize, int hemSphereRayCount, double dbValue, List<Brep> surroundingGeo, List<double> absorptionCoeff, int startingRayCount, int bouncesNo, double rayLength, ref object hemispheresGeometry, ref object hemispheresCentrePts, ref object distance, ref object primaryRays, ref object dbValuesPrimary, ref object primaryRaysChildCount, ref object secondaryRays, ref object dbValuesSecondary, ref object intersectionPts, ref object directRay, ref object dbValueDirectRay, ref object raysHittingReceiver, ref object dbValuesRaysHittingReceiver, ref object totalElapsedTime)
  {
    // Variables
    List<Point3d> startingPts = new List<Point3d>();
    primRays = new List<Rays>();
    secondRays = new List<Rays>();
    hemisphereCentres = new List<Point3d>();
    hemisphereDBVals = new List<double>();
    List<Surface> hemispheres = new List<Surface>();
    absCoeff = new List<double>();
    intersections = new List<Point3d>();

    // Check absorption coefficient values (non-positive entries default to 1.0)
    for(int i = 0; i < absorptionCoeff.Count; i++)
    {
      if(absorptionCoeff[i] <= 0)
        absCoeff.Add(1.0);
      else
        absCoeff.Add(absorptionCoeff[i]);
    }
    maxBounces = bouncesNo;

    // Starting timer
    System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
    sw.Start();

    // Random starting points for rays
    startingPts = PopulateSphere(sourceSphere, startingRayCount);

    // Initialise list of objects for the Rays class
    for(int i = 0; i < startingPts.Count; i++)
    {
      Rays rayTemp = new Rays(startingPts[i], rayLength, sourceSphere, dbValue);
      primRays.Add(rayTemp);
    }

    // Calculate first reflections for primary rays
    for(int i = 0; i < primRays.Count; i++)
    {
      primRays[i].FirstRefelction(surroundingGeo);
    }

    // Creating hemispheres at first intersection points and populating points on them
    List<int> nullSurfaces = new List<int>();
    for(int i = 0; i < hemisphereCentres.Count; i++)
    {
      Surface hemSphere = CreateHemisphere(hemisphereCentres[i], i, hemisphereSize, surroundingGeo, hemSphereRayCount, rayLength, hemisphereDBVals[i]);
      if(hemSphere == null)
      {
        nullSurfaces.Add(i);
        continue;
      }
      hemispheres.Add(hemSphere);
    }

    // Check for null surfaces
    foreach(int i in nullSurfaces)
    {
      hemisphereCentres.RemoveAt(i);
    }

    // Remove null rays
    List<Rays> temPrimary = new List<Rays>();
    int check = 0;
    for(int i = 0; i < primRays.Count; i++)
    {
      check = 0;
      foreach(int j in nullSurfaces)
        if(i == j)
        {
          check = -1;
          break;
        }
      if(check == 0)
        temPrimary.Add(primRays[i]);
    }
    primRays = temPrimary;

    // Calculate second reflections for primary rays
    for(int i = 0; i < primRays.Count; i++)
      primRays[i].SecondRefelction(surroundingGeo);

    // Calculate reflections for secondary rays
    for(int i = 0; i < secondRays.Count; i++)
      secondRays[i].SecondRefelction(surroundingGeo);

    // Check for primary rays' intersection with the receiver sphere
    for(int i = 0; i < primRays.Count; i++)
    {
      primRays[i].ReceiverIntersections(receiverSphere);
    }

    // Check for secondary rays' intersection with the receiver sphere
    for(int i = 0; i < secondRays.Count; i++)
    {
      secondRays[i].ReceiverIntersections(receiverSphere);
    }

    // Creating polylines for primary rays
    List<Polyline> tempPrim
      = new List<Polyline>();
    List<Polyline> rayHitRec = new List<Polyline>();
    for(int i = 0; i < primRays.Count; i++)
    {
      if(primRays[i].discontPts.Count > 1)
      {
        Polyline pl = new Polyline(primRays[i].discontPts);
        tempPrim.Add(pl);
        if(primRays[i].receiverHit == 1)
          rayHitRec.Add(pl);
      }
    }

    // Creating polylines for secondary rays
    List<Polyline> tempSecond = new List<Polyline>();
    for(int i = 0; i < secondRays.Count; i++)
    {
      if(secondRays[i].discontPts.Count > 1)
      {
        Polyline pl = new Polyline(secondRays[i].discontPts);
        tempSecond.Add(pl);
        if(secondRays[i].receiverHit == 1)
          rayHitRec.Add(pl);
      }
    }

    // Calculate the number of valid secondary rays for each primary ray
    List<int> validChildCount = new List<int>();
    for(int i = 0; i < tempPrim.Count; i++)
    {
      int num = 0;
      for(int j = 0; j < tempSecond.Count; j++)
      {
        Curve pl = tempSecond[j].ToPolylineCurve();
        Point3d startPt = pl.PointAtStart;
        double dist = hemisphereCentres[i].DistanceTo(startPt);
        if(dist <= hemisphereSize * 1.5)
          num++;
      }
      validChildCount.Add(num);
    }

    // Adding the polyline for the direct ray
    Polyline plDir = GetDirectRayPolyline(sourceSphere, receiverSphere);
    intersections.Add(plDir.ToPolylineCurve().PointAtEnd);
    List<double> dbDirect = GetDirectRayDB(plDir, dbValue);

    // Adding the dB values to a data tree for primary rays
    DataTree<double> primDB = new DataTree<double>();
    int ind = 0;
    for(int i = 0; i < primRays.Count; i++)
    {
      if(primRays[i].dbValues.Count <= 1)
        continue;
      GH_Path pth = new GH_Path(ind);
      ind++;
      for(int j = 0; j < primRays[i].dbValues.Count; j++)
        primDB.Add(primRays[i].dbValues[j], pth);
    }

    // Adding the dB values to a data tree for secondary rays
    DataTree<double> secondDB = new DataTree<double>();
    ind = 0;
    for(int i = 0; i < secondRays.Count; i++)
    {
      if(secondRays[i].dbValues.Count <= 1)
        continue;
      GH_Path pth = new GH_Path(ind);
      ind++;
      for(int j = 0; j < secondRays[i].dbValues.Count; j++)
        secondDB.Add(secondRays[i].dbValues[j], pth);
    }

    // Adding the dB values to a data tree for rays hitting the receiver
    DataTree<double> rayHitRecDB
      = new DataTree<double>();
    ind = 0;
    for(int i = 0; i < primRays.Count; i++)
    {
      if(primRays[i].dbValues.Count <= 1 || primRays[i].receiverHit == 0)
        continue;
      GH_Path pth = new GH_Path(ind);
      ind++;
      for(int j = 0; j < primRays[i].dbValues.Count; j++)
        rayHitRecDB.Add(primRays[i].dbValues[j], pth);
    }
    for(int i = 0; i < secondRays.Count; i++)
    {
      if(secondRays[i].dbValues.Count <= 1 || secondRays[i].receiverHit == 0)
        continue;
      GH_Path pth = new GH_Path(ind);
      ind++;
      for(int j = 0; j < secondRays[i].dbValues.Count; j++)
        rayHitRecDB.Add(secondRays[i].dbValues[j], pth);
    }

    // Output
    distance = plDir.Length;
    primaryRays = tempPrim;
    secondaryRays = tempSecond;
    intersectionPts = intersections;
    hemispheresGeometry = hemispheres;
    hemispheresCentrePts = hemisphereCentres;
    dbValuesPrimary = primDB;
    dbValuesSecondary = secondDB;
    primaryRaysChildCount = validChildCount;
    raysHittingReceiver = rayHitRec;
    dbValuesRaysHittingReceiver = rayHitRecDB;
    directRay = plDir;
    dbValueDirectRay = dbDirect;

    sw.Stop();
    totalElapsedTime = sw.Elapsed.TotalMilliseconds;
  }

  // <Custom additional code>

  // Global variables
  List<Rays> primRays;
  List<Rays> secondRays;
  public static List<Point3d> hemisphereCentres;
  public static List<double> hemisphereDBVals;
  public static List<Point3d> intersections;
  public static List<double> absCoeff;
  public static int maxBounces;

  public class Rays
  {
    // Variables
    public Point3d startPt;
    public List<Point3d> discontPts;
    public List<double> dbValues;
    public int bounceCount = 0;
    public double maxLength;
    public int receiverHit = 0;
    public Vector3d moveDir;

    // Constructor for Surface
    public Rays(Point3d pt, double rayLength, Surface baseSrf, double dbVal)
    {
      startPt = pt;
      bounceCount = 0;
      maxLength = rayLength;
      discontPts = new List<Point3d>();
      discontPts.Add(startPt);
      dbValues = new List<double>();
      dbValues.Add(dbVal);
      // Find the surface normal for the initial direction
      double u, v;
      baseSrf.ClosestPoint(pt, out u, out v);
      Vector3d dir = baseSrf.NormalAt(u, v);
      dir.Unitize();
      moveDir =
        dir;
    }

    // Constructor for the direct ray
    public Rays(Point3d pt, double rayLength, Surface baseSrf, double dbVal, Vector3d direction)
    {
      bounceCount = 0;
      maxLength = rayLength;
      // Find the surface closest point for the start point
      double u, v;
      baseSrf.ClosestPoint(pt, out u, out v);
      Point3d closePt = baseSrf.PointAt(u, v);
      startPt = closePt;
      discontPts = new List<Point3d>();
      discontPts.Add(startPt);
      direction.Unitize();
      moveDir = direction;
      dbValues = new List<double>();
      dbValues.Add(dbVal);
    }

    // Methods
    // Calculate the first reflection
    public void FirstRefelction(List<Brep> surrounding)
    {
      Ray3d rayC = new Ray3d(this.startPt, this.moveDir);
      // Checking the intersection of the ray with the surrounding geometry
      for(int k = 0; k < surrounding.Count; k++)
      {
        Point3d[] intersectionPts = Rhino.Geometry.Intersect.Intersection.RayShoot(rayC, surrounding[k].Faces, 1);
        if(intersectionPts.Length == 0)
          continue;
        Point3d intPt = intersectionPts[0];
        Vector3d normalPt = BrepNormal(surrounding[k], intPt);
        // Mirror the direction vector along the normal
        Transform trans = Transform.Mirror(intPt, normalPt);
        moveDir.Transform(trans);
        moveDir.Unitize();
        // Updating variable values
        discontPts.Add(intPt);
        startPt = intPt;
        maxLength *= absCoeff[k];
        CalculateDBValue(absCoeff[k]);
        bounceCount++;
        hemisphereCentres.Add(intPt);
        hemisphereDBVals.Add(dbValues[dbValues.Count - 1]);
      }
    }

    // Calculate the second reflection
    public void SecondRefelction(List<Brep> surrounding)
    {
      //while(bounceCount <= maxBounces)
      //{
      Ray3d rayC = new Ray3d(this.startPt, this.moveDir);
      //int flag = -1;
      for(int k = 0; k < surrounding.Count; k++)
      {
        // Checking the intersection of the ray with the surrounding geometry
        Point3d[] intersectionPts = Rhino.Geometry.Intersect.Intersection.RayShoot(rayC, surrounding[k].Faces, 1);
        if(intersectionPts.Length == 0)
          continue;
        // Checking the length of the ray w.r.t. the maxLength of the ray
        if(intersectionPts.Length > 0)
        {
          Point3d intPt = intersectionPts[0];
          double dist = this.startPt.DistanceTo(intPt);
          if(dist <= maxLength)
          {
            Vector3d
              normalPt = BrepNormal(surrounding[k], intPt);
            // Mirror the direction vector along the normal
            Transform trans = Transform.Mirror(intPt, normalPt);
            moveDir.Transform(trans);
            moveDir.Unitize();
            // Updating variable values
            discontPts.Add(intPt);
            this.startPt = intPt;
            maxLength *= absCoeff[k];
            CalculateDBValue(absCoeff[k]);
            bounceCount++;
            //flag = 0;
            //}
            if(bounceCount <= maxBounces)
              this.SecondRefelction(surrounding);
          }
        }
      }
    }

    // Calculate dB values w.r.t. the attenuation formula
    public void CalculateDBValue(double absCoeff)
    {
      int index = discontPts.Count - 1;
      double R1 = 0.1;
      double R2 = discontPts[index].DistanceTo(discontPts[index - 1]);
      double B1 = dbValues[index - 1];
      // Base-10 logarithm (Math.Log10) so the 20*log10(R1/R2) rule matches
      // ReceiverIntersections and GetDirectRayDB (the original used Math.Log,
      // the natural logarithm, which overstates the attenuation).
      double B2 = B1 + 20 * Math.Log10((R1 / R2));
      if(B2 < 0)
        B2 = 0;
      double newDB = B2 * absCoeff;
      dbValues.Add(newDB);
    }

    // Compute the normal to a brep at a point
    public Vector3d BrepNormal(Brep surrounding, Point3d intPt)
    {
      Point3d closePt;
      ComponentIndex ci;
      double s, t;
      Vector3d norm;
      surrounding.ClosestPoint(intPt, out closePt, out ci, out s, out t, 1.0, out norm);
      return norm;
    }

    // Checking intersections with the receiver sphere
    public void ReceiverIntersections(Surface receiverSphere)
    {
      List<Point3d> pts = new List<Point3d>();
      List<double> vals = new List<double>();
      int flag = -1;
      Brep sphere = receiverSphere.ToBrep();
      Point3d[] intersectionPts;
      Curve[] overlapCurves;
      for(int i = 0; i < discontPts.Count - 1; i++)
      {
        Line pl = new Line(discontPts[i], discontPts[i + 1]);
        Curve crv = pl.ToNurbsCurve();
        Rhino.Geometry.Intersect.Intersection.CurveBrep(crv, sphere, 0.01, out overlapCurves, out intersectionPts);
        pts.Add(discontPts[i]);
        vals.Add(dbValues[i]);
        if(intersectionPts.Length > 0)
        {
          Point3d intPt = intersectionPts[0];
          pts.Add(intPt);
          double R2 = discontPts[i].DistanceTo(intPt);
          double dbVal = dbValues[i] + 20 * Math.Log10((0.1 / R2));
          if(dbVal < 0)
            dbVal = 0;
          vals.Add(dbVal);
          intersections.Add(intPt);
          flag = 0;
          break;
        }
      }
      // Assign new values to the ray's variables
      if(flag == 0)
      {
        discontPts = pts;
        dbValues = vals;
        receiverHit = 1;
    }
  }
}

List<Point3d> PopulateSphere(Surface baseSphere, int ptCount)
{
  List<Point3d> popPts = new List<Point3d>();
  int num = (int) Math.Sqrt(ptCount);
  while(ptCount % num != 0)
  {
    num--;
  }
  double vCount = num;
  double uCount = ptCount / num;
  baseSphere.ToNurbsSurface();
  // Reparameterize the input surface:
  baseSphere.SetDomain(0, new Interval(0, 1)); // u direction
  baseSphere.SetDomain(1, new Interval(0, 1)); // v direction
  //Divide the surface
  for(int i = 1; i <= uCount; i++)
    for(int j = 1; j <= vCount; j++)
    {
      Point3d ptSrf = baseSphere.PointAt(i / uCount, j / (vCount + 1));
      popPts.Add(ptSrf);
    }
  //Return the list of points distributed on the sphere
  return popPts;
}

//Create hemispheres for secondary rays
Surface CreateHemisphere(Point3d centre, int parent, double radius, List<Brep> surrounding, int hemSphereRayCount, double rayLength, double dbValue)
{
  Sphere sph = new Sphere(centre, radius);
  Brep sphBrep = sph.ToBrep();
  int flag = -1;
  hemSphereRayCount *= 2;
  Brep[] trimBreps = new Brep[2];
  for(int k = 0; k < surrounding.Count; k++)
  {
    trimBreps = sphBrep.Trim(surrounding[k], 0.01);
    if(trimBreps.Length > 0)
    {
      flag = 0;
      break;
    }
  }
  if(flag == -1)
    return null;
  Brep[] joinedBreps = Brep.JoinBreps(trimBreps, 0.1);
  Surface hemiSphereGeo = joinedBreps[0].Faces[0];
  int num = (int) Math.Sqrt(hemSphereRayCount);
  while(hemSphereRayCount % num != 0)
  {
    num--;
  }
  double vCount = num;
  double uCount = hemSphereRayCount / num;
  // Reparameterize the input surface:
  hemiSphereGeo.SetDomain(0, new Interval(0, 1)); // u direction
  hemiSphereGeo.SetDomain(1, new Interval(0, 1)); // v direction
  //Divide the surface
  for(int i = 1; i <= uCount; i++)
    for(int j = 1; j <= vCount; j++)
    {
      Point3d ptSrf = hemiSphereGeo.PointAt(i / (uCount + 1), j / (vCount + 1));
      Rays rayTem = new Rays(ptSrf, rayLength, hemiSphereGeo, dbValue);
      secondRays.Add(rayTem);
    }
  return hemiSphereGeo;
}

//Find the distance to the closest point on a brep along a vector
double ProjectOnBrep(Point3d pt, Vector3d dir, List<Brep> geo)
{
  Ray3d ray = new
Ray3d(pt, dir);
  int flag = -1;
  double dist = 1.0;
  double maxDist = double.MinValue;
  for(int k = 0; k < geo.Count; k++)
  {
    //Check the intersection of the ray with the surrounding geometry
    Point3d[] projectPt = Rhino.Geometry.Intersect.Intersection.RayShoot(ray, geo[k].Faces, 1);
    if(projectPt.Length == 0)
      continue;
    Point3d intPt = projectPt[0];
    dist = pt.DistanceTo(intPt);
    if(dist > maxDist)
      maxDist = dist;
    flag = 0;
  }
  if(flag == -1)
    return 1.0;
  else
    return maxDist;
}

Polyline GetDirectRayPolyline(Surface sourceSphere, Surface receiverSphere)
{
  //Find the centroid of the source sphere
  AreaMassProperties areaPropS = AreaMassProperties.Compute(sourceSphere);
  Point3d centreSource = areaPropS.Centroid;
  //Find the centroid of the receiver sphere
  AreaMassProperties areaPropR = AreaMassProperties.Compute(receiverSphere);
  Point3d centreReceiver = areaPropR.Centroid;
  Vector3d dir = centreReceiver - centreSource;
  //Find the surface closest point for the start point
  centreSource += dir * 0.5;
  double u, v;
  sourceSphere.ClosestPoint(centreSource, out u, out v);
  Point3d startPt = sourceSphere.PointAt(u, v);
  receiverSphere.ClosestPoint(centreSource, out u, out v);
  Point3d endPt = receiverSphere.PointAt(u, v);
  List<Point3d> pts = new List<Point3d>();
  pts.Add(startPt);
  pts.Add(endPt);
  Polyline pl = new Polyline(pts);
  return pl;
}

List<double> GetDirectRayDB(Polyline plDir, double dbValue)
{
  List<double> dbValues = new List<double>();
  dbValues.Add(dbValue);
  PolylineCurve pl = plDir.ToPolylineCurve();
  double R1 = 0.1;
  double R2 = pl.PointAtStart.DistanceTo(pl.PointAtEnd);
  double B1 = dbValue;
  double B2 = B1 + 20 * Math.Log10(R1 / R2);
  if(B2 < 0)
    B2 = 0;
  double newDB = B2;
  dbValues.Add(newDB);
  return dbValues;
}
// </Custom additional code>
}

APPENDIX B
User Manual for Rutabaga Acoustics

1. Prepare a Brep model of the surrounding geometry within Rhino.
2. Create the source sphere within the Rhino model at the desired location (red sphere shown as an example).
3.
Create the receiver sphere within Rhino at the desired location (green sphere shown as an example).
4. Start up Grasshopper.
5. Open the Rutabaga Acoustics.gh file.
6. Right-click on input node #1 and click “Set one surface”.
7. This will take you back to the Rhino model. Select the sound source sphere (red sphere shown in the example).
8. Right-click on input node #2 and click “Set one surface”.
9. This will take you back to the Rhino model. Select the receiver sphere (green sphere shown in the example).
10. Input nodes #3 and #4 can be ignored by first-time users; altering these values increases or decreases the complexity of the calculations.
11. Slide the value in input node #5 to set the starting Sound Power Level at the source.
12. Right-click on input node #6 and click “Set multiple surfaces”.
13. This will take you back to the Rhino model. Select the surrounding Brep geometry, preferably in a specific order.
14. Double-click on input panel #7. This panel contains the list of absorption coefficients for the surrounding Brep geometry; the number of values in it must match the number of Breps selected for the surrounding geometry (5 Breps shown as an example, with 5 corresponding absorption coefficient values).
15. Set slider #8 to the suggested number of rays, corresponding to the volume of the space.
16. Input nodes #9 and #10 can be altered to improve accuracy.
17. Right-click on the main component and select “Enable”.
18. By sliding input #11, the rays propagated from the source sphere can be visualized.
19. The values displayed in the labelled panels provide the results of the simulation.
20. These values list the arrival times in order, along with the corresponding SPL values at the receiver.
21. By right-clicking on any panel and clicking “Copy Data Only”, the values can be copied to the clipboard for pasting into an Excel file.
22. Copy the “Time” and “SPL” panel data into the provided Excel file to generate an SPL-Time graph.
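The SPL values exported in the final steps follow the same distance-attenuation rule used throughout the Appendix A code: B2 = B1 + 20·log10(R1/R2), with a reference distance R1 of 0.1 model units and the result clamped at 0 dB. A minimal Python re-implementation of that rule (the function name is illustrative, not part of the plug-in) can be used to sanity-check values pasted into Excel:

```python
import math

def spl_falloff(b1, r2, r1=0.1):
    """Spherical-spreading SPL at distance r2, given level b1 at the
    reference distance r1 (0.1 model units in Rutabaga), clamped at 0 dB."""
    b2 = b1 + 20 * math.log10(r1 / r2)
    return max(b2, 0.0)

# A 100 dB ray reaching a receiver 10 units away loses
# 20*log10(0.1/10) = 40 dB, arriving at 60 dB:
print(spl_falloff(100.0, 10.0))  # 60.0
```

The 0 dB clamp mirrors the `if(B2 < 0) B2 = 0;` guard in `CalculateDBValue`, `ReceiverIntersections`, and `GetDirectRayDB`.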
Abstract
Acoustic simulation software has been widely utilized by acousticians, engineers, and architects in the design of performance spaces (e.g. auditoriums, theaters, arenas). Aimed primarily at professionals, these simulation engines are often standalone programs that are not integrated into 3D modeling software and require the model to be exported and imported between platforms. To allow a smooth workflow that incorporates acoustic simulation, developers have created plug-ins for 3D modeling software that visualize results within the model, making them easy to understand and update as the model is altered.

Despite this, there is a lack of acoustic simulation tools that are easy to use. To improve the seamless integration of outdoor acoustic considerations into the design workflow, a simulation tool was created for Rhino’s Grasshopper. Rhino was chosen after comparing five 3D modeling software programs on their operating system compatibility and plug-in architecture: it displayed the highest capacity for third-party plug-in integration and geometric model complexity, and the availability of Grasshopper contributed to its selection. Within Grasshopper, an acoustic simulation component called Rutabaga Acoustics was created.

The algorithms applied within Rutabaga are image-source and ray tracing, for the early and late reflections, respectively, of high-frequency sound. Together, these algorithms consider the texture of the surrounding surfaces and account for their varying absorption coefficients. The energy lost to the open atmosphere is also accounted for, which makes Rutabaga applicable to outdoor spaces.

The final calculations provide an SPL-Time curve, a T60 value, and the collective Sound Power Level at the receiver.
Since providing legible results through the Grasshopper and Rhino interfaces was one of Rutabaga’s main objectives, the plug-in also produces a graphical visualization within the Rhino 3D model, displaying the fall-off of the propagated sound.

To validate the results, Rutabaga was tested on 3D models of two sites (an enclosed hallway and a central public piazza), and the simulated Sound Pressure Levels were compared against previous research results. The comparative analysis showed that Rutabaga produces correctly propagated sound-ray visualizations and that the calculated decibel values differed only negligibly.
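The image-source step mentioned above amounts to mirroring each ray’s direction about the surface normal at the hit point, which the thesis code performs with RhinoCommon’s Transform.Mirror. A framework-free sketch of that reflection using the standard formula d' = d − 2(d·n)n (the function name and plain-tuple vectors are illustrative only):

```python
def reflect(d, n):
    """Mirror direction vector d about a unit surface normal n:
    d' = d - 2 (d . n) n. A plain-tuple stand-in for the RhinoCommon
    Transform.Mirror call used in the thesis code."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray heading down onto a floor with normal (0, 0, 1) bounces up:
print(reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (1.0, 0.0, 1.0)
```

Because mirroring about a unit normal preserves vector length, the subsequent Unitize call in the plug-in only guards against accumulated numerical drift.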
Asset Metadata
Creator: Ahmad, Maira (author)
Core Title: Simplified acoustic simulation - Rutabaga Acoustics: a Grasshopper plug-in for Rhino
School: School of Architecture
Degree: Master of Building Science
Degree Program: Building Science
Degree Conferral Date: 2021-08
Publication Date: 07/21/2021
Defense Date: 04/21/2021
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: acoustic simulation, exterior acoustics, Grasshopper plug-in, high frequency sound, OAI-PMH Harvest, outdoor simulation, sound pressure level
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisors: Noble, Douglas (committee chair); Kensek, Karen (committee member); Schiler, Marc (committee member)
Creator Email: mairaahm@usc.edu, mairaahmed44@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC15615976
Unique Identifier: UC15615976
Legacy Identifier: etd-AhmadMaira-9813
Document Type: Thesis
Rights: Ahmad, Maira
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu