Copyright 2020 Zhihe Wang

AUGMENTED REALITY IN ROOM ACOUSTICS: A Simulation Tool for Mobile Devices with Auditory Feedback

by Zhihe Wang

A Thesis Presented to the
FACULTY OF THE USC SCHOOL OF ARCHITECTURE
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
MASTER OF BUILDING SCIENCE
May 2020

ACKNOWLEDGMENTS

I would like to express my deep gratitude to the chair of my thesis committee, Professor Karen Kensek, who guided me through all the challenges I faced during this research. I also appreciate her resourcefulness, patience, and energetic smile. I would also like to express my appreciation to the other committee members, Professor Chris Kyriakakis, Professor Michael Zyda, and Erik Narhi, for their advice from their different areas of expertise, as well as the time they spent on this project. I am particularly grateful for the valuable help from Professor Chris Kyriakakis on room acoustics. I would also like to thank Professor Mark Schiler for his help with proofreading, and Professor Doug Noble for lending me his equipment and for his support in many other forms.

I also want to acknowledge the help of Buro Happold Engineering. I appreciate the chance they gave me to present the idea to the Los Angeles office, and the ideas and suggestions they offered inspired me in many directions. Thanks to Sara Kingman and Kathleen Hetrik for helping me coordinate the presentation with the company. Thanks also to Kathleen Hetrik, Matthew Harrison, and Daniel Bailey, who were resourceful and gave me precious suggestions, especially Matthew Harrison, an acoustics expert from the UK who suggested the idea of using Steam Audio in this project.

I would like to express my deep gratitude to my parents, who supported my studies here, and to my boyfriend, who shared my happiness and helped me get through all the difficult moments. I especially thank my friend Zhiying Liu, who was always my best companion whenever a problem came up; many problems in this research were solved through discussions with her. Thanks to my other classmates for the time we spent together and the friendship we share.

It was an unusual graduation season because of COVID-19. The final defense was held online, and the university was closed to protect us from infection. So finally, I want to thank the University of Southern California, which put the health of students and faculty first, and everyone who was fighting the virus to save lives all over the world.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
1. INTRODUCTION
  1.1. Augmented Reality (AR)
  1.2. Room Acoustics
  1.3. Room Auralization
  1.4. Summary
2. BACKGROUND RESEARCH
  2.1. Augmented Reality (AR)
  2.2. Room Acoustics
  2.3. Room Auralization
  2.4. Summary
3. METHODOLOGY
  3.1. Explanation of Soundar
  3.2. Overall Workflow
  3.3. Platform Setup
  3.4. Database Setup
  3.5. Application Development
  3.6. Application Test
  3.7. Result Analysis
  3.8. Summary
4. DETAILED METHODOLOGY
  4.1. Modules in the Simulation Process
  4.2. Scripts from Other Sources
  4.3. Simulation Process
  4.4. User Interface Design
  4.5. Summary
5. VALIDATION
  5.1. Software and Equipment
  5.2. Validation Tests
  5.3. Test Results and Analysis
  5.4. Summary
6. CONCLUSION, DISCUSSION, AND FUTURE WORK
  6.1. Current Status
  6.2. Limitations
  6.3. Future Work
  6.4. Conclusion
REFERENCES
APPENDIX
  APPENDIX A: MATERIALS USED IN SOUNDAR
  APPENDIX B: COMPLETE SCRIPT
  APPENDIX C: TIME DOMAIN CHARTS
  APPENDIX D: FREQUENCY DOMAIN CHARTS
  APPENDIX E: IMPULSE RESPONSE CHARTS

LIST OF TABLES

Table 1-1 Interaction between four core elements in AR
Table 1-2 Examples of the interaction between four core elements in AR
Table 1-3 Comparison between different development platforms
Table 1-4 Comparison of ARKit 3, ARCore, and Vuforia
Table 1-5 Examples of material absorption coefficients (Everest and Pohlmann 2009)
Table 1-6 Glossary of terms (Everest and Pohlmann 2009)
Table 2-1 Comparison of feature point detection and photometric modeling
Table 2-2 Tested existing AR mobile applications
Table 3-1 List of prefabs used in Soundar
Table 3-2 Modules used in the simulation process
Table 3-3 Modules used for the user interface design
Table 3-4 Properties of sound sources in Room 1
Table 3-5 Properties of sound sources in Room 2
Table 4-1 Modules used in the simulation process (in order of use)
Table 4-2 Modules used for the user interface design
Table 4-3 Modules used in the simulation process (in order of use)
Table 4-4 Modules used for the user interface design
Table 5-1 Test settings in Room 1
Table 5-2 Test settings in Room 2
Table 5-3 Data analysis of SPL per time
Table 5-4 Data analysis of SPL per frequency (20 Hz to 20 kHz)
Table 5-5 Data analysis of SPL per frequency (20 Hz to 12 kHz)
Table 5-6 The recorded T60 and the calculated T60 of Soundar
Table 5-7 The recorded T60 and the rendered T60 of Soundar
Table 5-8 The calculated T60 and the rendered T60 of Soundar

LIST OF FIGURES

Figure 1-1 Milgram Continuum (Milgram and Kishino 1994, 1321-1329)
Figure 1-2 Pokemon Go
Figure 1-3 Examples of AR headsets in the market
Figure 1-4 Litho (Litho 2019)
Figure 1-5 Bose AR sunglasses (Bose Corporation 2020)
Figure 1-6 Decibel chart (Daniel 2007, 225-231)
Figure 1-7 Reverberation time for different kinds of spaces and types of sound (Raichel 2006)
Figure 1-8 Binaural direction sense
Figure 1-9 Two ways for binaural recording
Figure 2-1 Typical designs of OST (Hainich and Bimber 2011)
Figure 2-2 Google AR glasses (Google 2020)
Figure 2-3 AR on mobile phones
Figure 2-4 AR Sandbox (Sanchez et al. 2016, 599-602)
Figure 2-5 Feature point detection used on mobile devices
Figure 2-6 Use photos from different angles of the object to generate 3D models (Lievendag 2018)
Figure 2-7 Spatial mesh generated by HoloLens (Tuliper 2017)
Figure 2-8 Mirror image source model (based on Everest and Pohlmann 2009)
Figure 2-9 Image method for three bounces (based on Allen and Berkley 1979, 943-950)
Figure 2-10 Ray tracing in Odeon
Figure 2-11 Sound pressure level meter
Figure 2-12 Process of room auralization
Figure 2-13 Demo evaluations and simulations by Odeon
Figure 2-14 Demo measurement by VSLM
Figure 2-15 Room acoustic simulation by REW
Figure 3-1 Auditory feedback for a virtual sound source in a real environment
Figure 3-2 Workflow of Soundar
Figure 3-3 Overall workflow of the developing process
Figure 3-4 Install Unity 2019.1 through Unity Hub
Figure 3-5 Installing AR Foundation and other packages
Figure 3-6 Import Steam Audio into Unity
Figure 3-7 Audio setting in Unity
Figure 3-8 Installation setting of Visual Studio
Figure 3-9 Material data of floor materials in CSV format
Figure 3-10 Overall framework of the application development
Figure 3-11 Structure of room setup
Figure 3-12 Structure of sound setup
Figure 3-13 Device volume calibration
Figure 3-14 Scale charts show reverberation time and sound pressure level
Figure 3-15 The logo of Soundar
Figure 3-16 UI conceptual design
Figure 3-17 Main menu conceptual design
Figure 3-18 Relationship between scenes, main scripts, and modules
Figure 3-19 Room 1: Watt 212 (modeled in Revit)
Figure 3-20 Sound source and listener positions in Room 1
Figure 3-21 Test Room 2: Trapezoid lecture room (modeled in Revit)
Figure 3-22 Sound source and listener positions in Room 2
Figure 3-23 Overall framework of the development process of Soundar
Figure 4-1 Relationship between scenes, main scripts, and modules
Figure 4-2 CSV to list
Figure 4-3 Ray cast from where users touch the screen and hit the AR plane
Figure 4-4 The script for Place objects on an AR plane
Figure 4-5 A prefab, PlanPoint, was dropped to the script component to define the placedObject
Figure 4-6 Place the linePrefab at point2 and adjust the scale
Figure 4-7 Changing the rotation of the linePrefab
Figure 4-8 The polygon is made of multiple triangles
Figure 4-9 Single-sided mesh and double-sided mesh
Figure 4-10 The indices in mesh.triangles for each triangle are both clockwise and counterclockwise
Figure 4-11 Different uv settings for vertical meshes and horizontal meshes
Figure 4-12 Clone object is for a single game object
Figure 4-13 Clone objects is for a list of game objects
Figure 4-14 Do not destroy the game object when loading other scenes
Figure 4-15 Place object in air places the game object at the location of the camera
Figure 4-16 Assign data compares and rewrites the material value
Figure 4-17 Calculate the triangle area with coordinates
Figure 4-18 Formula and script that calculates the area of a double-sided mesh
Figure 4-19 The T60 is calculated from the room volume, surface areas, and their absorption coefficients
Figure 4-20 AR Session Origin in the hierarchy
Figure 4-21 AR Session in "Set Room"
Figure 4-22 Basic set-up of the sound source
Figure 4-23 Reverb mixer on the sound source
Figure 4-24 Original scripts for exporting the phonon scene
Figure 4-25 Modified scripts for exporting .phononscene files
Figure 4-26 Relationship and connections in the seven scenes
Figure 4-27 "Start Screen" shows the logo and the name of Soundar and does the calibration
Figure 4-28 StartScreen_Main loads the material database and loads the next scene
Figure 4-29 Texture map of the default materials
Figure 4-30 Soundar detects surfaces and generates horizontal AR planes
Figure 4-31 Get all points generated by Place objects on AR plane
Figure 4-32 Lines between the new point and the previous one and the warning
Figure 4-33 Link vertices and enclose floor shape
Figure 4-34 Create floor surface and assign the default texture
Figure 4-35 Clone components from the floor surface to create a ceiling surface
Figure 4-36 Schematic diagram and the script for the elevation calculation
Figure 4-37 Different conditions for ceiling elevation
Figure 4-38 Add lines between floor and ceiling as the boundaries of the walls
Figure 4-39 Create wall surfaces and assign the default material
Figure 4-40 Buttons "edit" and "finish" show up when all room surfaces are set
Figure 4-41 Formats all material dropdowns
Figure 4-42 Different dropdowns are shown when selecting different surfaces
Figure 4-43 Select surface to show the dropdown and touch blank space to hide
Figure 4-44 Sound source prefab
Figure 4-45 Place sound source objects
Figure 4-46 The counter shows how many sound sources are placed in the space
Figure 4-47 Hide SPL tag and load current sound file options
Figure 4-48 "Edit Sound" when selecting a sound source and when not selecting a sound source
Figure 4-49 Four edit options in "Edit Sound"
Figure 4-50 Users can move the sound source along the coordinate axes
Figure 4-51 Move Sound_Main moves the sound source along the axis
Figure 4-52 Check and set the playing state
Figure 4-53 The SPL tags always face the device and rescale so the information is readable
Figure 4-54 Scripts calculate the SPL heard by users
Figure 4-55 The SPL of the environment background sound
Figure 4-56 Scripts for calculating the environment background SPL
Figure 4-57 The SPL scale shows real-time SPL and reverberation time
Figure 4-58 Scripts calculate the reverberation time of the room
Figure 4-59 The menu shows when users point at the menu button
Figure 4-60 "New project" contains the module New project
Figure 4-61 Settings and About
Figure 4-62 Modules in "Settings" that control the visibility and playing state
Figure 4-63 Setup settings and Update settings can read and write the values of the settings
Figure 4-64 Scripts of the modules Quit and Clear
Figure 4-65 The modules Edit Room, Edit Sound, and Edit
Figure 4-66 Two directions in Finish Room
Figure 4-67 Finish Room assigns the material acoustic parameters and exports the phonon scenes
Figure 4-68 Finish Sound saves all sound sources and leads to "Run Simulation"
Figure 4-69 Add Sound leads to the scene "Set Sound"
Figure 4-70 Identify the material object and assign the correct value to the dropdown
Figure 4-71 Get the material in the library which matches the current option of the dropdown
Figure 4-72 Change the material of the object when the dropdown value changes
Figure 4-73 Soundar highlights the selected button and shows the corresponding option
Figure 4-74 Scripts of the module Option_SPL
Figure 4-75 Option_move directly leads to the scene "Move Sound"
Figure 4-76 Sound Option White hides all options and restores the buttons
Figure 4-77 SPL Input changes the SPL tag value and the volume of the mixer
Figure 4-78 Script for Change sound file
Figure 4-79 Script of Mute
Figure 4-80 Script of Delete
Figure 5-1 Using Adobe Audition to convert the file format
Figure 5-2 VSLM and its basic information
Figure 5-3 Welcome screen and information of REW
Figure 5-4 Professional sound level meter
Figure 5-5 Bluetooth loudspeaker
Figure 5-6 Measurement calibrated microphone
Figure 5-7 Device and earphones used in the test
Figure 5-8 Room Watt 212
Figure 5-9 Positions of the sound source and listener in Room 1
Figure 5-10 Tester using Soundar to simulate the virtual sound source
Figure 5-11 San Merendino Room
Figure 5-12 Positions of the sound source and listener in Room 2
Figure 5-13 Time domain charts of T1c to T1i tests
Figure 5-14 Frequency domain charts of all T1c to T1i (20 Hz to 20 kHz)
Figure 5-15 Impulse response charts of room impulse recordings
Figure 5-16 Impulse response charts comparing the rendered sound and recorded sound
Figure 5-17 Impulse responses and frequency domain charts of tests with different materials
Figure 5-18 Impulse response and frequency domain chart of tests with different room sizes
Figure 6-1 Overall workflow of the developing process
Figure 6-2 Current framework of the development process of Soundar
Figure 6-3 The operation flow of Soundar
Figure 6-4 Modules and scripts written for Soundar
Figure 6-5 Time domain chart
Figure 6-6 Frequency domain chart
Figure 6-7 Impulse response
Figure 6-8 Choose which scripts to run based on the operation platform
Figure 6-9 Overall framework of the development process of Soundar including future functions
Figure 6-10 Soundar is convenient and easy to use

ABSTRACT

Augmented reality (AR), as a combination of real and virtual worlds, is increasingly used in the architecture and construction domain, especially for visualization. However, virtual information can be presented not only visually but also audibly, which is valuable for room acoustic simulation. An application running on a mobile device, Soundar, was developed to perform simple acoustic simulations of small-size rooms for ordinary users. It simulates the reverberation time and sound pressure level based on an existing room, a virtual sound source, and the location of the user. Users can change the material of the room surfaces and then test the difference in sound. The application provides both visual and auditory feedback, so users not only read the data but also hear the result of the simulation. It was developed in Unity and uses Steam Audio for sound rendering. Tests were run to compare the results returned by Soundar, existing sound simulation software, and microphone recordings in real testing rooms. Comparing the numeric results and the frequency response graphs of the sound files, Soundar performed well in both the numeric results and the auditory results for the real-time SPL. The reverb rendering of the auditory result, however, did not match the simulation; more development and testing are needed to make the rendered reverb match the simulated result more realistically. Soundar is acceptable for small-size spaces and for ordinary people doing simple tasks like room decoration and schematic design, but not for acoustic experts and engineers doing scientifically precise analysis such as professional acoustic design.
More development of the user interface could be added to the application to provide better interaction and to make Soundar a more user-friendly application that can move from the academic realm into professional practice.

Research Objectives
• To provide location-based sound simulation on a mobile device
• To learn how acoustics work in an AR environment
• To allow user interaction in an AR environment in changing acoustic characteristics and hearing the results

KEYWORDS: Augmented reality, room acoustics, acoustics simulation, room auralization, mobile application.

1. INTRODUCTION

Augmented reality (AR), as an overlay of real and virtual worlds, is increasingly used in entertainment, tourism, exhibition, education, and fabrication. In the architecture and construction domains, most applications are for interior design, measurement, BIM visualization, and simulation. Many other AEC opportunities can benefit from further AR development. AR can be used for design applications like model viewing, simulation, measurement, interior design, and furniture selection. Furthermore, it can also be useful in other phases or related areas such as facility management and maintenance, construction and manufacturing, and education. As two parts of the "combined world," the virtual and the real can be more interactive with each other. It is crucial to get reality more involved instead of leaving it as the background. More opportunities can be found when AR is combined with existing buildings, construction sites, and historical heritage, where reality plays an important role. Also, virtual information can be presented not only visually but also audibly, which is valuable for room acoustic simulation. There are three basic components of an AR acoustics simulation: augmented reality, room acoustics, and room auralization. This chapter introduces what they are and discusses some technical terms related to them.

1.1. Augmented Reality (AR)
Although it is an old idea, AR has only entered people's daily lives during the last several years. In order to further develop the use of this technique, it is crucial to have a better understanding of its definition, its history, and the basic components needed to start development, such as the software development kit (SDK) and the development platform.

1.1.1. Definition
Augmented reality (AR) is a technique that brings virtual models and information into reality. There are various definitions of AR. The commonly accepted definition is a three-dimensional display that combines the virtual and the real and reacts in real time (Azuma 1997, 355-385). It is a technique that adds another layer of information onto reality (Jain, Manweiler, and Roy Choudhury 2015, 331-344). It can go further to contribute to and link with reality as a part of it (Schmalstieg and Hollerer 2016). It can also be considered a sub-term of mixed reality (MR), as shown below (Figure 1-1) (Milgram and Kishino 1994, 1321-1329).

Figure 1-1 Milgram Continuum (Milgram and Kishino 1994, 1321-1329): a spectrum running from the real environment through augmented reality (AR) and augmented virtuality to the virtual environment, with everything between the two extremes called mixed reality (MR).

The virtual object can take many forms. It can be a model, text, visualized data, or an adjustment of the performance of reality. There are many different types of interaction between the user, virtual objects, data, and reality. These four core elements influence and react to each other (Table 1-1).
Table 1-1 Interaction between four core elements in AR
• User: users control reality; users define and change data; users control virtual objects.
• Reality: reality includes and restricts users; reality defines and changes data; reality influences virtual objects.
• Data: data is available to users; data reflects reality; data reflects virtual objects.
• Virtual objects: virtual objects react to users' commands; virtual objects react to reality.

For example, when using the mobile phone app Soundar, users can set virtual sound sources in a real room, change surface materials, simulate the acoustic performance of the room, and hear the sound playback. Most forms of interaction are included in this process (Table 1-2). However, the virtual objects do not directly react to changes in reality in Soundar. An example of this kind of interaction is when virtual objects cast shadows that depend on the real lighting environment or on the location and solar time.

Table 1-2 Examples of the interaction between four core elements in AR
• User: on reality, the user sets up a room; on data, the user chooses the loudness of the sound source; on virtual objects, the user assigns the virtual sound source in the room.
• Reality: on the user, the user can only move inside the room; on data, the distance between the sound source and the user changes when the user moves; on virtual objects, the floor surface is detected and made into a virtual object.
• Data: for the user, the numerical simulation results are displayed on the screen; for reality, the room size is a real piece of data; for virtual objects, the reverberation time of the room with the selected materials in the virtual environment.
• Virtual objects: for the user, the loudness of the sound changes with the position of the user; for reality, not included in this application; for data, acoustic properties change when a different wall material is selected.

Different from AR, virtual reality (VR), which contains only virtual objects and no physical content, has no connection or interaction with reality. VR and AR each have their own advantages and are used for different purposes.

1.1.2. The History of AR
In 1968, the first 3D display system, a wearable device, was invented (Sutherland 1968, 757-764). It showed different views depending on the user's location and orientation. Although it was mounted on the ceiling and could only display single-line graphics, it was the start of an evolution. The term "Augmented Reality" was first used in 1992 to describe technology that provides virtual information based on a task in reality (Caudell and Mizell 1992, 659-669 vol. 2). Audio was later introduced as a part of AR in 1995 with the prototype of an automated tour guide system (Bederson 1995, 210-211). AR has also been introduced into the architecture industry for facility management (Kensek et al. 2000, 294-301). In the early years, due to the limitations of processing power and memory, only simple lines and text were displayed. With the rapid development of hardware and software, AR is now able to do complex work and provide a large amount of information in real time. The history of AR is also the history of electronics technology. The development of GPS, networks, mobile devices, and graphics processing techniques gave AR the opportunity to grow and evolve. The first wearable AR system was the Touring Machine, which had a headset and a series of carry-on support equipment including a computer and GPS (Feiner et al. 1997, 208-217). The invention of the PDA and the camera cellphone brought AR a new opportunity.
Researchers started to introduce AR to these mobile devices. The first AR game on commercial cellphones was published in 2003. Then in 2004, the first video see-through cellphone-based AR system was introduced to the public (Mohring, Lessig, and Bimber 2004, 252-253). After the release of the iPhone changed the way people interact with cellphones, AR also stepped into a new era. More commercial AR applications for the public went onto the market. One of the most famous and successful AR games was Pokemon Go (Figure 1-2), which has over one billion downloads (Boom 2019). Now more and more games are developing AR versions to fit the market and attract more players, such as Angry Birds and Minecraft.

Figure 1-2 Pokemon Go.

Besides cell phones, the development of head-mounted displays (HMDs) provided AR another track. The first commercial AR headset, Google Glass, enlightened the world about the usage and value of AR. Then in 2016, Microsoft released the HoloLens, which had a higher level of interaction and wider use, including manufacturing and education. Another AR headset, Magic Leap, became available on the market in 2018. In addition, a newer version of HoloLens came out in May 2019 (Figure 1-3).

Figure 1-3 Examples of AR headsets in the market: Google Glass 2 (Google 2020), Magic Leap (Magic Leap 2020), and HoloLens 2 (Microsoft 2020).

As the hardware was updated, the software and algorithms of AR were also marching on. The invention of 2D matrix markers allowed tracking in six degrees of freedom (three degrees in position plus three degrees in rotation, abbreviated as 6DOF) (Rekimoto 1998, 63-68). Then ARToolKit, which used marker tracking, was developed in 1999 (Kato and Billinghurst 1999, 85-94). In 2004, a new algorithm for graphic locating and matching was introduced to solve the problem of unaligned information (Coelho, Julier, and MacIntyre 2004, 6-15). Then, the method of simultaneous localization and mapping (SLAM) made it possible to track and map the environment at the same time, which greatly decreases the run time of AR on mobile devices (Klein and Murray 2007, 1-10). The point cloud was introduced into AR tracking in 2009 (Arth et al. 2009, 73-82). The predecessor of Vuforia, an AR software development kit (SDK), was released to the public in 2011. More SDKs have since been released for the public and for developers, such as ARCore and ARKit.

1.1.3. Devices for AR
Hardware devices for AR are developing rapidly. Now, instead of carrying heavy extra equipment, people can use AR on headsets or on mobile devices. Some applications use headsets like HoloLens and Magic Leap, and some use mobile devices such as cellphones and tablets. These two kinds of display devices have their pros and cons. Goggles, which are equipped with more professional sensors, can provide more accurate data and broader usage, especially in scientific research. They can also free the users' hands, which allows users to do other work with AR as assistance. Moreover, goggles can recognize hand gestures, which provides more directness and convenience. However, goggles are expensive at the current stage. Due to their high price so far ($3000 for HoloLens 2, for example), most users are large companies who can afford them. They are hard to afford for ordinary people who want to use them in daily life, or for companies that would need thousands of them. AR goggles can also raise privacy issues (CNN 2013).
There is still a long way to go before goggles enter ordinary life. Mobile devices, on the other hand, are cheap and easy to get. Cellphones and tablets, for instance, have already been adopted by the public and have a larger group of users compared with goggles. This makes it easier for applications made for mobile devices to be popularized. Although they do not have the same level of accuracy as goggles, they are good enough for users to enjoy the benefits that AR technology brings. Also, since the choice of platform influences the strategy used for surface detection and other algorithms, the properties of the platforms should be considered by developers before they start a project.

Besides display devices, there are other devices like controllers and earphones. Controllers can help users do more complicated tasks. There are many kinds of controllers; some of them can recognize users' hand gestures. Litho, for example, is a finger-worn controller for AR purposes. It can be linked to mobile devices (Figure 1-4). By using its sensors, it can capture the movement, rotation, and some simple gestures of the user's hand. Users can point, pick, drag, and perform more complicated interactions with the virtual components and the user interface.

Figure 1-4 Litho (Litho 2019).

Most AR headsets have integrated earphones that are able to provide binaural or ambient sound. Mobile devices, on the other hand, have integrated speakers that can only provide mono sound. To provide binaural or ambient sound, external devices are needed, and these take many forms. There are also AR devices that focus on the audio experience. For instance, Bose has published several new products that provide 360-degree audio and targeted information by tracking head orientation, body motion, and the user's location (Bose Corporation 2020) (Figure 1-5).

Figure 1-5 Bose AR sunglasses (Bose Corporation 2020).

1.1.4. Development Platform
There are several platforms for developing AR for mobile devices (Table 1-3). The two most commonly used are Unity and Unreal Engine. Both are software development environments (game engines) widely used in the gaming industry, and both can provide high-quality interaction, graphical rendering, and audio rendering.

Table 1-3 Comparison between different development platforms
• Unity (https://unity.com): supports Windows Mixed Reality, Android, iOS, HoloLens, Magic Leap; coding language C#; acceptable 3D rendering; ambiance sound available; integrated SDK.
• Unreal Engine (https://www.unrealengine.com/en-US/what-is-unreal-engine-4): supports Windows Mixed Reality, Android, iOS, Magic Leap; coding language C++; better 3D rendering; ambiance sound available; separate SDK.
• Android Studio (https://developer.android.com/studio): supports Android; coding language Java/Kotlin; limited graphical rendering; basic audio setting; separate SDK.
• Xcode (https://developer.apple.com/xcode/): supports iOS; coding language Swift; limited graphical rendering; basic audio setting; separate SDK.
• DirectX (https://www.microsoft.com/en-us/download/details.aspx?id=35): supports Windows Mixed Reality; coding language C++; limited graphical rendering; basic audio setting; separate SDK.

Unity started developing in the AR domain earlier than Unreal, which gives it more AR support documentation, while Unreal Engine is developing rapidly in this area. However, because of the integrated SDK package noted in Table 1-3, Unity was the better choice of AR development platform at the current stage.
1.1.5. Software Development Kit (SDK)
A software development kit is a set of tools that helps developers create applications for a certain platform, covering different usages, frameworks, functions, and operations. It provides modules and templates that developers can use directly, or combine, to realize the functions they need. For example, ARCore, one of the AR SDKs, contains script packages for surface detection, figure recognition, object tracking, and much more. Developers can use those script modules directly or modify them to fulfill their development goals. There is a great diversity of AR SDKs on the market. Three of the most widely used are ARKit, ARCore, and Vuforia. Each of them has its own pros and cons (Table 1-4).

Table 1-4 Comparison of ARKit 3, ARCore, and Vuforia
• Supported devices/systems: ARKit 3: iOS; ARCore: Android, iOS; Vuforia: Android, iOS, UWP.
• Supported development platforms: ARKit 3: iOS, Unity, Unreal; ARCore: Unity, Unreal, Android Studio; Vuforia: iOS, Unity, Android Studio, Xcode, Visual Studio.
• World coordination and anchors: yes for all three.
• 3D tracking: yes for all three.
• Surface detection: yes for all three.
• Light detection: yes for ARKit 3 and ARCore; no for Vuforia.
• Graphic recognition: yes for all three.
• Cloud share: yes for ARKit 3 and ARCore; no for Vuforia.

There used to be a problem in choosing which SDK to use, especially because iOS and Android do not have perfect compatibility. Developers usually had to use ARKit for iOS and ARCore for Android, which could double the workload of producing applications. This changed when Unity released a hub system called AR Foundation that integrates multiple AR SDKs, which solves the problem of publishing one application to different systems. It also makes it possible to use features from different SDKs in one application.

1.2. Room Acoustics
Room acoustics, as a special part of acoustic science, focuses on the sound performance and properties in an enclosed environment (Everest and Pohlmann 2009). It is characterized by several properties, such as reverberation time, that are important in architectural design. This section introduces some of the basic terms of room acoustics that are relevant to the acoustic simulation: sound pressure level, decibel, reverberation time, and impulse response.

1.2.1. Sound Pressure Level and Decibel
Sound pressure is the deviation between the local pressure and the ambient pressure; it is caused by the sound wave. Sound pressure level (SPL) is the logarithmic value of the sound pressure relative to a reference pressure that corresponds to the minimum sound level audible to humans (Everest and Pohlmann 2009):

Lp = 20 log10 (P / P0)    (1-1)

where
Lp = sound pressure level, dB
P = sound pressure, Pa
P0 = reference pressure, equal to 2 × 10^-5 Pa

The decibel (dB) is the unit of sound pressure level, defined as 20 times the logarithm to base 10 of the ratio of the sound pressure to the reference pressure (Everest and Pohlmann 2009). SPL can be used to describe the loudness of a sound (Figure 1-6). The range of human hearing begins at 0 dB (corresponding to 2 × 10^-5 Pa). At 85 dB, continuous exposure can harm human ears. At 140 dB, ears start to feel pain, and sound causes irreversible damage to the ears when the SPL is higher than this (Occupational Safety and Health Administration 2015).

Figure 1-6 Decibel chart (Daniel 2007, 225-231): typical sources range from 0 dB (the softest audible sound) and 10 dB (normal breathing) through 50-65 dB (normal conversation) and 80-85 dB (city traffic), up to 110-140 dB (rock concerts) and 150 dB (a firecracker), levels at which noise may cause pain and brief exposure can injure the ears.
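As a quick check of equation (1-1), the pressure-to-SPL conversion can be written in a few lines of C#. This is a minimal, generic sketch for illustration only; it is not a fragment of Soundar's scripts, and the class and method names are hypothetical.

```csharp
using System;

static class SoundPressure
{
    const double ReferencePressurePa = 2e-5; // threshold of human hearing, equation (1-1)

    // Equation (1-1): Lp = 20 * log10(P / P0), result in decibels.
    public static double ToSpl(double pressurePa) =>
        20.0 * Math.Log10(pressurePa / ReferencePressurePa);
}

// Example: a sound pressure of 1 Pa gives 20 * log10(1 / 2e-5) ≈ 94 dB,
// roughly the level Figure 1-6 lists for a motorcycle.
```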
1.2.2. Reverberation Time
In an enclosed room, the sound does not disappear immediately after the sound source stops, because the sound waves continue to reflect off the room surfaces. This phenomenon is reverberation. However, the energy of the sound diminishes each time it hits a surface. The time for the sound pressure level to decay by 60 dB in a room is called the reverberation time (T60) (Everest and Pohlmann 2009). It is influenced by the volume of the room and by the material and area of each surface (Sabine 1900, 4). Furthermore, different frequencies of sound have different reverberation times because the absorptivity of the surfaces varies with frequency (Table 1-5).

Table 1-5 Examples of material absorption coefficients (Everest and Pohlmann 2009)
Material                              125 Hz  250 Hz  500 Hz  1 kHz  2 kHz  4 kHz
Acoustical tile (1/2 inch thick)      0.07    0.21    0.66    0.75   0.62   0.49
Concrete block, coarse                0.36    0.44    0.31    0.29   0.39   0.25
Wood floor                            0.15    0.11    0.10    0.07   0.06   0.07
Glass, ordinary window                0.35    0.25    0.18    0.12   0.07   0.04
Gypsum board, 1/2 inch on 2x4 studs   0.29    0.10    0.05    0.04   0.07   0.09

Reverberation time is an important aspect of room acoustics design because it influences the sound quality of the space. When sound plays in a room with a long reverberation time, it takes a long time for the previous sound to die away; the new sound overlaps with the previous sound, so the volume is enhanced. This can be a good thing. For example, cathedrals made of stone usually have a long reverberation time, and chanting echoes throughout the space. However, when people are speaking in such an environment, the overlap may blur the speech and make it unintelligible. On the other hand, a room with a short reverberation time makes the sound disappear quickly. Spaces with different functions have different requirements for reverberation time (Raichel 2006). Public places like restaurants and libraries may need shorter reverberation times to keep the environment quiet. Lecture rooms also need shorter reverberation times to keep speech clear. Concert halls and churches, by contrast, need longer reverberation times to augment the sound of instruments and voices (Figure 1-7).

Figure 1-7 Reverberation time (roughly 0.2 to 2.4 s) for different kinds of spaces and types of sound, plotted against room volume in cubic feet (Raichel 2006).
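The dependence of T60 on room volume, surface areas, and absorption coefficients can be made concrete with the classic Sabine estimate, T60 ≈ 0.161 V / Σ(αᵢ Sᵢ) in SI units. The sketch below is illustrative only and is not taken from Soundar's appendix scripts; it assumes a single frequency band whose coefficients could come from one column of Table 1-5.

```csharp
using System;
using System.Collections.Generic;

// One room surface: area in square meters and absorption coefficient (0..1)
// for a chosen frequency band, e.g. the 500 Hz column of Table 1-5.
struct Surface
{
    public double AreaM2;
    public double Absorption;
}

static class Sabine
{
    // Sabine estimate: T60 = 0.161 * V / sum(alpha_i * S_i), with V in cubic meters.
    public static double ReverberationTime(double roomVolumeM3, IEnumerable<Surface> surfaces)
    {
        double totalAbsorption = 0.0; // metric sabins (m^2)
        foreach (var s in surfaces)
            totalAbsorption += s.Absorption * s.AreaM2;

        return 0.161 * roomVolumeM3 / totalAbsorption;
    }
}

// Example: a 6 m x 5 m x 3 m room (V = 90 m^3) with a wood floor (0.10 at 500 Hz),
// gypsum walls (0.05), and an acoustical-tile ceiling (0.66):
// T60 = 0.161 * 90 / (0.10*30 + 0.05*66 + 0.66*30) ≈ 0.56 s.
```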
It shows information such as propagation delay, the arrival of direct sound and reflections, and reverberation decay. From this data, many important acoustic characteristics of a space can be analyzed (Everest and Pohlmann 2009). 1. 2.4. Glossary of Terms There are many other terms involved in acoustics simulation that need to be aware of . Below is a table of the definitions of these terms for better understanding (Table 1-6). 12 Table 1-6 Glossary of terms (Everest and Pohlmann 2009). Term Definition Absorption The process of sound energy being taken away. Absorption coeffic ient A parameter describing the fraction of energy taken by a surface in the rage of 0 to 1. Direct sound The sound that issued from the sound source it self without any reflec tion . Echo A phenomenon when th e reflected sound has an obvious delay from the direct sound. Frequency The rapidity of the sound wave in a unit ti me. The unit of frequency is hertz (Hz) Flutter Echo A repetitive echo caused by parallel refle ction surfaces Reflected sound The sound that received after one or multiple reflection s. Reflection The change of th e direction when th e wave hits a surfac e. Refraction The change of the direction when the wave passes from one medium to another. 1.3. Room Auralization Acoustic engineers always want to know the result of their design, and it is important to be able to test the result before the construction of a space to avoid design failures. To mimic the sound in the virtual environment, the first thing to know is how human ears and brains work to hear the sound. This section introduces the human perception of sound and the basic form of the auralization, which is based on binaural audio. 1. 3 .1 . Human Perception of Sound Direction The human auditory system is complex. Sound is differently received by two ears based on their spacing, which lets us estimate the location of the sound source. The ear near the sound source receives a greater intensity of sound than the other one, and also receives the first wavefront earlier. It is also influenced by the head, which blocks a different amount of the sound wave to the ears (Figure 1-8). Depending on the difference in the signal received by the two ears, human brains can make an estimation of the location of the sound source. However, this phenomenon works more accurately on estimating in a horizontal way, but less in a vertical way (Everest and Pohlmann 2009) unless you tilt your head sideways. The head also absorbs, reflects, and passes through the sound, which is the head-related transfer function (HRTF) each ear receives different sound (Brungart and Rabinowitz 1999, 1465-1479). 13 Above Figure 1-8 Binaural direction sense. 1.3.2. Binaural Audio When using headsets, the sound sources are often perceived as close to or even inside the ears. To mimic the sound environment to make the listener get a more realistic auditory feeling, there are technologies that recreate the binaural cues necessary to present sound as it is performed at different locations. Head-Related Transfer Functions (HRTFs) are a key component of binaural audio. When binaural sound is recorded using a dummy head (Figure 1-9), the HRTF has already been included in the final recording. When using a normal two-channel recording, the HRTF is not considered and must be added to synthesize a binaural rendering. To bring the HRTF back, the recorded sound has to processed by the left and right ear HRTF to create a new final left channel and right channel. 
(a) Dummy head (Georg Neumann GmbH 2020). (b) Normal two-channel recording (Olympus America Inc. 2020).
Figure 1-9 Two ways of binaural recording.
1.4. Summary
This chapter gave a brief introduction to AR, room acoustics, and room auralization: their definitions, histories, and some basic concepts. AR is the combination of virtual components with the real scene, and it also provides certain interactions. It is mostly used in the visual domain, but it has great potential to contribute on the audio side, as in room acoustics. Room auralization is a fundamental technology for generating the sound of a room. By using this technology, it is possible to simulate the acoustic performance of the room and bring it into an AR environment. Past research and practice, and the technical details associated with the research purpose, are introduced in Chapter 2.
2. BACKGROUND RESEARCH
It is crucial to understand strategies, principles, and theories, as well as to learn from other researchers and practitioners. Looking deeply into this existing work is the only way to become aware of its limitations and defects and then define a reasonable scope for the problem. With the knowledge learned from this process, it is possible to develop a more rational methodology and provide a feasible solution. This chapter reviews several research projects and technologies related to AR, room acoustics, and room auralization that are significant and enlightening.
2.1. Augmented Reality (AR)
Great progress has been made during the 50 years since the idea of AR was first raised. The open field of the AR market is attracting more researchers and industry practitioners to carry out research and practice in many different areas, including display strategy, surface detection strategy, and interaction. There are also many existing applications on the market for users to explore.
2.1.1. Display Strategy
There are three different strategies for displaying virtual objects in reality: optical see-through (OST), video see-through (VST), and spatial projection (Schmalstieg and Hollerer 2016). OST uses an optical method to combine the virtual and the real. The real scene is captured directly by the eyes through transparent or half-transparent glasses, while the virtual images are reflected into view using optical elements (Zhou and Owen 2007; Azuma 1997, 355-385). Both kinds of information overlap on the glasses and are then captured by the eyes (Figure 2-1). This strategy is mainly used on head-mounted display (HMD) devices like smart glasses. OST has the advantage of being real-time: the real scene is displayed without any delay, which is very important in industrial and medical practice for safety reasons (Navab 2003, 2-6; Rolland and Fuchs 2000, 287-309). However, the interaction between the virtual and the real is weaker because they come from different sources. Google Glass is one example of an OST display (Figure 2-2); there are many more OST devices on the market.
Figure 2-1 Typical designs of OST (Hainich and Bimber 2011).
Figure 2-2 Google AR glasses (Google 2020).
VST works in the opposite way, combining the virtual and the real and then displaying them together on the screen of a device rather than on glasses (Azuma 1997, 355-385).
By using cameras to capture the real scene, the device can pre-calculate and combine the graphics pixel by pixel, which can result in better graphical quality and interaction (Azuma 1997, 355-385). The complexity of the calculation and the image resolution determine the length of the delay. With improvements in high-speed processors and more efficient algorithms, the time lag is getting shorter, which makes VST more acceptable. Mobile phones and tablets are widely used VST devices (Figure 2-3).
Figure 2-3 AR on mobile phones.
Spatial projection adds virtual information to reality by projecting it onto the surface of a real object. One successful study is the AR Sandbox (Sanchez et al. 2016, 599-602), which projects color-coded contours onto sand based on the shape of the sand and gives children a direct understanding of the topography (Figure 2-4).
Figure 2-4 AR Sandbox (Sanchez et al. 2016, 599-602).
2.1.2. Surface Detection
Most AR applications on the market mainly use two surface detection strategies: feature point detection and photometric modeling. In general, feature point detection uses image features to recognize what is in the picture. By calculating the differences between pixels, the algorithm finds "key points" and connects them with an anchor that contains 3D coordinates. It can detect and track planar surfaces (Kanazawa and Kawakami 2004, 1-10) or complex objects with detectable features such as faces. One of the advantages of feature point detection is that it requires only one camera, which makes it more suitable for mobile devices (Figure 2-5). It is widely used for face detection, which is common in beauty cameras and picture recognition. However, this strategy has difficulty detecting surfaces with little detail. For instance, it is hard to recognize smooth, single-color surfaces like painted walls and transparent objects like glass.
Figure 2-5 Feature point detection used on mobile devices.
Photometric modeling is a technology that uses photos taken by multiple cameras from different angles to generate a geometric model of the surroundings (Ikeuchi et al. 1999, 147-163). It generates models from photos of the object taken from different angles (Figure 2-6). It locates each pixel by optical triangulation, a method of distance measurement based on the angles between light rays. After matching the pixels that represent the same location on the object across multiple pictures, the relative positions of those pixels can be determined. The algorithm then generates the model of the object as a triangular mesh (Ikeuchi et al. 1999, 147-163). Photometric modeling is used in many situations where there is more interaction with reality, such as games in which characters recognize the environment and react to their surroundings (Microsoft 2018). It can also be used to assist fabrication and to judge whether an object in the real world is in the correct position. Because of its high demands on cameras and on the speed at which the system must process data, this technology is mostly used in AR goggles, like Microsoft HoloLens and Magic Leap, and in some virtual reality (VR) headsets (Figure 2-7).
Figure 2-6 Photos taken from different angles of an object are used to generate 3D models (Lievendag 2018).
Figure 2-7 Spatial mesh generated by HoloLens (Tuliper 2017).
Comparing these two strategies, feature point detection is a more lightweight solution for surface detection than photometric modeling.
However, photometric modeling is a better option when a more accurate model of the real environment is required (Table 2-1).
Table 2-1 Comparison of feature point detection and photometric modeling.
 | Feature Point Detection | Photometric Modeling
Platform | Mobile devices | Goggles
Accuracy | Low | High
Hardware requirement | Low | High
Price | Low | High
Limitations | Not very stable; cannot detect pure-color surfaces; cannot detect transparent surfaces | Needs a large number of pictures; heavy models; not stable when detecting transparent surfaces
Each strategy has its pros and cons. By comparing the capabilities each tool can provide with the properties that are most important to the application, developers can choose a more suitable platform for their AR applications.
2.1.3. Interaction
Interactions in the AR environment now take various forms. By using face recognition and motion capture, virtual objects can react to the user's actions, as well as hide behind people depending on their relative positions (Apple Inc. 2019). Virtual objects can also react to the real lighting environment and to changes in it (Google 2019). The user interface (UI) has also improved interaction feedback.
2.1.4. Existing AR Applications on Mobile Devices
There are many cell-phone-based AR applications. The author tested several for both Android and Apple iOS (Table 2-2).
Table 2-2 Tested existing AR mobile applications.
App | Tested Platform | Category | Usage | Interaction Level
SkyView | Android | Education | Star observation | 0
Kubity GO | Android | Model viewer | View Revit models | 1
Qlone | Android | Model building | Photometric model scanning | 1
Aruler | Android | Measurement | Measure length and area | 1
ARPlan | Android | Construction | Building measurement and calculation | 1
Just A Line | Apple iOS | Graphical design | 3D painting | 1
Paint Space AR | Apple iOS | Graphical design | 3D painting | 1
Light Space | Apple iOS | Graphical design | 3D painting | 1
3D Brush | Apple iOS | Graphical design | 3D painting | 1
SketchAR | Apple iOS | Education | Sketching tutorial and assistant | 2
Magicplan | Apple iOS | Construction | Building measurement, calculation, and budget estimation | 3
Pokemon Go | Android | Entertainment | Game | 3
Angry Bird AR | Apple iOS | Entertainment | Game | 4
Minecraft AR | Android | Entertainment | Game | 4
IKEA | Android | Shopping | Preview commodities | 3
Google Map AR | Android, iOS | Navigation | Show the direction of the route and signs for turns | 0
The interaction level in the table above is defined as:
0: No interaction; the user can only see information or data based on the environment.
1: Interaction with the user only. Virtual objects can only be created or deleted by the user, or can react to the position of the user.
2: Basic interaction with both the user and the environment. Virtual objects are based on real surfaces and locations; they can only be created or deleted by the user, or can react to the position of the user.
3: Basic interaction with the environment and advanced interaction with the user. Virtual objects are based on real surfaces and locations. The user can control the position, movement, and other properties of the virtual objects, or the virtual objects can react to the user's commands.
4: Advanced interaction with both the environment and the user. Virtual objects can react to the environment and to changes in it. The user can control the position, movement, and other properties of the virtual objects, or the virtual objects can react to the user's commands.
By testing the applications above, some findings and problems were identified:
• Most of the applications do not have sound related to the environment.
• Because of the limited operating area, applications should have a clear and concise interface or larger buttons for the user's convenience.
• Simple tutorials or directions help users get familiar with the application.
2.2. Room Acoustics
In order to run more accurate simulations, the basic principles and theories are fundamental. This section goes through the formulas used for calculating sound pressure level and reverberation time, as well as the methods of geometrical acoustics analysis and ray tracing. It also introduces the strategies and instruments for measuring sound in the real world.
2.2.1. Calculation of Sound Pressure Level
In an open field, the sound is heard mostly as direct sound, which travels from the sound source to the ears without any reflections. In a room, however, the sound heard at a specific position is a combination of the direct sound from the source and the sound reflected from the surfaces of the room, which is the reverberant sound. Therefore, the SPL at that position can be calculated with the following formula (Everest and Pohlmann 2009):
Lp = Lw + 10·log10( Q / (4πr²) + 4 / R )      (2-1)
where
Lp = sound pressure level, dB
Lw = sound power level of the sound source, dB
Q = directivity coefficient
r = distance from the source, m
R = room constant, m²
When the sound source is at the center of the room, Q = 1; when it is on a surface of the room, Q = 2; when it is on an edge of the room, Q = 4; and when it is at a corner of the room, Q = 8. The room constant represents the capacity of the room to absorb sound (Everest and Pohlmann 2009):
R = Σ(Si·αi) / (1 − ᾱ)      (2-2)
where
Si = area of surface i, m²
αi = absorption coefficient of surface i
ᾱ = weighted average absorption coefficient
2.2.2. Calculation of Reverberation Time
There are two basic formulas for calculating the reverberation time. When the average absorption coefficient of the room surfaces (ᾱ) is 0.2 or less, the well-known Sabine formula is used (Sabine 1900, 4):
T60 = 0.161·V / Σ(Si·αi)      (ᾱ ≤ 0.2)      (2-3)
where
V = room volume, m³
Si = area of surface i, m²
αi = absorption coefficient of surface i
ᾱ = average absorption coefficient
Another equation, Eyring's formula, is used when ᾱ is larger than 0.2 (Eyring 1930, 217-241):
T60 = 0.161·V / (−S·ln(1 − ᾱ))      (ᾱ > 0.2)      (2-3)
where S is the total surface area, m². For example, a 300 m³ classroom with 250 m² of total surface area and ᾱ = 0.15 gives T60 = 0.161 × 300 / (250 × 0.15) ≈ 1.3 s by Sabine's formula.
When the room is large, absorption by the air in the room can also significantly influence the reverberation time, especially at frequencies above 2 kHz (Everest and Pohlmann 2009). To account for this, an absorption term 4mV is added to both formulas above:
T60 = 0.161·V / (Σ(Si·αi) + 4mV)      (ᾱ ≤ 0.2)      (2-4)
T60 = 0.161·V / (−S·ln(1 − ᾱ) + 4mV)      (ᾱ > 0.2)      (2-5)
where m is the attenuation coefficient of air.
2.2.3. Geometrical Acoustics Analysis
Acoustic designers have used graphical ray tracing to track the sound performance in a room since 1967 (Savioja and Svensson 2015, 708-730). The ray-tracing strategy is widely used in room acoustics calculation and modeling. Similar to the ray-tracing principle for light, the sound propagation in a room can be simplified as rays and analyzed using a mirror image source model (MISM) construction (Figure 2-8 a).
An extension of ray tracing is beam tracing, which traces a volume of sound instead of a single ray (Figure 2-8 b).
Figure 2-8 Mirror image source model: (a) ray tracing; (b) beam tracing (based on Everest and Pohlmann 2009).
Based on the MISM theory, another classical method is the image method, which is mainly used to estimate the sound performance in small rectangular rooms (Allen and Berkley 1979, 943-950). This method uses a grid of image sources to simplify the sound bounces in a rectangular room (Figure 2-9). It can quickly generate the basic bounce response from the sound source; however, it is not suitable for complex rooms with irregular shapes and curved surfaces.
Figure 2-9 Image method for three bounces (based on Allen and Berkley 1979, 943-950).
Ray tracing is an acoustic simulation method based on geometrical analysis (Krokstad, Strøm, and Sørsdal 1968, 118-125). It simulates the ray behavior in 3D and traces the changes of the sound during propagation. Some tools can trace and calculate multiple bounces (Figure 2-10).
Figure 2-10 Ray tracing in Odeon.
2.2.4. Measuring Sound in the Real World
There are several instruments, pieces of equipment, and sensors that can be used to measure and collect data on the sound performance in a space. A sound level meter is a hand-held device with a microphone that can be used to measure the SPL at a specific location (Figure 2-11). It shows the live SPL in decibels at its location and is usually used for noise specification.
Figure 2-11 Sound pressure level meter.
2.3. Room Auralization
This section first introduces the general process of room auralization, then several methods and tools for acoustic modeling and auralization, as well as some of the existing software that engineers use to simulate sound performance. There are also specific tools in Unity for generating the sound so that users can get a direct impression of the simulation result.
2.3.1. Process of Room Auralization
First, geometrical acoustics is used to predict echograms based on the 3D model of the room (the room geometry). The room model contains information about the acoustic properties of each material involved. Sound sources and listeners also need to be defined. From this information, the echogram of the room can be simulated, and an estimate of the objective acoustic properties of the room and the sound can be obtained. To account for binaural hearing, the sound received by the listener needs to be adjusted by HRTFs: the left channel is processed by the left HRTF impulse response, while the right channel is processed by the right HRTF impulse response. To get a binaural playback of other sounds, such as music and speech, the HRTF impulse responses are used to convert the original sound to binaural and then render it to a specific playback format (mono, stereo, or ambisonic) (Figure 2-12).
Figure 2-12 Process of room auralization: the room geometry, room acoustic properties, and sound source and listener positions produce an echogram; the left and right HRTF impulse responses then turn the sound into the final left and right channels.
2.3.2. Existing Simulation Software for Room Acoustics
There are numerous tools and software packages active in the market. CATT, EASE, and Odeon are comprehensive software packages for professional acoustic simulation.
For instance, based on a 3D model, Odeon can run various simulations and present reports, graphs, and color maps (Figure 2-13). It can also provide auditory feedback. Bose has also published Bose Modeler, which focuses more on room acoustic design and simulates loudspeaker performance.
Figure 2-13 Demo evaluations and simulations by Odeon (ray-tracing view and a T(30) color map at 1000 Hz).
There are also some useful tools for sound measurement, such as Virtual Sound Level Meter (VSLM), which is based on Matlab (https://github.com/muehleisen/VSLM). VSLM can measure the SPL and perform other analyses of a sound file in .wav format (Figure 2-14).
Figure 2-14 Demo measurement by VSLM.
Room EQ Wizard (REW) is a room acoustics analysis package that can measure and analyze room and loudspeaker responses. It can link to and directly read data from measurement microphones to measure SPL and frequency and perform other analyses. It can also analyze existing .wav sound files (Figure 2-15).
Figure 2-15 Room acoustic simulation by REW.
2.3.3. Steam Audio
Steam Audio is an audio SDK that can be used on many platforms, including Unity. It renders sound based on the modeled environment and surroundings using basic acoustic physics. It can provide HRTF-based binaural rendering that clearly conveys the relative position of the sound source. Steam Audio has been used in VR environments, and it can track both the user's position and rotation. It also provides a real-time experience on mobile devices. Steam Audio has also been used by acoustic engineers for acoustic simulation and analysis (Matthew Harrison 2019).
2.3.4. Audio Performance in VR/AR
Sound is increasingly valued in VR and AR development, and virtual sound objects are now part of the process (Hong 2019, 338-339). Instead of bringing silent models into the environment, sound makes the virtual content more vivid and realistic.
Listening to music in a virtual concert hall or having classes in a virtual classroom are also good examples of how audio can perform in a VR environment (Vorländer et al. 2015, 15-25). Audio in VR/AR is not always combined with solid models; it can also act as a smart guide that responds to the user's location and body motion. For instance, an audio tour guide can provide guidance and introductions depending on a visitor's position (Bederson 1995, 210-211). Also, with the growth of the VR/AR gaming market, sound performance has become an important feature. Room auralization and binaural audio are used in many games to give players a more realistic environment and a better gaming experience.
2.4. Summary
This chapter gave an overview of some basic principles and techniques used by AR and the existing uses of AR on mobile devices, which helps in choosing the platform, method, and algorithm for developing the application. With advances in display, tracking, detection, and interaction technology, AR now has the capability to perform more complex tasks. The chapter also went through basic calculation and measurement methods for important room acoustics parameters such as reverberation time and sound pressure level; this knowledge is the foundation of acoustic simulation and calculation. The chapter also introduced several methods and tools for acoustic modeling, which helps in forming the methodology for auralizing the simulation results.
3. METHODOLOGY
This chapter introduces the overall workflow for the development of Soundar and then expands the steps in that workflow: platform setup, database setup, application development, application testing, and result analysis. More detailed explanations of the application development and coding are given in Chapter 4.
3.1. Explanation of Soundar
Augmented reality is meant to enhance people's experience and perception of reality. Those experiences and perceptions come from different senses. The visual sense, one of the two main ways people get information in daily life, is the most common one in AR. However, the auditory sense has not been as fully involved in the use of this technology. People can get information directly from what they hear, and on some occasions it is even more direct than seeing and reading. Acoustic simulation is exactly one of these occasions. To understand how sound performs in a space, it is always clearer and more straightforward to hear it rather than to look at data and descriptions. This is the main purpose of Soundar, an application for mobile devices developed to provide auditory feedback from an acoustic simulation of an existing indoor environment based on AR technology (Figure 3-1).
Figure 3-1 Auditory feedback for a virtual sound source in a real environment.
By setting up the room based on the real environment the user is in and adding virtual sound sources into the space, Soundar can simulate the reverberation time of the room and the sound pressure level based on the location of the user in the room. It then generates a sound file that represents what the user would hear if the sound source were actually playing in that situation. Users can change the materials of the surfaces, thereby changing the absorption coefficients and acoustic characteristics of the space. The feature of changing materials can help users redesign the room to achieve a specific acoustic performance (Figure 3-2).
Figure 3-2 Workflow of Soundar: Start → Set Room → Set Sound → Run Simulation, which returns the reverberation time, sound pressure level, and auditory feedback, with Edit Room and Edit Sound loops for revisions.
Soundar works on mobile devices. The application is aimed not at acoustic experts and engineers but at ordinary people who need to know how sound may perform in a space, or who are simply curious. The high price of AR goggles could be a barrier to getting this experience, so a smartphone is used instead. However, this also brings a potential problem of uneven quality in the auditory feedback, because of the wide quality range of different headphones; this is discussed in Chapter 6.
3.2. Overall Workflow
The overall workflow for developing Soundar starts with setting up the software platforms and databases and ends with verifying that the deviation of the results of Soundar is below the threshold at which human hearing notices a difference (Figure 3-3).
Figure 3-3 Overall workflow of the development process: platform setup and database setup, application development, application testing, and result analysis.
First, the platform was set up for the development process, which included choosing a development platform, getting the proper SDKs, and testing their availability for the chosen devices. A database was also set up for pulling, pushing, and organizing the data for the application. These two steps were the foundation for developing the application. The application was then built by coding on the platform and linking the data from the database. Coding the application was the most time-consuming part of the whole process. An overall framework was built before the work started as a guideline, and the entire work was broken into subprojects and then into several small tasks. By combining the requirements, the application finally took shape and was ready for testing once it was able to run the simulations and return all numerical, graphical, and auditory feedback. There were three tests for Soundar to pass, representing three typical situations. All three types of simulation results (numerical, graphical, and auditory) were compared with the results returned from other simulation and testing methods. The sound generated by Soundar was also listened to and graded by musicians and acoustic experts. The test results showed how accurate Soundar was, whether there were problems to fix, and where there was room for improvement.
3.3. Platform Setup
Considering the existing conditions and availability, and matching the features of the platforms with the needs of the development, Unity was chosen as the development platform. The main reason for choosing Unity over the others was the Unity AR Foundation, which makes the process easier and the product more adaptable to different operating systems. To get the platform ready for development, the following software and tools were installed on the workstation: Unity and Visual Studio, Unity AR Foundation and other AR SDKs, and Steam Audio. Unity can be downloaded from the Unity website (https://store.unity.com/download). AR Foundation requires Unity version 2019.1 or higher. Since an older version of Unity had already been installed on the workstation, Unity Hub was needed to support multiple versions of Unity (Figure 3-4).
Figure 3-4 Install Unity 2019.1 through Unity Hub.
After Unity was installed, multiple SDKs were downloaded into Unity, including the ARCore XR Plugin, ARKit XR Plugin, AR Subsystems, and AR Foundation (Figure 3-5).
Figure 3-5 Installing AR Foundation and other packages.
Steam Audio can be found on its download page on GitHub (https://valvesoftware.github.io/steam-audio/downloads.html). After downloading the package for Unity, it can be imported into Unity with all its assets and scripts (Figure 3-6). Before using it in Unity projects, it should be set as the spatializer plugin in the project settings (Figure 3-7).
Figure 3-6 Import Steam Audio into Unity.
Figure 3-7 Audio settings in Unity.
The main coding language of Unity is C#, and Microsoft Visual Studio is used as the coding environment for this application. While installing Visual Studio, choosing the "Game development with Unity" workload installs its specialized tools for Unity developers (Figure 3-8).
Figure 3-8 Installation settings of Visual Studio.
3.4. Database Setup
The database is the foundation of an application. It saves all the information within a specific structure, and the structure can reflect the relationships between data. There is a variety of database types, and some of them can handle very complicated relationships. The data structure of Soundar is relatively simple, so it uses CSV files, which are small and easy to use, to record the data.
3.4.1. Material Database
A CSV file is a data format that separates values with commas and line breaks. It is easy to generate using Microsoft Excel. This file format has several benefits: small file size, a simple structure, and the ability to be read and written directly by Unity. It is hard to store complex data relationships in it, but it is good for saving data with an array structure, such as material parameters. The material database was separated by surface category: wall, floor, and ceiling. Each database includes all the default choices of that category for the user to choose from. Each line of text is one kind of material. For instance, in the data on line i, the first entry m[i][0] is the name of the material. The following entries, m[i][1] to m[i][7], are the acoustic parameters of this material: the low-frequency, mid-frequency, and high-frequency absorption coefficients, the scattering coefficient, and the low-frequency, mid-frequency, and high-frequency transmission coefficients (Figure 3-9). The low, mid, and high frequencies are defined as 400 Hz, 2.5 kHz, and 15 kHz (Valve Corporation 2017).
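As a concrete illustration of this layout, the sketch below parses one line of such a CSV file into a small material record. The AcousticMaterial type and its field names are placeholders invented for this example; Soundar's actual parsing is handled by the CSV-to-list module described in Chapter 4.
// Hypothetical container for one row of the material database:
// name, three absorption coefficients, scattering, three transmission coefficients.
public class AcousticMaterial
{
    public string Name;
    public float LowAbsorption, MidAbsorption, HighAbsorption;
    public float Scattering;
    public float LowTransmission, MidTransmission, HighTransmission;

    // Parse one CSV line, e.g. "carpet,0.14,0.6,0.65,0.05,0.02,0.005,0.003".
    public static AcousticMaterial FromCsvLine(string line)
    {
        string[] p = line.Split(',');
        return new AcousticMaterial
        {
            Name = p[0],
            LowAbsorption = float.Parse(p[1]),
            MidAbsorption = float.Parse(p[2]),
            HighAbsorption = float.Parse(p[3]),
            Scattering = float.Parse(p[4]),
            LowTransmission = float.Parse(p[5]),
            MidTransmission = float.Parse(p[6]),
            HighTransmission = float.Parse(p[7])
        };
    }
}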
When a user creates a new customized material by typing in the parameters and uploading a picture file as a new texture, the new data is written into the CSV file with the same format and structure. When the user later wants to reuse this new material, it can be found in the database. When a user picks a material m[i][0] and attaches it to a mesh surface, the parameters of the Steam Audio material are replaced by the data from m[i][1] to m[i][7]. There are only limited default choices of materials in the current version of Soundar; all available materials are listed in Appendix A.
carpet,0.14,0.6,0.65,0.05,0.02,0.005,0.003
wood,0.11,0.07,0.06,0.05,0.2,0.025,0.005
Figure 3-9 Material data of floor materials in CSV format.
3.4.2. Sound File
There are five default sound files in Soundar: an impulse, a constant 500 Hz tone, a piece of music, and two pieces of speech (a male voice and a female voice). All of these sounds were recorded in anechoic chambers. The impulse sound is a balloon burst. The music file is an anechoic recording of symphonic music (Pätynen, Pulkki, and Lokki 2008, 856-865), and the speech files are recordings of several Harvard sentences from the TSP Speech Database (Kabal 2002, 9).
3.4.3. Unity Prefab
A prefab is a game object defined as a reusable asset. A prefab contains all of its components and property values, and it is convenient to use prefabs for game objects that appear multiple times in the program. Soundar uses prefabs in many ways (Table 3-1).
Table 3-1 List of prefabs used in Soundar.
Plane point: A sphere that is used as a room vertex.
Plane line: A cylinder that is used as the boundary lines of the room surfaces.
Arrowhead: A tripod that is used as the arrowhead of the coordinate axes when moving a sound source.
Sound source point: A sphere that represents a point sound source generating sound in all directions. The sphere contains the Audio Source and Steam Audio Source components, which are used to play sound clips and adjust the sound performance. The prefab also contains an SPL tag that shows the SPL of the sound source and an invisible text field that stores the playing status of the sound source.
AR plane visualizer: This prefab is used to render the AR planes detected by the program. It does not have a fixed geometry; the shape mesh is generated by the script component AR Plane Mesh Visualizer. The prefab is set up with a mesh renderer that defines the pattern and texture of the AR plane.
3.5. Application Development
The overall framework includes the architecture of the application, the operation sequence, and its internal logic (Figure 3-10). The development of Soundar contains two major parts: the simulation process and the user interface (UI) design. Each part was broken down into subprojects by different aspects.
Figure 3-10 Overall framework of the application development: the simulation process (set room: geometry, material; set sound: position, sound properties, sound file; run simulation: reverberation time, sound pressure level; get feedback: numeric, auditory) and the user interface design (theme and layout: logo and icon, theme color, layout; main menu: project operation, settings, about, quit application).
This section introduces the detailed structure of the simulation process and the user interface (UI) design, as well as the modular design strategy used during development.
3.5.1. Simulation Process
There are four steps in the simulation process: setting the room, setting the sound, running the simulation, and getting feedback. The order of these four steps is not only the order in which the user operates but also the order of the data flow. The inputs are created by the user during room setup and sound setup; Soundar then uses this data to run the simulation and return the simulation results to the user in multiple formats.
3.5.1.1. Set Room
Soundar allows users to define the geometry of the room based on the real space, change the materials of the surfaces, and add virtual contents into the room (Figure 3-11). The room geometry is built from the vertices of the room, which the user places by pointing the screen at the corners of the room. The surfaces of the geometry are defined as floor, walls, and ceiling. After building the geometry of the room, users can change the material of different surfaces. Furniture and people can introduce a large amount of sound absorption, which significantly influences the reverberation time, so users are also allowed to add these elements into the room. All elements need one surface as a host and can be moved, rotated, and deleted.
Figure 3-11 Structure of room setup: geometry (detect floor plane, set floor vertices, set ceiling height) and material (select surface, pick material, replace material data).
3.5.1.2. Set Sound
After setting up the room, the next step is adding one or more virtual sound sources into the room (Figure 3-12). Unlike other added content, sound sources do not need a host surface; they can float in the air to represent specific locations, such as a person speaking. Users can move and rotate the sound sources in three dimensions. Depending on the relative location of a source and the room geometry, the source is automatically classified as an interior or exterior source. Other parameters also need to be assigned to the sound source, such as shape, sound pressure level, and repetition mode. Users can choose either single or repeated as the repetition mode to control whether the source plays the sound only once or repeatedly. The default sound is a piece of symphonic music; users can play their own sound by uploading local files from their devices.
Figure 3-12 Structure of sound setup: position (set in air, move along axes), sound properties (select sound source, type in new data, replace data), and sound file (select sound source, browse from local files, replace the default sound).
3.5.1.3. Run Simulation
The SPL heard by the user is determined by the sound signal sent to the speaker or earphones. Different devices and earphones may have different sound performance, which influences the final result. When the application starts, Soundar asks users to do a volume calibration for the device (Figure 3-13). The calibration plays a constant 500 Hz tone at 30 dB and lets users adjust the volume of the device to the point where they can barely hear the sound. This process makes the output fit the individual hearing of each user.
"Please adjust the volume of your device to where you can barely hear this sound."
Figure 3-13 Device volume calibration.
Reverberation time is calculated with Eyring's formula (2-3). The room volume V and surface areas S are calculated from the room set up by the user.
The average absorption coefficient ᾱ is calculated from the acoustic parameters assigned from the material database.
3.5.1.4. Get Feedback
Soundar presents three kinds of feedback: numeric, graphical, and auditory. Parameters such as the reverberation time and the SPL are easiest to show as numbers. However, a number by itself does not give the user enough information to understand its meaning, so scale charts help users understand whether the number is relatively low or high (Figure 3-14).
Figure 3-14 Scale charts showing reverberation time and sound pressure level.
The most important form of feedback is the auditory feedback. Soundar generates binaural sound based on the simulation results and the relative location of the user and the sound source. The audio is a simulation of what the user would hear at the same position with a real sound source playing the same sound. With the simulated audio, users can get a direct feeling of the acoustic performance of the room and judge whether the sound meets their requirements. A notice is shown at the bottom of the screen while Soundar is playing the auditory feedback, reminding users to use headsets or earphones for the correct experience.
3.5.2. User Interface (UI) Design
UI is an important part of the development of any application. A better UI presents a nicer experience to the user and makes it easier for the user to understand the information. The logo is a combination of the letter "S" and a sound wave (Figure 3-15). All tool buttons use both an icon and a keyword to show their functions. The theme color is orange. The main goal of the layout design is to be clean, clear, and straightforward (Figure 3-16).
Figure 3-15 The logo of Soundar.
Figure 3-16 UI conceptual design: start screen, room set-up, change environment, place sound source, change sound source, and run simulation.
The main menu button in the upper left corner of the screen includes starting a new project, saving the sound file, settings, about (information on Soundar and the developers), and quitting the application (Figure 3-17).
Figure 3-17 Main menu conceptual design.
3.5.3. Modules
Instead of a single-line development method that starts at the beginning and follows the steps of application usage, the development of Soundar used a modular design method. Individual modules were created before the application was assembled, and each module focuses on one simple, independent function. Forty-two modules were made for Soundar for different purposes (Table 3-2, Table 3-3). Modules can be used multiple times in different tasks or even in other modules, and a task can be a large combination of modules.
Table 3-2 Modules used in the simulation process.
Module | Script File Name | Function
CSV to list | DataOperation | Read the .csv file into the program as an array.
Assign acoustic material parameters | DataOperation | Search for a certain material in the array and assign its acoustic data.
Place objects on AR plane | PlaceObjectsOnARPlane | Place a specific type of game object on an AR plane.
Link two points | LinkTwoPoints | Link two given game objects with a line; the format of the line is defined by a prefab.
Create surface | CreateSurface | Create a surface from a given list of vertices and the direction of the surface (horizontal or vertical).
Clone object | CloneObject | Copy a game object and paste it at the same location.
Clone objects | CloneObject | Copy a list of game objects and paste them at the same location.
Do not destroy | DoNotDestroy | Keep the game object when switching between scenes.
Place objects in air | PlaceObjectsInAir | Place a given game object in the air without any reference plane.
Mesh area | Calculation | Calculate the area of a given mesh.
Eyring formula | Calculation | Calculate the reverberation time using Eyring's formula.
* This table only shows the modules and code written by the author.
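To make the last two modules in Table 3-2 concrete, here is a minimal sketch of how a reverberation-time calculation based on Eyring's formula (2-3) might be written. This is an illustrative reimplementation, not Soundar's actual Calculation script, and the method and parameter names are placeholders.
public static class ReverbSketch
{
    // Eyring's formula: T60 = 0.161 * V / (-S * ln(1 - aAvg)), where V is the room
    // volume in cubic meters, S the total surface area in square meters, and aAvg
    // the area-weighted average absorption coefficient of the surfaces.
    public static float EyringT60(float roomVolume, float[] surfaceAreas, float[] absorptionCoefficients)
    {
        float totalArea = 0f;
        float totalAbsorption = 0f;
        for (int i = 0; i < surfaceAreas.Length; i++)
        {
            totalArea += surfaceAreas[i];
            totalAbsorption += surfaceAreas[i] * absorptionCoefficients[i];
        }
        float averageAbsorption = totalAbsorption / totalArea;
        return 0.161f * roomVolume / (-totalArea * (float)System.Math.Log(1f - averageAbsorption));
    }
}
With the surface areas coming from the mesh-area module and the absorption coefficients from the material database, this returns T60 in seconds.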
Do not destroy DoNotDestroy Keep the game obj ect when switching betwe en scenes. Place ob jec ts in air PlaceObj ects InAir Place a given game obj ect in the air without any reference plane. Mesh area Calculation Calculate the area of a given mesh. Eyring formula Calculate the reverbe ration time using Eyring's formula. * This table only shows the modules and codes written by the author. Module Show menu New project Save sound Show setting Show about Quit Clear Visibility scale Visibility frequency graph Visibility wave graph Mute all Setup settings Update settings Finish room Finish sound Add sound Edit Edit room Edit sound Show dropdown Change material Option SPL Option sound file Option mute Option move Option menu white SPL input Change sound file Mute Delete a e - o u es use 'I bl 3 3M d l or t e user mter ace es1gn. d £ h £ d . Script File Name Function Show the main menu. Create a new project and return to the scene "Set Room." Save a .wav file of the internal sound of the simulation. MenuButtons Show the window of "S ettings." Show the window of "About." Quit the application. Unshow all windows when touching the blank space. Show/hide the SPL scale in the scene "Run Simulation." Show/hide the frequency graph in the scene "Run Simulation." Settings Show/hide the wave graph in the scene "Run Simulation." Mute/play all sound sources in the scene "Run Simulation." Read the current setting in the current scene. Update the setting from the last scene. Go to the scene "Set Sound." Go to the scene "Run Simulation." StepButtons Go to the scene "Set Sound." Show the two choices of "Edit Room" and "Edit Sound." Go to the scene "Edit Room." Go to the scene "Edit Sound." Material Show the material dropdown at a touch. Dropdown Change the material when the dropdown value changed. Change the UI for m and show the SPL input filed. Change the UI for m and show the sound file dropdown. SoundOption Change the UI form and show the mute toggle. Change the UI form and go to the scene "Move Sound." Reset the UI form and hide all options. Change the SPL of the choosing sound source. SoundControl Change the sound file of the choosing sound source. Mute the choosing sound source. Delete the sound source. * This table only shows the modules and codes written by the author. 46 Although the modules are for simple tasks, each of them has single or multiple variables. With the changes in the variables, the module can realize multiple purposes. All these modules are small pieces of Unity scripts, and they are called by the main script for each task or other modules (Figure 3-18). The code and algorithms of each module will be introduced in Chapter 4. AcousticMaterialPar a ! I CSV to list I: Assign data Ii � objects on AR plan e L Place obj ects in air _}- I Creat surfa ce Acoustic:'. fa terialP ara ----+ ; I Clone object f-f-- •,•, I L:____ . '· --+- +-++- - c 10 n e objem r-----;:::::: � I Li nk two points f----+ I Do not destr oy Calculation :, I ,_ : _ _ _ - ,- 1 --. 
Figure 3-18 Relationship between scenes, main scripts, and modules.
3.6. Application Test
Two groups of tests were implemented in two rooms with different sizes and materials. Both rooms were enclosed and had no windows. In each room, a loudspeaker generated the sound, and the live sound performance was recorded by a microphone. Soundar was then used to simulate the same sound source at the same location in the room. By comparing the impulse responses, as well as the SPL graphs in both the time domain and the frequency domain, of the sound generated by Soundar with the live recordings, the accuracy of Soundar could be analyzed.
3.6.1. Room 1: Watt 212
Room 1 was Watt 212 at the University of Southern California, a simple box-shaped room used as a classroom. It was 2.40 m in height, 7.70 m in length, and 4.96 m in width. The floor was covered with thin carpet, the ceiling was acoustic ceiling board, and the walls were gypsum board. The room had an even background sound averaging 65.5 dB(C). All the surfaces in the room are flat, with no slope in any direction. There is only one door in the room and no windows or other openings (Figure 3-19).
Figure 3-19 Room 1: Watt 212, with acoustic ceiling and gypsum board walls (modeled in Revit).
There were nine groups of tests implemented in this room (Table 3-4). These tests covered different positions of the sound source and listener (Figure 3-20), different kinds of sound, different sound source SPLs, and different room materials. All tests except T1b were performed both with Soundar and with real sound from the loudspeaker. Test T1a played the impulse at 85 dB(C) at the center of the room, both in the room and in Soundar; the virtual listener and the microphone were at the same position, and the virtual room in Soundar used the same materials as the real room. Test T1b changed the room materials relative to T1a and played the impulse at the same SPL. T1c used the same materials as the test room and played a constant 500 Hz sound at 75 dB(C) at the same position as the listener. T1d kept the same sound and listener position but moved the sound source to the center of the room. T1e raised the SPL of the sound source to 85 dB(C) relative to T1d. In T1f, music at 75 dB(C) was played at the center of the room, with the listener at the front of the room. T1g moved the sound source to the front of the room and raised its height, and moved the listener to the back corner of the room, representing a person sitting in the back row of a lecture room.
Tests T1h and T1i played the speech of a female voice and a male voice at the same positions as the previous test, representing a person standing and talking.
Table 3-4 Properties of sound sources in Room 1.
Test | Sound File | Avg. SPL (dB(C)) | Source x, y, z (m) | Listener x, y, z (m) | Floor / Walls / Ceiling | Test Method
T1a | Impulse | 90 | 2.48, 3.85, 0.75 | 2.48, 3.85, 0.75 | carpet / gypsum / acoustic | Soundar / Recording
T1b | Impulse | 90 | 2.48, 3.85, 0.75 | 2.48, 3.85, 0.75 | wood / concrete / concrete | Soundar
T1c | 500 Hz | 75 | 2.48, 1.90, 1.05 | 2.48, 1.90, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
T1d | 500 Hz | 75 | 2.48, 3.85, 0.75 | 2.48, 1.90, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
T1e | 500 Hz | 85 | 2.48, 3.85, 0.75 | 2.48, 1.90, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
T1f | Music | 75 | 2.48, 3.85, 0.75 | 2.48, 1.90, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
T1g | Music | 75 | 2.48, 1.77, 1.50 | 1.00, 7.05, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
T1h | Speech (female) | 75 | 2.48, 1.77, 1.50 | 1.00, 7.05, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
T1i | Speech (male) | 75 | 2.48, 1.77, 1.50 | 1.00, 7.05, 1.05 | carpet / gypsum / acoustic | Soundar / Recording
Figure 3-20 Sound source and listener positions in Room 1: (a) T1a and T1b; (b) T1c; (c) T1d, T1e, and T1f; (d) T1g, T1h, and T1i.
3.6.2. Room 2: San Merendino Room
Room 2 was the San Merendino Room, in the basement of Watt Hall (Figure 3-21). It was also an enclosed room with no windows, 2.75 m in height, 4.40 m in length, and 4.80 m in width. The floor was covered with thin carpet, and the ceiling was unpainted concrete. One of the walls was gypsum board and the other walls were unpainted concrete. The background sound in this room was not even: an air-conditioning filter on one side of the room generated an uneven background noise, and the noise at the test position was about 54 dB(C).
Figure 3-21 Test Room 2: trapezoid lecture room, with gypsum board and concrete walls and a carpet floor (modeled in Revit).
Two groups of tests were implemented in Room 2. Test T2a was an impulse test at the center of the room. Test T2b set the sound source at the front of the room at 5 feet and played a recording of the tester's speech (Table 3-5). The same material schemes were used in this room.
Table 3-5 Properties of sound sources in Room 2.
Test | Sound File | Avg. SPL (dB(C)) | Source x, y, z (m) | Listener x, y, z (m) | Floor / Walls / Ceiling | Test Method
T2a | Impulse | 90 | 2.20, 2.40, 0.80 | 1.10, 2.40, 0.80 | carpet / gypsum / acoustic | Soundar
T2b | Impulse | 90 | 2.20, 2.40, 0.80 | 1.10, 2.40, 0.80 | carpet / concrete / concrete | Soundar / Recording
Figure 3-22 Sound source and listener positions in Room 2.
3.7. Result Analysis
The performance of Soundar can be validated by analyzing the SPL change as a function of time, the frequency response, and the reverb performance of the simulation results from Soundar against the live recordings. This section briefly introduces these three analysis methods; the detailed analysis process and results are presented in Chapter 5.
3.7.1. SPL Change as a Function of Time
Time-domain charts were used to analyze the SPL changes as a function of time. These charts were generated by Virtual Sound Level Meter (VSLM). The time spacing of the time-domain charts is 100 ms.
The difference in SPL (ΔSPL) can be calculated from the data logs as the absolute value of the SPL of the Soundar simulation result minus the SPL of the live recording at the same time. Generally, untrained listeners can distinguish a difference in SPL of about 3 dB. Therefore, the validation used 3 dB as the tolerance and calculated the percentage of the data with ΔSPL under 3 dB.
3.7.2. Frequency Response
Frequency-domain charts, generated by REW, were used to analyze the frequency responses; they indicate the SPL at each frequency. The frequency responses were 1/12-octave smoothed, which separates each octave band into twelve parts, with the value at the center frequency of each band being the average of the values on both sides. The 1/12-octave smoothing corresponds more closely to human hearing and also makes the comparison easier and clearer. The analysis used the coefficient of determination (R²), which ranges from 0 to 1, to indicate the similarity of the Soundar result to the live recording. When R² is close to 1, the Soundar simulation is close to the live result.
3.7.3. Reverb Performance
The impulse responses were used to analyze the reverb performance; they show the sound decay over time after the sound stops, and the reverberation time can be estimated from them. The threshold for human hearing to notice a difference in reverberation time is a deviation of 20% (Meng, Zhao, and He 2006, 418-421), which means that when the deviation of T60 is lower than 20%, the reverb sounds the same to listeners. Therefore, the validation used 20% as the threshold for the accuracy of the reverb simulation. Besides the reverberation time, the shape of the impulse response also shows the decay rate and how the room reacted to the impulse.
3.8. Summary
Soundar is a mobile phone application that produces auditory feedback for a virtual sound source in a real environment by using AR technology. The two major parts of the workflow are preparing the room and sound source for simulation and creating the user interface design (Figure 3-23).
Figure 3-23 Overall framework of the development process of Soundar.
The simulation process is the main priority, as it is the basic function of Soundar. Using Unity and CSV data, small modules were coded to complete the integrated project. Chapter 4 details the coding involved in creating the app, and specific case studies and test results are in Chapter 5.
4. DETAILED METHODOLOGY
This chapter describes the application in detail and introduces the scripts from other sources and how they were used in the application. The chapter also explains the coding algorithms and the scripts for each step of the simulation process and the UI design. The application was developed in Unity using the programming language C#. In Unity, all objects, interface components, and the environment are contained in a "scene." The application can switch between different scenes to change its environment, layout, and function.
Soundar contains seven scenes: "Start Screen," "Set Room," "Set Sound," "Run Simulation," "Edit Room," "Edit Sound," and "Move Sound." Each scene has a main script that contains a variety of modules, and some scenes contain multiple main scripts based on their function (Figure 4-1).

Figure 4-1 Relationship between scenes, main scripts, and modules.

Modules are small pieces of code that perform a simple task. Some of the modules are used in the simulation process (Table 4-1); these are introduced in Section 4.1. Others are used for the user interface design (Table 4-2) and are introduced with the UI design in Section 4.4. For the complete scripts, please see Appendix B.

Table 4-1 Modules used in the simulation process (in order of use).*

Module | Script File Name | Function
CSV to list | DataOperation | Read the .csv file into the program as an array.
Place objects on AR plane | PlaceObjects_OnARPlane | Place a specific type of game object on an AR plane.
Link two points | LinkTwoPoints | Link two given game objects with a line. The format of the line is defined by a prefab.
Create surface | CreateSurface | Create a surface from a given list of vertices and the direction of the surface (horizontal or vertical).
Clone object | CloneObject | Copy a game object and paste it at the same location.
Clone objects | | Copy a list of game objects and paste them at the same location.
Do not destroy | DoNotDestroy | Keep the game object when switching between scenes.
Place objects in air | PlaceObjects_InAir | Place a given game object in the air without any reference plane.
Assign acoustic parameters | DataOperation | Search for a certain material in the array and assign its acoustic data.
Mesh area | Calculation | Calculate the area of a given mesh.
Eyring formula | | Calculate the reverberation time using Eyring's formula.

* This table only shows the modules and code written by the author.

Table 4-2 Modules used for the user interface design.*

Module | Script File Name | Function
Show menu | MenuButtons | Show the main menu.
New project | | Create a new project and return to the scene "Set Room."
Save sound | | Save a .wav file of the internal sound of the simulation.
Show setting | | Show the window of "Settings."
Show about | | Show the window of "About."
Quit | | Quit the application.
Clear | | Hide all windows when touching the blank space.
Visibility scale | Settings | Show/hide the SPL scale in the scene "Run Simulation."
Visibility frequency graph | | Show/hide the frequency graph in the scene "Run Simulation."
Visibility wave graph | | Show/hide the wave graph in the scene "Run Simulation."
Mute all | | Mute/play all sound sources in the scene "Run Simulation."
Setup settings | | Read the current settings in the current scene.
Update settings | | Update the settings from the last scene.
Finish room | StepButtons | Go to the scene "Set Sound."
Finish sound | | Go to the scene "Run Simulation."
Add sound | | Go to the scene "Set Sound."
Edit | | Show the two choices of "Edit Room" and "Edit Sound."
Edit room | | Go to the scene "Edit Room."
Edit sound | | Go to the scene "Edit Sound."
Show dropdown | MaterialDropdown | Show the material dropdown at a touch.
Change material | | Change the material when the dropdown value changes.
Option SPL | SoundOption | Change the UI form and show the SPL input field.
Option sound file | | Change the UI form and show the sound file dropdown.
Option mute | | Change the UI form and show the mute toggle.
Option move | | Change the UI form and go to the scene "Move Sound."
Option menu white | | Reset the UI form and hide all options.
SPL input | SoundControl | Change the SPL of the chosen sound source.
Change sound file | | Change the sound file of the chosen sound source.
Mute | | Mute the chosen sound source.
Delete | | Delete the sound source.

* This table only shows the modules and code written by the author.

4.1. Modules in the Simulation Process
This section covers only the modules that were written specifically for Soundar. The modules are introduced in the order in which they are used.

4.1.1. CSV To List
The application reads CSV databases that contain information about the different materials for the floor, ceiling, and walls. To make this data easy to call, each .csv file is translated into an array form. This module returns a two-dimensional string list, database[][], that contains all data from the .csv file (Figure 4-2).

Figure 4-2 CSV to list.

4.1.2. Place Objects on AR Plane
The function of this module is to place a game object on an AR plane. When the user touches the screen, it casts a ray from the position of the touch (Figure 4-3). If the ray hits an AR plane, it returns an ARRaycastHit as the intersection and places the placedObject at the position of the hit (Figure 4-4).

Figure 4-3 Ray cast from where users touch the screen and hit the AR plane.
Figure 4-4 The script for Place objects on an AR plane.
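The Figure 4-4 listing does not reproduce legibly in this transcript. The following is a minimal sketch of the same idea written against the AR Foundation API (ARRaycastManager, ARRaycastHit); the placedObject and objectList names follow the description above, but the details of the original script may differ, and the check that ignores touches landing on UI elements is omitted.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaceObjects_OnARPlane : MonoBehaviour
{
    public GameObject placedObject;                     // prefab assigned in the Inspector (e.g., PlanPoint)
    public List<GameObject> objectList = new List<GameObject>();

    ARRaycastManager arRaycastManager;
    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Awake()
    {
        arRaycastManager = GetComponent<ARRaycastManager>();
    }

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Vector2 touchPosition = Input.GetTouch(0).position;

        // Cast a ray from the touch position against the trackable AR planes only.
        if (arRaycastManager.Raycast(touchPosition, hits, TrackableType.Planes))
        {
            // The first hit is the closest plane; place the prefab at the hit pose.
            Pose hitPose = hits[0].pose;
            objectList.Add(Instantiate(placedObject, hitPose.position, hitPose.rotation));
        }
    }
}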
The placedObject can be any game object; it is defined by dropping a prefab into the script component (Figure 4-5). In this case, when the user touches the screen, a PlanPoint, which is a vertex of the floor surface, is placed on the AR plane.

Figure 4-5 A prefab, PlanPoint, was dropped into the script component to define the placedObject.

4.1.3. Link Two Points
This module places a game object, linePrefab, to link two given game objects, point1 and point2, by changing its scale and rotation in three dimensions. It is used to generate a line between two objects, for example, to link the vertices and generate the boundaries of the surfaces. The linePrefab is first placed at the same position as point2. Then its scale is adjusted along the y-axis to match the distance between the two given game objects (Figure 4-6).

Figure 4-6 Place the linePrefab at point2 and adjust the scale.

The rotation of the linePrefab is calculated with trigonometric functions from the position coordinates of the two game objects. Several special cases need to be handled separately, such as when point1 and point2 have the same x coordinate (Figure 4-7).

Figure 4-7 Changing the rotation of the linePrefab.

4.1.4. Create Surface
This module returns a double-sided polygon mesh given a list of points. The polygon mesh is made of triangular meshes. The mesh.vertices array stores a list of point positions as coordinates, which are the vertices of the polygon. The mesh.triangles array stores a list of indices into mesh.vertices; starting from the beginning of mesh.triangles, every three items in the list define one triangle (Figure 4-8).

Figure 4-8 The polygon is made of multiple triangles.

The triangle mesh is single-sided: the mesh renderer and the mesh collider only appear on one side, and the mesh is transparent from the other side (Figure 4-9).
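As a rough sketch of this indexing (using a simple fan from the first vertex rather than the exact index pattern of the thesis script), the snippet below builds a single-sided polygon mesh from an ordered list of vertex positions; duplicating every triangle with the reverse winding order, as described next, is what makes the surface double-sided.

using System.Collections.Generic;
using UnityEngine;

public static class PolygonMeshBuilder
{
    // Builds a single-sided fan triangulation over an ordered list of polygon vertices.
    public static Mesh BuildSingleSided(List<Vector3> points)
    {
        var mesh = new Mesh();
        mesh.vertices = points.ToArray();

        var indices = new List<int>();
        // Triangles (0, i, i + 1) for i = 1 .. n - 2 fan out from the first vertex.
        for (int i = 1; i < points.Count - 1; i++)
        {
            indices.Add(0);
            indices.Add(i);
            indices.Add(i + 1);
        }

        mesh.triangles = indices.ToArray();
        mesh.RecalculateNormals();
        return mesh;
    }
}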
To generate a polygon with two sides, the indices in mesh.triangles for each triangle are listed both clockwise and counterclockwise, which creates two coincident triangles, one facing each side (Figure 4-10).

Figure 4-9 Single-sided mesh and double-sided mesh.
Figure 4-10 The indices in mesh.triangles for each triangle are listed both clockwise and counterclockwise.

The uv of the mesh defines the texture coordinates of the mesh; in other words, it controls the direction of the material texture. Therefore, the surface needs to be declared as "vertical" or "horizontal" so that the correct texture uv can be assigned (Figure 4-11).

Figure 4-11 Different uv settings for vertical meshes and horizontal meshes.

However, this algorithm cannot create every kind of polygon: it can create all convex polygons and only some concave polygons. A further discussion of this limitation is in Chapter 6.

4.1.5. Clone Object / Clone Objects
These two modules serve the same purpose but work on two different data structures: they copy game objects and paste them at the same location. Clone object is for a single game object (Figure 4-12), while Clone objects is for a list of game objects (Figure 4-13).

Figure 4-12 Clone object is for a single game object.
Figure 4-13 Clone objects is for a list of game objects.

4.1.6. Do Not Destroy
When the application loads another scene, Unity automatically destroys all game objects from the previous scene. Game objects that have this module assigned as a component are kept from being destroyed and remain in the following scenes (Figure 4-14).

Figure 4-14 Do not destroy the game object when loading other scenes.

4.1.7. Place Objects in Air
Similar to Place objects on AR plane, this module places a game object without any reference plane.
The 62 game object is created at the position of the device, which is defined as the location of the camera (Figure 4-15). if (Inp ut . t ou ch Cowi -> 0 && Input . to uc h e s O] . p h ase == T ouc hP h a.s e . Be gan) { · f (Eve nt Syst e rn . cur r en . Isl'o i nt e r Ov e rG am eO b je ct ( t ou c n. fi ng e r Id) ) { r et ur n ; n ew Ob j ec t = Ins t am . ia te (p la ce dO b jec, , pnoneCamera . t r ans form . po s i. t ion, p h c m e Caine r a . t r an sfonn. ro,a ti on) ; ob jec - Li st . Add (ne,;() bject ) , Figure 4-15 Plance object in air places the game object at the location of the camera. 4.1 .8. Assign Acoustic Parameters This module can search the materials database and assign the appropriate acoustic property data to the game object based on its material. Assign data goes through the whole database and compares the first item of each data line (database[i][OJ) and the material name of the game object. When database[n][OJ matches the material name, the value of SteamAudioMaterial will be replaced by the values from database[n][l} to database[n][7] and break to exit the loop (Figure 4-16). public mid Assi gnDa t a (GameObje ct gan: ;, Obj ect , .str in g: [] dat abase, str in g nao e) { str ing su:f i x = • (Instanc e)• : game Ob j;, ct . Ge t Comp onen- ( St eam Audiol!a t eria l> () . Pr e s et = Ma t eria l Pr e s et . Cust om; for ( i :1 t i = 0 : i < da.a base. Get L-en gt h (O) . i ++ ) { if ( str in g. Compar e (da t a b ase i ] 0] , game Ob jen . G et Comp onem <MeshR endere r > () . □a ter i al. name . Rep l ac e ( suf ix, " ) ) == 0) { gao eObject. Get.Comp onent <St eamAudio Materi al > () . Val ue = new Material Val ue ( : l oat . Parse ( dat abase : i l : ]) , float . Pars e ( data base [ i ] _ 2]) , fl oat . Par se (dat. abase [ i ] [3]) , fl oat . Par se (da t aba se[ i _ [ 4 ). : lo at . Par se ( dat abase [i] : ., J) , float . Pars e ( dat abas e [ i ] :oD, fl oa c Parse (dat abase[i ] [7])) : br eak ; Figure 4-16 Assign data compare and rewrite the material value. 4.1.9. Mesh area This module calculates the area of a mesh. Since the mesh is made of triangles, the area of the mesh is the summary of thee triangles. The area of a triangle can be calculated by using the coordinates of its vertices (Figure 4-17), which are saved in mesh. vertices (Figure 4-18). 63 0 X SAABC = ½ X l(x 2 � X1) (X3 - X1) J (yz - Y1 ) Cy3 - Y1 ) Figure 4-17 Calculate the triangle area with coordinates. fl oat area = 0; in t tri angl e_n = mesh. tri angl es . Length / 3 ; fo r ( i nt i = 0 ; i < tri an g l e_n ; i+ +) { Vector3 A = new Ve ct or3 0 ; Vector3 B = new Vect or3 0 ; Vector3 C = new Vec tor3 0 ; A= mesh. vert :i c es [mesh. tri angl es [ i >I' 3] ] ; B = mesh. v ert :i ces [mesh. tri ang l es i * 3 + 1 ] . ; C = mesh. v ert i c es mesh. tri ang l es i >I' 3 + 2 ] j ; flo at a = (B. y - A. y) >I' (C. ·z - A. z) - (C. y - A. y) * (B. z - A. z) ; flo at b = (B. ;; - A. x ) >I' (C. ·z - A. z) - (C. ;; - A. ;;) * (B, z - A. z) ; fl oat c = (C. ;; - A. y) * (!! . z - A. y) - (B. x - A. x) * (C, y - A. y) ; area += 0 . 5.f * Mathf . Sqrt (Mathf . Pow (a, 2) + Mathf. Pow (b , 2) + Mathf . Pow (c, 2)) ; return area/2 ; Figure 4-18 Formula and script that calculates the area of a double-side mesh. 4.1.10. Eyring formula Based on Eyring' s formula (4-1), the reverberation time (T60) is calculated by the given room volume, surface areas, and their absorption coefficient (Figure 4-19). 
4.2. Scripts from Other Sources
In addition to the custom modules, Soundar also uses scripts from other sources, such as SDKs, Unity assets, and code written by other developers and released online. These scripts can be used directly as script components in Unity.

4.2.1. Scripts from AR Foundation
AR Foundation provides the scripts to detect surfaces, create AR planes, generate AR ray casts, and build the AR environment. In each scene, an empty object AR Session Origin needs to be created in the hierarchy, containing an AR Camera as its child. Three script components are added to the AR Session Origin: AR Session Origin, AR Plane Manager, and AR Raycast Manager (Figure 4-20).

Figure 4-20 AR Session Origin in the hierarchy.

The AR Session Origin script builds the basic AR environment. The AR Plane Manager detects surfaces and creates AR planes; its plane prefab controls how the AR planes look, and its detection mode restricts the orientations for which AR planes can be created. Because Soundar only needs to detect a horizontal plane for the floor surface, the detection mode is set to "Horizontal." The AR Raycast Manager is the script for casting rays that can interact with AR objects such as the AR planes. Not all of these script components are used in every scene, and other script components are added to AR Session Origin depending on the needs of each scene; for instance, the main script of each scene is added under AR Session Origin.

Another empty object, AR Session, is also needed, but only in the scene "Set Room" (Figure 4-21). AR Session saves the AR environment and controls the tracking status, and its AR Input Manager is necessary for the system to detect the pose of the device. As all scenes share the same AR environment, the AR Session only needs to be defined once; the module Do not destroy is added to this object to keep it in all the following scenes. When Soundar restarts or creates a new project, the AR Session is initialized and saves the new AR environment.

Figure 4-21 AR Session in "Set Room."
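For reference, configuring the plane manager along these lines takes only a few calls; the sketch below assumes AR Foundation 4.x naming (on earlier versions the property is called detectionMode) and is not taken from the thesis scripts. The HideAllPlanes helper mirrors the step, used later in Set Room_Main, of hiding the visualized planes once the floor outline has been captured.

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaneManagerSetup : MonoBehaviour
{
    ARPlaneManager planeManager;

    void Awake()
    {
        planeManager = GetComponent<ARPlaneManager>();
        // Only horizontal planes are needed, since the user outlines the floor.
        planeManager.requestedDetectionMode = PlaneDetectionMode.Horizontal;
    }

    // Hide the visualized AR planes once the room outline has been captured.
    public void HideAllPlanes()
    {
        foreach (var plane in planeManager.trackables)
            plane.gameObject.SetActive(false);
    }
}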
4.2.2. Steam Audio
Steam Audio simulates the sound performance based on the scene environment and renders the sound, including the direct sound, the indirect sound, and the reverb. The sound source has to have the component "Steam Audio Source" added, which holds the basic settings of the sound source. In this panel, the prefab of the sound source is set up with both direct and indirect sound, which produces a more realistic simulation (Figure 4-22). The reverb is simulated by a sound mixer that provides real-time mixing of the sound source (Figure 4-23).

Figure 4-22 Basic setup of the sound source.
Figure 4-23 Reverb mixer on the sound source.

To simulate the sound source, the environment in the scene also has to be set up as "Steam Audio Geometry" and assigned a "Steam Audio Material." Normally, after the environment is set up, the scene needs to be pre-exported as a .phononscene file, a special file format that is loaded when the application runs, to the "StreamingAssets" folder (Figure 4-24).

Figure 4-24 Original scripts for exporting the phonon scene.
However, since the environment in Soundar is built in real time, the scene cannot be pre-exported. In addition, the "StreamingAssets" folder is read-only, which means Soundar can only read from this folder and cannot write to it. Therefore, the export script was modified to export the scene in real time to the persistent data folder (Application.persistentDataPath), which allows both reading and writing (Figure 4-25).

Figure 4-25 Modified scripts for exporting .phononscene files.

4.3. Simulation Process
Soundar has seven scenes: "Start Screen," "Set Room," "Edit Room," "Set Sound," "Edit Sound," "Move Sound," and "Run Simulation" (Figure 4-26). The application starts with the "Start Screen" and then automatically moves to "Set Room" after reminding users to wear earphones and guiding them through the volume calibration. After setting up the room in "Set Room," users can choose to edit the room or continue to set the sound sources in "Set Sound." Similarly, after setting up the sound, users can edit the sound or go to "Run Simulation." In "Run Simulation," users can see the simulation results and listen to the feedback. If users want to edit the room or the sound, they can choose to go to "Edit Room" or "Edit Sound," and when users choose to move a sound source in "Edit Sound," it leads to "Move Sound."
Users can create a new project from the menu in all scenes except "Start Screen." The New Project button takes users back to "Set Room" and clears all data from the previous project.

Figure 4-26 Relationship and connections in the seven scenes.

4.3.1. Start Screen Main
"Start Screen" is the first scene shown after the application runs. It shows the logo and the name of Soundar and leads users through the calibration (Figure 4-27). The calibration plays a constant 500 Hz tone at 30 dB and lets users adjust the volume setting of their device to the point where they can barely hear the sound. Meanwhile, the scene runs the main script StartScreen_Main, which loads the material database into the application and then loads the next scene (Figure 4-28).

Figure 4-27 "Start Screen" shows the logo and the name of Soundar and guides the calibration ("Please put on your headphones." "Please adjust the volume of your device to where you can barely hear this sound." "You are all set! Touch screen to start.").
Figure 4-28 StartScreen_Main loads the material database and loads the next scene.

4.3.2. Set Room Main
This main script is used for creating a room based on the real environment. It can only create a room with a horizontal floor and ceiling and vertical walls, and the ceiling has to have the same shape as the floor. The surfaces of the room are first set up with default materials: wooden floor, unpainted concrete walls, and acoustic ceiling (Figure 4-29). All material textures in Soundar are set to be translucent so that users can look through the room surfaces.

Figure 4-29 Texture maps of the default materials: (a) wooden floor; (b) unpainted concrete; (c) acoustic ceiling.

Soundar first detects the surfaces in reality and generates horizontal AR planes using the scripts from AR Foundation (Figure 4-30).

Figure 4-30 Soundar detects surfaces and generates horizontal AR planes.

When Soundar successfully detects at least one AR plane, users can touch the screen and place floor vertices on the AR plane by using Place objects on AR plane. All points are saved in the list floorPointList (Figure 4-31).

Figure 4-31 Get all points generated by Place objects on AR plane.

For each additional point, the script runs Link two points to draw a line that connects the new point to the previous one (Figure 4-32); all lines are saved in the list floorLineList. When the new point is less than 0.1 meter from the first point, the script deletes the last item of floorPointList and links the new last point with the first point, treating the floor shape as closed (Figure 4-33). If there are fewer than three points, which cannot define a surface, a hint text shows up to warn the user (Figure 4-32).
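A condensed sketch of this close-the-loop check is shown below (the Figure 4-33 listing is not legible in this transcript). It assumes the LinkTwoPoints module from Section 4.1.3 exposes a LinkPoints(GameObject, GameObject) helper, as the original listings suggest; names and details are otherwise illustrative.

using System.Collections.Generic;
using UnityEngine;

public class FloorOutline : MonoBehaviour
{
    public List<GameObject> floorPointList = new List<GameObject>();
    public List<GameObject> floorLineList = new List<GameObject>();
    public LinkTwoPoints linker;    // module that draws a line prefab between two points

    // Called after a new floor vertex has been placed on the AR plane.
    // Returns true once the outline has been closed with at least three vertices.
    public bool TryCloseLoop()
    {
        int count = floorPointList.Count;
        if (count < 2) return false;

        GameObject newest = floorPointList[count - 1];
        float backDistance = Vector3.Distance(newest.transform.position,
                                              floorPointList[0].transform.position);

        if (backDistance >= 0.1f)
        {
            // Not near the first vertex yet: just link the new point to the previous one.
            floorLineList.Add(linker.LinkPoints(floorPointList[count - 2], newest));
            return false;
        }

        // Within 0.1 m of the first vertex: drop the redundant point and close the outline.
        Destroy(newest);
        floorPointList.RemoveAt(count - 1);
        floorLineList.Add(linker.LinkPoints(floorPointList[floorPointList.Count - 1], floorPointList[0]));

        return floorPointList.Count >= 3;   // fewer than three points cannot define a surface
    }
}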
Figure 4-32 Lines between the new point and the previous one, and the warning.
Figure 4-33 Link vertices and enclose the floor shape.

The floorPointList is then used to create the floor surface, and the surface material is set to the default material (Figure 4-34).

Figure 4-34 Create the floor surface and assign the default texture.

When the floor surface is set, Set Room_Main copies the floorPointList, the floorLineList, and the floor surface as the ceiling by using Clone object and Clone objects, and assigns the default ceiling material to the ceiling surface (Figure 4-35). The elevation of the ceiling is calculated from the rotation of the device (Figure 4-36). If the rotation of the device is out of range, which would place the ceiling below the floor or at an infinite distance, a hint text shows up to warn the user. If the elevation of the ceiling is in the proper range, users can touch the screen to confirm the location of the ceiling (Figure 4-37). The script also draws lines between the corresponding points of the floor and ceiling as the boundaries of the walls (Figure 4-38).

Figure 4-35 Clone components from the floor surface to create a ceiling surface.
Figure 4-36 Schematic diagram and the script for the elevation calculation.
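The elevation calculation in Figure 4-36 does not reproduce legibly here. The following is a simplified reconstruction of the geometric idea only: with the camera aimed at the wall–ceiling line above the floor's centre, the ceiling height is the camera height plus the horizontal distance times the tangent of the upward pitch. It uses the camera's Euler pitch angle, whereas the author's script works from the raw quaternion component, so the two are not identical.

using UnityEngine;

public class CeilingElevation : MonoBehaviour
{
    public Camera phoneCamera;

    // Estimates the ceiling height above the floor, assuming the user is aiming the
    // camera at the point on the ceiling above the given floor centre.
    public float Estimate(Vector3 floorCenter)
    {
        Vector3 cam = phoneCamera.transform.position;

        float distanceY = cam.y - floorCenter.y;   // camera height above the floor
        float distanceZ = Vector2.Distance(new Vector2(cam.x, cam.z),
                                           new Vector2(floorCenter.x, floorCenter.z));

        // Unity's Euler x angle is positive when tilting down, so negate it and
        // unwrap the 0..360 range to get a signed upward pitch.
        float pitch = phoneCamera.transform.eulerAngles.x;
        if (pitch > 180f) pitch -= 360f;
        float pitchUp = -pitch * Mathf.Deg2Rad;

        // Ceiling height = camera height + horizontal distance * tan(upward pitch).
        return distanceY + distanceZ * Mathf.Tan(pitchUp);
    }
}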
cl oneO bject s(floorL i neLi st); Figure 4-35 Clone components from the floor surface to create a ceiling surface. o· PP' ✓P0 2 - P'0 2 • P'O' PP' x tan a . P' --------- · 0 /// Se t u p r o om h e ig h t Me sh floo r Me sh = fl o o r Su rface . Ge tC ornpon e nti nCh i l dr e n <M: eshf i l t e r > 0 . n: es h : fl oo :r. d i s t a nc eY = p,h oneC arn er a . r ans fo m . p o s i h o n . y -fl o orli!es h . b ou nd s. c -e n , er . y : fl oa, di s t a nc e L = l ath f. <:qr t ( Ma thf . Pow ( Ve c t or 3. Di s t a nc e ( p h o n eC am e ra . t ra n sfom . p o si ti o n.. : oo rM es h . b o u nd s . c e nte r ) , 2) -Ma t hf . Pow ( di s t a nc eY , 2 ) ) ; e l e v ati o n = d i s t a nc eY -d i stan ce Z it: Mathf . Ian ( p h o ne Came ra . t ra n s form . r o t a · o n. x 'i' Mathf .. PI) : Figure 4-36 Schematic diagram and the script for the elevation calculation. 76 if (el evation ( D) { el evation = 0; Direc t ion. text = •c eil ing is ·ower than fl oor. • ; i f (ph oneCamera. trans form. rot at ion. x > = 0. 5) { el evat ion = 0; Direct ion. tex· = •c e i l ing is _ower than floor . • ; else if (ph oneCam era. t rans form. rotat ion. x <= -0. 5) el se el evat ion = 0; D irect ion. text ce i _i ngPosi tion *B evation 1s :n finit y. · : new Vect or3 (f _oorSurface. tran s form. pos it ion. x, f _oorSurfac e. t rans form. pos.it ion. y + e evation, f_oorSurfac e. t ransfor m.p os : tion. z ) ; cei ingSurface. ransform. pos it ion = ceil ingPosition ; Figure 4-37 Different conditions for ceiling elevation. 77 foreach (Gam eObject lin e in wal l LineList) { GameOb Ject. Des troy (l ine) ; wall L:n eLis t. Cl ear () ; for (:nt i = O; i ( fl oorPointList . Count ; i--) { Vecto r3 pointPos ition = new Vecto r3 (floorPointList[ i. ] . tr= s form. pos tion. x, fioorPointLis t [i.] . trfill s form . pos tion. y + el evation, fl oorPointLis [i] . trfill sform .pos t ion. z ) ; cei l ingPointLi. st [ i ] . transform. posi •ion = pointPos it i. on ; Vector3 l inePosi. tion = ne•, Vecto r3 (fl oorLineLis [ i]. trans form. pos it ion. x, fl oorLineList [ i .t rans form.p os ition. y + el evation, fl oorLineLi s [ i] . transform.p os ition. z ) ; cei l ingLineL is t [ i] . trans form. posi t ion = _inePos it ion ; for (:nt i = 0; i < fl oorPointList . Count ; i--) { Gam eObject wal l Line = new Gam eObject (); wa_ l Line = GetComponent( LinkT woPoin ts ) () . LinkP o ints (fl oor Po in List [ i] , cei 1 ingPointL is t [ i]) ; wa l LineLis . Add <�• all L ine) ; Direction. tex • = HE evation : • + el evat ion. ToString () - H\ n\ rTap screen to confirm. ' ; Figure 4-38 Add lines between floor and ceiling as the boundaries of the walls When users touch the screen and confirmed the ceiling, Set Room_ Main generates walls by connecting points from thefloorPointList and ceilingPointList and then assigns the default wall materials to each wall surface (Figure 4-39). At this point, the room is set up. When the room is set up, the "edit" and "finish" buttons will show up (Figure 4-40). The button "edit" leads to the scene "Edit Room", and the button "finish" leads to the scene "Set Sound." 78 if (Input , touchCount > 0 && Inp ut. GetTouch (O ) , phase == Touc hPh ase. B egan) { Dire - ct ion. t ext = "Room set ! \n\r Click check to next step. \n\r Or select a surface to edit. " : arP lan eManager, enabled = fa se : get\f all = true : List<Gam eOb ject > "' allPoint.Lis t = new List <Game O bject> 0 ; Game O bject wall Surface; int coll!lt = floorPointList. 
Figure 4-39 Create wall surfaces and assign the default material.
Figure 4-40 Buttons "edit" and "finish" show up when all room surfaces are set.

4.3.3. Edit Room Main
This script allows users to change the material of each surface. It first formats the material dropdowns and loads the material options from the material database (Figure 4-41).

Figure 4-41 Format all material dropdowns.

When users touch the screen, the app casts a ray at the touch and returns the hit, which represents the object that the ray hits. By querying the tag of the hit object, it chooses the corresponding material dropdown to show for each type of surface (Figure 4-42). When a dropdown is shown on the screen, touching any blank space hides it (Figure 4-43). Users can choose among the material options in the dropdown; when the value of the dropdown changes, the module Change material is activated and changes the material of the hit object. This script, as well as the other UI-related scripts, is explained in Section 4.4.

Figure 4-42 Different dropdowns are shown when selecting different surfaces.
Figure 4-43 Select a surface to show the dropdown, and touch blank space to hide it.
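A trimmed sketch of that selection logic is shown below. It assumes each surface was tagged "floor," "wall," or "ceiling" when it was created and that a Dropdown exists for each tag; the original script additionally ignores touches that land on UI elements and tracks whether a dropdown is already open.

using UnityEngine;
using UnityEngine.UI;

public class SurfacePicker : MonoBehaviour
{
    public Camera phoneCamera;
    public Dropdown floorDropdown;
    public Dropdown wallDropdown;
    public Dropdown ceilingDropdown;

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Touch touch = Input.GetTouch(0);
        Ray ray = phoneCamera.ScreenPointToRay(touch.position);

        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // Pick the dropdown that matches the tag assigned when the surface was built.
            Dropdown dropdown = null;
            if (hit.transform.CompareTag("floor")) dropdown = floorDropdown;
            else if (hit.transform.CompareTag("wall")) dropdown = wallDropdown;
            else if (hit.transform.CompareTag("ceiling")) dropdown = ceilingDropdown;

            if (dropdown != null)
            {
                dropdown.gameObject.SetActive(true);
                dropdown.transform.position = touch.position;   // show the menu at the touch point
            }
        }
    }
}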
4.3.4. Set Sound Main
This main script first references the module Place objects in air to add sound source prefabs at the location of the device. The sound source prefab contains a sphere, which represents a point source; an SPL tag, which shows the current SPL of the sound source and is only visible in the scene "Run Simulation"; and an invisible text, which stores whether the sound source is muted (Figure 4-44). The sound file of the sound source defaults to a piece of an anechoic recording of symphonic music, and the average SPL of the music defaults to 60 dB(C). The SPL of each sound source can be changed in the scene "Edit Sound."

Figure 4-44 Sound source prefab.

For each new sound source added to the scene, the component Do Not Destroy is added and a new sound mixer is duplicated so that the sound source can be controlled individually (Figure 4-45). A counter shows how many sound sources have been placed in the project (Figure 4-46).

Figure 4-45 Place sound source objects.
Figure 4-46 The counter shows how many sound sources are placed in the space.

4.3.5. Edit Sound Main
The scene "Edit Sound" first hides the SPL tags of all sound sources if they are shown. It then loads the current options for the sound file into the dropdown (Figure 4-47).

Figure 4-47 Hide the SPL tags and load the current sound file options.
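Populating a dropdown from the loaded database, as both Edit Room and Edit Sound do, takes only a few lines of the Unity UI API; a minimal sketch is below, assuming the array layout produced by CSV to list, where element [0] of each row is the display name.

using UnityEngine;
using UnityEngine.UI;

public static class DropdownHelper
{
    // Fills a dropdown with the first column (the name) of every row in a database array.
    public static void Populate(Dropdown dropdown, string[][] database)
    {
        dropdown.options.Clear();
        foreach (string[] row in database)
        {
            dropdown.options.Add(new Dropdown.OptionData() { text = row[0] });
        }
        dropdown.RefreshShownValue();
    }
}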
When the tag of the hit is "sound source," the sound edit option UI shows at the position at users' touch and the "add new" button changed into the "delete" button. When users touch an empty space, the edit option UI disappear and all buttons are restored to the original form (Figure 4-48). if (Input. touchCoun t ) 0 && Input. GetTouch (O) . phase == TouchPhase. Began) { Touch touch = Input. GetTouch(O): Ray ray = phoneCamera. Screen?ointToRay (touch. position) ; if ( ! sho,.-Op t ionMenu) { if (EventSystem. current. Is?ointerOverGameObject (touch. fing erid) ) { return : } if (Phy s ics. Raycast (ray, out hit)) { if (hit. transform. tag == "sound source") I opt ion Menu. SetActive (true): optionMenu. transform. posit ion = touch. posi tion : sho�p t ion.M enu = true ; addButton. SetActive (fa_se) : del eteButton. SetActive (true); if (EventSystem.current. Is?ointerOverGameObject (touch. fingerid) ) { return : } opt ionMenu.gam eObject . SetActive (fa: se) : sho,.-O ptionMenu = fa: se : soun dOption. opt i o�MenuWhite (optionMenu); addButton. SetActi, 0 e (true) : deleteButton. SetAct ive (fa:se); Figure 4-48 "Edit Sound" when selecting a sound source and not selecting a sound source. 84 There are four edit options: "Change SPL," "Change sound file," "Mute," and "Move" (Figure 4-49). "Change SPL" allows users to change the average SPL(C) of the selected sound source. The input should be an integer within the range of 0 to 100. "Change sound file" allows users to change what to play of the selected sound file. The five given options for users are impulse, which is a balloon blast; a constant 500 Hz tone; a piece of symphonic music (Patynen, Pulkki, and Lokki 2008, 856- 865 ) ; and two segments of speech, which are the recordings of Harvard sentences in male and female voices (Kahal 2002, 9). All of these sounds are recorded in anechoic chambers. "Mute" allows users to stop playing the selected sound source. "Move" leads users to the scene "Move Sound" to change the location fo the selected sound source. Users are also allowed to add new sound sources by going back to the scene "Set Sound." For more information about the operations and the sound editor UI, see Section 4.4. Figure 4-49 Four edit options in "Edit Sound." 4.3.6. Move Sound Main In this scene, users can move the sound source along the coordinate axes. The axes are based on the world coordination system, which will not change the directions with the location and the orientation of the devices (Figure 4-50). 85 Figure 4-50 Users can move the sound source along the coordinate axes. Move Sound_ Main will first save the original location of the sound source. When users touch and drag one of the axes, it calculates the distance of the finger movement and adds or minuses this distance to the correlated coordination of the sound source position (Figure 4-51). For example, when users drag the x axis, the sound source will move along the x-axis in the same direction of the drag. This scene also shows the current elevation of the sound source for users to have a better understanding of where the sound source IS. 86 if (touch . p h as e == TouchPh ase . Began) pose = t ouc h . posi· ion : t ag = h it .l1X i s . trans form. tag ; fl oa d irec t ionX = t ouc h . posi ion.x - pos e. x ; floa · d irec t ionY = t ouc h . po s i · ion.y - pos e. y ; fl oa cameraX = phoneCa mera . t rans form. p os i •ion. x - axis . t ransform. pos it ion . x ; fl oa· c ameraZ = phoneCa mera . trans form. p os i · ion. z - ax is . tran sform. pos it ion . 
4.3.7. Run Simulation Main
When users start the simulation, the scene first sets the playing state of each sound source based on the related settings (Figure 4-52) and enables the SPL tags of the sound sources, which show the SPL of each sound source. The SPL tags always face the device and always keep a readable scale (Figure 4-53).

Figure 4-52 Check and set the playing state.
Figure 4-53 The SPL tags always face the device and rescale so that the information stays readable.

The current SPL of the sound received by the user is calculated from the sound signal sent to the device (Naletto 2011), which contains both the background sound recording and the sound from the sound sources. AudioListener.GetOutputData returns the voltage samples of what users are currently hearing. All .wav files of the sound source audio clips have a 48,000 Hz sample rate, which means the sound clips have 48,000 samples per second. The sample array has a size of 1024, which holds all the voltage samples of the last 21.3 ms. Using the conversion formula, the voltage samples can be translated into sound pressure (Figure 4-54).

Figure 4-54 Script that calculates the SPL heard by users.
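The Figure 4-54 listing follows an approach posted by user aldonaletto on Unity Answers, as cited above. A hedged reconstruction of the idea is shown below: the RMS of the most recent output buffer is converted to decibels against a reference amplitude, and the calibration offset is the empirically chosen constant from the thesis, shown here only as a placeholder that would need to be tuned per device.

using UnityEngine;

public class ListenerSPL : MonoBehaviour
{
    const int sampleSize = 1024;          // about 21.3 ms of audio at 48 kHz
    const float refValue = 0.1f;          // reference amplitude used by the thesis script
    const float calibration = 68.09f;     // empirical offset from the thesis; device dependent

    readonly float[] left = new float[sampleSize];
    readonly float[] right = new float[sampleSize];

    // Returns the approximate SPL (dB) of what the listener is currently hearing.
    public float CurrentSPL()
    {
        AudioListener.GetOutputData(left, 0);    // channel 0
        AudioListener.GetOutputData(right, 1);   // channel 1

        float sum = 0f;
        for (int i = 0; i < sampleSize; i++)
        {
            float s = left[i] + right[i];
            sum += s * s;
        }

        float rms = Mathf.Sqrt(sum / sampleSize);
        float spl = 20f * Mathf.Log10(rms / refValue) + calibration;
        return Mathf.Max(spl, 0f);
    }
}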
Every time users open "Run Simulation," the scene records the environment for three seconds and uses the last second of the recording to calculate the average SPL of the current environment (Figure 4-55). Using only the last second of the recording eliminates the low SPL at the beginning of the recording and gives a more realistic result. This one-second recording loops during the simulation to represent the steady background sound of the environment (Figure 4-56).

Figure 4-55 The SPL of the environment background sound.
Figure 4-56 Script for calculating the environment background SPL.

The user's current SPL is shown at the center of the SPL scale in real time, and the arrow of the scale rotates to the corresponding position on the scale (Figure 4-57).

Figure 4-57 The SPL scale shows the real-time SPL and the reverberation time.

The reverberation time is calculated with Eyring's formula. The calculation uses the area of each room surface and the absorption coefficient at 500 Hz, which is the LowFreqAbsorption value of the SteamAudioMaterial (Figure 4-58). The resulting reverberation time is shown at the upper-right corner of the SPL scale (Figure 4-57).

Figure 4-58 Script that calculates the reverberation time of the room.

4.4. User Interface Design
The user interfaces are not only a display that contains information; they also play significant roles in connecting the different scenes, modifying values, and controlling the objects in the scene.
4.4.1. Menu Buttons and Settings
The menu button is in the upper-left corner of the screen in all scenes except the "Start Screen." When users tap the menu button, the menu shows. The menu includes "New Project," "Save Sound," "Settings," "About," and "Quit" (Figure 4-59).

Figure 4-59 The menu shows when users tap the menu button.

"New Project" clears all the objects created by the user and goes back to the scene "Set Room" (Figure 4-60), where the user can set up a new room and then place new sound sources.

Figure 4-60 "New Project" contains the module New project.

"Settings" and "About" are two separate windows (Figure 4-61). The "About" window contains the current version, the information about the author, and the acknowledgments. In the "Settings" window, users can control which kinds of results are shown in the simulation scene or mute all sound sources (Figure 4-62). Since the menu and the "Settings" window exist in all scenes except the "Start Screen," the settings need to stay consistent; the modules Setup settings and Update settings are therefore called in each scene to read the values of the settings from the previous scene and to write the new values from the current scene if anything has changed (Figure 4-63).

Figure 4-61 Settings and About.
Figure 4-62 Modules in "Settings" that control the visibility and the playing state.
Figure 4-63 Setup settings and Update settings read and write the values of the settings.

"Quit" ends the whole process of the application and goes back to the device desktop (Figure 4-64). When the menu or the windows of "Settings" and "About" are shown on the screen, a transparent button "Empty," sitting underneath these windows and covering the whole screen, is also set active. When users touch any blank space on the screen, they actually click the "Empty" button, which calls the module Clear that hides all these windows and the button itself (Figure 4-64).

Figure 4-64 Scripts of the modules Quit and Clear.

4.4.2. Step Control
The step control buttons are in the upper-right corner of each scene except the "Start Screen." The module Edit Room leads to the scene "Edit Room," and the module Edit Sound leads to the scene "Edit Sound." When users click the "edit" button in the scene "Run Simulation," two options show up that lead to "Edit Room" and "Edit Sound" (Figure 4-65).

Figure 4-65 The modules Edit Room, Edit Sound, and edit.

The module Finish Room has two directions. If the sound sources have not been set, which means the module is used in the scene "Set Room," Finish Room saves the surfaces to the game object lists, assigns Steam Audio Geometry and Steam Audio Material components to the surface objects, and leads to the scene "Set Sound." If the sound sources have already been set, which means the module is used in the scene "Edit Room," it leads to the scene "Run Simulation" (Figure 4-66). Both directions reassign the acoustic material parameters and export the scene as a phonon scene, which is named after the current scene. After the scene file is generated, it is duplicated under the name "3_Run Simulation.phononscene," so that when the scene "Run Simulation" starts, this phonon scene can be read for the simulation (Figure 4-67).

Figure 4-66 Two directions in Finish Room.
Figure 4-67 Finish Room assigns the material acoustic parameters and exports the phonon scenes.

The module Finish Sound saves all sound sources in an array and leads to the scene "Run Simulation" (Figure 4-68). Add Sound leads to the scene "Set Sound," which allows users to add a new sound source (Figure 4-69).

Figure 4-68 Finish Sound saves all sound sources and leads to "Run Simulation."
Figure 4-69 Add Sound leads to the scene "Set Sound."

4.4.3. Material Dropdown
Two modules are used to control the material dropdowns: Show dropdown and Change material. Show dropdown not only controls the visibility of the dropdowns but also identifies the current material of the hit object to ensure that the dropdown is set to the correct material (Figure 4-70).

Figure 4-70 Identify the material of the object and assign the correct value to the dropdown.
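A short sketch of that matching step (the Figure 4-70 listing is only partly legible here): Unity appends " (Instance)" to the name of an instantiated material, so the suffix is stripped before looking the name up among the dropdown options. Details may differ from the author's script.

using UnityEngine;
using UnityEngine.UI;

public class MaterialDropdownHelper : MonoBehaviour
{
    // Shows the dropdown at the touch position, preselecting the surface's current material.
    public void ShowDropDown(Touch touch, RaycastHit hit, Dropdown dropdown)
    {
        string currentMaterial = hit.transform.GetComponent<MeshRenderer>()
                                    .material.name.Replace(" (Instance)", "");

        // Select the option whose text matches the current material name.
        dropdown.value = dropdown.options.FindIndex(o => o.text.Equals(currentMaterial));

        dropdown.gameObject.SetActive(true);
        dropdown.transform.position = touch.position;
    }
}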
"Quit" ends the whole process of the application and returns to the device desktop (Figure 4-64). When the menu or the "Settings" and "About" windows are shown on the screen, a transparent "Empty" button underneath them, covering the whole screen, is also set active. When users touch any blank space on the screen, they actually click the "Empty" button, which calls the module Clear that hides all these windows and the button itself (Figure 4-64).

Figure 4-64 Scripts of the modules Quit and Clear.

4.4.2. Step Control

The step control buttons are in the upper-right corner of each scene except the "Start Scene." The module Edit Room leads to the scene "Edit Room," and the module Edit Sound leads to the scene "Edit Sound." When users click the "edit" button in the scene "Run Simulation," two options show up and lead to "Edit Room" and "Edit Sound" (Figure 4-65).

Figure 4-65 The modules Edit Room, Edit Sound, and edit.

The module Finish Room has two directions. If the sound sources have not been set, which means the module is used in the scene "Set Room," Finish Room saves the surfaces to the game object lists, assigns Steam Audio Geometry and Steam Audio Material components to the surface objects, and leads to the scene "Set Sound." If the sound sources have already been set, which means the module is used in the scene "Edit Room," it leads to the scene "Run Simulation" (Figure 4-66). Both directions reassign the acoustic material parameters and export the scene as a phonon scene, named after the current scene. After the scene is generated, the phonon scene is duplicated under the name "3_Run Simulation.phononscene," so that when the scene "Run Simulation" starts, this phonon scene can be read for the simulation (Figure 4-67).

Figure 4-66 Two directions in Finish Room.

Figure 4-67 Finish Room assigns the material acoustic parameters and exports the phonon scenes.
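The duplication step described above, copying the exported phonon scene so that the "Run Simulation" scene can load it, amounts to a file copy in Application.persistentDataPath. The sketch below shows only that copy step; the file names follow the description above, while the call that actually produces the phonon scene is omitted because its exact API depends on the Steam Audio plugin version.

using System.IO;
using UnityEngine;
using UnityEngine.SceneManagement;

public class PhononSceneExport : MonoBehaviour
{
    // Copy the phonon scene exported for the active scene so that
    // "Run Simulation" can find it under its own name.
    public void DuplicateForRunSimulation()
    {
        string sourceName = Path.GetFileNameWithoutExtension(
            SceneManager.GetActiveScene().name) + ".phononscene";
        string targetName = "3_Run Simulation.phononscene";

        string sourcePath = Path.Combine(Application.persistentDataPath, sourceName);
        string targetPath = Path.Combine(Application.persistentDataPath, targetName);

        if (File.Exists(targetPath))
        {
            File.Delete(targetPath);   // overwrite any stale copy from a previous export
        }
        File.Copy(sourcePath, targetPath);
    }
}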
The module Finish Sound saves all sound sources in an array and leads to the scene "Run Simulation" (Figure 4-68). Add Sound leads to the scene "Set Sound," which allows users to add a new sound (Figure 4-69).

Figure 4-68 Finish Sound saves all sound sources and leads to "Run Simulation."

Figure 4-69 Add Sound leads to the scene "Set Sound."

4.4.3. Material Dropdown

Two modules control the material dropdowns: Show dropdown and Change material. Show dropdown not only controls the visibility of the dropdowns but also identifies the current material of the hit object to ensure the dropdown is set to the correct option (Figure 4-70).

Figure 4-70 Identify the material of the hit object and assign the correct value to the dropdown.
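The matching step described above, reading the current material name from the tapped surface and pre-selecting the corresponding dropdown option, can be sketched as follows. Stripping the "(Instance)" suffix is needed because Unity appends it to material names at runtime; the class and method names here are illustrative assumptions rather than the exact thesis code.

using UnityEngine;
using UnityEngine.UI;

public class MaterialDropdownHelper : MonoBehaviour
{
    // Show the dropdown at the touch position, pre-selected to the material
    // currently assigned to the surface that was hit.
    public void ShowDropdown(Vector2 touchPosition, RaycastHit hit, Dropdown dropdown)
    {
        // Unity appends " (Instance)" to instantiated material names; remove it
        // so the name can be compared against the dropdown option texts.
        string currentMaterial = hit.transform
            .GetComponent<MeshRenderer>().material.name
            .Replace(" (Instance)", "");

        int index = dropdown.options.FindIndex(o => o.text == currentMaterial);
        if (index >= 0)
        {
            dropdown.value = index;
        }

        dropdown.gameObject.SetActive(true);
        dropdown.transform.position = touchPosition;
    }
}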
The module Change material gets the material from the library that matches the current option of the dropdown (Figure 4-71). This module runs whenever the value of the dropdown changes, so the material of the object changes when users select another option in the dropdown (Figure 4-72).

Figure 4-71 Get the material in the library that matches the current option of the dropdown.

Figure 4-72 Change the material of the object when the dropdown value changes.

4.4.4. Sound Option

When users choose a button in the sound edit option UI, the button is highlighted and the corresponding options are shown (Figure 4-73). The values of the options are updated to the current data of the chosen sound source. For example, when selecting "Sound SPL," the module Option_SPL linked to this button replaces the picture of the "Sound SPL" button and restores the other buttons. It also shows the option of "Sound SPL," which is the SPL input field, with the current SPL value shown as the default (Figure 4-74). Instead of showing any options, Option_move directly leads to the scene "Move Sound" (Figure 4-75).

Figure 4-73 Soundar highlights the selected button and shows the corresponding option.

Figure 4-74 Script of the module Option_SPL.
Figure 4-75 Option_move directly leads to the scene "Move Sound."

Sound Option White hides all options and restores the buttons (Figure 4-76), so that when users open the sound edit options again, the original UI is shown.

Figure 4-76 Sound Option White hides all options and restores the buttons.

4.4.5. Sound Control

All default sound files are pre-set to 60 dB(C) through the mixer added to the audio source. Each time the value of the sound SPL input field is changed, SPL Input assigns the new value to the SPL tag of the selected sound source and also changes the volume of the mixer (Figure 4-77).

Figure 4-77 SPL Input changes the SPL tag value and the volume of the mixer.

When the value of the sound file dropdown changes, Change sound file changes the audio clip of the selected sound source to the one with the same name. All the sound clips are .wav files saved in the "Resources" folder. If the new clip is the impulse file, the sound is played only once; otherwise, the sound is looped (Figure 4-78).

Figure 4-78 Script for Change sound file.
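The two control steps described above, mapping the entered SPL to a mixer volume and swapping the audio clip with looping disabled only for the impulse file, can be sketched as follows. The exposed parameter name "volume_mixer" and the idea of an SPL-to-attenuation offset follow the description in this section, but the offset value and class structure are assumptions for illustration.

using UnityEngine;
using UnityEngine.Audio;

public class SoundControlSketch : MonoBehaviour
{
    public AudioSource source;   // the selected virtual sound source
    public AudioMixer mixer;     // mixer routed through the source's output group

    // Offset between the displayed SPL and the mixer volume in dB; the actual
    // value depends on how the default sound files were calibrated (60 dB(C) here).
    const float SplToMixerOffset = -80f;

    // Map the SPL typed by the user to the exposed "volume_mixer" parameter.
    public void SetSourceSPL(float targetSPL)
    {
        mixer.SetFloat("volume_mixer", targetSPL + SplToMixerOffset);
    }

    // Swap the audio clip; only the impulse file is played once, everything else loops.
    public void ChangeSoundFile(string clipName)
    {
        AudioClip clip = Resources.Load("Sound/" + clipName, typeof(AudioClip)) as AudioClip;
        source.clip = clip;
        source.loop = clip != null && clip.name != "Impulse";
        source.Play();
    }
}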
When the value of the mute toggle changes, Mute rewrites the isPlay text of the selected sound source (Figure 4-79). When users go back to "Run Simulation," that sound source changes its play state based on the value of isPlay.

Figure 4-79 Script of Mute.

When users click the "delete" button at the upper-right corner, the sound source is removed from the project and the sound edit option UI is hidden as well (Figure 4-80).

Figure 4-80 Script of Delete.

4.5. Summary

This chapter explained in detail each module and the main scripts of each scene, the entire script of the application, and the algorithm of the process. A total of 22 scripts were written specifically for this application, containing 41 different modules (Table 4-3, Table 4-4).

Table 4-3 Modules used in the simulation process (in order of use).
Module | Script File Name | Function
CSV to list | DataOperation | Read the .csv file into the program as an array.
Place objects on AR plane | PlaceObjectsOnARPlane | Place a specific type of game object on an AR plane.
Link two points | LinkTwoPoints | Link two given game objects with a line. The format of the line is defined by a prefab.
Create surface | CreateSurface | Create a surface from the given list of vertices and the direction of the surface (horizontal or vertical).
Clone object | CloneObject | Copy a game object and paste it at the same location.
Clone objects | CloneObject | Copy a list of game objects and paste them at the same location.
Do not destroy | DoNotDestroy | Keep the game object when switching between scenes.
Place objects in air | PlaceObjectsInAir | Place a given game object in the air without any reference plane.
Assign acoustic material parameters | DataOperation | Search for a certain material in the array and assign its acoustic data.
Mesh area | Calculation | Calculate the area of a given mesh.
Eyring formula | Calculation | Calculate the reverberation time using Eyring's formula.
* This table only shows the modules and code written by the author.

Table 4-4 Modules used for the user interface design.
Module | Script File Name | Function
Show menu | MenuButtons | Show the main menu.
New project | MenuButtons | Create a new project and return to the scene "Set Room."
Save sound | MenuButtons | Save a .wav file of the internal sound of the simulation.
Show setting | MenuButtons | Show the window of "Settings."
Show about | MenuButtons | Show the window of "About."
Quit | MenuButtons | Quit the application.
Clear | MenuButtons | Hide all windows when touching the blank space.
Visibility scale | Settings | Show/hide the SPL scale in the scene "Run Simulation."
Visibility frequency graph | Settings | Show/hide the frequency graph in the scene "Run Simulation."
Visibility wave graph | Settings | Show/hide the wave graph in the scene "Run Simulation."
Mute all | Settings | Mute/play all sound sources in the scene "Run Simulation."
Setup settings | Settings | Read the current settings in the current scene.
Update settings | Settings | Update the settings from the last scene.
Finish room | StepButtons | Go to the scene "Set Sound."
Finish sound | StepButtons | Go to the scene "Run Simulation."
Add sound | StepButtons | Go to the scene "Set Sound."
Edit | StepButtons | Show the two choices of "Edit Room" and "Edit Sound."
Edit room | StepButtons | Go to the scene "Edit Room."
Edit sound | StepButtons | Go to the scene "Edit Sound."
Show dropdown | MaterialDropdown | Show the material dropdown at a touch.
Change material | MaterialDropdown | Change the material when the dropdown value changes.
Option SPL | SoundOption | Change the UI form and show the SPL input field.
Option sound file | SoundOption | Change the UI form and show the sound file dropdown.
Option mute | SoundOption | Change the UI form and show the mute toggle.
Option move | SoundOption | Change the UI form and go to the scene "Move Sound."
Option menu white | SoundOption | Reset the UI form and hide all options.
SPL input | SoundControl | Change the SPL of the chosen sound source.
Change sound file | SoundControl | Change the sound file of the chosen sound source.
Mute | SoundControl | Mute the chosen sound source.
Delete | SoundControl | Delete the sound source.
* This table only shows the modules and code written by the author.

There are also scripts from the AR Foundation SDK called in Soundar. AR Session and AR Session Origin build a basic AR environment, and AR Plane Manager and AR Raycast Manager help detect AR planes and interact with AR objects. Another outside source is Steam Audio, which provides scripts for sound rendering. For the complete scripts, please see Appendix B. Multiple tests were done to validate the accuracy of the simulation of Soundar, and they are introduced in Chapter 5.

5. Validation

The validation focused on the accuracy of the simulation results from Soundar, as well as its practicability. The application was tested in two rooms in Watt Hall at the University of Southern California. Two methods were used in the tests: a live recording and the Soundar simulation. For the live recording, a test sound was played through a loudspeaker and recorded with a microphone. The microphone was linked to a computer, and Adobe® Audition was used to record the sound to .wav files. The Soundar simulation was the screen recording from the tested phone. The version of Soundar used in the tests was 1.0.8.25, which did not yet contain the function of saving a simulation sound file to the local folder of the device. The simulation sound was therefore recorded by the phone's built-in recorder, which was set to record only the device's internal audio, and then converted to .wav files, also using Adobe® Audition. All .wav files used in the validation were mono channel with a 48000 Hz sample rate. The validation tested Soundar in different room conditions and sound conditions. The tests mainly focused on these questions:
1. How did Soundar perform compared with the real sound performance in the room, especially the sound pressure level (SPL)? SPL is the logarithmic value of the sound pressure, which has a positive relevance to the sound volume. Comparing the SPL between Soundar and the live recording indicates the accuracy of the volume changes in Soundar.
2. How did Soundar perform when the room and sound conditions changed? This comparison shows whether there is an obvious change in the simulation results based on the input changes and whether this result is reasonable and accurate.
3. How accurately did Soundar calculate and render the reverberation time? Reverberation time (T60) is the time it takes for the sound pressure level in a room to decrease by 60 dB. It can be measured by playing and recording an impulse sound in the room and calculating the decay time from the impulse response. By comparing the decay time and decay rate indicated by the impulse response, the reverb performance of Soundar can be validated against theoretical calculations and real measurements in a room.

The performance was analyzed using time domain charts, frequency domain charts, and impulse responses. The time domain charts use time as the horizontal axis and the signal amplitude as the vertical axis, showing how the amplitude changes over time. The frequency domain charts use the sound frequency as the horizontal axis and the SPL at each frequency as the vertical axis; this kind of chart indicates the frequency composition of the sound. The impulse response shows the sound decay as a function of time after the sound stops playing, which is a direct way to estimate the reverberation time.

This chapter introduces the validation test design, process, and results, including the software and equipment used in the tests, the design of the validation tests, the results and analysis of the tests, and the conclusions of the validation.

5.1. Software and Equipment

This section introduces the software and equipment used during the validation tests. The software was used to edit the recordings, unify the sound file format, and analyze the sounds; it included Adobe® Audition, Virtual Sound Level Meter, and Room EQ Wizard. The equipment included a sound level meter, a loudspeaker, and a calibrated measurement microphone, which were used in the live recording process, as well as the cellphone on which Soundar was installed and the earphones used by the tester during the tests.

5.1.1. Software

Adobe® Audition was used to record the sound files and to convert the simulation recordings from Soundar to the .wav format (Figure 5-1). It was also used to trim the sound files and cut off the unneeded silent parts at the beginning and the end of the sound clips.

Figure 5-1 Using Adobe Audition to convert the file format.

Virtual Sound Level Meter (VSLM) is a MATLAB-based software for analyzing SPL (Muehleisen 2018, 1840), frequency bands, and other acoustic properties of an input .wav file (Figure 5-2). The version used in the validation was V0.4.1.

Figure 5-2 VSLM and its basic information.

Room EQ Wizard (REW) is a room acoustics analysis software that can measure and analyze room and loudspeaker responses. The version used in the validation was V5.19. The microphone used in the test can be linked directly to REW to send its data.
It can return the real-time SPL value of a live recording and can also analyze imported .wav files. It can also simulate the sound performance of simple room models.

Figure 5-3 Welcome screen and information of REW.

5.1.2. Equipment

The tests used a professional data-logging sound level meter (Figure 5-4). The meter was set to the dB(C) mode to measure the C-weighted sound pressure level, which is based on the loudness sensitivity of human ears.

Figure 5-4 Professional sound level meter.

The speaker used in the test was a Meidong QQChocolate Bluetooth speaker (Figure 5-5). The speaker was 14.5 cm long, 4.5 cm wide, and 5.5 cm high. It plays sound in four directions, which is a good representation of an omnidirectional point sound source.

Figure 5-5 Bluetooth loudspeaker.

The microphone used to record the sound in the test rooms was a miniDSP UMIK-1 calibrated USB measurement microphone (Figure 5-6). It has its own calibration file and is compatible with acoustic software such as REW.

Figure 5-6 Measurement calibrated microphone.

During the tests, Soundar was installed and used on a OnePlus® 6T running Android version 9. The earphones used in the tests were Sony® MDR-XB70BT EXTRA BASS™ wireless in-ear headphones (Figure 5-7).

Figure 5-7 Device and earphones used in the test.

5.2. Validation Tests

This section introduces the two rooms used in the tests, as well as the design of each test. A total of eleven tests were performed for the validation. Nine of them were in Room 1, and the other two were in Room 2. Both rooms were enclosed rectangular rooms without windows but differed in size and materials. Nine tests were done both with Soundar and with live recordings, and two tests were done only with Soundar, with the materials of the rooms virtually changed.

5.2.1. Room 1: Watt 212

Room 1 was Watt 212, on the second floor of Watt Hall (Figure 5-8). The room was an enclosed room with no windows, 2.40 m in height, 7.70 m in length, and 4.96 m in width. The floor was covered with thin carpet, the ceiling was acoustic ceiling board, and the walls were gypsum board. The room had an even background sound at an average of 65.5 dB(C).

Figure 5-8 Room Watt 212.

Nine tests, numbered T1a to T1i, were implemented in Room 1 (Table 5-1). These tests included different positions of the sound source and listener (Figure 5-9), different kinds of sound, different sound source SPLs, and different room materials. All tests except T1b were performed both with Soundar and with real sound from the loudspeaker (Figure 5-10). Test T1a played the impulse at 85 dB(C) at the center of the room, both in the real room and in Soundar; the listener and the microphone were at the same position in both cases, and the virtual room in Soundar used the same materials as the real room. Test T1b changed the room materials based on T1a and played the impulse at the same SPL. T1c used the same materials as the test room and played a constant 500 Hz sound at 75 dB(C) at the same position as the listener. T1d kept the same sound and listener position but moved the sound source to the center of the room. T1e raised the SPL of the sound source to 85 dB(C) based on T1d. In T1f, music at 75 dB(C) was played at the center of the room, and the listener was in the front of the room. T1g moved the sound source to the front of the room, raised
The virtual room in Soundar used the same material with the room. Test T1b changed the room materi al based on T,aand played the impulse in the same SPL. Tic used the same material with the test room and played a constant 500 Hz sound at 75 dB(C) at the same position of the listener. T1ct keep the same sound and listener position but moved the sound source to the center of the room. Tie raised the SPL of the sound source to 85 dB(C) based on T,ct . In T,r , the music at 75 dB(C) was played at the center of the room, and the listener was in the front of the room. T1g moved the sound source to the front of the room and raised 111 the height and moved the li stener to the back corner of the room, which represents a person who sits at the back row of the lecture room . Test T11 1 and T1i played a speech of a female and a male in the same position with the last test, which can represent a person standing and talkin g. Test Tla Tlb Tlc Tld Tle Tlf Tlg Tlh Tli a e - est settmgs m oom T bl 5 1 � . R 1 Sour ce Sour ce (m) Listener (m) Material Sound Average Test File SPL Method (dB(C)) X y z X y z Floor Walls Ceiling Impulse 90 2.48 3.85 0.75 2.48 3.85 0.75 Carpet Gypsum Acoustic Soundar / Recording Impulse 90 2.48 3.85 0.75 2.48 3.85 0.75 Wood Concrete Concrete Soundar 500 Hz 75 2.48 1.90 1.05 2.48 1.90 1.05 Carpet Gypsum Acoustic Soundar / Recording 500 Hz 75 2.48 3.85 0.75 2.48 1.90 1.05 Carpet Gypsum Acoustic Soundar / Recording 500 Hz 85 2.48 3.85 0.75 2.48 1.90 1.05 Carpet Gypsum Acoustic Soundar / Recording Music 75 2.48 3.85 0.75 2.48 1.90 1.05 Carpet Gypsum Acoustic Soundar / Recording Music 75 2.48 1.77 1.50 1.00 7.05 1.05 Carpet Gypsum Acoustic Soundar / Recording Speech 75 2.48 1.77 1.50 1.00 7.05 1.05 Carpet Gypsum Acoustic Soundar / (Female) Recording Speech 75 2.48 1.77 1.50 1.00 7.05 1.05 Carpet Gypsum Acoustic Soundar / (Male) Recording Oso und So urce • Li stener \J \J \J \J 1 .00 m 2.48 m 2.48 m 2 .48 m J E 0 2 .48 m E E 2 .48 m 2 .48 m E 2 48 m "' "' 00 (T) (T) (T) V E E E E 0 0 0) 0 0) - - - - (a). Tia , T1b, (b) . Tic (c). T1ct, T ie, and T1r (d). T1g, T11 ,, and T1i Figure 5-9 Po sitions of the sound source and li stener in Room 1. 11 2 Figure 5-10 Tester is using Soundar to simulate the virtual sound source. 5.2.2. Room 2: San Merendino Room Room 2 was tested in the San Merendino Room, which is in the basement of the Watt Hall (Figure 5-11). The room was also an enclosed room with no windows. It was 2. 75 mi n height, 4.40 mi n length, and 4. 80 m in width. The floor was covered with thin carpets; the ceiling was unpainted concrete. One of the walls was gypsum boards, and the other walls were unpainted concrete. The background sound in this room is not even. An air conditioner filter was on one side of the room generated an uneven background noise. The noise at the test position was about 54. 0 dB(C). Concrete Figure 5-11 San Merendino Room Two tests, L a and T2b, were implemented in Room 2 (Table 5-2). These tests focused on impulse reactions in a room with different sizes. Both Test T2a and T2b played the impulse at 85 dB (C) at the center of the 11 3 room both in the room and in Soundar. The listener and the microphone located in the same position as the room (Figure 5-12). Test Tia used the same room material with T,a, Test L b used material of San Merendino Room. a e - est settmgs m oom T bl 5 2 � . 
Test | Sound File | Average Source SPL (dB(C)) | Source x, y, z (m) | Listener x, y, z (m) | Floor | Walls | Ceiling | Test Method
T2a | Impulse | 90 | 2.20, 2.40, 0.80 | 1.10, 2.40, 0.80 | Carpet | Gypsum | Acoustic | Soundar
T2b | Impulse | 90 | 2.20, 2.40, 0.80 | 1.10, 2.40, 0.80 | Carpet | Concrete | Concrete | Soundar / Recording

Figure 5-12 Positions of the sound source and listener in Room 2.

5.3. Test Result and Analysis

This section analyzes three aspects of Soundar's performance based on the test results: the SPL change as a function of time, the frequency response, and the reverb performance. The analysis used time domain charts, frequency domain charts, and impulse response charts to compare the sound performance between the different tests. The SPL change as a function of time was examined with the time domain charts generated by VSLM, with a time spacing of 100 ms. The difference in SPL (ΔSPL) was calculated from the data logs as the absolute value of the SPL of the Soundar simulation result minus the SPL of the live recording at the same time. Generally, untrained listeners can distinguish a difference in SPL when it changes by about 3
When the distance between the listener and the sound source became further to 5.48 m, the deviation decreased. The changes in the time response of Soundar results were more stable than the live recording. It is likely because Soundar was more influenced by the background noise. The noise masked the changes, especially at the troughs where the sound SPL was lower than the background noise. The noise was likely also the reason why the deviation decreased in T1g, Tu,, and T,;. When the source-listener distance is too far, the SPL form the sound source had dropped to a low level in Soundar, which was similar to the SPL of the background noise. Therefore, the sound source did not obviously raise the overall SPL in these three tests. Tlc_Soudnar v.s. Tl c_Recording 90- ---�--��-��-�� 85 IJJ 75 70 '!!, 65 _j- 60 55 50 45 40 0����� 3��� 5-� s-�� e-� 9� ,o Weighting: C Mete r Sp eed: 8 , 0 <.;,ec) - Tlc_Soudnar Tl c Recording Tic: Room 1, 500 Hz, 75 dB(C), distance: 0 m. Tle_Soudnar v.s. Tle_Recording 90 --------------� 85 IJJ 75 4 5 40 0 � �-�- 3 �-4�- 5 ��-��- 9 �� ,o Weighting: C t (sec) Meler Speed· Slow - Tle_Soudnar Tle_Reco rdin g Tie: Room 1, 500 Hz, 85 dB(C), distance: 1.95 m. 11 6 Tl d_Soudnar v.s. Tld_Recording 90- ---�--��-��-�� 85 IJJ 75 7 0 55 50 4 5 40 0 Weigh ting · C 3 4 5 t (sec) Meter Speed : Slow 6 8 - Tld_Soudnar Tld_Recording 10 T,ct : Room 1, 500 Hz, 75 dB(C), distance: 1.95 m. Tl f _So udnar v.s. Tlf _Recording 90- ---------�--�- 85 IJJ 75 60 55 50 4 5 40 o��-�- 3��- 5��s� �� e��� ,o Weighting: C t (sec) Meter Speed: Slow - Tlf_Soudnar T 1 f _ Recording T1r: Room 1, music, 75 dB(C), distance: 1.95 m. Tlg_Soudnar v.s. Tlg_Recording Tlh_Soudnar v.s. Tlh_Recording oo ..--�-��-��-��-��---, oo..-- �-��-��-��-��� 85 00 75 70 i:r 55 50 Weighting: C 3 5 t (sec) Meter Speed : Slow - Tlg_Soudnar Tl g_Recording 10 85 00 75 70 55 50 45 400 Weighting· C 3 4 5 I (sec) Meter Speed: Slow 6 B 9 - Tlh_Soudnar Tl h_ Recording 10 T1g: Room 1, music, 75 dB(C), distance: 5.48 T1h: Room 1, speech(female), 75 dB(C), distance: 5.48 m. Tl i_Soudnar v.s. Tl i_Rccording oo..-- �-��-��-��-��� 85 00 75 Weighting: C 3 5 I (sec) Meter Speed: Slow B 9 - Tli_Soudnar TI i_ Recording 10 T1i: Room 1, speech(male), 75 dB(C), distance: 5.48 m. Figure 5-13 Time domain charts ofT1c to T1i tests. Calculated with the data exported from the charts, the percentage of �SPL less than 3 dB(C) showed how much data was in the tolerance (Table 5-3). Soundar had a good performance of SPL change per time in Test Tic, T1d, T1g, T111, and T1i, which had over 98% of the time that had a difference lower than 3 dB(C). Tie and T1f had a larger SPL gap between Soundar and the live recordings. The average �SPL of these to tests was 3.92 dB(C) and 3. 61 dB(C), which are higher than the tolerance. a e - T bl 5 3 D ata ana1ys1s o fSPL per time. Test T1c T1d Tie T1r T,� T1h T1; Avera2:e !1SPL (dB(C)) 0.41 0.72 3.92 3.61 0.55 1.21 0.72 Percentage of !1SPL < 3 dB(C) (%) 98.99 98.99 9.09 24.08 100 100 100 Test Tic, which played a stable sound source and had the closest distance between the listener and the sound source, indicated the best performance among the tests. Therefore, it can be used as a baseline to analyze 11 7 which variable influenced the accuracy ofSoundar. Based on the combination ofT1c and1d, the only change was the distance between the listener and the sound source. When the distance increased, the deviation also increased. Compared with T1d and Tie, only the SPL of the sound source changed. 
When the original sound had a higher sound pressure, the deviation increased. The low accuracy of T 1e was caused by the original SPL. Similarly, compared with T1d and Ttr , only the file played was changed, which shows the deviation also increased when the sound file had more variation in tone and pitch. In conclusion, the accuracy of Soundar in the SPL per time had negative correlations with the distance between the listener and the sound source, the original SPL of the sound source, the variation of the sound file played, and the SPL of the background noise. 5.3.2. Frequency Response The frequency response is a useful tool for comparing two sound sources. The peaks in the graphs represented the frequencies that had higher SPL. The similarity of the peaks and dips indicates how close they will be in perceived pitch, while the differences in magnitude indicate how loudly each one will be heard. The frequency domain charts of the test Tic to T1i were smoothed by 1/12 octave which more closely corresponds to what we hear and for comparison purposes (Figure 5-14). The orange graphs were the frequency response of the result from Soundar, while the green graphs represent the result from the live recordings. The SPL at each frequency was an averaged result but the accurate result. Since the overall SPL performances were discussed in the previous section, only the relative SPL trends will be focused on in the frequency response analysis. Two conclusions can be drawn based on the frequency domain charts: 1. The responses in the low-frequency bands were in a similar shape, which was caused by the background noise. 2. Soundar generates almost no frequency above 12 kHz, which might because the result sound files are recorded from a phone recorder, which recorded at the sampling rate of24 kHz. 11 8 In Tic, T1ct, and Tie, the sound played was at the single frequency of 500 Hz. Theoretically, only the peaks at 500 Hz was were caused by the audio (red blocks in Figure 5-14); the sound at other frequencies were all from the background noise. It can be clearly seen that there were peaks at integer multiples of 500 Hz in graphs of live recordings in Tic, T1ct, and T1e (red dash lines), which indicated that the speaker could have large amounts of harmonic distortion. This distortion could certainly affect the comparison to the Soundar simulated response. In test T1r, T1g, T111, and T1i, the overall shape of the two graphs were the same, and most peaks and valleys were matched. r T1c _ So u nd a r v .s. T1c_ 1 ecor dlng ' ' ' ' ' ' ' ' ' ' -- ·- 200 Ji)() ,oo 500 11{10 100 II. ' ' ' ' ' ' � - ;.-. . 5i II,, 11. 8k ,o.. Tic: Room 1, 500 Hz, 75 dB(C), distance: 0 m. (,{J .,. - n e_So u nd a r v .s. T1e_ 1 ecor dlng ''' ''' : : : ''' 2GO lOO 400 5001n0 100 ,: - , - t� � 5i � ;_- � 10... - Tie: Room 1, 500 Hz, 85 dB(C), distance: 1.95 m. T1 g_ So u nda r v .s. T1 g_ � ecor d lng :1QIO JOO 400 500100 100 lk ,fliu 2119 lla"11tdrt 11< Jk ,1, 51, 11< 71. 8k IOI< G, 30- T1 d_ So u nda r v .s. T1 d_ � ecor d lng ' ' ' '-· Jk •• 5i T1ct: Room 1, 500 Hz, 75 dB(C), distance: 1.95 m. T1 f_So u nda r v .s. T1 f l ecor d lng 200 JOO 400 5008'10 IQO II, � 1 ({J 2111 _ --, 11< JI, 41, 51,. 71; 81< lot. -e 3'2- T1r: Room 1, music, 75 dB(C), distance: 1.95 m. t T1 h_So u nda r v .s. T1 h_� ecor d lng :!00 JOO 400 500100 100 1k � 2TU , 'la,....., � 3k •I. 5i II< 7l 8k l(MI; e- 3&1- T1g: Room 1, music, 75 dB(C), distance: 5.48 m. T111: Room 1, speech(female), 75 dB(C), distance: 5.48 m. 11 9 T 1I_So u nd a r v . s. 
T1 I_ R ecor d l n g � ' 20 ]� 10 7C IO 100 :!00 ]00 100 500100 100 1� 2lt lk l� !,I,. llkU Slt ll)t. ZOOIH.: ""./. ·1, -u -9- • [l) 2TtL - .... °® ).It- T1i: Room 1, speech(male), 75 dB(C), distance: 5.48 m. Figure 5-14 Frequency domain charts of all Tic to T1i (20 Hz to 20 kHz). To look into the similarity more precisely, the coefficient of determination (R 2 ) was calculated from the data log of the graphs. In the frequency range of 20 Hz to 20 kHz, which is the frequency range of human hearing, the R 2 is higher than 0.73 (Table 5-4). Despite the frequency higher than 12 kHz, which were wiped by the record settings of the device, the R 2 is above 0.8 (Table 5-5). Table 5-4 Data anal sis of SPL 20 Hz to 20 kHz Test Tic T1d Tie T1 T1h Tli R2 0.78 0.77 0.73 0.76 0.77 0.78 Table 5-5 Data anal sis ofSPL 20 Hz to 12 kHz Test Tic T1d Tie T1g T1h T1; R2 0.93 0.94 0.92 0.9 0.93 0.93 In conclusion, Soundar had a good performance in the frequency responses. The sound simulated from Soundar had high similarity with the live recording. 5.3.3. Reverb Performace Reverberation time (T60) is the time for the sound pressure level to reduce -60 dB in a room. It can be measured by playing and recording an impulse sound in the room and calculated the decay time base on its impulse response. The threshold for human hearing to notice the difference of reverberation time is the deviation of 20% (Meng, Zhao, and He 2006, 418-421 ). When the deviation is lower than 20%, listeners would not hear a difference. The impulse responses generated by REW were used to analyze and compare the result of the reverberation times. 12 0 When the background noise is high, which makes it hard to identify when the impulse SPL is reduced by 60 dB, a Lo calculation can be used to estimate the T60 , The T20 test calculates the time duration form -5 dB to -25 dB and which is then multiplied by three to provide the estimated T6o , In the validation tests, because the range from maximum to the background sound level was less than 30 dB(C), the Lo calculation was used to estimate the T60 , By comparing the impulse responses and the estimated T6o, the difference of the T60 between the rendered sound from Soundar and the recorded sound is less than the threshold of20%, which means the deviation of the reverberation time rendered by Soundar is below the just noticeable difference of human hearing. Using the T20 calculation, the T60 of Room 1 and 2 can be concluded from the simulation response of the live recordings (Figure 5-15). The T60 of Watt 212 was 0.36 s, and the T6o of San Merendino Room was 1.02 s. � ,. �------� T � 1. _ � R . - co - , d � i ng ------- � � , o ------� T2 b_Re co rdin g Tia: Room 1, impulse, 90 dB(C), distance: 0 m. T2a: Room 2, impulse, 90 dB(C), distance: 0 m. Figure 5-15 Impulse response charts ofroom impulse recordings. These values were different compared with the results calculated by Soundar using Eyring' s formula (Table 5-6). Three reasons that might have caused this difference are the following: 1. The room geometry created by users was not accurate. The location of the vertex and the ceiling hight may have offset from the actual room. The area of the room surfaces and the room volume could be influenced by this kind of deviation. 2. The absorption coefficients were not exactly the same as the data in the material database in 121 Soundar. The parameter set up in Soundar used the value of a typical type of each kind of material. 
However, the material used in the test rooms might not have the same absorption coefficients as the one listed in Soundar. 3. The test rooms were not empty and the condition was not the "perfect situation" that Eyring' s formula hypothesized. All furniture and people in the room also absorb and reflect sound, which influenced the final sound performance. Soundar currently considered the room as an empty room when simulate, which caused a certain amount of deviation in the final result. 4. The amplifier gain that was set to the microphone during the live recording was too high. The parts higher than the limitation was clipped from the recording, which affected the result of the impulse response of the live recordings Table 5-6 The recorded T 60 and the calculated T 60 of Soundar. Test Tia T2b Recorded T6o (s) 0.36 1.02 Calculated T6o (s) 0.48 1.45 Diff erence (%) 25.00 29.66 The sounds rendered by Soundar showed an abnormal impulse response (Figure 5-16). The orange graphs represented the impulse responses of Soundar rendering. The rendered sound had a high deviation from the recorded sound, which can be directly heard while listening to the results. Normally, the sound decays with a logarithmic trend after it stops playing like what is shown by the impulse responses of the live recordings (green graphs). The sound decay speed was faster in Tia than T2b. However, the results from Soundar renderings showed the opposite. Large differences were also shown in the T 60 values calculated by the impulse response (Table 5-7). 122 09 �t ,- -----= n ,- a_ -= s - o u � n d � a ,- v .- s . = n - . _ � R e - c o � , d � l n g ------� �t .------- = T2 � b_ -= so - u � n d � a ,- v . s - . T = 2 ,- b_ � R e - c o � , d � ln g ------� $C!11 100m ,� 200,,, ;,om lOOm ]� fOOffl HO,,, !00,,, !!.D<n '"°"' U o m 700m tiGltl 100,,,, 85#,,1 t o 0m tSOm 1 OOGI l� - ..... , J Tia: Room 1, impulse, 90 dB(C), distance: 0 m. T2b: Room 2, impulse, 90 dB(C), distance: 0 m. Figure 5-16 Impulse response charts comparing the rendered sound and recorded sound. Table 5-7 The recorded T6o and the rendered T6o of Soundar. Test T,a T2b Recorded T6o (s) 0.36 1.02 Rendered T6o (s) 0.22 0.09 Diff erence (%) 32.50 88.87 Furthermore, comparing the impulse responses between Tia and T1b, as well as T2a and Lb, which were tested in the same room but used different material settings, showed that the reverberation time of the room was not significantly influenced by the material assigned in the room (Figure 5-17). Although T1b and Lb used more reflective materials, the impulse responses and the frequency responses were quite similar to the ones with more absorptive materials. e111�t ,- ----,--- -= T = 1 a � _S ,- ou - n d � a , - v - . s .= T 1 ""' b_ -= s � ou - n d ,- a r ------� ---�--------------� - (� ,., [ll . • .. - __ ,.., , ] '\?) lo - T1 a _S o u n dar v . s . T 1 b ...,S ou n d ar 100 ]QO fQO �1 00 11)(1 I� -@- [ll2r ,. __ Tia: Room 1, floor: carpet, wall: gypsum boards, ceiling: acoustic ceiling T1b: Room 1, floor: wood, wall: concrete, ceiling: concrete 12 3 08�t � -----=r2- ._� so - un - da - ,v - .s�. r = 2b� _s- ou - nd - . ,------� T 2 a_ Sou n dar v. s. T 2b Sou n da r �00 JIIO f00 5COI00 100 I� ... ....... J 50M 100,,, 150M 200,,, ;,� , lGOm lSO.,, � fSQo, 511°"1 !!.11'1! 
to0m 150m 100m 7500, 800fn 1""1 t� t50rn 1 OCICI ["' - •• , J � 1�.- L a : Room 2, floor: carpet, wall: gypsum boards, ceiling: acoustic ceiling T2b: Room 2, floor: carpet, wall: three in concrete and one in gypsum boards, ceiling: concrete Figure 5-17 Impulse responses and frequency domain charts of tests with different materials. One the other hand, when comparing the impulse responses between Tia and T2a, which were tested in different rooms but with the same material settings, the reverberation time was quite different (Figure 5- 18). The T6o of Tia, which was tested in a larger room was longer. The frequency response indicated that the difference was mainly made by the different reactions in the low frequencies (20 Hz to 400 Hz). � t� -----= r1 - ._� so - un - da - , . - .• �. T � 2a _ � s- ou - nd - ., ------� $Offl 't:' 15Cm 201),n 1!1),n lOOm )ll(lm ,00m ''°"' !00,,, !�m IOOm � r., ' D °:; rse .. J � l!°"1 t00rn t50rn IO GDI � I :NI JD , a SI IO Tt lO lot MO XIII fOf l«tUO IOI 1, ,, :I,, 0 !� eio. nl l ,.,.. ltOIJ-t � , - 6- Gll :r.i. __ 5 Tia: Room 1, floor: carpet, wall: gypsum boards, ceiling: acoustic ceiling L a : Room 2, floor: carpet, wall: gypsum boards, ceiling: acoustic ceiling Figure 5-18 Impulse response and frequency domain chart of tests with different room sizes. The above analysis of the rendered sound from Soundar showed that the reverberation time was more sensitive to the change of the room size, but not to the surface materials. However, the reverberation time calculated by the formula showed the opposite, which was more affected by the material rather than the room size (Table 5-8). There was a higher difference in the small room. 12 4 Table 5-8 The calculated T6o and Rendered T6o of Soundar. Test Tia T1b T2a T2b Calculated T60(S) 0.48 1.76 0.46 1.47 Rendered T6o (s) 0.22 0.37 0.09 0.08 Diff erence (%) 53.75 79.03 80.43 94.49 The reverb simulation depended on Steam Audio. There were two possible reasons cause such a huge difference in the reverb simulation (Table 5-7). One was the inaccurate usage of the Steam Audio. The current settings on Steam Audio were the basic set up for binaural sound and reverb simulation, which may not suit the situation that Soundar had. The other one was the initial algorithm of Steam Audio might not fit Soundar in the reverb simulation. Based on the information on the Steam Audio's official website (ht t ps: / /valvesoftware.github.io/steam-audio /#learn-more ), Steam Audio was mainly designed for VR which can pre-baked the environment components. Soundar, which was developed for AR, created and baked the environment in real-time. In this case, Soundar may need to seek other sound simulation SDK to do the reverb simulation instead of using Steam Audio. In conclusion, Soundar had a good performance in the numeric result of the reverberation time, which was calculated by Eyring' s formula and shown on the scale. However, the reverb rendering of the auditory result did not fit to the number. More development and tests need to be implemented to get a more realistic reverb rendering to the simulation result. 5.4. Summary The validation of version 1. 0.8.2 5 showed that the numeric results and the auditory results of the realtime SPL are at an acceptable deviation threshold. Soundar also had a good performance when the listener is close to the sound source. However, the accuracy went down when the distance got further. Soundar currently had a bad performance in the reverb rendering, impulse responses. 
The improper settings on Steam Audio might be one of the reasons that caused the inaccuracy. The settings of Steam Audio may need to be adj usted to create better performances. The fitness of Steam Audio and Soundar also needs further assessment and assessment in the future. 125 The current validation test has some limitations: 1. The background noise was a significant problem in the whole validation process. The high level of the background noise caused the low difference between the maximum SPL and the background base, which decreased the accuracy of the T6o estimation. Tests need to be done in relatively quiet rooms if possible. 2. The tests used the screen recorder to record the output of the cellphone then get the simulated sound from Soundar. The signals sent to users' earphones were in the sample rate of 48000 Hz but were transferred to 24000 Hz by the screen recorder. The result used in the analysis will be closer to what users heard if used the file directly exported from Soundar. 3. Limited by the time and available space, there was only one test for each change of variety. Multiple tests need to be done to exclude potential interferences and reduce contingency. 4. The loudspeaker used in the test had shortcomings that affect the results. The harmonic distortion influenced live recordings and their comparison to the Soundar simulated response. 5. In the impulse tests, the amplifier gain that is set to the microphone during the live recording was too high. It has to be turned down so that the max peak is clearly distinguished. Even so, the current validation tests still showed some patterns about how Soundar performed and had a very valuable reference significance for future developments. 12 6 6. Conclusion, Dis cussion, and Future Work Soundar is developed to simulate the performance of virtual sound sources in a real room. It can provide the simulation result in both visual and acoustical ways. This chapter will talk about the current status, limitations, and future possibilities for Soundar. Soundar has finished the first round of development and validation (Figure 6-1 ). Based on the result of the validation, some limitations were discovered. These limitations will be one of the significant parts in the future development to enhance the accuracy, improve user experience, and enlarge the scope. Platfo rm Setu p Data b ase Setup 6.1. Cur rent Status No A pplication Developemnt A pplication Test Result A nalysis Finish Figure 6-1 Overall workflow of the developing process. Background research helped to determine the need for an AR app for sound simulation. After the application development and the validation tests, Soundar has achieved its primary achievements. The current version of Soudar can simulate the SPL and reverberation performance of virtual sound in a small, enclosed room. After setting a virtual sound source, Soundar plays what users will hear at their position if the sound source was real. Soudar shows the real-time SPL at the users' position, as well as the reverberation time in the room. It can also meet the needs of users to edit room and sound source. Soundar also has a fluent user interface, which contains concise directions to guide the users. 6.1 .1 . Background Research The background research focused on related works to learn from others' expenences. The strategies, principles, and theories from other researches had reference significance to the development of Soundar. 
The background research included three main topics: augmented reality (AR), room acoustics, and room 12 7 auralization. In the research on AR, those that are about the display strategies, surface detection strategies, and interaction technologies helped to choose the device, platform, and SDK for the development of Soundar. It was finally decided to use mobile devices that use Android or IOS to be the hardware to run the application. By testing existing AR applications for mobile devices, lessons were learned about what could be done in AR and what was a better user interface. As an application for room acoustic simulations, the knowledge of room acoustics is the theoretical basis of the development. It was crucial to understanding what is sound pressure level (SPL) and reverberation time, as well as their measurement methods and calculations. Room auralization is also a significant part of the Soudar simulation. Although this part used Steam Audio to simulate and render aural sound result in the current version of Soundar, knowing the process of room auralization and how other software achieves it can also help understand the principles behind Steam Audio and make better use of it. 6.1 .2 . Application Development Based on the knowledge learned from the background research, the platform, SDKs, and the database format were decided. Soundar used Unity as the development platform and Visual Studio as the code editor. AR Foundation and Steam Audio were the main SDKs used in Soundar. The database used CSV file to save the data of the material properties and the sound file properties. The whole development was divided into two main parts which are the simulation process and the user interface (UI) design (Figure 6-2). The simulation process includes four aspects: setting room, setting sound, running simulation, and getting feedback. Each aspect can also break down into multiple tasks. Similarly, the UI design also contains the theme and layout and the main menu design. These two parts also contain their own tasks. 128 Application Development t t Simulation Process ser Interface Design Set Room ___. Set Sound ! Run Sinrnlati� Get Feedback Theme and Layout Main Menu I Geomerty I I Position I I Reverberation Time I I mueric I I Logo and Icon I I Proj ect Operation I Theme Color I I Settings I I I I Sound Properties I I I Sound Pressure I I I Material Audito1y Level I I I I Layout Quit Application I Sound file I About I Figure 6-2 Current framework of the development process of Soundar. The current structure of Soundar has seven scenes: "Start Scene," "Set Room," "Edit Room," "Set Sound," "Edit Sound," "Move Sound," and "Run Simulation" (Figure 6-3). "Start Screen" is the start of the application. It reminds users to wear earphones and guide them through the volume calibration. Then it automatically turns to "Set Room" where users set up the room base on the real environment. Users can choose to edit the room or continue to set the sound sources in "Set Sound" after the room is set up. Similarly, after setting up the sound, users can edit sound or go to "Run Simulation." In "Run Simulation," users can see the simulation results and hear the feedback. If users want to edit the room or sound, they can choose to go to "Edit Room" or "Edit Sound." Users can move the selected sound source in "Move Sound" instead of moving it directly in "Edit Sound." Users can create a new project from the menu in all scenes except "Start Screen." 
The New Project button will take users back to "Set Room" and clear all data from the previous project.

Figure 6-3 The operation flow of Soundar.

The development of Soundar involved multiple platforms and technologies, including Unity, Steam Audio, AR Foundation, and other acoustical simulation software used for validation. 41 modules in 22 scripts were written to build the application, and some of them have generated ideas for future development (Figure 6-4).

Figure 6-4 Modules and scripts written for Soundar.

6.1.3. Validation Tests
The validation test aimed at the simulation accuracy of Soundar. Eleven tests were done in two different rooms. Both rooms were enclosed rooms without windows. Nine of the tests used two test methods: simulation by Soundar and live recording with real sound playing in the room. The other two tests, which were only simulated by Soundar, changed the surface materials in Soundar. These eleven tests included all possible changes in the variables. The simulation results were recorded by the cellphone's built-in screen recorder, which can also record the internal audio. All recordings from Soundar and live sound were converted to .wav file format for the following analysis.

The three main analyses were SPL change as a function of time, frequency response, and reverb performance. The SPL change as a function of time can be read from the time domain charts, which use time as the horizontal axis and the signal amplitude as the vertical axis to show how the amplitude changes over time (Figure 6-5); a sketch of how amplitude samples can be converted to SPL for such a chart follows Figure 6-6. The frequency response is a useful tool for comparing two sound sources, which can be read from the frequency domain charts. The frequency domain charts use the sound frequency as the horizontal axis and SPL as the vertical axis, showing the SPL at each frequency (Figure 6-6).

Figure 6-5 Time domain chart

Figure 6-6 Frequency domain chart
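As a rough illustration of this kind of time domain analysis, the block below converts a buffer of audio samples into SPL values over short windows, mirroring the RMS-to-decibel conversion used in Soundar's RunSimulation_Main script (Appendix B). The window length, sample rate, reference value, and calibration offset are illustrative assumptions, not the exact values used in the validation.

using System;
using System.Collections.Generic;

// Minimal sketch: convert mono audio samples (range -1..1) into an SPL-per-time curve.
// The 0.1 reference and +63.09 dB offset follow the convention used in
// RunSimulation_Main (Appendix B); sampleRate and windowSize are assumptions.
public static class SplOverTime
{
    public static List<float> Compute(float[] samples, int sampleRate = 48000, int windowSize = 1024)
    {
        var spl = new List<float>();
        for (int start = 0; start + windowSize <= samples.Length; start += windowSize)
        {
            double sumSquares = 0;
            for (int i = 0; i < windowSize; i++)
            {
                sumSquares += samples[start + i] * samples[start + i];
            }
            double rms = Math.Sqrt(sumSquares / windowSize);                       // root mean square of the window
            double level = 20.0 * Math.Log10(Math.Max(rms, 1e-9) / 0.1) + 63.09;   // RMS -> dB with calibration offset
            spl.Add((float)Math.Max(0.0, level));
        }
        return spl; // one SPL value per window (about 21 ms at 48 kHz and 1024 samples)
    }
}

Plotting the returned list against the window index reproduces the kind of SPL-versus-time curve shown in Figure 6-5.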
The reverb performance was shown in the impulse response, which indicates the sound decay as a function of time after the sound stops playing (Figure 6-7).

Figure 6-7 Impulse response

These three kinds of analysis objects were compared between the simulation of Soundar and the live recordings to check whether Soundar performed similarly to the real sound performance in the room. They were also compared among different groups of tests to see how Soundar performed when the room and sound conditions changed. The analysis results showed that Soundar performed well in the numeric results and the auditory results of the real-time SPL. The accuracy of Soundar in the SPL per time had negative correlations with the distance between the listener and the sound source, the original SPL of the sound source, the variety of the sound file played, and the SPL of the background noise. The tones and the pitches in Soundar were also very similar to the live recordings. Soundar also performed well in the numeric result of the reverberation time. However, the reverb rendering of the auditory result did not match the simulation. More development and tests need to be implemented to get a more realistic reverb rendering in the simulation result.

6.2. Limitations
Based on the current structure of Soundar, there are limitations in its current functions, algorithms, Software Development Kits (SDKs), platform, and devices, as well as limitations in the design of the validation test. These limitations are the guide for future work. Solving them will be the main task for the next step of development.

6.2.1. Function
Some functions in the current Soundar can be improved in detail to provide a better user interface. For example, when editing the room or a sound source, the dropdown menu or the edit options menu can go off screen and get clipped if users open them near the edge of the screen. This can be improved by adding a condition that checks whether the menu was opened in a region where it could go off screen; if so, the position of the menu is shifted to avoid the clipping, otherwise it stays where it is. Also, when moving the sound source, the moving speed was not very comfortable for users to control the distance of the movement. The relationship between the finger movement on the screen and the object movement in the AR environment needs to be reconsidered to derive a better conversion formula.

6.2.2. Algorithm
Some of the algorithms used in the current version have defects that influence the accuracy and limit the scope of the application. Currently, Soundar can only generate meshes in the shape of convex polygons but not all concave polygon meshes. The algorithm of ear clipping (Eberly 2008) can be used to triangulate both convex and concave polygons; a minimal sketch of the idea follows.
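The block below is a rough, self-contained sketch of ear clipping for a simple 2D polygon, written in the same C# style as the Soundar scripts in Appendix B. It is not the code used in Soundar's CreateSurface module; the counter-clockwise winding assumption and the helper names are illustrative.

using System.Collections.Generic;
using UnityEngine;

// Minimal ear-clipping triangulation sketch (assumes a simple,
// counter-clockwise polygon with no holes). Illustrative only.
public static class EarClipping
{
    public static List<int> Triangulate(List<Vector2> pts)
    {
        var indices = new List<int>();
        var verts = new List<int>();
        for (int i = 0; i < pts.Count; i++) verts.Add(i);

        int guard = 0;
        while (verts.Count > 3 && guard++ < 10000)
        {
            bool earFound = false;
            for (int i = 0; i < verts.Count; i++)
            {
                int prev = verts[(i - 1 + verts.Count) % verts.Count];
                int curr = verts[i];
                int next = verts[(i + 1) % verts.Count];

                if (!IsConvex(pts[prev], pts[curr], pts[next])) continue;  // skip reflex vertices

                bool containsOther = false;
                foreach (int v in verts)
                {
                    if (v == prev || v == curr || v == next) continue;
                    if (PointInTriangle(pts[v], pts[prev], pts[curr], pts[next])) { containsOther = true; break; }
                }
                if (containsOther) continue;

                // Clip the ear: record the triangle and remove the ear tip.
                indices.Add(prev); indices.Add(curr); indices.Add(next);
                verts.RemoveAt(i);
                earFound = true;
                break;
            }
            if (!earFound) break; // degenerate input
        }
        if (verts.Count == 3)
        {
            indices.Add(verts[0]); indices.Add(verts[1]); indices.Add(verts[2]);
        }
        return indices; // triangle index list, usable as Mesh.triangles
    }

    static float Cross(Vector2 a, Vector2 b, Vector2 c)
    {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    static bool IsConvex(Vector2 a, Vector2 b, Vector2 c)
    {
        return Cross(a, b, c) > 0f; // left turn at b for counter-clockwise winding
    }

    static bool PointInTriangle(Vector2 p, Vector2 a, Vector2 b, Vector2 c)
    {
        float d1 = Cross(a, b, p);
        float d2 = Cross(b, c, p);
        float d3 = Cross(c, a, p);
        bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
        bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
        return !(hasNeg && hasPos);
    }
}

For room surfaces, the 3D corner points would first be projected onto the surface plane before running this kind of triangulation.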
6.2.3. SDK
There are some problems while using AR Foundation and Steam Audio in Soundar. They may be caused by incorrect usage of these SDKs, or by compatibility issues between these SDKs and some special situations in Soundar. They can also be inherent problems of the SDK algorithms. More research and testing have to be done before concluding the reason.

The AR environment can be unstable when the device moves or rotates fast or violently, such as when shaking or flipping it. The origin of the AR environment may drift, which causes all virtual objects, including room surfaces and the sound source, to change their positions. The relative positions of these virtual objects will not change, but the distance between the sound source and the user in reality will be influenced by the offset.

The current settings of Steam Audio may not suit the situation that Soundar has. The underlying algorithm of Steam Audio might not fit Soundar in the reverb simulation. Based on the information on the home page of Steam Audio, it was mainly designed for the VR environment and does not mention AR. In the development of a VR environment, the environment components can be pre-baked. However, Soundar, which was developed for AR, creates and bakes the environment in real time. In this case, Soundar may need to seek another sound simulation SDK for the reverb simulation instead of using Steam Audio. This is currently the biggest issue. Although attempts were made to get more information about the specific algorithms Steam Audio uses, it is still not possible to clearly understand the reason for the differences between Soundar's simulation, which uses Steam Audio, and the real recordings.

6.2.4. Platform
The current version of Soundar can only run on devices that use the Android system. Although AR Foundation can be used to develop applications for both Android and iOS platforms, there are still some differences in details between these two platforms. For those parts that need different scripts for the two platforms, Unity has a built-in way to identify the platform at run time. Two groups of scripts should be written, one for each platform. If the platform is Android, the scripts written for Android will run, and vice versa (Figure 6-8); a minimal sketch of this branching is shown below.

Figure 6-8 Choose which scripts to run based on the operation platform.

Not all versions of the Android system are supported. The Android system must be Android 8.0 or higher to support the AR functions.
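The sketch below shows one way such platform branching could look in Unity. Application.platform and the UNITY_ANDROID / UNITY_IOS compile-time symbols are standard Unity mechanisms; the component names RunAndroidSetup and RunIOSSetup are hypothetical placeholders for the platform-specific scripts, not existing Soundar modules.

using UnityEngine;

// Minimal sketch: pick platform-specific behaviour at run time (or compile time).
public class PlatformDispatcher : MonoBehaviour
{
    void Start()
    {
        // Run-time check using Unity's built-in platform identifier.
        if (Application.platform == RuntimePlatform.Android)
        {
            gameObject.AddComponent<RunAndroidSetup>();   // hypothetical Android-only script
        }
        else if (Application.platform == RuntimePlatform.IPhonePlayer)
        {
            gameObject.AddComponent<RunIOSSetup>();       // hypothetical iOS-only script
        }

        // Alternatively, compile-time symbols exclude the other platform's code entirely.
#if UNITY_ANDROID
        Debug.Log("Compiled for Android");
#elif UNITY_IOS
        Debug.Log("Compiled for iOS");
#endif
    }
}

// Hypothetical placeholders for the platform-specific scripts.
public class RunAndroidSetup : MonoBehaviour { }
public class RunIOSSetup : MonoBehaviour { }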
6.2.5. Devices
Many functions in Soundar depend on the performance of the device that runs the application, which introduces a large uncertainty into the simulation result. For instance, the measured background sound of the environment is highly determined by the quality and sample rate of the device's recorder. The quality of the final playback also depends on the quality of the device and the earphones. Different earphones may perform differently in some frequency bands and in volume. Although the calibration at the beginning of the application reduces a portion of the influence on the volume, it cannot be eliminated completely.

6.2.6. Validation Test
Limited by the time, available places, and available devices and equipment, the current validation test had some deficiencies, which brought some deviations into the analysis results.

First of all, there was only one test for each change of variable. More tests need to be done to exclude potential interferences and reduce contingency.

The background noise in the test rooms was a significant problem in the whole validation process. The high level of the background noise reduced the difference between the maximum SPL and the background level, which influenced the accuracy of the T60 estimation. Because the difference was lower than 35 dB(C), the T60 had to be estimated from a shorter decay range. Tests need to be done in relatively quiet rooms if possible.

The way to get the simulated sound from Soundar was to use the screen recorder to record the output of the cellphone. Although the signals sent to users' earphones had a sample rate of 48000 Hz, the sound saved by the screen recorder was converted to 24000 Hz. The result used in the analysis would be closer to what users heard if a file exported directly from Soundar were used.

The defects of the test equipment also affected the test results. The loudspeaker used in the test had shortcomings; its harmonic distortion influenced the live recordings and their comparison to the Soundar simulated response. Besides that, the amplifier gain set on the microphone during the live recording was too high in the impulse tests. It has to be turned down so that the maximum peak can be clearly distinguished.

6.3. Future Work
The first thing that needs to be done as future work is fixing the limitations discussed in Section 6.2. Then, more work can be done to get more precise validation, improve the accuracy of the simulation result, provide more functions, and enlarge the scope for more possibilities.

6.3.1. More Precise Validation
The current test result from Soundar is influenced by the method used to export the simulated sound from the device. The accuracy and integrity of the validation will be increased if a function for exporting a .wav file of the simulated sound is added. More groups of validation tests need to be done to get more data on how Soundar performs on different occasions. The same test also needs to be performed multiple times to get rid of random disturbances. Soundar can also be compared with other room acoustics simulation software, like Odeon and CATT, which can provide more accurate results.

The sensitivity of the human auditory sense varies greatly between people, and the differences can be very subtle. People who have a congenital advantage from birth or have acquired training can be more sensitive to the difference between two sounds. Therefore, besides analyzing the data and graphics, the sound recording files should also be brought to experts such as musicians, who have been educated and trained, to distinguish the difference between two pieces of sound. Considering their opinion of the similarity between the live recording and the auditory feedback from Soundar will provide another part of the conclusions about accuracy from a different angle. All the testers should do the test alone and separately. No communication should be allowed during the whole test. For each test, the live recording and the sound from Soundar will be played one after the other and looped until the tester can give his or her score for the two pieces of sound. The score should be in the range of 0 to 100.

6.3.2. Improving the Accuracy
According to the validation results, the accuracy of Soundar still has a large space for improvement and development. Based on the current limitations, which were discussed in Section 6.2, the accuracy may be influenced by many factors. First, the settings in Steam Audio and the Unity audio source need to be adjusted to achieve better performance. For example, the sound decay model per distance needs to be regenerated to match the real sound decay; a sketch of one way to shape this decay follows.
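One possible way to shape the distance decay, sketched under the assumption that the free-field inverse-distance law (about 6 dB of attenuation per doubling of distance) is an acceptable target, is to give the Unity AudioSource a custom rolloff curve. The maxDistance value, the sampling step, and the assumption that the curve's horizontal axis is distance normalized by maxDistance are illustrative, not values taken from Soundar.

using UnityEngine;

// Sketch: shape AudioSource attenuation toward the free-field inverse-distance law.
public class DistanceDecaySetup : MonoBehaviour
{
    public AudioSource source;      // virtual sound source
    public float maxDistance = 10f; // assumed maximum audible distance in meters

    void Start()
    {
        var curve = new AnimationCurve();
        // Sample gain = 1/r (clamped at 1) at several distances; Unity interpolates
        // between keys. Key times are normalized to maxDistance (an assumption about
        // how the custom rolloff curve is evaluated).
        for (float r = 0.25f; r <= maxDistance; r += 0.25f)
        {
            float gain = Mathf.Min(1f, 1f / r);   // amplitude ~ 1/r, i.e. about -6 dB per doubling
            curve.AddKey(r / maxDistance, gain);
        }
        source.maxDistance = maxDistance;
        source.rolloffMode = AudioRolloffMode.Custom;
        source.SetCustomCurve(AudioSourceCurveType.CustomRolloff, curve);
    }
}

Comparing the measured SPL-versus-distance curve from the validation tests against this target curve would show how far the regenerated decay model should deviate from the simple free-field law.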
The compatibility between Steam Audio and Soundar also needs to be examined. If Steam Audio cannot fit the situation of Soundar, other SDKs for sound simulation and rendering need to be sought as a replacement.

Second, more material options need to be added to the material database to match the real room environment and give more precise parameter values for the calculation and the simulation. Another database for room contents, such as furniture and people, also needs to be built to support the function of adding these contents into the room. Furniture and people are significant in absorbing and reflecting sound, so considering them in the simulation can enhance the accuracy.

Third, as mentioned in the limitations, some algorithms can be upgraded to be more efficient and provide more accurate results. For example, instead of using Eyring's formula (recapped below), a more comprehensive method such as ray tracing should be used to calculate the room reverberation time based on the room condition.
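For reference, the statistical reverberation time calculation currently implemented in Soundar's Calculation script (Appendix B) follows Eyring's formula,

T60 = -0.161 V / (S · ln(1 − ᾱ)),

where V is the room volume in cubic meters, S is the total surface area in square meters, and ᾱ is the area-weighted average absorption coefficient of the surfaces. This formula assumes a diffuse sound field and uniform absorption, which is one reason a geometrical method such as ray tracing could better reflect the actual room condition.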
6.3.3. More Functions
Based on the current structure of Soundar, more functions can be added to the application (Figure 6-9).

Figure 6-9 Overall framework of the development process of Soundar including future functions.

The room can be set up with greater detail and accuracy relative to the real environment. A new surface generation algorithm should be used that can cover polygons of all shapes, both concave and convex. The geometry of the room could be changed by moving the vertices of the room surfaces; the surfaces could then be tilted, so the shape of the room would not be limited to a shoebox. Multiple materials could also be assigned to one surface by splitting it into multiple sub-surfaces; the splitting process can be realized using some of the existing modules such as Link two points. In this way, Soundar could also add openings, treated as surfaces with the material of air. More materials need to be added to the default library. Moreover, a function for users to add customized materials and browse sound files from their local folders would also be very useful.

Soundar should also allow users to add virtual contents like furniture and people into the room to make the sound environment more realistic. A new database needs to be built to save the data of these contents, including their names and absorption coefficients in different frequency bands. Besides the default options of sound files, users should be allowed to load sound files from their local folder. Different forms of sound sources also need to be provided; the default source forms could include point, hemisphere, 1/4 sphere, 1/8 sphere, and linear, each with its own directivity.

More diversified graphical simulation results can be provided by Soundar. A real-time sound wave graph and frequency band graph should be considered for display or export based on the current simulation. The wave graph will show the sound wave from several seconds ago until now, which can record and display the SPL changes in a straightforward way. The frequency band graph will show the SPL at each critical frequency band.

As for the UI and assistant functions, Soundar also needs to be upgraded to give users a better experience. The current version of Soundar cannot export the simulation result as a .wav file. Project operation would allow users to open and save their projects with all their settings. The projects would use the first three consecutive vertices on the floor as the anchors of the model; when opening a previous project, users would need to select the same corner points in the same order to load the model into reality. File operation would allow users to export the audio file and graphics from the simulation. If there are more questions about Soundar, the user can go to the "Help Center" under the main menu. If the question or problem still cannot be solved, there is also a portal for reporting the issue. These reports and feedback from users will be valuable information for further study. User login should allow users to log in to their account for more services.

6.3.4. Further Possibilities
The scope of Soundar is currently limited by many factors, such as the development time, the size of the development team, knowledge of coding, and funding. Once these problems are solved, Soundar may have more possibilities to provide accurate feedback for scientific use and to be used in more comprehensive situations, which can create higher value. The shape of the room may not be limited to a shoebox but can be more complex, even an organic shape with curves. The maximum size of the room may also be increased so that Soundar can be used to simulate larger spaces like concert halls and auditoriums, or even unenclosed spaces like stadiums.

A real-time ray tracing simulation can be very helpful for users to learn about the sound performance in a room and see the virtual sound rays bouncing in the real room. The number of traced rays can be set to a certain number, like 10, and users can change the amount in the settings menu. The transparency of the ray-tracing line can be related to the SPL, which provides better data visualization. A timeline slider can also be added to this simulation; by changing the time on the timeline, users can see how the sound rays behaved in the room at any moment. A minimal sketch of such a ray visualization is shown below.
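The sketch below shows one way such a ray visualization could be drawn in Unity: it reflects a handful of rays off the colliders of the room surfaces for a fixed number of bounces and draws each segment. The ray count, bounce limit, random direction sampling, bounce-based fading, and use of Debug.DrawLine are illustrative assumptions, not part of the current Soundar code.

using UnityEngine;

// Minimal sketch: trace a few sound "rays" from a source and draw their bounces
// off the room surfaces (which already carry MeshColliders in Soundar).
public class SoundRayVisualizer : MonoBehaviour
{
    public Transform source;      // virtual sound source
    public int rayCount = 10;     // user-adjustable number of rays (assumption)
    public int maxBounces = 5;    // bounce limit per ray (assumption)
    public float maxDistance = 20f;

    void Update()
    {
        for (int i = 0; i < rayCount; i++)
        {
            // Random directions keep the sketch short; a real implementation could
            // use deterministic sphere sampling so the rays do not change each frame.
            Vector3 direction = Random.onUnitSphere;
            Vector3 origin = source.position;

            for (int bounce = 0; bounce < maxBounces; bounce++)
            {
                if (Physics.Raycast(origin, direction, out RaycastHit hit, maxDistance))
                {
                    // Fade the line with each bounce as a stand-in for SPL-based transparency.
                    Color color = new Color(1f, 1f, 0f, 1f - bounce / (float)maxBounces);
                    Debug.DrawLine(origin, hit.point, color);

                    origin = hit.point + hit.normal * 0.001f;           // offset to avoid re-hitting the same surface
                    direction = Vector3.Reflect(direction, hit.normal); // specular reflection
                }
                else
                {
                    Debug.DrawLine(origin, origin + direction * maxDistance, Color.yellow);
                    break;
                }
            }
        }
    }
}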
Furthermore, Soundar may also move to other devices, like goggles, which have better environment detection and a consistent, known earphone quality. They also have more efficient processors, which can support complicated calculations and modeling. Goggles can also free the hands of users, so users can multitask while using Soundar.

6.4. Conclusion
Based on the design scope and the validation result, the current version of Soundar can only provide a close but not accurate simulation result. Although it is not enough for precise scientific tests, it is enough to give ordinary users a sense of how sound will perform in a room. For example, if the owner of a restaurant wants to change the decoration to decrease the reverberation in his or her restaurant, he or she can use Soundar to try different materials and get a feeling of what kind of material works. Speakers or lecturers can also use Soundar to know how loudly they should speak to be heard clearly at every spot in the room, even in the last row. If someone has just bought a new stereo, Soundar can help him or her decide where to put it in the bedroom to get the best sound effect.

Soundar brings a new solution for ordinary people to understand how the environment influences sound performance. By introducing AR technology to room acoustic simulation, Soundar can provide them with more direct and immediate experiences. It is also convenient since it uses cellphones and tablets as the media, which are already everywhere. The UI design and the directions make Soundar easy to use, even for the first time. After setting up the room and sound sources, users can directly get feedback in both numerical and auditory ways (Figure 6-10).

Figure 6-10 Soundar is convenient and easy to use.

It has to be admitted that Soundar has some inadequacies at the current stage. The unstable positioning and low accuracy of results limit the usage of Soundar. However, it is still a meaningful attempt in a new direction and can be used on many occasions to help people gain a basic understanding of a space and its room acoustical performance, as well as make decisions. There is a large space for development, which is worth devoting more research to. Soundar is a good step towards combining acoustics and augmented reality.

REFERENCES

Allen, Jont B. and David A. Berkley. 1979. "Image Method for Efficiently Simulating Small-room Acoustics." The Journal of the Acoustical Society of America 65 (4): 943-950.

Apple Inc. "ARKit 3 - Augmented Reality." Accessed Jan 15, 2020. https://developer.apple.com/augmented-reality/arkit/.

Arth, Clemens, Daniel Wagner, Manfred Klopschitz, Arnold Irschara, and Dieter Schmalstieg. 2009. "Wide Area Localization on Mobile Phones." IEEE.

Azuma, Ronald T. 1997. "A Survey of Augmented Reality." Presence: Teleoperators & Virtual Environments 6 (4): 355-385.

Bederson, Benjamin B. 1995. "Audio Augmented Reality: A Prototype Automated Tour Guide." ACM.

Boom, Daniel V. "Pokemon Go has Crossed 1 Billion in Downloads." Last modified July 31, accessed Jan 17, 2020. https://www.cnet.com/news/pokemon-go-has-crossed-1-billion-in-downloads/.

Bose Corporation. "Bose AR." Accessed Jan 13, 2020. https://www.bose.com/en_us/better_with_bose/augmented_reality.html.

Brungart, Douglas S. and William M. Rabinowitz. 1999. "Auditory Localization of Nearby Sources. Head-Related Transfer Functions." The Journal of the Acoustical Society of America 106 (3): 1465-1479.

Caudell, T. P. and D. W. Mizell. 1992. Augmented Reality: An Application of Heads-Up Display Technology to Manual Manufacturing Processes. Vol. ii. IEEE Comput. Soc. Press. doi:10.1109/HICSS.1992.183317.

CNN, By H.K. "Google Glass Users Fight Privacy Fears." Last modified Dec 12, accessed Jan 17, 2020. https://www.cnn.com/2013/12/10/tech/mobile/negative-google-glass-reactions/index.html.

Coelho, Enylton Machado, S. J. Julier, and B. MacIntyre. 2004. "OSGAR: A Scene Graph with Uncertain Transformations." IEEE.

Daniel, Eileen. 2007. "Noise and Hearing Loss: A Review." Journal of School Health 77 (5): 225-231. doi:10.1111/j.1746-1561.2007.00197.x.

Eberly, David. 2008. "Triangulation by Ear Clipping." Geometric Tools: 2002-2005.

Everest, F. A. and Ken C. Pohlmann. 2009. Master Handbook of Acoustics. 5th ed. New York: McGraw-Hill.

Eyring, Carl F. 1930.
"R eve rberation Time in "dead" Rooms." The Journal of the Acoustical Society of America 1 (2A): 217-241. Feiner, Steven, Blair MacIntyre, Tobias Hollerer, and Anthony Webster. 1997. "A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment." Personal Technologies 1 (4): 208-217. doi: 10.1007 /BF01682023. Georg Neumann GmbH. "Ku 100.", accessed Mar 23, 2020, https: // en-de. neumann.com /ku-100 . Google. "Fundamental Concepts I ARCore .", accessed Jan 15, 2020, https: // developers. go og le.com / ar/ discover / concepts . Google. "Glass."2020, https: // www.google.com /glass /start /. Hainich, Rolf R. and Oliver Bimber. 2011. Displ ay s: Fundamentals & Applications. Boca Raton, Florida] ;: CRC Press. Hong, Joo-Young. 2019. "Evaluation of Acoustic Environment using VR/AR with 3D-Audio Techniques." Cf/Ef?J� 2f2/ 2/�'# HC /12/ -E-§ �3 9 (1): 338-339. Ikeuchi, Katsushi, Yoichi Sato, Ko Nishino, and Imari Sato. 1999. "Photometric Modeling for Mixed Reality.". 14 4 Jain, Puneet, Justin Manweiler, and Romit Roy Choudhury. 2015. OverLay: Practical Mo bile Augmented Reality ACM. doi: 10. l 145/2742647.2742666. Kabat, Peter. 2002. "TSP Speech Database." Mc Gill University, Database Vers ion 1 (0): 9. Kanazawa, Yasushi and Hiroshi Kawakami. 2004. "Detection of Planar Regions with Uncalibrated Stereo using Distributions of Feature Point s."Citeseer, . Kato, Hirokazu and Mark Billinghurst. 1999. "Marker Tracking and Hmd Calibration for a Video-Based Augmented Reality Conferencing System."IEEE, . Kensek, Karen, Douglas Noble, Marc Schiler, Anish Tripathi, and Karen Kensek. 2000. Augmented Reality: An Application fo r Architecture. doi: 10.106 1/ 40513(279)38. Klein, Georg and David Murray. 2007. "Parallel Tracking and Mapping for Small AR Workspaces."IEEE Computer Society, . Krokstad, Asbj0rn, Staffan Strom, and Svein S0rsdal. 1968. "Calculating the Acoustical Room Response by the use of a Ray Tracing Technique." Jo urnal of Sound and Vibration 8 (1): 118-125. Lievendag, Nick. 2018. Updated 2018: RealityCapture Photogrammetry So ftware Review. 3D Scan Expert. https ://3 dscanex pert.com /realitycapture-photogram metry-software -rev iew /. Litho. "Litho. "2020, https:/ /www .litho.cc /. Magic Leap, Inc. "Spatial Computing for Enterprise I Magic Leap."2020, https: // www.magiclea p.com /en-us. Meng, Zihou, Fengj ie Zhao, and Mu He. 2006. "The just Noticeable Diff erence of Noise Length and Reverberation Perception. "I EEE, . Microsoft. "Microsoft HoloLens I Mixed Reality Technology for Business." 2020, https: // www.microsoft.com /en us /hololens . 145 Microsoft. "Spatial Mapping - Mixed Reality.", last modified Mar. 20, accessed Jan 20, 2020, https: // docs.microsoft.com /en-us /windows/mixed-reality /spatial-mapping. Milgram, Paul and Fumio Kishino. 1994. "A Taxonomy of Mixed Reality Visual Displays." IEICE Transactions on Inf ormation and Sys tems 77 (12): 1321-1329. Mohring, Mathias, Christian Lessig, and Oliver Birnber. 2004. "Video Se e-through AR on Consumer Cell Phones. "I EEE, . Muehleisen, Ralph T. 2018. "VSLM-The Virtual Sound Level Meter." The Jo urnal of the Acoustical Society of America 143 (3): 1840. Naletto, Aldo. 2011. GetOut putData and GetSpectrumData, what Re present the Values Returned? - Uni ty Answers. Unity Answers. https: / /answer s.unity.com /questions/ 157940 /getoutputdata-and-getspectrumdata-they represent-t.htm l. Navab, N. 2003. 
Navab, N. 2003. Industrial Augmented Reality (IAR): Challenges in Design and Commercialization of Killer Apps. IEEE. doi:10.1109/ISMAR.2003.1240682.

Occupational Safety and Health Administration. 2015. "OSHA Technical Manual, Section III: Chapter 5 - Noise Measurement." US Department of Labor, Occupational Safety and Health Administration.

Olympus America Inc. "ME30W Microphone Kit | Olympus Pro Dictation." Accessed Mar 23, 2020. https://www.olympusamericaprodictation.com/product/me30w-microphone-kit/.

Pätynen, Jukka, Ville Pulkki, and Tapio Lokki. 2008. "Anechoic Recording System for Symphony Orchestra." Acta Acustica United with Acustica 94 (6): 856-865.

Raichel, Daniel R. 2006. The Science and Applications of Acoustics. 2nd ed. New York: Springer Science+Business Media.

Rekimoto, Jun. 1998. "Matrix: A Realtime Object Identification and Registration Method for Augmented Reality." IEEE.

Rolland, Jannick P. and Henry Fuchs. 2000. "Optical Versus Video See-through Head-Mounted Displays in Medical Visualization." Presence: Teleoperators & Virtual Environments 9 (3): 287-309. doi:10.1162/105474600566808.

Sabine, Wallace Clement. 1900. "Reverberation." The American Architect: 4.

Sanchez, Sergio, Laura Martin, Miguel Gimeno-Gonzalez, Teresa Martin-Garcia, Fernando Almaraz-Menendez, and Camilo Ruiz. 2016. Augmented Reality Sandbox: A Platform for Educative Experiences. ACM. doi:10.1145/3012430.3012580.

Savioja, Lauri and U. P. Svensson. 2015. "Overview of Geometrical Room Acoustic Modeling Techniques." The Journal of the Acoustical Society of America 138 (2): 708-730. doi:10.1121/1.4926438.

Schmalstieg, Dieter and Tobias Hollerer. 2016. Augmented Reality: Principles and Practice. Boston: Addison-Wesley.

Sutherland, Ivan E. 1968. "A Head-Mounted Three Dimensional Display." ACM.

Tuliper, Adam. 2017. "Introduction to the HoloLens, Part 2: Spatial Mapping." Microsoft, January.

Valve Corporation. "Steam Audio Unity Plugin 2.0-Beta.17." Accessed Feb. 13, 2020. https://valvesoftware.github.io/steam-audio/doc/phonon_unity.html#steam-audio-unity-plugin-2.0-beta.17.

Vorländer, Michael, Dirk Schröder, Sönke Pelzer, and Frank Wefers. 2015. "Virtual Reality for Architectural Acoustics." Journal of Building Performance Simulation 15 (3): 15-25. doi:10.1080/19401493.2014.888594.

Zhou, Ji and Charles B. Owen. 2007. Calibration of Optical See-through Head Mounted Displays for Augmented Reality. ProQuest Dissertations Publishing.
14 7 APPENDIX APPENDIX A: MATERIALS USED IN SOUNDAR Floor Materials Name carpet wood empty Texture 10 0 % trans arent Absor tion Coefficient Low Mid High Fre uenc Fre uenc 0.14 0.6 0.65 0.11 0.07 0.06 0 0 0 Transmission Coefficient Scattering Low Mid High 0.05 0.05 0 Fre uenc Fre uenc 0.02 0.005 0.003 0.2 0.025 0.005 0 0 0 * The center frequencies of for low, mid, and high frequency are 400 Hz, 2.5 kHz, and 15 kHz (Valve Corporation 2017 ) Wall Materials Name carpet wood empty Texture 10 0 % trans arent Absor tion Coefficient Low Mid High Fre uenc Fre uenc 0.14 0.6 0.65 0.11 0.07 0.06 0 0 0 Transmission Coefficient Scattering Low Mid High 0.05 0.05 0 Fre uenc Fre uenc Fre uenc 0.02 0.005 0.003 0.2 0.025 0.005 0 0 0 * The center frequencies of for low, mid, and high frequency are 400 Hz, 2.5 kHz, and 15 kHz (Valve Corporation 2017 ) Ceilin Materials Name acoustic tiles concrete empty Texture 10 0 % transparen t Absor tion Coefficient Low Mid High Fre uenc Fre uenc 0.53 0.69 0.52 0.03 0.04 0.04 0 0 0 Transmission Coefficient Scattering Low Mid High 0.05 0.05 0 Fre uenc Fre uenc 0.056 0.056 0.004 0.015 0.002 0.001 0 0 0 * The center frequencies of for low, mid, and high frequency are 400 Hz, 2.5 kHz, and 15 kHz (Valve Corporation 2017 ) 148 APPENDIX B: CO MPLETE SCRIPT Calculation usin g System. Collections. Gene ric ; usin g Un ityEngine ; n amespace Soundar { public class Calculation : MonoBehav iour { public float MeshArea (Mesh mesh) { float area = 0; int triangl e_n = mesh. trian gles. Length / 3; for (int i = 0; i < triangle _n ; i++) { Vector3 A = n ew Vector3 () ; Vector3 B = n ew Vector3 () ; Vector3 C = n ew Vector3 () ; A = mesh. vertices [mesh. triangles [i * 3]]; B = mesh. vertices [mesh. triangles [i * 3 + 1] ] ; C = mesh. vertices Gnesh. triangles [i * 3 + 2]]; float a = (B. y - A. y) * (C. z - A. z) - (C. y - A. y) * (B. z - A. z) ; float b = (B. x - A. x) * (C. z - A. z) - (C. x - A. x) * (B. z - A. z) ; float c = (C. x - A. y) * (B. z - A. y) - (B. x - A. x) * (C. y - A. y) ; area + = 0. 5f * Mathf. Sqrt (Mathf. Pow (a, 2) + Mathf. Pow (b, 2) + Mathf. Pow (c, 2)); return area/2 : publ ic float EyringFormula (float volume, List<float> area, List<float> absorption) { float T60 = 0; float totalArea = 0· float totalAbsorption = 0; float averageAbsorption = 0; for (int i = 0; i < area. Count ; i++) { totalArea + = area[i] ; totalAbsorption + = area[i] * absorption [i] ; averageAbsorption = totalAbsorption / totalArea ; T60 = -0. 161f * volume / (totalArea * Mathf. Log(l - averageAbsorption)) ; return T60; 14 9 CloneObject using Syst em. Collections.Generic ; using Unity Engine; nam espace Soundar { public class CloneObject : MonoBehaviour { public GameOb ject cloneO bject (GameOb ject gameO bject) { GameOb ject newObject gameO bject. transform. rotation) ; return ne wObject ; Insta ntiate (gameOb ject, public List <GameO bject> cloneObjects(List<GameOb ject> gameO bjects) { List <GameOb ject> c = new List <GameOb j ect> () ; for (int i = O; i < gameO bjects.Count ; i++) { gameO bject. transform. position, GameOb ject ne wObject = Insta ntiate (gameO bjects[i] , gameOb jects[i] . transform. position, gameO bjects[i] . transform. rotation) ; c. Add (newObject) ; return c; CreatSurface using Syst em. Collections.Generic ; using Unity Engine; nam espace Soundar { public class CreateSurface : MonoBehaviour { public GameOb ject CreatSurface (L ist<GameOb ject> pointList, string direction) { var surface = new GameO bject ("Ne w Surface"); surface. 
AddCom ponent <MeshRendere r> () ; surface. AddCom ponent <MeshFi lter> () ; int pointCount = O; List< int> me shlnd ices = new List< int>(); List< Vector3) pointLocat ion = ne w List< Vector3>() ; Ill Generate Mesh. Mesh mesh = surface. GetCom ponent <MeshFi l ter> 0. mesh ; me sh. Clear O ; pointCount = pointList.Count ; 15 0 for (in t i = 0; i < poin tCount ; i++) pointLocat ion. Add (pointList [i] . trans form. position) ; for (int i = 0; i < poin tCount - 1; i++) meshln dices. Add (i) ; meshlndi ces. Add (i + 1) ; meshin dices. Add (pointCount - i - 1) ; meshin dices. Add (pointCount - 1 -1) ; meshlndi ces. Add (i + 1) ; meshln dices. Add(i) ; mesh. vertices = poin tLocation. ToArray () ; mesh. trian gles = meshln dices. ToArray () ; mesh. Optimize () ; mesh. RecalculateNormals () ; Vector2 [] uv h Vector2 [] uv v new Vector2 [mesh. vertices. Length] ; new Vector2 [mesh. vertices. Length] ; if (direction == "vertical") for (in t i = O; i < uv_h. Length ; i++) uv_v [i] = n ew Vector2 (mesh. vertices [i] . x, mesh. vertices [i] . y) ; mesh. uv = uv_v ; if (direction { "horizontal ") for (int i = 0; i < uv_h. Length ; i++) uv_h[i] = n ew Vector2 (mesh. vertices [i] . x, mesh. vertices [i] . z) ; mesh. uv = uv_h ; surface. AddComponent <MeshCollider> () ; surface. GetComponent<Mesh Collider> () . sharedMesh = mesh ; return surface ; 151 DataOperation using Syst em. Collections.Generic ; using Unity Engine; using SteamAud io ; nam espace Soundar { public class DataOperation : MonoBehaviour { public List< string[] > Csv2List(string fileName) { TextAs set csvData = new TextAs set() ; List< string[] > databa se = new List< string[]>() ; string[] mat erial = new string[] { } ; csvData = Res ources. Load<TextAs set>(fileName) ; mat erial = csvData. text. Split (' \n'); for (int i = 0; i < mat erial. Length - 1; i++) string[] para = mat erial [i] . Split C', '); ///para[] [OJ --> mat erial/object name ///para[] [l] --> low frequency absorption ///para[] [2] --> mid frequency absorption ///para[] [3] --> high frequency absorption ///para[] [4] --> scattering ///para[] [5] --> low frequency transmiss ion ///para[] [6] --> mid frequency transmiss ion ///para[] [7] --> high frequency transmi ssio n datab ase. Add (para) ; return databas e; public void As signA cousticPara meters (GameOb ject gameOb ject, string [] [] databa se, string name) { string sufix = " (Insta nce)" ; for (int i = O; i < data base.GetLength (O) ; i++) if (string. Com pare (databa se [i] [OJ , gameO bject. GetCompon ent<MeshRenderer> 0. mat erial. name . Rep lace (sufix, "")) == 0) { gameO bject. GetComp onent<SteamA udioMaterial> () . Value = float.P arse(databa se[i] [l]), float.P arse(databa se[i] [3]), float.P arse(databa se[i] [4]), float. Parse(databas e[i] [5]), float.P arse(databa se[i] [7])) ; break ; 15 2 new MaterialValue ( float . Parse (data base [i] [2]), float. Parse(databas e[i] [6]), DoNotDi stroy using Unity Engine; public class DoNot Destroy MonoBehaviour { void Start () { DontDestroyOn Load (this. gameOb ject) ; EditRoom Main using Unity Engine; using Unity Engine.EventSys tems ; using Unity Engine. 
UI ; using Soundar; public class EditRo om _ Main MonoBehaviour { public Text Direction; public Text DebugLog l; public Text DebugLog2 ; public Text DebugLog3 ; public Camera phoneCa me ra ; public Dropdo wn wallDropdown; public Dropdo wn floorDrop down; public Dropdo wn ceilingDropdown; public GameOb ject settingUI; public Touch touch ; public static Rayc astHit hit ; MaterialDropdown dropD own = new Mat erialDropdo wn() ; Settings settings = new Settings() ; bool showUI = false; void Start () Direction. text = •select a surface to edit.• ; wallDropdown. options.Cl ear() ; floorDropdo wn. options. Clear() ; ceilingDropdo wn. options. Clear() ; for (int i = 0; i < Start_ Main. wallMaterial. Length ; i++) { wallDropdown. options. Add (new Dropdo wn. OptionData () { text = Start_Ma in. wallMaterial [i] [OJ }) ; for (int i = 0; i < Start_ Main. floorMat erial. Leng th ; i++) 15 3 floorDropdo wn. options. Add (new Start_Ma in. floorMaterial [iJ [OJ }) ; } Dropdo wn. OptionData () for (int i = 0; i < Start_ Main. ceilingMat erial. Leng th ; i++) ceilingDropdo wn. options. Add (ne w Start_Ma in. ceilingMaterial [iJ [OJ }) ; Dropdown.OptionData () } foreach (GameOb ject a in Start_Ma in. soundSou rce) a. GetComp onent <A udioSource> () . Stop() ; settings. SetupSettings(settingUI) ; void Update 0 settings. UpdateSettings(settingUI) ; ///Set Dropdo wn Menus. Click surface to show ; click blank to hide. if (Input. touchCount > 0) { Touch touch = Input. GetTouch(O) ; Ray ray = phoneCa mer a. Screen Poin tToR ay(touch. position) ; if (! showUI) { if (touch. phase == TouchPhase.Began) { text text if (EventSyst em. current. IsPoin terOverGameO bject(touch. fingerld ) ) { return; } if (Phys ics. Ra ycast(ray, out hit) ) else { if (hit. transform. tag == "floor") { dropD own. showDropDown(touch, hit, floorDropdo wn) ; showUI = true ; if (hit. transform. tag == "wal l") dropD own. showDropDown(touch, hit, wallDropdown); showUI = true ; if (hit. transform. tag == "ceiling") { dropD own. showDropDown(touch, hit, ceilingDropdo wn) ; showUI = true ; if (touch. phase TouchPhase.Began) 15 4 if (EventSyst em. current. I sPointerOverGameOb j ect ( touch. finger Id ) ) { return; } wallDropdown. gameO bject. SetA ctive (false); floorDropdown. gameO bject. SetAct ive (false); ceilingDropdown.gameO bject. SetAct ive (false); showUI = false; EditS ound Main using Unity Engine; using Unity Engine. EventSys tems ; using Unity Engine. UI ; using Soundar; public class EditSo und_Main MonoBehaviour { public Text Direction; public Text DebugLog l; public Text DebugLog2 ; public Text DebugLog3 ; public GameOb ject optionMenu; public Dropdo wn fi leDrop down; public Camera phoneCa me ra ; public GameOb ject settingUI; public GameOb ject addBut ton; public GameOb ject deleteButton; public static Rayc astHit hit new Rayc astHit () ; public static bool startMove false; public static bool showOptionMenu fal se ; Settings settings new Settings() ; SoundO ption soundOpt ion = new SoundOpt ion() ; void Start 0 { foreach (GameOb ject a in Start_ Main. wall) { a. GetComp onent<MeshCollider> () . enabled = false ; Start_ Main. floor. GetCom ponent <MeshCollider> () . enabled false ; Start_Ma in. ceiling. GetComp onent <MeshCollider> () . enabled = false; foreach (GameOb ject a in Start_Ma in. soundSou rce) { a. transform. GetChild(O) . gameO bject. SetAct ive (false); a. GetComp onent <A udioSource> () . Stop() ; 155 fi leDropdown.options.Clear() ; for (int i = 0; i < Start_ Main. sound File. Length ; i++) fi leDropdown. options. 
Add (new Dropdown.OptionData () { text Start_Ma in. sound Fi le[i] [OJ }) ; settings. SetupSettings(settingUI) ; void Update 0 settings. UpdateSettings(settingUI) ; Debug L ogl. text = Start_Ma in. soundSour ce[O] . transform. position. ToString () ; ///Cl ick surface to show the option menu ; click blank to hide. if (Input. touchCount > 0 && Input. GetTouch(O) . phase == TouchPhase. Began) Touch touch = Input. GetTouch(O) ; Ray ray = phoneCa mer a. Screen Poin tToR ay(touch. position) ; if (!showOpt ionMenu) { else if (Event Sys tem.current. IsPoin terOverGameO bject(touch. fingerid ) ) { return; } if (Ph ysics. Ra ycast(ray, out hit) ) if (hit. transform. tag == "sound source") { optionMenu.SetAct ive (true); optionMenu. transform. position = touch. position; showOpt ionMenu = true ; add Button. SetAct ive (false); deleteButton. SetAct ive (true); if (Event Sys tem. current. I sPointerOverGameOb j ect ( touch. finger Id ) ) { return; } optionMenu . gameO bject.SetA ctive (false); showOptionMenu = false; soundO ption. optionMenuW hite (optionMenu) ; add Button. SetAct ive (true); deleteButton. SetAct ive (false); LinkTwoPo ints using Unity Engine; public class Li nkTwoPo ints MonoBehaviour 15 6 public GameOb ject linePr efab; float radY = 0; float radX = 0; float radZ = 0; float angleX ; float angleY ; float angleZ ; public GameOb ject li nkPo ints (GameOb ject pointl, GameOb ject point2) { //Set line between two objects. float distance = Vector3. Distance (point 1. transform.position, point2. transform.position) / 2; var newline = Insta ntiate(linePref ab, point2. transform. position, point2. transform. rotation) ; newline. transform. localScale = new Vector3 (1, 1, 1) ; newline. transform. GetChild(0). transform. localPosition = new Vector3(dista nce, 0, 0) ; newline. transform. GetChild(0). transform. localScale = new Vector3(0. 0lf, distance, 0. 0lf) ; float dX point 1. transform. position. x point2. transform. position. x; float dY point2. transform. position. y point 1. transform. position. y; float dZ point2. transform. position. z pointl. transform. position. z; if (d X == 0) el se radX = Mathf. Atan2 (dZ, dY) ; angleY = 0; angleZ = -90; angleX = radX * Mathf. Rad2 Deg; radY = Mathf. Atan2 (dZ, pointl. transform. position. x - point2. transform. position. x) ; radZ = Mathf. Atan2 (d Y, point2.transform. position. x - pointl. transform. position. x) ; angleX = 0; angleY = radY * Mathf. Rad2 Deg; angleZ = radZ * Mathf. Rad2 Deg; if (angleZ == 180) { angleZ = 0; else if (angleZ { angleZ = 0· angleZ = 0; -180) newline. transform. localR otation = Quaternion. Euler (angleX, angleY, angleZ) ; return newline ; 15 7 MaterialDropdown using Unity Engine; using Unity Engine. UI ; nam espace Soundar { public class Mat erialDropdown : MonoBehaviour { public void showDropDown(Touch touch, Ray castHit hit, Dropdown dropdo wn) { string currentMaterial hit. transform. GetCom ponent <MeshRenderer> 0. mat erial. name . Rep lace(" (Instance)", "") ; drop down. value dropdo wn. options.Findin dex((i) i. text. Equals (currentMaterial) ; }) ; drop down. gameO bject. SetA ctive (true); drop down. transform. position = touch. position; public Dropdo wn wallDropdown; public Dropdo wn floorDrop down; public Dropdo wn ceilingDropdown; public void changeMaterial () { Material mat erial ; if (EditRo om_Main. hit. transform. tag == "floor") { = > mat erial = Reso urces. Load ("Materials/Ro omMat erial/DefaltList/floor/" return + floorDropdown.options[floorDropdown. 
value].text, ty peof(Material) ) as Mat erial ; Edi tR oom_Main. hit. transform. GetCom ponent <MeshRenderer> 0. mat erial = mat erial ; Mat erial ; if (EditR oom_M ain. hit. transform. tag == "wall") { mat erial = Res ources. Load ("Materials/Ro omMat erial/DefaltList/ wall/" + wa l lDropdown. options[wa llDropdo wn. value] . text, typeof(Material) ) as Mat erial ; Edi tR oom_Main. hit. transform. GetCom ponent <MeshRenderer> 0. mat erial = mat erial ; if (EditR oom_M ain. hit. transform. tag == "ceiling") { mat erial = Res ources. Load ("Materials/Ro omMat erial/DefaltList/ceil ing/" + ceilingDropdo wn. options[ceilingDropdo wn. value].text, ty peof (Material) ) as Edi tR oom_Main. hit. transform. GetCom ponent <MeshRenderer> 0. mat erial = mat erial ; 158 MenuButtons using Unity Engine; using Unity Engine. SceneManagement ; using Unity Engine. UI ; using Soundar; using Syst em. IO; public class MenuButtons : MonoBehaviour { public GameOb ject menu ; public GameOb ject setting; public GameOb ject empt y; public GameOb ject about; public GameOb ject editR oom ; public GameOb ject edit Sound ; public Text version; public GameOb ject popUp ; public void ShowMenu() { menu. SetA ctive (true); em pty.SetAct ive (true); public void newPro ject () { foreach (GameOb ject a in Object. FindO bjectsOfType<GameO bject> 0) Destroy(a. gameO bject) ; Scene Manager.L oad Scene(l) ; public void showSettting() { menu. SetA ctive (false); setting. SetAct ive (true); em pty.SetAct ive (true); public void showA bout() { menu. SetA ctive (false); about. SetAct ive (true); em pty.SetAct ive (true); version. text = Ap plication. version; public void quit () { App lication. Quit () ; public void clear () 15 9 menu. SetA ctive (false); setting. SetAct ive (false); about. SetAct ive (false); editR oom.SetAct ive (false); edit Sound.SetAct ive (false); em pty.SetAct ive (false); popUp. SetAct ive (false); RunS im ulation_Ma in. isR ecording = true ; MoveSound Main using Syst em. Collections; using Unity Engine; using Unity Engine.EventSys tems ; using Unity Engine. UI ; using Soundar; public class MoveSound _ Main MonoBehaviour { public Text Direction; public Text DebugLog l; public Text DebugLog2 ; public Text DebugLog3 ; public Camera phoneCa me ra ; public GameOb ject axis ; public GameOb ject settingUI; GameOb ject sounsSo urce; GameOb ject floor; float x; float y; float z; float distance; float scale ; float hight ; Vector3 pose new Vector3() ; string tag ; Settings settings new Settings() ; void Start 0 { sounsSource = EditSound_Main. hit. transform. gameO bject ; axis. transform. position = EditSo und_Main. hit. transform. position; distance = 0; tag = "none" ; floor = GameOb ject. Fi ndGameO bjectWithTag ("floor"); hight = sounsSo urce. transform. position. y - floor. GetCom ponent <MeshFi lter> () . mesh. boun ds.center. y; 16 0 settings. SetupSettings(settingUI) ; void Update 0 settings. UpdateSettings(settingUI) ; scale Mathf. Pow(Vector3. Distance(sounsSource. transform. position, phoneCamera. transform. position) /10 , 1. lf) +0. 0lf; axis. transform. position = sounsSou rce. transform. position; axis. transform. localScale = new Vector3(scale, scale, scale ) ; hight = sounsSo urce. transform. posit ion. y - floor. GetCom ponent <MeshFi l ter > () . mesh. boun ds. center. y; x sounsSo urce. transform. position. x; y sounsSo urce. transform. position. y; z = sounsSo urce. transform. position. z; if (Input. touchCount > 0) { Touch touch = Input. GetTouch(0) ; Ray ray = phoneCa mer a. 
Screen Poin tToR ay(touch. position) ; Rayc astHit hitA xis ; float speed Mathf. Pow (Vector3. Distance (sounsSource. transform. position, phoneCa mera. transform. posit ion) /10, 1. lf) + l; if (Event Syst em.current. IsPoin terOverGameO bject (touch. fingerid ) ) { return; } if (Ph ysics. Ra ycast(ray, out hitA xis) ) { if (touch. phase == TouchPhase.Began) { else pose = touch. position; tag = hitA xis. transform. tag ; float directionX = touch. position. x - pose. x; float directionY = touch. position. y - pose. y; float cameraX phone Cam era. transform. position. x axis. transform. position. x; float cameraZ = phone Cam era. transform. position. z - axis. transform. position. z; distance = Vector3. Distance(touch. position, pose) / 2000 ; if (tag == "x axis") { if (dire ctionX * cameraZ > 0) { sounsSou rce. transform. position else new Vector3(x - dist ance, y, z) ; sounsSou rce. transform. position new Vector3(x + dis tance, y, z) ; if (tag y ax i s") 161 if (dire ctionY > 0) { else sounsSou rce. transform. position = new Vector3(x, y + distance, z) ; sounsSou rce. transform. position new Vector3(x, y - dis tance, z) ; if (tag == "z axis") { Direction. text Debug L ogl. text Debug Log2. text if (direct ionX * cameraX > 0) else sounsSou rce. transform. position new Vector3(x, y, z +distance) ; sounsSou rce. transform. position new Vector3(x, y, z - distance) ; "Height : " + (Mathf. Round (hight * 100) /100). ToString () ; sounsSo urce. transform. position. ToString () ; EditSound _Ma in. hit. transform. position. ToString() ; PlaceOb jects _InAir using Syst em. Collections; using Syst em. Collections.Generic ; using Unity Engine; using Unity Engine.EventSys tems ; using Unity Engine. UI ; public class Place0 bjects_ InA ir : MonoBehaviour { public Game0b ject placed0 bject ; public Camera phoneCa me ra ; public List <Game0 bject> objectList; privat e Game0b ject ne w0bject ; privat e void Update () { Touch touch; if (Input. touchCount < 1 I I (touch Input. GetTouch(0)). phase ! = TouchPhase.Began) { return; 16 2 if (Input. touchCount > 0 && Input. touches[O] . phase == TouchPhase.Began) { if (Event Syst em. current. IsPoin terOverGameO bject (touch. fingerid ) ) { return; newObject Insta ntiate (placedO bject, phoneCamera. transform. position, phoneCamera. transform. rotation) ; objectList. Add (newObject) ; PlaceOb jects _ OnARPlane using Syst em. Collections; using Syst em. Collections.Generic ; using Unity Engine; using Unity Engine.XR.AR Foundation; using Unity Engine. EventSys tems ; using Unity Engine. UI ; public class PlaceObjects _ OnARP lane MonoBehaviour { public GameOb ject placedO bject ; public List <GameO bject> objectList; private ARR aycas tManager arRay castManager ; privat e ARP laneManager arPlaneManag er ; void Start 0 { arR aycas tManager = GetComp onent <ARRay castManager> 0; void Upd ate 0 if (Input. touchCount > 0 && Input. GetTouch(O) . phase { Touch touch = Input. GetTouch(O) ; TouchPhase. Began) if (Event Syst em. current. I sPointerOverGame Obj ect ( touch. finger Id ) ) { return; } Vector2 touchPosition = touch. position; List <ARRa ycast Hit) hits = new List <ARR aycastHit> () ; if (arRay castManager. Ra ycast(touchPosition, hits, Unity Engine. XR.ARSub syste ms. TrackableType. Pla nes)) { GameOb ject newObject ; newObject = Insta ntiate (placed Object, hits [O] . pose.position, hits[O] . pose.rotation) ; objectList. Add (newObject) ; 16 3 RunSimulation Main using Syst em. Collections.Generic ; using Unity Engine; using Unity Engine. 
UI ; using SteamAud io ; using Soundar; using Vector3 = Unity Engine. Vector3 ; public class RunS im ulation_ Main : MonoBehaviour { public Text Direction; public Text DebugLog l; public Text DebugLog2 ; public Text DebugLog3 ; public Camera phoneCa me ra ; public Aud ioSource background ; public Text SPL_t ext ; public Im age SPL_arr ow ; public Text T60_t ext ; public GameOb ject listener; public GameOb ject settingUI; public SteamA udioManager SteamA udioManager; bool getBackg round ; DataOperation setSurfacePara = new DataOperation() ; Calculation calculation = new Calculation() ; Settings settings = new Settings() ; List< float> backgroundSPLs = new List< float>(); void Start 0 { SetR oom_Main. simu lated = true ; settings. SetupSettings(settingUI) ; getBackground = false; time = O; InvokeRepeat ing("recording" , 3, 0.023f ) ; SetR oom_Main. isFirst = false; SteamA udioManager. ExportScene(false); Ill Backg round sound SPL if (! getBackground) backg round. loop = true; backg round.clip = Microphone.Start (null, true, 1, 44100); backg round.Pla y() ; Directio� text = "Detecting environment sound " ; Invoke(" stopRecord Background" , 3. Of) ; 16 4 void Update () { settings. UpdateSettings(settingUI) ; foreach (GameOb ject a in Start_Ma in. soundSou rce) { GameOb ject SPL Display = a. transform. GetChild(0). gameO bject ; SPL Display. transform. Look At(phoneCamera. transform); SPL Display. transform. Ro tate (0, 180, 0) ; float scale Mathf. Pow(Vector3. Distance(phoneCa mer a. transform. position, a. transform. position) , 1. lf) + 0. 5f ; t. html SPL Display. transform. localScale = new Vector3(scale , scale, scale ) ; if (!a. GetCom ponent<A udioSource> (). isPlaying) SPL Display. transform. GetChild(l) . gameO bject. SetAct ive (false); SPL Display.transform. GetChild(2) . gameO bject. SetAct ive (true); else SPL Display.transform. GetChild(l) . gameO bject. SetAct ive (true); SPL Display. transform. GetChild(2) . gameO bject. SetAct ive (false); Ill Ca lculate SPL value Ill This part refers the answer from @aldona letto Ill https :llansw ers. unity.com l questionsl 157940lgetoutputdata-and-g etspectrum data -they-r epresent- int sam pleSize = 1024 ; float [] listenerSa mp le_l float [] listenerSa mp le_r float listenerSa mp le = 0 · float listenerSPL = 0; new float [sam pleSize] ; new float [sam pleSize] ; float listene rRMS = 0; Aud ioList ener.GetOutputData(listenerSa mp le_l, 0) ; Aud ioList ener. GetOutputData (1 istenerSa mp le_r, 1) ; for (int i = 0; i < sam pleSize; i++) listenerSa mp le + = (listenerSa mp le_l [i] + listenerSa mp le_r [i] ) * (listenerSa mp le_l [i] + listenerSa mp le_r [i] ) ; } listene rRMS = Mathf. Sqrt (listenerSa mp le I sam pleSize) ; listenerSPL = 20 * Mathf. Log l0(listenerRMS I 0. lf) + 63. 09f; if (listenerSPL < 0) { listenerSPL = 0; SPL_t ext. text = Mathf. Round (listenerSPL) . ToString () + " dB " ; SPL_ar row. transform. rotation = Quaternion. Euler (0, 0, -2 * listenerSPL) ; 16 5 /// Calculate T60 value List<float> area = n ew List<float> 0; List<float> absorption = n ew List<float>(); float volume = 0; float T60 = 0; area. Add (calculation. MeshArea (Start_Main. floor. GetCompone nt<MeshFilter> () . mesh) ) ; absorption. Add (Start_Main. floor. GetCompone nt<Stea mAudioMaterial> () . Value. LowFreqAbsorpt ion) ; area. Add (calculation. MeshArea (Start_Main. ceiling. GetCompone nt<MeshFilter> () . mesh) ) ; absorption. Add (Start_Main. cei ling. GetComponent <SteamAudioMaterial> () . Value. 
LowFreqAbsorpt ion) ; foreach (GameObject a in Start_Main. wall) { area. Add (calculation. MeshArea (a. GetCompone nt<MeshFil ter> 0. mesh) ) ; absorption. Add (a. GetComponent <SteamAudioMaterial> (). Value. LowFreqAbsorption) ; volume = area [0] * SetRoom_Main. elevation ; T60 = calculation. EyringFormula (volume, area, absorption) ; T60 _text. text = (Mathf. Round (T60 * 100) / 100). ToString () ; void stopRecordBackground () Microphon e. End (nu ll); getBackground = true ; int sampleSize = 16384 ; float [] listenerSa mple_l float [] listenerSa mp le_r float listenerSa mple = 0 · float listenerSPL = 0; n ew float [sampleSize] ; n ew float [sampleSize] ; float listenerRMS = 0; AudioListener. GetOutputData(listenerSa mple_l, 0) ; AudioListener. GetOutputData (1 istenerSa mple_r, 1) ; for (in t i = 0; i < sampleSize ; i++) listenerSa mple + = (listenerSa mple_l [i] + listenerSa mple_r[i] ) * (listenerSa mple_l [i] + listenerSa mple_r [i] ) ; } listenerRMS = Mathf. Sqrt (listenerSa mple / sampleSize) ; listenerSPL = 20 * Mathf. Logl0(listenerRMS / 0. lf) + 63. 09f; if (listenerSPL < 0) { listenerSPL = 0; Direction. text = "Environment : " + Mathf. Round (listenerSPL) . ToStrin g O + " dB" ; foreach (GameObject a in Start_Main. soundSource) { a. trans form. GetChild (0). gameObject. SetActive (true); a. GetComponent <AudioSource> () . enab led = true ; if (! SetRoom_Main. muteOn) { 16 6 if (a. transform. GetChild (1) . GetChild (0). GetComp onent<Text> 0. text "1") { a. GetComp onent <A udioSource> 0. Pla y O ; else a. GetComp onent <A udioSource> () . Stop() ; el se a. GetComp onent<A udioSource> () . Stop() ; SetRoom Main using Syst em. Collections.Generic ; using Unity Engine; using Unity Engine. XR.AR Foundation; using Unity Engine. UI ; using Soundar; using SteamAud io ; using Material = Unity Engine. Material ; using Vector3 = Unity Engine. Vector3 ; public class SetR oom _ Main : MonoBehaviour { public Text Direction; public Text DebugLog l; public Text DebugLog2 ; public Text DebugLog3 ; public Button finishButton; public Button edi tButton; public Camera phoneCa me ra ; public Game0b ject settingUI; public SteamA udioManager SteamA udioManager; public public public public public public public public static static static static static static static static bool scale0n = true ; bool frequenc yGraph0n = false; bool waveGraph0n = false ; bool mu te0n; bool isFirst ; float elevation; bool setSound ; bool simu lated false ; privat e Pla ne infinitePla ne; privat e Game0b ject floorSurface; privat e List <Game0b ject> floorPo intList = new List <Game0b ject>O; privat e List <Game0b ject> floorL ineList = new List <Game0b ject>(); 16 7 privat e List <Game0b ject> wallLine List = ne w List <Game 0bject> () ; privat e List <Game0b ject> wallSurfaceList = new List <Game0b ject> O ; privat e Game0b ject ceilingSurface; privat e List <Game0b ject> ceilingPointList = new List <Game0b j ect> () ; privat e List <Game0b ject> ceil ingLine List = new List <Game0b ject> O ; privat e int planeCount; privat e int pointl ; privat e int pointCount ; privat e bool getR oom ; privat e bool getFloor; privat e bool get Ceiling; privat e bool get Wall; Vector3 ceilingPosition; CreateSurface CreateSurface = new CreateSurface() ; Clone0 bject clone = new Clone0 bject () ; Settings settings = new Settings() ; privat e ARP laneManager arPlaneManag er ; void Start 0 { pointl = 0; arPlaneManager = GetComp onent <ARP laneManage r> () ; Directio� text = "Searching for floor plane." 
; getFloor = false; getCeiling = false; getWal l = false; getR oom = false; setSo und = false; elevat ion = 0; isFirst = true ; void Upd ate 0 settings. UpdateSettings(settingUI) ; planeCount = arPlaneManager. trackables. count; if ( ! getRo om) { /// Keep all element to next scene. if (getWall) { foreach (Game0b ject a in wallLine List) { a. AddCom pone nt<DoN otDestroy>() ; } foreach (Game0b ject a in wallSurfaceList) { a. AddCom ponent <DoN otDestroy) 0; foreach (Game0b ject a in floorL ineList) { a. AddCom ponent <DoN otDestro y>() ; } foreach (Game0b ject a in floorPo intList) { a. AddCom ponent <DoN otDestroy>() ; } floorSurface. AddCom ponent <DoN otDestroy > 0 ; foreach (Game0b ject a in ceilingLine List) { a. AddCom ponent <DoN otDestro y>() ; } foreach (Game0b ject a in ceilingPointList) { a. AddCom ponent <DoN otDestroy>() ; } ceilingSurface. AddCom ponent <DoN otDestro y) () ; finishButton. game0 bject.SetA ctive (true); 16 8 edi tButton. game0 bject.SetA ctive (true); if (getFloor) { if (!getCeiling) floorSurface = CreateSurface. CreatSurface(floorPo intList, "horizontal"); Material floorMat erial Res ources. Load ("Materials/Room Mat erial/DefaltList /floor/wood " , ty peof(Material) ) as Mat erial ; floorSurface. GetComp onent <MeshRendere r> () . mat erial = floorMaterial ; floorSurface. tag = "floor" ; ceilingSurface = clone. clone0 bject(floorSurface) ; Material ceilingMat erial Res ources. Load ("Materials/Room Mat erial/DefaltList/ ceiling/acoustic tiles", typeof(Material) ) as Mat erial ; ceilingSurface. GetComp onent <MeshRendere r> () . mat erial = ceilingMaterial ; ceilingSurface. tag = "ceiling" ; ceilingLin eList. Clear() ; ceilingPointList.Clear() ; ceilingPointList = clone. clone0 bjects (floorPo intLi st) ; ceilingLin eList = clone.clone0 bjects (floorL ineLi st) ; getCeiling = true ; if (!get Wall) { ///Set up room height Mesh floorMesh = floorSurface. GetCom ponentinChild ren<MeshFilter> () . me sh; float dis tanceY = phoneC amer a. transform. position. y - floorMesh. bounds .center. y; float dista nceXZ = Mathf. Sqrt (Mathf. Pow(Vector3. Distance( phoneC ame ra. transform. position, floorMesh. bounds. center) , 2) Mathf. Pow(dista nceY, 2)); elevat ion = dis tanceY - dista nceXZ * Mathf. Tan (phoneCamera. transform. rotation. x * Mathf. PI) : if (elevation < 0) { elevation = 0; Direction. text = "Ceiling is lower than floo� " ; if (phone Came ra. transform. rotation. x > = 0. 5) { elevation = 0; Direction. text = "Ceiling is lower than floor. " ; else if (phoneCamera. transform. rotation. x < = -0. 5) { el se elevation = 0; Direction. text "Elevation is infinity." ; ceilingPosition = new Vector3(floorSurface. transform. position. x, floorSurface. transform. position. y + elevation, 16 9 elevation, elevation, ceil in gPoint List [i] ) ; floorSurface. trans form. position. z) ; ceilin gSurface. trans form. position = ceil ingPosition; foreach (GameObject line in wallLine List) { GameObject. Destroy (line) ; wallLineList. Clear () ; for (int i = O; i < floorPoint List. Count ; i++) Vector3 poin tPosition = n ew Vector3 (floorPointList [i] . trans form. position. x, floorPoint List [i] . trans form. position. y + floorPoint List [i] . trans form. position. z) ; cei lin gPoint List [i] . trans form. position = poin tPosition; Vector3 linePo si tion = new Vector3 (floorLineList [i] . trans form. position. x, floorLineLi st [i] . trans form. position. y + floorLineLi st [i] . trans form. position. z) ; cei lingLin eList [i] . trans form. 
position = linePosition; for (in t i = O; i < floorPoint List. Count ; i++) GameObject wallLine = n ew GameObject () ; wallLine GetComponent <LinkTwoPoin ts> (). Lin kPoints (floorPoint List [i] , wallLineList. Add (wallLin e) ; Direction. text = "Elevation: " + (MathE Round (elevation * 100)/100). ToString () + "\n\r Tap screen to conf irm. " ; if (Input. touchCount > 0 && Input. GetTouch (O). phase == TouchPhase. Began) { Direction. text = "Room set ! \n\rClick check to next step. " ; arPlaneMana ger. enab led = false ; get Wall = true ; List<GameObject> wallPoin tList = n ew List<GameObject>(); GameObject wallSurface ; int count = floorPoint List. Count ; for (int i = 0; i < count - 1; i++) wallPoin tList. Clear O ; wallPoin tList. Add (floorPointLi st [i]) ; wallPoin tList. Add (floorPoint List [i + l]) ; wallPointList. Add (ceilingPointList [i + l]) ; wallPointList. Add (ceilingPointList [i]) ; wallSurface = CreateSurface. CreatSurface (wal lPoin tList, "vertical") ; wallSurface. tag = "wall" ; wallSurfaceList. Add (wallSurface) ; wallPointList. Clear () ; wallPoin tList. Add (floorPoint List [count - l]) ; wallPoin tList. Add (floorPointLi st [O]) ; wallPointList. Add (ceilin gPointLi st [O]) ; 17 0 wallPoin tList. Add (ceilingPointList[count - l]) ; wallSurface = CreateSurface. CreatSurface (wal lPointList, wallSurface. tag = "wall" ; wallSurfaceList.Add (wallSurface) ; Mat erial wallMaterial Res ources. Load C'Materials/Ro omMat erial/Defal tList/ wall/ concrete", typeof (Material) ) foreach (Game0b ject a in a. GetComp onent <MeshRendere r> () . mat erial = wallMat erial ; } else } ///Get point list floorPoin tList = GetComp onent<Place0 bjects_0nAR Plane>() . objectList ; pointCount = floorPoin tList.Count ; ///Li nk vertex if (pointCount == 0 && planeCount > 0) { Direction. text = "Please set floor corners . " ; else if (pointCount > 1 && pointCount > pointl + 1) Direction. text = "Please set floor corners . " ; "vertical") ; as Material ; wallSurfaceList) float back Distance Vector3. Distance(floorPointList[pointCount 1] . transform. position, floorPoin tList [0]. transform. position) ; if (back Distance < 0. 1) { if (pointCount > 3) { GetCom ponent <Place0 bjects_0nAR Pla ne>() . enabled false ; point l = 0; Destroy(floorPointList[pointCount - 1] ) ; floorPointL ist. RemoveAt (pointCount - 1) ; pointCount = pointCount - l; Game0b ject newline ; newline = GetCom ponent <Li nkTwoPo ints> () . Li nkPo ints (floorPointList [pointl], floorPoin tList[pointCount - l]) ; else floorL ineList.Add (new Lin e) ; Direction. text = "Set room hight. " ; getFloor = true ; I I /Hid e ARP lane. foreach (var plane in arPlaneManager. trackables) { plane. game0 bject. SetAct ive (false); 171 else Direction. text = "Poin ts are fewer than 3. \n\r Can't get surface. " ; back Distance = 1; Destroy(floorPointList[pointCount - l] ) ; floorPointL ist. RemoveAt (pointCount - 1) ; pointCount = pointCount - 1; GameOb ject newline ; newline GetCom ponent <Li nkTwoPo ints> 0. Lin kPo ints (floorPo intList [pointl], floorPoin tList[pointCount - l]) ; } pointl = pointCount - l; floorL ineList.Add (new Lin e) ; SetSound Main using Syst em. Collections.Generic ; using Unity Engine; using Unity Engine. UI ; using Unity Engine. 
Au dio; using Soundar; public class SetSound _Ma in MonoBehaviour { public public public public public Text Text Text Text Text Direction; Count ; DebugLog l: DebugLog2 ; DebugLog3 ; public Button edi tButton; public GameOb ject settingUI; public Aud ioMixerGroup mixer ; public static List< float> soundSou rceSPL = new List< float>(); privat e List <GameOb ject> soundSourc eL ist new List <GameOb ject>(); Settings settings = new Settings() ; void Start 0 Direction. text = "Place sound source. " ; Count. text = "Count: " + GameO bject. Fi ndGameO bjectsWi th Tag (" sound source"). Length. ToString O ; SetR oom_Main. setSound = true ; settings. SetupSettings(settingUI) ; 17 2 void Update 0 { if (GameOb ject. Fi ndGameO bjectsWi th Tag (" sound source"). Length > 0) { editButton. gameO bject.SetA ctive (true); settings. UpdateSettings(settingUI) ; soundSourc eL ist = GetCom ponent <PlaceObjects_InAir>() . objectLi st ; int soundS ource_count = soundSou rceList.Count ; Count. text = "Count: " + GameO bject. Fi ndGameO bjectsWi th Tag (" sound source"). Length. ToString () ; GameOb ject newSoundS ource; newSoundSou rce = soundS ourceList[soundSourc e_c ount - 1] ; newSoundSou rce. GetComp onent <Aud ioSource>() . enabled false; newSoundSou rce. tag = "sound source" ; newSoundSou rce. AddCom ponent <DoNot Destr oy>() ; newSoundSou rce. GetComp onent <Aud ioMixer> () . outputA udio Mix erGroup soundS ource_c ount. ToString () ) ; } public Aud ioMixerGroup duplicateMixer (A udio Mix erGroup mi xer, string n) { Settings Au dio Mix erGroup newMixer ; newMixer = Insta ntiate (mixe r) ; newMixer. name = "Sound Source" + n; return newMixer ; using Unity Engine; using Unity Engine. UI ; nam espace Soundar { public class Settings : MonoBehaviour { public GameOb ject scale ; public GameOb ject freqency Graph ; public GameOb ject waveGraph ; public Toggle scaleToggle; public Toggle frequenc yGraph Toggle; public Toggle waveGraphToggle ; public Toggle mu teToggle; public void Visibility_Scale () { if (scale Toggle. GetCom ponent <Toggle>() . isOn) { scale.SetAct ive (true); 17 3 duplicateMixer (mixer, else scale.SetAct ive (false); public void Visibility _Fr equnc yGra ph () { if (frequenc yGraphToggle. GetComp onent<Toggle>() . isOn) { freqenc yGra ph. SetA ctive (true) ; else freqenc yGra ph. SetAct ive (false); public void Visibility _W aveGraph () { if (waveGraphToggle. GetComp onent <Toggle> 0. isOn) { waveGraph. SetAct ive (true); else waveGraph. SetAct ive (false); public void MuteAll() { if (m uteToggle. GetComp onent<Toggle>() . isOn) { foreach (GameOb ject a in Start_Ma in. soundS ource) { a. GetComp onent <A udioSource> 0. Stop() ; el se foreach (GameOb ject a in Start_Ma in. soundS ource) a. GetComp onent <A udioSource> 0. Pla y() ; publ ic void SetupSettings(GameOb ject settingUI) { settingUI. transform.GetChild(O) . GetCom ponent<Tog gle>() . isOn = SetR oom_Main. scaleOn; settingUI. transform.GetChild(l ) . GetCom ponent<Tog gle>() . isOn = SetR oom_Main. frequenc yGraphOn; settingUI. transform. GetChild (2) . GetCom pone nt<Toggle> 0. isOn = SetR oom_Main. waveGraphOn; settingUI. transform.GetChild(3) . GetCom ponent<Tog gle>() . isOn = SetR oom_M ain. mu teOn; 17 4 public void UpdateSettings(Game0b ject settin gUI) { SetR oom_Main. scale0n = settingUI. transform. GetChild(0). GetCom ponent <Toggle>() . is0n; SetR oom_Main. frequenc yGraph0n = settin gUI. transform. GetChild (1) . GetComp onent <Toggle> () . is0n; SetR oom_Main. waveGraph0n = settingUI. transform. GetChild(2) . 
GetCom ponent<Tog gle>() . is0n; SetR oom_M ain. mut e0 n = settin gUI. transform. GetChild(3) . GetComp onent<Toggle>() . is0n; S ound Control using Syst em. Collections; using Syst em. Collections.Generic ; using Unity Engine; using Unity Engine. UI ; using Unity Engine. SceneManagement ; public class SoundCont rol : MonoBehaviour { public InputField SPL inputFiled; public Toggle mu teToggle; public Dropdo wn fi leDrop down; public Game0b ject optionMenu; public Game0b ject addBut ton; public Game0b ject deleteButton; public void SPL input () { float newSPL = 0; newSPL = float.P arse(SP Lin putFiled. text. ToString() ) - 80 ; Edi tSound_Main. hit. transform. GetComp onent <A udioSource> () . outputAud ioMixerGroup. audioMixer. SetFloat (" volu me_m ixer", newSPL); Edi tSound_Main. hit. transform. GetChild (0). GetChild (1) . GetCom ponent <Text > () . text SP Lin putFiled. text + " dB"; } public void changeSound File () { Au dio Clip clip ; clip Res ources.Load ("Sound/" typeof(A udio Cl ip) ) as Au dio Clip ; + fileDropdo wn. options[fileDropdo wn. value] . text, Edi tSound_Main. hit. transform. GetComp onent <Aud ioSource> () . clip clip ; if (clip. name == "Imp ulse") { else EditSound _M ain. hit. transform. GetComp onent <A udioSource> (). loop false; Edi tSound_Main. hit. transform. GetComp onent <Aud ioSource> () . loop true ; 17 5 public void mu te 0 { if (m uteToggle. isOn) { else EditSound _M ain. hit. transform. GetChild(l) . GetChild(O) . GetCom pone nt<Text>(). text ll O ll ; Edi tSound_Main. hit. transform. GetChild (1) . GetChild (0). GetCom ponent <Text > () . text ll l ll ; public void delete() { Destroy(EditSound _Ma in. hit. transform. gameO bject) ; optionMenu . SetAct ive (false); add Button. SetAct ive (true); deleteButton. SetAct ive (false); EditSound _Ma in. showOptionMenu = false; S ound Option using Unity Engine; using Unity Engine. UI ; using Unity Engine. SceneManagement ; public class SoundOp tion : MonoBehaviour { public GameOb ject SPLO ption; public GameOb ject fileOption; public GameOb ject mu teOption; public GameOb ject mo veOption; public Button SPLBut ton; public Button fileButton; public Button mu teButton; public Button moveB utton; public Dropdo wn fi leDrop down; public void Option_SPL () { SPLOpt ion. SetAct ive (true); fileOption. SetAct ive (false); mu teOption. SetAct ive (false); mo veOption. SetA ctive (false); SPLB utton. GetCompon ent<Image) () . overrideSpri te = Res ources. Load ("Picture/UI/Sound Option Menu A2 ll , typeof (Sprite) ) as Sprite ; 17 6 fileButton. GetComponent<lmage> (). overrideSprite Bl ", typeof (Sprite)) as Sprite ; muteButton. GetComponent <Image) (). overrideSpri te Cl", typeof (Sprite)) as Sprite ; moveButton. GetComponent <Image) (). overrideSpri te Dl ", typeof (Sprite)) as Sprite ; Resources. Load ("Picture/VI/Sound Option Menu Resources. Load ("Picture/VI/Sound Option Menu Resources. Load ("Picture/Lil/ Sound Option Menu SPLOption. transform. GetChild (0) . GetChild (0) . GetComponent<Text> () . text EditSound_Main. hit. transform. GetChild (O). GetChild(l) . GetComponent<Text>(). text. Replace ( " dB", ""); } public void Option_file () { SPLOpt ion. SetActive (fal se); fileOption. SetActive (true); muteOption. SetActive (false); moveOption. SetActive (fal se); SPLButton. GetComponent<Image> () . overrideSpri te = Resources. Load ("Picture/UI/Sound Option Menu Al", typeof (Sprite)) as Sprite ; fileButton. GetComponent<Image) (). overrideSpri te B2", typeof (Sprite)) as Sprite ; muteButton. 
GetComponent<Image) (). overrideSpri te C l ", typeof (Sprite)) as Sprite ; moveButton. GetComponent<lmage> (). overrideSprite Dl ", typeof (Sprite)) as Sprite ; Resources. Load ("Picture/Lil /Sound Option Menu Resources. Load ("Picture/Lil /Sound Option Menu Resources. Load ("Picture/VI/Sound Option Menu string currentFi le = Edi tSound_Main. hit. transform. GetComponent <AudioSource> (). cl ip. name ; fi leDropdown. value fi leDropdown. options. Find Index ( (i) => { return i. text. Equals (currentFile) ; } ) ; } public void Option_mute () { SPLOpt ion. SetActive (false); fileOption. SetActive (false); muteOption. SetActive (true); moveOption. SetActive (false); SPLButton. GetComponent<Image> () . overrideSpri te = Resources. Load ("Picture/UI/Sound Option Menu Al", typeof (Sprite)) as Sprite ; fileButton. GetComponent<Image> (). overrideSpri te Resources. Load ("Picture/VI/Sound Option Menu Bl", typeof (Sprite)) as Sprite ; muteBut ton. GetComponent <Image) (). overrideSpri te C2", typeof (Sprite)) as Sprite ; moveButton. GetComponent<Image) (). overrideSpri te Dl ", typeof (Sprite)) as Sprite ; Resources. Load ("Picture/Lil /Sound Option Menu Resources. Load ("Picture/Lil /Sound Option Menu if (Edi tSound_Main. hit. transform. GetChild (1). GetChild (0) . GetComponent<Text> () . text "l") { muteOption. transform. GetChild (O). GetComponent<Toggle> (). isOn = false ; else 17 7 mu teOption. transform. GetChild(O) . GetCom ponent <Toggle>() . isOn true ; public void Option_move () { SPLOpt ion. SetAct ive (false); fileOption. SetAct ive (false); mu teOption. SetAct ive (false); mo veOption. SetAct ive (true); SPLB utton. GetComp onent<Im age> () . overrideSpri te = Res ources. Load ("Picture/UI/Sound Option Menu Al" , typeof(Sprite) ) as Sprite ; fileButton. GetComp onent<Im age> () . overrideSpri te Res ources. Load ("Pict ure/UI/Sound Option Menu Bl", ty peof(Sprite) ) as Sprite ; mu teButton. GetComp onent<Im age) () . overrideSpri te Res ources. Load C"Picture/UI/Sound Option Menu C l", ty peof(Sprite) ) as Sprite ; moveB utton. GetComp onent<Im age> (). overrideSpr ite D2", typeo f (Sprite) ) as Sprite ; Res ources. Load C"Picture/UI/Sound Option Menu Scene Manager. Load Scene(6) ; public void optionMenuW hite (GameOb ject optionMenu) { optionMenu . transform. GetChild(l) . gameO bject. GetComp onent<Im age> (). overrideSpr ite Res ources. Load ("Picture/UI/Sound Option Menu Al" , typeof(Sprite) ) as Sprite ; optionMenu . transform. GetChild(2) . gameO bject. GetComp onent<Im age> (). overrideSpr ite Res ources. Load ("Picture/UI/Sound Option Menu Bl" , typeof(Sprite) ) as Sprite ; optionMenu . transform. GetChild(3) . gameO bject. GetComp onent<Im age> (). overrideSpr ite Res ources.Load ("Picture/UI/Sound Option Menu C l" , typeof(Sprite) ) as Sprite ; optionMenu . transform. GetChild(4) . gameO bject. GetComp onent<Im age> (). overrideSpr ite Res ources. Load C"Picture/UI/Sound Option Menu Dl" , typ eof (Sprite) ) as Sprite ; optionMenu . transform. GetChild(5) . gameO bject. SetAct ive (false); optionMenu . transform. GetChild(6) . gameO bject. SetAct ive (false); optionMenu . transform. GetChild(7) . gameO bject. SetAct ive (false); optionMenu . transform. GetChild(8) . gameO bject. SetAct ive (false); Start Main using Unity Engine; using Unity Engine. SceneManagement ; using Unity Engine. 
UI ; using Soundar; public class Start_ Main : MonoBehaviour { public static string[] [] floorMat erial = new string[] [] public static string[] [] wallMaterial = new string[] [] { } ; { } ; public static string[] [] ceilingMat erial = new string[] [] { } ; public static string[] [] sound File = new string[] [] { } ; 17 8 public static GameOb ject floor; public static GameO bject[] wall = new GameO bject[] { }; public static GameOb ject ceiling; public static GameO bject[] sound Source = new GameO bject [] public Canvas logo ; public Canvas warning; public Canvas calibration; public Aud ioSource sound; bool doneLogo = false; bool don eWarning = false; bool doneCa libration = false; DataOperation setSurfacePara = new DataOperation() ; void Start 0 { /// Setup mat erial/object data base . { }; floorMat erial = setSurfacePara. Csv2List ("DataBase/FloorMaterials"). ToArray O; wallMaterial = setSurfacePara. Csv2List ("DataBase/WallMat erials"). ToArray () ; ceilingMat erial = setSurfacePara. Csv2List ("DataBase/CeillingMaterials"). ToArray O; I I I Setup sound data base. soundF i le = setSurfacePara. Csv2List ("DataBase/Sound F i le"). ToArray O ; sound. Stop O ; void Upd ate 0 if (Input. touchCount > 0 && Input. GetTouch(O) . phase { if ( ! doneLogo) { el se logo. gameO bject. SetAct ive (fal se); doneLogo = true ; if ( ! don eWarning) { else warning. gameO bject. SetAct ive (false); sound. Pla y O ; doneW arning = true ; if (! doneCa librat ion) { TouchPhase. Began) calibration. gameO bject. SetAct ive (false); sound. Stop O ; doneCa libration = true ; 17 9 else SceneManager. Load Scene(l) ; StepButtons using Unity Engine; using SteamAud io ; using Unity Engine. SceneManagement ; using Unity Engine. UI ; using Soundar; using Syst em. IO; public class StepButtons : MonoBehaviour { public GameOb ject buttonRoom ; public GameOb ject buttonSound ; public GameOb ject empt y; public SteamA udioManager SteamA udioManager; public Text Direction; public Text DebugLog l; public Text DebugLog2 ; public Text DebugLog3 ; public void finishRoom () { DataOperation setSurfacePara = new DataOperation() ; if (!SetR oom _Ma in. setSound) { Ill Save objects from scene. Start_Ma in. ceiling = GameO bject. Fi ndGameO bjectWi th Tag (" ceiling") ; Start_Ma in. floor = GameO bject. F ind GameO bjectWithTag ("floor"); Start_ Main. wall = GameO bject. Fi ndGameO bjects WithTag ("wall"); Ill As sign mat erial acoustic data foreach (GameOb ject a in Start_ Main. wall) { a. AddCom ponent <SteamA udioGeometr y>() ; a. AddCom ponent <SteamA udioMaterial> () ; Start_Ma in. ceiling. AddCom ponent <SteamA udioGeometr y>() ; Start_Ma in. ceiling. AddCom ponent <SteamA udioMaterial> 0 ; Start_ Main. floor. AddCom ponent <SteamA udioGeom etry>() ; Start_Ma in. floor. AddCom ponent <SteamA udioMaterial> 0; 18 0 Start_Ma in. floor. GetComp onent<SteamA udio Material> (). Preset = Mat erialPreset.Custom ; setSurfacePar a. As signAc ousticParam eters(Start_Ma in. floor. Start_ Main. floorMaterial, Start_Ma in. floor. GetCom ponent <MeshRenderer> 0. mat erial. name) ; Start_Ma in. ceiling. GetComp onent<SteamA udio Mat erial> () . Preset = Mat erialPreset.Custom ; setSurfacePar a. As signAc ousticParam eters(Start_M ain. ceiling, Start_ Main. ceilingMaterial, Start_Ma in. ceiling. GetCom ponent <MeshRenderer> () . mat erial. name) ; foreach (GameOb ject a in Start_ Main. wall) { a. GetComp onent<SteamAud ioMaterial> 0. Preset = MaterialPreset. Custom ; setSurfacePar a. As signAc ousticParam eters(a, Start_ Main. wallMaterial, a. 
GetComponent<MeshRenderer>().material.name);
            }
        }

        SetRoom_Main.isFirst = true;
        SteamAudioManager.ExportScene(false);

        var fileName = Path.GetFileNameWithoutExtension(SceneManager.GetActiveScene().name) + ".phononscene";
        var newName = "3 Run Simulation" + ".phononscene";
        if (File.Exists(Path.Combine(Application.persistentDataPath, newName)))
        {
            File.Delete(Path.Combine(Application.persistentDataPath, newName));
        }
        File.Copy(Path.Combine(Application.persistentDataPath, fileName), Path.Combine(Application.persistentDataPath, newName));

        if (!SetRoom_Main.setSound)
        {
            SceneManager.LoadScene(2);
        }
        else
        {
            SceneManager.LoadScene(3);
        }
    }

    public void finishSound()
    {
        foreach (GameObject a in Start_Main.wall)
        {
            a.GetComponent<MeshCollider>().enabled = true;
        }
        Start_Main.floor.GetComponent<MeshCollider>().enabled = true;
        Start_Main.ceiling.GetComponent<MeshCollider>().enabled = true;
        Start_Main.soundSource = GameObject.FindGameObjectsWithTag("sound source");
        SceneManager.LoadScene(3);
        SetRoom_Main.isFirst = true;
        SteamAudioManager.ExportScene(false);
    }

    public void addSound()
    {
        SceneManager.LoadScene(2);
    }

    public void edit()
    {
        buttonRoom.gameObject.SetActive(true);
        buttonSound.gameObject.SetActive(true);
        empty.SetActive(true);
    }

    public void EditRoom()
    {
        SceneManager.LoadScene(4);
        buttonRoom.gameObject.SetActive(false);
        buttonSound.gameObject.SetActive(false);
    }

    public void EditSound()
    {
        Start_Main.soundSource = GameObject.FindGameObjectsWithTag("sound source");
        SceneManager.LoadScene(5);
        buttonRoom.gameObject.SetActive(false);
        buttonSound.gameObject.SetActive(false);
    }
}

APPENDIX C: TIME DOMAIN CHARTS
[Time-domain charts (sound pressure level in dB versus time, 0-10 s; C weighting, slow meter speed) comparing the Soundar simulation with the on-site recording at test positions T1c, T1d, T1e, T1f, T1g, T1h, and T1i.]

APPENDIX D: FREQUENCY DOMAIN CHARTS
[Frequency-domain charts (level in dB versus frequency, 20 Hz-20 kHz) comparing the Soundar simulation with the recording at positions T1a, T1c-T1i, and T2b, plus simulation-to-simulation comparisons T1a vs. T1b and T2a vs. T2b.]

APPENDIX E: IMPULSE RESPONSE CHARTS
[Impulse response charts (level in dBFS versus time, -100 ms to 1 s) for the T1a and T2b recordings, the comparisons T1a_Soundar vs. T1a_Recording and T2b_Soundar vs. T2b_Recording, and the simulation-to-simulation comparisons T1a vs. T1b and T2a vs. T2b.]
Abstract
Augmented reality (AR), which combines the real and virtual worlds, is increasingly used in the architecture and construction domain, especially for visualization. However, virtual information can be presented not only visually but also audibly, which is valuable for room acoustic simulation. Soundar, an application running on a mobile device, was developed to perform simple acoustic simulations of small rooms for ordinary users. It simulates the reverberation time and sound pressure level based on an existing room, a virtual sound source, and the location of the user. Users can change the material of the room surfaces and then test the difference in sound. The application provides both visual and auditory feedback, letting users not only read the data but also hear the result of the simulation. It was developed in Unity and uses Steam Audio for sound rendering.

Tests were run to compare the results returned by Soundar, existing sound simulation software, and microphone recordings in real test rooms. Based on the numeric results and the frequency response graphs of the sound files, Soundar performed well for real-time SPL in both the numeric and the auditory results, but the reverb rendering of the auditory output did not match the simulation; further development and testing are needed to make the rendered reverb more realistic. Soundar is acceptable for small spaces and for ordinary users performing simple tasks such as room decoration and schematic design, but not for acoustics experts and engineers performing scientifically precise analysis such as professional acoustic design.

Further development of the user interface could provide better interaction and make Soundar a more user-friendly application that can move from the academic realm into professional practice.
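The two quantities the abstract mentions, reverberation time and sound pressure level, can be illustrated with a short, self-contained sketch. This is not the application's exact code: it shows (1) the Eyring reverberation-time formula that underlies the T60 estimate and (2) an SPL estimate computed from the RMS of a block of audio output samples. The class and method names are invented for illustration, and the reference value and calibration offset in the SPL function mirror constants that appear in the appendix code but would in practice be device-specific calibration values.

    using System;
    using System.Collections.Generic;

    // Minimal sketch (assumed names, not the application's exact code).
    public static class RoomAcousticsSketch
    {
        // Eyring reverberation time: T60 = 0.161 * V / (-S * ln(1 - aAvg)),
        // with volume in m^3, areas in m^2, and one absorption coefficient per surface.
        public static float EyringT60(float volume, IList<float> areas, IList<float> absorption)
        {
            float totalArea = 0f;
            float absorbedArea = 0f;
            for (int i = 0; i < areas.Count; i++)
            {
                totalArea += areas[i];
                absorbedArea += areas[i] * absorption[i];
            }
            float averageAbsorption = absorbedArea / totalArea;
            return 0.161f * volume / (-totalArea * (float)Math.Log(1f - averageAbsorption));
        }

        // SPL estimate from one block of output samples: RMS level converted to decibels.
        // The reference value and offset are placeholders; a real device would need to be
        // calibrated against a sound level meter.
        public static float SampleBlockSPL(float[] samples, float reference = 0.1f, float calibrationOffset = 63.09f)
        {
            float sumOfSquares = 0f;
            foreach (float s in samples)
            {
                sumOfSquares += s * s;
            }
            float rms = (float)Math.Sqrt(sumOfSquares / samples.Length);
            return 20f * (float)Math.Log10(rms / reference) + calibrationOffset;
        }
    }

Called with the room volume, the per-surface areas, and their low-frequency absorption coefficients, EyringT60 returns the reverberation time in seconds; SampleBlockSPL converts a block of listener output samples into an approximate decibel reading.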
Asset Metadata
Creator: Wang, Zhihe (author)
Core Title: Augmented reality in room acoustics: a simulation tool for mobile devices with auditory feedback
School: School of Architecture
Degree: Master of Building Science
Degree Program: Building Science
Publication Date: 04/29/2020
Defense Date: 03/23/2020
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: acoustics simulation, augmented reality, mobile application, OAI-PMH Harvest, room acoustics, room auralization
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Kensek, Karen (committee chair); Kyriakakis, Chris (committee member); Narhi, Erik (committee member); Zyda, Michael (committee member)
Creator Email: wzh9465@gmail.com, zhihewan@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-296337
Unique identifier: UC11663791
Identifier: etd-WangZhihe-8373.pdf (filename); usctheses-c89-296337 (legacy record id)
Legacy Identifier: etd-WangZhihe-8373.pdf
Dmrecord: 296337
Document Type: Thesis
Rights: Wang, Zhihe
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA