INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer. The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction. In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion. Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps.

ProQuest Information and Learning
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
800-521-0600

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

ANALYSIS OF FREE PATH DISTRIBUTIONS IN SIMULATED AEROGELS

By Kenneth James McElroy

A Thesis Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements of the Degree
MASTER OF SCIENCE (PHYSICS)

May 2002

Copyright 2002 Kenneth James McElroy

UMI Number: 1411799
Copyright 2002 by McElroy, Kenneth James. All rights reserved.
UMI Microform 1411799
Copyright 2003 by ProQuest Information and Learning Company. All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code.
UNIVERSITY OF SOUTHERN CALIFORNIA
The Graduate School
University Park
LOS ANGELES, CALIFORNIA 90089-1695

This thesis, written by Kenneth James McElroy under the direction of his Thesis Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements for the degree.

Dean of Graduate Studies
Date: May 10, 2002
THESIS COMMITTEE

Dedication

This thesis is dedicated to my mother, Marilyn Louise McElroy, whose support has been felt across the miles, and to the memory of my father, Kenneth Harold McElroy, whose strength has been felt across the years.

Table of Contents

Dedication
List of Tables
List of Figures
Abstract
1 Motivation
2 Random Walk: Particle Position Method
2.1 Multi-Particle Model
2.2 Witten-Sander Model
2.3 My Model
2.3.1 Random Numbers
2.3.2 Random Walk
2.3.3 Final Particle Positioning
2.3.4 Growing the Cluster
2.3.5 Cluster Size
2.3.6 Timing Analysis
2.4 Inner Radius Enhancement
2.4.1 Inner Radius Method
2.4.2 Timing Analysis
2.5 Binary Tree Enhancement
2.5.1 Binary Tree Method
2.5.2 Timing Analysis
2.6 AVL (Balanced) Binary Tree Enhancement
2.6.1 AVL Tree Method
2.6.2 Timing Analysis
2.7 Windows GUI Enhancement
2.7.1 Windows GUI Method
2.7.2 Timing Analysis
2.8 Cluster Library
2.8.1 Timing Analysis
2.9 Fractal Analysis
2.10 Free Path Analysis
2.10.1 Concentric Spheres
2.10.2 Repeated Cluster Analysis
2.10.3 Comparison to Published Work
3 Random Walk: Skin Method (Abandoned)
3.1 Two-Dimensional Skin Growth
3.2 Three-Dimensional Skin Growth
4 Direct Placement Method (Abandoned)
5 Results
5.1 Concentric Spheres
5.2 Repeated Fractal Cluster
5.3 Comparison to Published Work
6 Conclusion
Bibliography

List of Tables

Table 2.8-1: Growth Times for Cluster Library
Table 2.9-1: Fractal Dimensions for Cluster Library
Table 2.10-1: Cube-Radius Measurements for 98% Porosity Clusters

List of Figures

Figure 2.3-1: Particle Backtracking
Figure 2.3-2: Sample 2-D Cluster of 5,000 Particles
Figure 2.3-3: Particle Position Timing Graph (5,000 Particles)
Figure 2.3-4: Particle Position Timing Log Graph (5,000 Particles)
Figure 2.4-1: Inner Radius Method Timing Graph (5,000 Particles)
Figure 2.4-2: Inner Radius Method Log Timing Graph (5,000 Particles)
Figure 2.5-1: Binary Tree Method Timing Graph (5,000 Particles)
Figure 2.5-2: Binary Tree Method Timing Log Graph (5,000 Particles)
Figure 2.6-1: AVL Tree Method Timing Graph (5,000 Particles)
Figure 2.6-2: AVL Tree Method Timing Log Graph (5,000 Particles)
Figure 2.7-1: Windows GUI Method Timing Graph (5,000 Particles)
Figure 2.7-2: Windows GUI Method Timing Log Graph (5,000 Particles)
Figure 2.8-1: Library Cluster 23 Timing Graph (50,000 Particles)
Figure 2.8-2: Library Cluster 23 Timing Log Graph (50,000 Particles)
Figure 2.8-3: Library Cluster 5 Timing Graph (50,000 Particles)
Figure 2.8-4: Library Cluster 5 Timing Log Graph (50,000 Particles)
Figure 2.8-5: Library Cluster 11 Timing Graph (50,000 Particles)
Figure 2.8-6: Library Cluster 11 Timing Log Graph (50,000 Particles)
Figure 2.9-1: Library Cluster 23 Auto-Correlation Graph
Figure 2.9-2: Library Cluster 23 Auto-Correlation Log Graph
Figure 2.9-3: Library Cluster 23 Full Weighted Correlation Function
Figure 2.9-4: Library Cluster 23 Partial Weighted Correlation Graph
Figure 3.1-1: 2-D Perimeter with Moving Particle P
Figure 3.2-1: Direct Placement Method Timing Analysis (1,000,000 Particles)
Figure 3.2-2: Direct Placement Method Log Timing Analysis (1,000,000 Particles)
Figure 3.2-3: Distribution Graph - Random Walk Cluster (20,000 Particles)
Figure 3.2-4: Distribution Graph - Direct Placement Cluster (20,000 Particles)
Figure 5.1-1: Cluster 29a Free Path Sampling at r = 8 Diameters
Figure 5.1-2: Cluster 29a Free Path Sampling at r = 15 Diameters
Figure 5.1-3: Cluster 29a Free Path Sampling at r = 53 Diameters
Figure 5.1-4: Cluster 29a Free Path Sampling at r = 60 Diameters
Figure 5.2-1: Cluster 29a Uncorrelated Particles Free Path Distribution
Figure 5.2-2: Cluster 29a Free Path Distribution
Figure 5.3-1: Cluster 29a Combined Free Path Analysis Comparison
Figure 5.3-2: Cluster 29a Combined Free Path Analysis

Abstract

In order to understand the deviations to the superfluid ³He phase diagrams caused by the introduction of 98% porosity aerogels, random walk simulations were performed to grow a library of clusters to investigate the free path distributions for ³He quasi-particles when an aerogel is present. The differences noted between experiments done in different laboratories for similar aerogel porosities indicate the need for a better understanding of the free path distributions. When 98% porosity aerogels were placed in a repeating lattice structure, the long-distance distributions were matched to the exponential tail of a first-order Poisson distribution, but no analytic function was found to match the distribution within the short-distance, clearly defined fractal regime of the aerogel.
In this fractal regime, a nearly constant number of path lengths up to the size of the fractal is observed.

1 Motivation

Superfluid ³He can be referred to as the world's "cleanest fluid" because nothing will dissolve in it. The only impurity that can be naturally introduced into liquid ³He is ⁴He, and it will separate out of solution exponentially as the temperature goes down. While this is often a desirable condition, it makes it difficult to test the effect that impurities have on superfluid ³He. The entire field of "dirty superconductors," which studied the effect of impurities introduced into metallic superconductors, has no analog in superfluid ³He. It wasn't until the introduction of aerogels that any kind of substantial impurity could be introduced into the superfluid.

The experimental work done with aerogels immersed in ³He shows variations and changes to the established phase diagrams¹. This occurs despite the fact that all of the aerogels used had a porosity (volume of empty space within the structure) of approximately 98%. This means that, of the volume defined by the aerogel's physical boundaries, the aerogel material actually occupied only 2%; the rest is empty space.

The differences between the phase diagrams of superfluid ³He when immersed in aerogel and when no aerogel is present indicated that the superfluid state of ³He was being affected by the presence of the aerogel, but

¹ G. Gervais, T. M. Haard, R. Nomura, N. Mulders, and W. P. Halperin, "Modification of the Superfluid ³He Phase Diagram by Impurity Scattering", Physica B, 280, 134-139 (2000)
the inconsistency in these changes between different laboratories using nominally identical porosities showed that there was more going on than just the porosity.

Mean field calculations² show that an aerogel grown in three dimensions will have a fractal dimension of approximately 2.5. Here, "fractal dimension" refers to a measurement of the aerogel taken such that the radius of the volume being measured has a power-law connection to the number of particles contained within. It is this working definition that I will use when I refer to fractal dimension herein. As the aerogel grows, N, the number of particles in the aerogel, grows as N ∝ r^2.5. This means that the radius occupied by the aerogel grows as r ∝ N^0.4. The volume occupied, however, grows as V ∝ r^3. Therefore, the average density, ρ = N/V, decreases as ρ ∝ r^-0.5. Taking the aerogel's radius-growth rate into account, the average density decreases as ρ ∝ (N^0.4)^-0.5, or ρ ∝ N^-0.2. This leads to the conclusion that the thermodynamic limit for the average density of the aerogel is zero. Since the aerogels used in all the research done to date had an average density of 2%, they did not reach the thermodynamic limit, and therefore were being defined by finite-size limits and effects. We conclude that the physical aerogels used in these experiments must therefore be composed of multitudes of finite-sized clusters, each of which introduces a new length scale into the physics of superfluid ³He.

The presence of the aerogel within the ³He puts a stable impurity within the superfluid. Since ³He will force all other impurities out of solution, aerogel is the best option for testing the effect that impurities have on ³He.

² M. Tokuyama, K. Kawasaki, Physics Letters A, 100, 337 (1984)
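The scaling chain above can be checked with a few lines of exponent bookkeeping (a sketch I added for illustration, not code from the thesis):

```python
# Exponent bookkeeping for a fractal of dimension d_f = 2.5 grown in 3-D:
# N ~ r**2.5  =>  r ~ N**0.4;  V ~ r**3 ~ N**1.2;
# rho = N/V ~ N**(1 - 1.2) = N**(-0.2), so the density vanishes as N grows.
d_f = 2.5
r_exp = 1.0 / d_f              # r ~ N**0.4
v_exp = 3.0 * r_exp            # V ~ N**1.2
rho_exp = 1.0 - v_exp          # rho ~ N**(-0.2)

# The same exponent via the r-route: rho = N/V ~ r**(2.5 - 3) = r**(-0.5),
# and substituting r ~ N**0.4 gives (N**0.4)**(-0.5) = N**(-0.2).
rho_exp_alt = r_exp * (d_f - 3.0)
print(r_exp, rho_exp, rho_exp_alt)
```

Both routes give the same exponent, confirming that the average density of an unbounded d_f = 2.5 aggregate tends to zero.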
In pure ³He, scattering of quasi-particles occurs randomly on a length scale which diverges with decreasing temperature as T^-2. One thing that aerogel does immediately is impose a new length scale, limiting the ³He quasi-particle path lengths before scattering. The particles within the aerogel provide a scattering surface for these quasi-particles to collide with. The distribution of these free path lengths, not just their mean value, must be important in determining superfluid properties, just as the average porosity of an aerogel is not sufficient to characterize the aerogel.

It is the measurement of these ³He quasi-particle free paths within simulated aerogels that I have investigated here. This required simulating the growth of aerogel as an aggregate of particles through a random walk process. This process is referred to as Diffusion Limited Aggregation (DLA), where the particles perform a random walk until they make contact with the aerogel. A variant on this process (which I did not investigate) is Diffusion Limited Cluster Aggregation (DLCA), where the particles are allowed to join together into clusters which then random walk and join together into increasingly larger clusters until all elementary particles in the simulation are joined.

Once these simulated clusters were grown to a size where finite-size effects do not dominate the character of the cluster, exposing the fractal structure of the aerogel to experimental study, I measured the distribution of free paths within the clusters. The first method that I implemented for this analysis was to sample along the surface of concentric spheres, centered on the cluster's origin, in order to learn how the finite size of the cluster affects the free path distribution.
Next, I set up the cluster to be a repeating cluster, so that the exact same cluster occupied neighboring boxes within an extended cubic volume. This allowed me to investigate the free paths within the cluster as if I had a physical cluster to work with. As a comparison for what my results indicated, I also reproduced the method that others used for measuring the free path distribution. This was a minor variation on the repeating cluster, and is explained in greater detail within this document.

2 Random Walk: Particle Position Method

2.1 Multi-Particle Model

There are several methods that have been used for simulating the process of Diffusion Limited Aggregation (DLA). Most DLA simulations use the following method. An exact volume or area for the cluster to grow within is defined at the start of the simulation, and a coordinate lattice is defined within this region. The desired porosity is also selected at the start, and the total number of particles that will form the cluster is determined from the geometry of the growth region and that porosity. Each particle is placed simultaneously within the region of interest at a randomly selected lattice point. In each time-step of the simulation, every particle is moved, typically one lattice point in one of the coordinate directions. If any two or more particles should make contact, then their positions are adjusted so that they are in tangential contact, and the center of mass of this little cluster is placed at the lattice point. Variations on how these particles and clusters move from here are very straightforward: a single particle has the biggest move available to it, and the largest clusters have the smallest. This adds viscosity into the
This adds viscosity into the 5 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. simulation. Some simulations also added rotation into the formula for moving the clustered particles. As clusters and particles make contact, their positions are appropriately adjusted along the lattice by placing that mini-cluster’s center of mass at the designated lattice point. These particles and mini-clusters continue doing random walks until eventually there is one cluster left that contains all particles. For most of these simulation runs, the number of particles involved was typically under 10,000 particles - usually less than 5,000 particles. The method that I chose to use as the foundation for my simulation is the Witten-Sander3 (WS) model. 2.2 W itten-Sander Model The model used by T. A. Witten, Jr. and L. M. Sander for their work on simulating Diffusion Limited Aggregation (DLA) isolated a single particle at the origin of a two-dimensional lattice (which is identical to the Cartesian coordinate system), and used this particle as the seed for the growth of the cluster. They maintained a specific region of interest within which all activity was observed. They maintained active regions as large as three times the current cluster radius. 3 T.A. Witten, Jr., L. M. Sander, ‘ Diffusion Limited Aggregation, a Kinetic Critical Phenomenon*, Physical Review Letters, 47(191.1400-1403 (1981) 6 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. Each new particle was placed at a random location on the boundary of the region, and would take a random walk through the region, moving only from one lattice point to a neighboring lattice point. When contact between the new particle and a particle already in the cluster was made, the new particle’s trajectory would be reversed sufficiently along the lattice to maintain tangential contact. 
If, instead, the particle left the region of interest, then it was reset as if a new one were added at the boundary of the region. Because of the computational limits imposed on them by the technology available at the time, they did not examine any clusters of more than approximately 4,000 particles, and they only examined clusters grown in two dimensions, though they did acknowledge that this method can be extended to higher dimensions.³

2.3 My Model

Essentially, my model begins with all of the points made above for the WS model. There are, however, fundamental differences in how I chose to build my random walk model. Like the WS model, my model places an immovable particle at the origin of the coordinate system. This brings up the first point of difference. In my coordinate system, all particles have x-, y- and z-coordinates, thus placing them in real, three-dimensional space, even if I am only doing a two-dimensional simulation. In the case of a two-dimensional simulation, my model forces the z-coordinate of all particles to zero.

In my model, all aerogel particles are idealized as perfect spheres, each having a diameter of one unit. In the real world, aerogel particles are not spherical at all, but for the purposes of the simulation, spheres were the easiest shape to simulate. At the time this simulation was initially written, it was decided not to force physical dimensions, such as Angstroms or microns, upon the size of the particle. Therefore, all sizes and distances in the cluster are based upon the diameter of a single particle as the fundamental size unit. It wasn't until actually analyzing the clusters grown with this method that a physical size was given to the cluster particles.

For my model, I chose to define a spherical region of finite size.
New particles are added to the boundary of the region of interest and are allowed to proceed with their random walk from there. If a particle should leave the region of interest, it is no longer followed, and a new one is added to the boundary at an uncorrelated, randomly determined location. One complication that I saw as I examined the early runs of the simulation is that very long chains tended to form, thus reducing the cluster nature that I was trying to investigate. To correct for this, and to make the probability of growth more uniform in all directions, I re-centered the region of interest on the center of mass of the cluster with each new particle that was successfully added. The equations used to place the particle on the surface of the region of interest are

x = x_CM + Ra,
y = y_CM + Rb,
z = z_CM + Rc.

Here, CM designates the center of mass of the cluster, and R represents the radius of the region of interest. In all cases where these parameters are used in this document, including subscripts, superscripts and primes, the values of a, b, and c are as follows:

a = sinθ cosφ,
b = sinθ sinφ,
c = cosθ.

Now, as the cluster grows and a new particle is attached, the center of mass is recalculated. As this is an ongoing calculation, and since all particles are identical in shape and structure, this only requires updating the average value of all three coordinates over all particles in the cluster. These equations are used to place a new particle on the outer surface of the region of interest; this is the first position of any particle in its random walk.

2.3.1 Random Numbers

Witten and Sander used a lattice structure for their particles to move along as they performed their random walk.
Instead of using a lattice structure, I chose to allow the particles to move freely in all three spatial dimensions as they proceed through the random walk. This requires the use of a random number generator (RNG) to calculate a random direction for the particle to move. The RNG that I use in my calculation came from Numerical Recipes in C, 2nd Edition⁴. (Note: Use of this RNG is permitted for a single user, as noted in the license at the start of the book. However, the license does not allow for reproduction of the code.) This RNG algebraically combines the results of two separate RNGs to produce a third one with a period equal to the least common multiple of the periods of both. The modulus of the first RNG is m₁ = 2,147,483,563, and that of the second is m₂ = 2,147,483,399, giving periods of m₁ - 1 and m₂ - 1. These combine to produce a maximal period of 2,305,842,648,436,451,838, given by the product of all the unique prime factors of m₁ - 1 and m₂ - 1, that is, their least common multiple.

To clarify how large this number is, and how unlikely it is that the period could be exhausted, consider the following. This simulation was running timing and growth analyses on a 1.5-GHz computer (initially, these simulations were developed and run on a 200-MHz computer). Under the assumptions that the computer runs perfectly, and that the code is written so that one random number can be generated on each cycle of the CPU, there are 1,500,000,000 random numbers generated each second. Consider also that there are 3,600 seconds in one hour, and 24 hours in a standard day. This means that in one day, there would be 129,600,000,000,000 random numbers generated.

⁴ W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in C, 2nd Edition, section 7.1, pp. 280-282, Cambridge University Press, New York, NY, 1997
In one standard year (365 days), there would be 47,304,000,000,000,000 numbers generated. At this rate, it would take the computer just under 49 years to exhaust all of the numbers in that maximal period. Since the CPU does not run perfectly, and the code does more than generate random numbers, it would take considerably longer than this ideal time to run through the entire sequence of numbers available. The authors of the book have tested the algorithm, and I have also done some simple tests on it. It appears to be uniform and to contain no detectable serial correlations. (Note: the authors of the book are so convinced that this RNG will not fail to create "perfect" random numbers that they have offered a monetary reward to anyone who can prove that this RNG fails statistical tests in a non-trivial manner.)

I use this RNG each time the new particle in the simulation is moved. In fact, each move that the particle makes requires two calls to the RNG: one for the horizontal angle, which I call φ, and one for the azimuthal angle, for which I use θ. To make certain that the values determined for each retained the uniform behavior of the RNG, the following calculations had to be done.

First, assume that the particle is to be placed randomly on the sphere in such a way that any one point is equally as likely as any other. This means that for θ, I want values between 0 and π such that the probability of all directions is uniform. Since the region of space subtended by a given angular difference is greater at some locations within this region than at others, θ must take on a non-uniform distribution in order to keep these probabilities uniform. For φ, I want any value between 0 and 2π to be equally likely. For any spherical region, the differential solid-angle element is dΩ = sinθ dθ dφ.
From this, I want to be sure that I can generate values of θ that provide the uniform probability distribution mentioned above, so that the particle can move in any direction. I know that the RNG I have chosen to use has a uniform distribution f(y) = 1 for y ∈ [0, 1). I want to convert these values of y to uniformly distributed values of cosθ. This means that f(y) must satisfy

f(y) dy = k sinθ dθ.

This equation asks the question: given a uniform y, what gives a uniform probability distribution in θ? The parameter k is a normalization constant, to be fixed in the final solution. Because f(y) is constant and equal to 1, and because y = 0 will turn out to correspond to θ = π, the equation can be integrated as

∫₀^y dy′ = k ∫_π^θ sinθ′ dθ′.

Solving the integrals yields

y(θ) = -k cosθ - k.

When y = 0, I expect θ to be at one of its extrema, and indeed the equation is satisfied at θ = π. Likewise, y = 1 should correspond to the other extremum, θ = 0; substituting θ = 0 gives k = -½. This means that my transformation from uniform random numbers y to values of θ is given by the following equation:

θ(y) = cos⁻¹(2y - 1).

For the horizontal angle φ, I followed the same procedure, but using a second uniform random number x. Knowing that the full range of φ is [0, 2π), it was easy to use the method above to determine that φ(x) = 2πx.
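These two formulas, together with the boundary-placement equations of Section 2.3, can be sketched in code. This is my own illustrative helper, not code from the thesis (whose RNG cannot be reproduced under its license); Python's `random.random` stands in for the uniform generator:

```python
import math
import random

def random_direction(rng=random.random):
    """theta = arccos(2y - 1), phi = 2*pi*x: directions uniform on the sphere."""
    theta = math.acos(2.0 * rng() - 1.0)
    phi = 2.0 * math.pi * rng()
    return theta, phi

def place_on_boundary(cm, R, rng=random.random):
    """Place a new particle on the sphere of radius R centered on the
    cluster's center of mass cm, via x = x_CM + R*a, y = y_CM + R*b,
    z = z_CM + R*c with a = sin(theta)cos(phi), b = sin(theta)sin(phi),
    c = cos(theta)."""
    theta, phi = random_direction(rng)
    a = math.sin(theta) * math.cos(phi)
    b = math.sin(theta) * math.sin(phi)
    c = math.cos(theta)
    return (cm[0] + R * a, cm[1] + R * b, cm[2] + R * c)

random.seed(7)
# Every placement lands exactly R from the center of mass ...
p = place_on_boundary((1.0, -2.0, 0.5), 60.0)
# ... and cos(theta) is uniform on [-1, 1], so its sample mean tends to 0.
mean_cos = sum(math.cos(random_direction()[0]) for _ in range(100_000)) / 100_000
```

The sample mean of cosθ hovering near zero is a quick numerical check that the arccos transform really does make the directions uniform.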
With this information for calculating the direction of the next step in the particle's random walk, I can now discuss the walk itself.

2.3.2 Random Walk

All particles move exactly one diameter in each step of the random walk. This means that the time-step of each particle's movement is defined so as to allow the particle to move this distance through the "soup" within which the cluster material is dissolved. By using this kind of time-step, it is assumed either that the viscosity of the medium does not change, or that the mean free path of the moving particle is less than or equal to one diameter, and that no external force (such as gravity, an electric field, or a magnetic force) plays any role in the formation of the cluster. It would be relatively simple to modify the simulation to take such forces into consideration, but it was decided to work with the simplest form.

At the beginning of any particular step in the random walk, the particle is located at some point P(x, y, z). From this point, it is going to move one diameter in some direction. This happens in the following manner. First, I use the RNG to determine a new pair of angles, θ′ and φ′. With the particle diameter as the unit of length, the new position for the moving particle is given by the following equations:

x′ = x + sin θ′ cos φ′,
y′ = y + sin θ′ sin φ′,
z′ = z + cos θ′.

Once the particle has moved to this new position, I perform the following two tests. First, if the new position is farther from the center of mass of the cluster than the boundary of the region of interest, then the particle is considered "gone" from the random walk, and a new particle is defined at a random location along the boundary of the region of interest. Second, if the new position places the center of the particle a distance less than one diameter from the center of any particle in the cluster, then contact has been made, and the particle no longer walks.
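The step update above can be sketched directly (an illustration, not the thesis code; the particle diameter is the unit of length, so every step has length one):

```python
import math

def step(pos, theta, phi):
    """Advance a walker exactly one diameter along direction (theta, phi)."""
    x, y, z = pos
    return (x + math.sin(theta) * math.cos(phi),
            y + math.sin(theta) * math.sin(phi),
            z + math.cos(theta))
```

The two tests that follow each step are then plain distance comparisons against the region-of-interest radius and against cluster particle centers.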
The final position is adjusted so as to give tangent contact between the new particle and the particle(s) it has made contact with, making it a part of the cluster. If neither of these conditions occurs, then the particle is free to continue its random walk.

2.3.3 Final Particle Positioning

When the moving particle makes contact with any particle(s) in the cluster, it will essentially "smash" into the cluster, causing the particle to overlap its contact neighbor(s). Once this occurs, the particle's course must be reversed along the trajectory from which it last came, in order to keep all particles identified as spheres and to keep their individual centers at least one diameter from all neighbors.

Figure 2.3-1: Particle Backtracking

Referring to Figure 2.3-1, assume that the particle labeled r₁ is a fixed particle in the cluster, and that r₂ is the final position, after the last random walk step, of a new particle that has just made contact. The task at hand is to retrace the path along which r₂ arrived, and place the particle at a position r₃ where the distance from r₁ to r₃ is exactly one diameter. The particle at r₂ approached the particle at r₁ along direction (θ, φ) relative to r₂'s last position (shown as the ray connecting points r₂ and r₃ in Figure 2.3-1). The point labeled r₃ lies along this trajectory, at some point back towards the previous position. For purposes of this derivation, the three points in Figure 2.3-1 are defined as follows: r₁ = (x₁, y₁, z₁), r₂ = (x₂, y₂, z₂), and r₃ = (x₃, y₃, z₃), where

x₃ = x₂ − at,  y₃ = y₂ − bt,  z₃ = z₂ − ct.

Here (a, b, c) = (sin θ cos φ, sin θ sin φ, cos θ) is the direction of the last random-walk step, and t is the parameter of that step's line equation for particle r₂, with t ∈ (0, 1).
The distance between positions r₁ and r₃ is exactly one diameter, and it is displayed in the diagram as the line connecting these two points (D). This is given as follows:

|r₁ − r₃| = D = √((x₁ − x₃)² + (y₁ − y₃)² + (z₁ − z₃)²).

Substituting the values for r₃ into this equation gives

D = √((x₁ − (x₂ − at))² + (y₁ − (y₂ − bt))² + (z₁ − (z₂ − ct))²).

Regrouping this equation gives

D = √(((x₁ − x₂) + at)² + ((y₁ − y₂) + bt)² + ((z₁ − z₂) + ct)²).

Expanding the quantity under the radical gives

D² = (x₁ − x₂)² + 2(x₁ − x₂)at + a²t²
   + (y₁ − y₂)² + 2(y₁ − y₂)bt + b²t²
   + (z₁ − z₂)² + 2(z₁ − z₂)ct + c²t².

At this point, it is helpful to define the following quantity, displayed in Figure 2.3-1:

d = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²).

This is the actual distance between the cluster particle and the moving particle at the end of the final random walk step. Because there is an intersection of some kind between these two particles, this value will never be greater than one diameter. This turns the equation for D into

D² = d² + 2t[(x₁ − x₂)a + (y₁ − y₂)b + (z₁ − z₂)c] + (a² + b² + c²)t².

The value a² + b² + c² in the last term is identically 1, since (a, b, c) is a unit direction vector. Making this substitution gives

D² = d² + 2t[(x₁ − x₂)a + (y₁ − y₂)b + (z₁ − z₂)c] + t².

The middle term contains a quantity I define as B = (x₁ − x₂)a + (y₁ − y₂)b + (z₁ − z₂)c. Its value is based solely on quantities known at the completion of the moving particle's last random walk step. This now gives a quadratic equation in t,

t² + 2Bt + (d² − D²) = 0,

which can be solved to get

t = −B + √(B² − d² + D²).

The "+" sign is used here in the quadratic solution because the "−" option does not place the particle at the desired distance. The "−" sign would place the particle along the same trajectory, but at a point on the other side of its starting point from the last random walk step.
In other words, it would be shifted completely out of contact with the cluster, into a position it never occupied during the random walk. As a final result, this says that the final position for the particle currently at r₂ is given as

r₃ = (x₂ − at, y₂ − bt, z₂ − ct),  with t = −B + √(B² − d² + D²).

This gives the exact location at which to place the particle so that the overlap between the new particle and the cluster particle is removed. This position will be at a distance less than one diameter from the position the particle occupied before the last step of the random walk.

As I mentioned earlier, each time the particle moves, I check its position at the end of the random walk step against the cluster to see if there are any overlaps. It is possible for the particle to be in contact with more than one cluster particle at a time, or to have actually moved through a particle before coming to rest in the current random walk step. Because of this, when the particle's position is adjusted, the test for overlap with cluster particles begins again, including all particles already tested. This way, all overlaps will be removed, and when the test is complete, the particle will be in tangent contact with at least one cluster particle and will not overlap any cluster particle. (The necessity of this redundancy for correcting the placement of the moving particle was not discovered until the implementation of the AVL tree method, discussed later. At that time, it was observed that the data did not match for the same random number seed. An investigation into small clusters revealed that the Particle Position Method, described here, was not completely evaluating all intersections, sometimes leaving the moving particle still intersecting cluster particles after a single check against all particles. Once that was learned, this redundancy was added back into this method and the Inner Radius Method, also discussed later.)
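The backtracking solution reduces to a single function. A sketch under the derivation's definitions (names are illustrative; D defaults to one diameter, the tangent-contact distance):

```python
import math

def backtrack(fixed, moving, direction, D=1.0):
    """Pull an overlapping walker back along its last step direction until
    its center is exactly D (one diameter) from the fixed cluster particle.

    fixed     = (x1, y1, z1): cluster particle center
    moving    = (x2, y2, z2): walker position after the overlapping step
    direction = (a, b, c):    unit vector of the last step
    """
    x1, y1, z1 = fixed
    x2, y2, z2 = moving
    a, b, c = direction
    d2 = (x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2   # d^2 <= D^2 when overlapping
    B = (x1 - x2) * a + (y1 - y2) * b + (z1 - z2) * c
    t = -B + math.sqrt(B * B - d2 + D * D)            # "+" root keeps t in (0, 1)
    return (x2 - a * t, y2 - b * t, z2 - c * t)
```

For example, a walker that stepped from (0, 0, 1.5) down to (0, 0, 0.5), overlapping a cluster particle at the origin, is pulled back to (0, 0, 1), in tangent contact.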
2.3.4 Growing the Cluster

Earlier, I said that the cluster grows one particle at a time. Each new particle is placed at the boundary of the region of interest as the first step in its random walk. The boundary of the region of interest changes slightly with the addition of each new cluster particle because of the dynamic method used to track the center of mass. As the particle wanders in from the boundary, it will do one of two things: either it will leave the region of interest, or it will move closer to the cluster. If it leaves the region of interest, then it has entered the larger container where all the other not-yet-added particles are located. When this happens, another particle is defined at a random location along the boundary. Since there could be a significant length of time during which no particle crosses the boundary, these empty time steps are simply not recorded.

With each new move the particle makes, its position is compared to the positions of all previously accumulated particles. This comparison is solely for the purpose of determining whether the moving particle is in contact with any of the cluster particles. If, at the end of a step in the random walk, a particle makes contact with any particle(s) in the cluster, then the particle is repositioned to its final position, as described above. This locks the particle into its final cluster position, and the coordinates relative to the coordinate origin (not the center of mass) are recorded. Once the moving particle has been placed in the cluster, a new particle is defined at the boundary. This process repeats until the desired number of particles has been acquired.

2.3.5 Cluster Size

There is one other point that I need to address regarding growing the cluster.
What happens when the cluster's size reaches the boundary of the region of interest, or is close enough to the boundary that no particle can move around it in that area? I have chosen to define "cluster size" as the maximal distance between the center of mass of the cluster and any particle in the cluster. This means that there is some particle already in the cluster with the unique property of being farther from the center of mass than all other particles in the cluster. Each time a new particle is added, the center of mass of the cluster shifts slightly. This means that all particles must be compared to the center of mass to determine which is the farthest. Once the farthest particle is determined, its distance is compared to the distance of the region of interest's boundary from the center of mass. If the ratio between them exceeds a threshold (in the simulation runs that I performed, I used a ratio of 0.9), then the region of interest expands by a predetermined amount (I used 3 diameters) to allow for more cluster growth. Insofar as has been observed, these numbers do not affect the random growth cycle for the clusters. In two dimensions, the growth of 5,000 particles for a given random number seed (the first number used by the RNG) can look like this:

Figure 2.3-2: Sample 2-D Cluster of 5,000 Particles

As I mentioned, the region of interest for my simulated cluster's growth expands as the cluster gets larger. The problem that this addresses is the following: the cluster is free to grow as it may within the region of interest.
If that were to happen, then any particle entering the region in the neighborhood of that cluster-bound particle would immediately be attached to the cluster, and the cluster size would then be greater than the region of interest.

Initially, I chose to have a sufficiently large region within which to grow the cluster. The size I initially chose was 100 diameters across. However, the amount of time necessary to add even a few particles to the cluster was unrealistic. The particles would wander aimlessly around inside the region of interest, searching for a single point at the center of this volume. The probability of a new particle making contact with the young cluster was too low to continue in this direction. Therefore, a region of interest was defined in which the seed particle is initially not far from the edge of the region. The size that was finally used was five diameters. This was coded into the program in a way that would allow flexibility in case I decided to change it. After some experimentation with different values, I stayed with an initial value of five diameters as the radius of the region.

The remaining two points were to determine, first, how large the cluster would be allowed to grow before the region of interest would be increased, and second, how much to increase the region by. I chose to trigger the increase in the region of interest when the distance between the center of mass and the particle farthest from the center of mass reached a threshold value of 90% of the radius of the region of interest. Again, other values were tried, but I eventually chose to work with this value. If the cluster radius, which the distance described above represents, reaches 90% of the region of interest's radius, then the region of interest's radius is increased by three diameters.
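This threshold test amounts to only a few lines. A sketch using the quoted values (0.9 and 3 diameters) as defaults; the function name and return style are illustrative, not taken from the thesis code:

```python
def maybe_expand_region(cluster_radius, region_radius,
                        threshold=0.9, increment=3.0):
    """Grow the region of interest once the cluster fills 90% of it.

    All distances are measured in particle diameters from the center of
    mass; returns the (possibly enlarged) region-of-interest radius.
    """
    if cluster_radius >= threshold * region_radius:
        return region_radius + increment
    return region_radius
```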
It seemed that this value would give particles enough distance to continue the random walk around any extremely long dendrites that formed in the cluster, without reducing the likelihood of contact in other areas of the cluster.

2.3.6 Timing Analysis

Initially, one of my biggest concerns was how long it would take to grow a cluster large enough to have a clearly defined region in which the cluster is fractal. My first guess for this kind of size was about 20,000 particles. However, my attempts to grow a cluster of this size using this algorithm proved too slow.

At every step of the new particle's path, its position had to be compared to the position of every particle already placed in the cluster. This meant that even steps where contact was not possible were comparing particle positions for contact. As the cluster got larger, more steps were taken where contact was impossible, meaning that more time was wasted on comparisons for each step of the new particle's movement.

One effort I made toward predicting the growth rate of the cluster was to capture the computer's current time when each particle was finally added to the cluster. By determining the time that had passed since the start of the simulation (placement of the origin particle), I was able to build up a table of timing values. As it turns out, the time to add a new particle to the cluster only gets longer, on average, with each new particle. Plotting this data, as it was generated on the 1.5-GHz computer, led to the following graph:

Figure 2.3-3: Particle Position Timing Graph (5,000 Particles)
Here, the horizontal axis measures the particle count in the cluster, and the vertical axis measures the elapsed time, in days, at which that particle was added to the cluster. Obviously, this represents a power law of some kind. When plotted on a log-log graph, it looks like this:

Figure 2.3-4: Particle Position Timing Log Graph (5,000 Particles)

It can be seen from this graph that there are at least three segments to this plot. The first two segments cover only the first 50 particles in the cluster. The real power-law dominance can be seen by looking at all of the timings coming after this. The power law here shows a slope of approximately 2.5. This gives a timing equation of t = kn^2.5 for this region, where k represents a normalization constant for the actual data. Knowing that it took approximately 16 hours to grow this cluster, and knowing that it grew to 5,000 particles, I can solve for an approximate value of k to allow for projecting the amount of time needed to grow a sizeable cluster of 20,000 particles or more. The value of k given by this data is approximately 3.0 × 10⁻¹⁰ days/particle^2.5. This projects a time of almost 17 days to grow a 20,000-particle cluster. To reach the 50,000-particle count that I eventually achieved would have taken 167 days with this method on the 1.5-GHz computer.

Given these timings, it was not realistic to consider using this method as is. After reviewing the algorithm, I came up with the Inner Radius Enhancement, described in the next section.

2.4 Inner Radius Enhancement

As I examined the algorithm above, I realized that a great deal of calculation time was lost in regions where contact was impossible.
Although movement within the cavities defined by the cluster would be more difficult to optimize, I realized that I already had all the tools in place to provide speed enhancements. There are several tactics that I employed in an effort to increase the speed of the simulation. The enhancements that I kept as part of the simulation are discussed here. There were two enhancements that I chose not to continue working with, due to complexity, incorrect results, and/or problems developing the technique. These are discussed separately in Chapters 3 and 4.

2.4.1 Inner Radius Method

As noted above, I keep track of the particle that is currently farthest from the center of mass. This particle also defines the smallest two-dimensional circle or three-dimensional sphere, centered on the center of mass, that totally encloses the cluster. The region of interest for the cluster's growth has its own radius, as discussed in the previous section. Since this new radius is always within the region of interest, I chose to refer to this as the "Inner Radius Enhancement." If a particle is beyond this farthest-particle distance from the center of mass, there is no possibility of contact with any cluster particle. In this case, the only comparison necessary is the distance from the center of mass to the moving particle versus the distance from the center of mass to the farthest particle. If the first distance is greater, the particle simply takes another step in the random walk. If the particle moves closer than this defining distance, then the simulation resumes testing the position of the particle against the positions of all cluster particles.
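As a sketch, the inner-radius short-circuit is a single distance comparison per step. The names are illustrative; the one-diameter padding is a conservative detail added here (not stated in the text), since a walker just outside the farthest-particle distance can still touch the farthest particle:

```python
import math

def needs_full_check(particle, center_of_mass, inner_radius):
    """Inner Radius test: compare against every cluster particle only when
    the walker is inside the sphere enclosing the whole cluster.

    inner_radius is the center-of-mass distance of the farthest cluster
    particle; it is padded by one diameter here so edge contacts cannot be
    missed (an added safety margin, not from the thesis).
    """
    dx = particle[0] - center_of_mass[0]
    dy = particle[1] - center_of_mass[1]
    dz = particle[2] - center_of_mass[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= inner_radius + 1.0
```

When this returns False, the walker simply takes its next step with no per-particle comparisons at all, which is where the savings come from.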
For very small clusters, this enhancement did not make any noticeable improvement in the time it took to grow a cluster. However, once the cluster size exceeded about 1,000 particles, the differences became very noticeable. For a three-dimensional cluster of 5,000 particles, the original random walk method took 15 hours and 51 minutes to reach completion. The same cluster grown with the Inner Radius Enhancement took only 7¼ hours to reach completion. As more particles are added to the cluster, this time difference improves noticeably, but it still prompted a desire for improvement. One method for improvement that I investigated from this point, and eventually decided not to implement, is a method I refer to as the "Skin Method," which I explain in detail in the next chapter.

2.4.2 Timing Analysis

The first improvement made to the algorithm was the Inner Radius Enhancement. This test was developed to remove as many unnecessary comparisons as possible, and thus to speed up the simulation. The success of this can be seen by looking at the timing of the cluster growth itself. The same analysis was done with this cluster as was done with the Particle Position Method cluster. Since both of these runs started with the same random number seed, the particle placements for all 5,000 particles are guaranteed to be identical, making for a proper comparison. (An examination of the final particle's position was sufficient to guarantee this, as any deviation earlier in the run would result in a very different placement for the last particle.) Here is the graph of the raw timing data:

Figure 2.4-1: Inner Radius Method Timing Graph (5,000 Particles)

Again, this graph clearly shows a power law for the growth of the cluster.
The log-log plot of this data, on the next page, gives a better breakdown of this.

Figure 2.4-2: Inner Radius Method Timing Log Graph (5,000 Particles)

Here again, the same three sections are clearly visible. The first two sections, as with the Particle Position Method, cover the first 50 particles. The next segment begins immediately after that, and covers the remainder of the cluster's growth. The exponent in this part of the growth is approximately 2.0. Comparing this to the previous method shows the expected decrease in the slope. The power law equation has the approximate form t = kn^2.0. This cluster was grown to a size of 5,000 particles in 7¼ hours, which gives an approximate value of k = 1.2 × 10⁻⁸ days/particle^2.0.

Using this value, I can estimate that it would take approximately 5 days to grow a 20,000-particle cluster on the 1.5-GHz computer, and approximately 31 days to reach the 50,000-particle count that I finally achieved. To have created the 30-cluster library that I will be discussing later would have taken almost 3 years at this rate. On the original machine, one cluster would have taken 37.5 days to reach 20,000 particles and at least 234 days to grow to 50,000 particles. It was still apparent that more work was needed. At this point, I attempted to develop the Skin Method, discussed in Chapter 3.
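These projections follow mechanically from the fitted power law. A sketch of the arithmetic (an illustrative helper, not part of the simulation): k is recovered from one measured run and then reapplied at the target size.

```python
def projected_days(n_target, n_ref, hours_ref, exponent):
    """Project cluster growth time from a fitted power law t = k * n**exponent.

    k is solved from a single reference run (n_ref particles grown in
    hours_ref hours), then evaluated at n_target. Returns days.
    """
    k = (hours_ref / 24.0) / n_ref ** exponent
    return k * n_target ** exponent
```

For the Inner Radius fit (5,000 particles in 7¼ hours, exponent 2.0), this reproduces the projections quoted above: about 5 days for 20,000 particles and about 30 days for 50,000.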
After considerable time attempting to make that method work, and having no success in that area, I stopped working on improving the algorithm, and began looking at improving the data structure, starting with simple binary trees.

2.5 Binary Tree Enhancement

At this point, there didn't seem to be any further way to improve the algorithm. The random walk had to do whatever was needed within the confines of the cluster. Outside the cluster, the particle either entered the cluster region or left the region of interest. Inside the cluster, either the particle attached, or it wandered back outside the inner radius, possibly leaving the region of interest. The only thing that remained to improve the speed of the program was to improve data access.

2.5.1 Binary Tree Method

Initially, the cluster positions were all saved in an array, in the order in which they were added to the cluster. There was no special reason for using this, other than the fact that it was the simplest data storage method available. There really wasn't any way to sort these items without doing significant comparisons and calculations with each step, so it was the approach that I continued to work with as long as I could improve the algorithm. Now, however, I realized that data storage was the next hurdle to improving the speed of the cluster growth. My initial attempts used simple binary trees.

A binary tree is a data type that consists of a node and two branches. The node holds a parent value, and the two branches hold data that have some specifically defined relationship with that parent value. The implementation I used was that the "left" branch contained all values less than the value of the parent, and the "right" branch contained all values greater than or equal to the parent. For a simple binary tree, this is all that is needed.
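A minimal sketch of such a tree, using a scalar key for illustration (the thesis keys cluster particles, and the ordering rule it uses is not spelled out in this chapter):

```python
class Node:
    """Simple binary search tree node, as described in the text: values
    less than the parent go left, values greater than or equal go right."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def insert(self, value):
        if value < self.value:
            if self.left is None:
                self.left = Node(value)
            else:
                self.left.insert(value)
        else:
            if self.right is None:
                self.right = Node(value)
            else:
                self.right.insert(value)

    def contains(self, value):
        if value == self.value:
            return True
        child = self.left if value < self.value else self.right
        return child is not None and child.contains(value)
```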
As a value is added to the tree, it ricochets down the branches, based on these comparisons, until it finds where it fits. The complication with this method occurs when the tree becomes lopsided, so that one side of a branch becomes significantly "heavier," or filled with more data, than the other. When this occurs, the speed advantage of a binary tree diminishes, as searches will tend to fall into the heavier side.

Initially, my attempts to use a binary tree did not take this into consideration. Even so, I noticed a significant increase in speed: the three-dimensional run of 5,000 particles mentioned above took a total of 54 minutes. Once I realized that a simple binary tree was a flawed structure, I investigated balancing the binary tree.

2.5.2 Timing Analysis

With the inclusion of this data structure in the algorithm, a significant increase in speed was immediately noticed. While the Inner Radius Method reduced the entire growth time from 15 hours and 51 minutes to 7¼ hours, this method reduced that time even further, to 54 minutes. The timing for this cluster growth is given here:

Figure 2.5-1: Binary Tree Method Timing Graph (5,000 Particles)

Again, an obvious power law dominates the growth time of this cluster, as can be seen in Figure 2.5-2:
Figure 2.5-2: Binary Tree Method Timing Log Graph (5,000 Particles)

Again, the first 50 particles form two rapid-growth sections in the cluster, leaving the remainder of the cluster within the realm of the major power law. The exponent for this realm is approximately 1.9, smaller than the Inner Radius Method's, but not significantly so. I believe the fact that the timing is significantly faster indicates that the data structure must be more effective once it is actively used in the code, but that most of the time must still be spent on activity occurring outside the cluster. The power law equation, t = kn^1.9, for this cluster has a value of k = 3.5 × 10⁻⁹ days/particle^1.9. Using this information, I reached an approximate time of 12 hours for 20,000 particles on the current machine, which translates roughly to a time of 4 days on the original machine. The growth up to 50,000 particles would have taken at least 3 weeks on the original machine.

This was a very good timing, but with the simple binary tree being open to lopsided content storage, and the possibility of data accumulating more on one side of the root node than on the other, I decided to investigate balancing the binary tree, as explained in the next section, in the hope of further increasing the growth speed of the cluster.

2.6 AVL (Balanced) Binary Tree Enhancement

Two Soviet mathematicians, G. M. Adelson-Velskii and E. M. Landis, first introduced the concept of a self-balancing binary tree⁵.
In their paper, they defined a binary tree as balanced if and only if the height (defined as the number of levels below the current node) on one side of a node differs from the height on the other side of that node by no more than one level. Using this definition, I was able to construct my own version of the AVL tree.

2.6.1 AVL Tree Method

As with any binary tree, this data structure has very simple rules for data storage. If the current data element being viewed is not the desired element, then a choice is made. If the desired element is greater than or equal to the current element, then the search ignores everything on the lesser side of the node, since all of those elements are less than the current element. Likewise, if the desired element is less than the current element, then everything on the greater side is ignored, since all of those elements are greater than or equal to the current element. Each such choice immediately removes approximately half of the data elements remaining to be examined. A simple structure like a data array must examine all elements to be sure that it has found what it is looking for, and an unbalanced binary tree may end up being so lopsided that many searches still examine a majority of the data before finding a result.

The benefit of this data structure is simple. For each element examined, the AVL tree automatically removes approximately half of all remaining elements from consideration. This means that the number of elements viewed from top to bottom of a balanced binary tree with N values is approximately log₂(N). As N gets larger, this significantly increases the speed of any searches done on the data.

⁵ G. M. Adelson-Velskii and E. M. Landis, "An Algorithm for the Organization of Information," Soviet Mathematics Doklady, 3, 1259–1263 (1962).
So, a balanced binary tree guarantees that log₂(N) is the maximum number of elements viewed in a top-to-bottom traversal of the tree. The question remains, though: how does the tree get balanced?

A basic binary tree has at least three values associated with each element in the tree. The first is element: the actual data stored at the current node of the tree. Next are left child and right child, which I have chosen to refer to as lesser and greater, respectively. These are typically coded as pointers to new instances of the AVL tree. I also implemented a parent node for each element in the tree. This node is used primarily when balancing the tree, as it allows the current element to know what to point to as its child when the balancing is complete. The lesser node contains an element that is less than (by whatever rules govern comparisons of the data) the value of the current element; in fact, all elements on the lesser branch are less than the current element. Likewise, all elements associated with greater are greater than (or equal to, as the case may be) the current element.

Although I won't go through the derivation of the AVL tree, I will explain briefly how it is implemented. There are four scenarios that need to be understood in order to see how the AVL tree was utilized in this simulation: a clockwise rotation of the unbalanced element with its parent and first child node on the unbalanced side, a counter-clockwise rotation of the unbalanced element with the same respective elements, a combined clockwise/counter-clockwise rotation, and a combined counter-clockwise/clockwise rotation.
The last two occur less frequently than the first two, but can happen when the first rotation leaves the element still unbalanced.

Also, as I discussed when I first mentioned binary trees, there is a possibility of the tree becoming lopsided. With the version of the AVL tree that I have implemented, the tree auto-balances with each new element added, which guarantees that the tree is always balanced. Each element of the AVL tree also knows its balance immediately upon insertion of a new cluster particle. The element has a value called balance that is incremented each time an element is placed on the greater branch, and is decremented each time an element is placed on the lesser branch. If this value goes above 1 or below -1, balancing is automatically triggered.

Using this method produced a small, but noticeable, difference in timing from the Binary Tree Method. It was faster for the cluster measured, but the difference at such a small size was nominal. It should be noted that, for all clusters up to the sizes examined, the time saved by using the balanced binary tree versus the simple binary tree increased as more particles were added. For smaller clusters this savings was not significant, but it illustrates that the balanced binary tree continues to be a faster data structure than the simple binary tree as more data is added.

2.6.2 Timing Analysis

The timings between the simple binary tree and the self-balancing (AVL) binary tree were very similar. The graph of the timing data for the AVL tree is given here. I have chosen to represent the time values in hours, as the speed was significantly faster than the previous method. This only requires a factor of 24 to convert the data to the same time scale as the previous methods used.
[Figure 2.6-1: AVL Tree Method Timing Graph (5,000 Particles)]

Again, this is obviously a power law growth rate. Looking at this graph shows what appears to be a flattening of the curve towards the end of the growth. When looking at this data on a log-log plot, as shown on the next page, it is clear that this is still a power law:

[Figure 2.6-2: AVL Tree Method Timing Log Graph (5,000 Particles)]

Here, again, there are only three visible sections to the graph. The first two sections, again, cover approximately the first 50 particles, and really don't give any useful information. The third section contains the dominant power law function for this growth, with an exponent of approximately 1.8.

This simulation took 52 minutes to run, marginally faster than the Binary Tree Method. Knowing this timing information, I can determine that k = 3.4 × 10⁻⁹ days/particle^1.8. This number is only marginally smaller than the coefficient I determined for the Binary Tree Method, but it is smaller, meaning that it represents a more efficient use of the machine's resources. Using this value, I can approximate that a 20,000-particle cluster would take about 12 hours on the current machine. Were this to have been run on the original machine, it would have turned into 316 days. In reality, when I finally did grow a 20,000-particle cluster, this simulation method took almost a week to grow it. This indicates that the power law nature of the clusters' growth continues to slow as the individual cluster gets larger, and longer random walks are required, on average, for more particles to join the cluster.
2.7 Windows GUI Enhancement

The final enhancement to the random walk method that I implemented was to create a Graphical User Interface (GUI) for the Windows environment. All previous simulation methods were written with "console" interfaces, with text being typed into a DOS-style window for all methods of interaction. Initially, I did this as a personal exercise, not as an actual part of growing the simulation into a faster model. In fact, I had no expectation of that occurring. My thought on doing this was very simple: I wanted to learn how to write a GUI and how to work with threads as part of the C++ environment that I was working in. Here, I already had a fully developed C++ program from which I could make my GUI-based program. Once I was able to run it, and confirm the data was identical to the original runs, I realized that I had found the approach that would allow me to grow clusters large enough to prove interesting.

2.7.1 Windows GUI Method

When I began writing the simulation methods described herein, I failed to take into account that the DOS environment has not been a primary environment since the introduction of Windows 95. With that release, the DOS environment became a merely supported environment, requiring significant background processing from Windows. This background processing was very time consuming, and was primarily responsible for the slow run times of the simulations. Upon further examination of the compilers that I had to work with, I also learned that console programs are not optimized for the CPU as efficiently as GUI programs are. This also added to the difference in speed that I saw.

With essentially no change in the algorithm concept, I implemented all input through the use of "buttons" and text boxes.
This removed the need for any kind of text-based requests for the necessary input and allowed me to remove the simulation from the DOS environment.

The reasons why I pursued the console approach for so long were partly out of ignorance. I did not realize the significance of the DOS environment's implementation as a supported environment in terms of the speed of the algorithm, nor did I understand the compiler's inability to optimize console program code. Along with that, there was a desire to do timing analyses in parallel between the PC environment and the UNIX environment to see which of the two would provide a faster implementation. It turns out that the timings were almost identical when the code was portable between the two operating systems, and as the PC was updated to faster speeds, there was a noticeable increase in speed on the PC side of the comparison.

However, once the code was converted to a Windows interface, the decrease in run time was phenomenal. The same 5,000-particle run, when done with the GUI, was accomplished in 1 minute and 26 seconds. This was the breakthrough in timing that I was looking for.

Once this timing was achieved, I grew 30 clusters, each containing 50,000 particles. The times to grow these clusters were very widely scattered, ranging from just under 7 hours to just under 24 hours for different clusters. Each was identified by the value of the RNG seed used. For these clusters, I used the seed values one through thirty. As a comparison and extension, I also grew the two fastest clusters to larger sizes: one to just over 80,000 particles and the other to 150,500 particles.
2.7.2 Timing Analysis

The timing on this cluster was so fast that it doesn't make sense to look at the growth chart in terms of days or hours. In fact, because of the brevity of the run, I show the graph here in seconds:

[Figure 2.7-1: Windows GUI Method Timing Graph (5,000 Particles)]

This shows a power law dominating, as seen in the log-log graph:

[Figure 2.7-2: Windows GUI Method Timing Log Graph (5,000 Particles)]

There are three regions in this chart with different power laws. The first region goes up to particle 450, where the second region takes over until particle 2,500. The rest of the cluster is ruled by the remaining power law. This last power law is the best one from which to make estimates of continued growth. The value for this power law is approximately 2.3. Using this with the data of the growth (5,000 particles grown in 86.70 seconds), I can determine the coefficient k = 2.7 × 10⁻⁷ sec/particle^2.3. Comparing this to the other coefficients shows that this is 3.1 × 10⁻¹² days/particle^2.3, demonstrating the tremendous speed available from this method versus the other methods that I have explored. The fact that this run took less than two minutes to complete was sufficient evidence to begin building the cluster library.

2.8 Cluster Library

Over the course of a month, I built a total of 30 clusters of 50,000 particles, each with a different random number seed. Extending this, I took the fastest two clusters and attempted to build clusters of one million particles.
Due to circumstances beyond my control at the time, the first attempt stopped at approximately 82,000 particles. The second attempt at the million-particle cluster was deliberately terminated at 150,500 particles after approximately 10 days, as it was becoming very obvious that the computer I was using still didn't have the speed to make this a realistic goal.

This table gives the timings of the cluster library, each cluster being distinguished by its random number seed. The clusters were generated on an 800-MHz machine, and the timings reflect how long it took to grow them on that machine.

Seed  Run Time       Seed  Run Time       Seed  Run Time
 1    14:57:01.06     11   23:25:35.18     21   08:03:32.74
 2    14:05:12.82     12   16:26:10.19     22   17:26:09.57
 3    12:19:56.99     13   12:46:56.96     23   06:52:12.40
 4    19:34:42.92     14   11:47:42.19     24   17:50:11.80
 5    14:38:39.42     15   13:52:09.47     25   13:49:30.18
 6    11:25:30.62     16   15:34:04.27     26   09:39:34.21
 7    13:19:39.56     17   17:22:58.27     27   08:40:15.97
 8    11:49:37.97     18   08:55:00.05     28   22:10:06.74
 9    13:15:19.71     19   09:50:44.30     29   07:38:17.79
10    12:32:58.42     20   12:48:21.62     30   15:42:34.58

Table 2.8-1: Growth Times for Cluster Library

The growth time for the 82,000-particle cluster was not available for this table. I was able to successfully time the growth of the 150,500-particle cluster at 10 days, 6:45:55.39. In all cases henceforth, I will refer to a particular 50,000-particle cluster by its seed. The 150,500-particle cluster was expanded from cluster 29, so I will refer to it as cluster 29a from this point forward.

From this table, it is clear that there is a wide range of times for growing these clusters. The cluster that corresponds to the timing measurements already discussed is cluster 5. The average time to grow a cluster is just over 13 hours and 30 minutes, so cluster 5, at just over 14½ hours to grow, is average.
One point that was obvious after looking at this timing data is that the length of time the cluster needed to grow to 50,000 particles was not obviously correlated to the seed.

Each of these clusters also had particle-particle separation distances calculated as each particle was added to the cluster. This data was used to determine the fractal dimension for each cluster.

2.8.1 Timing Analysis

I performed a timing analysis on three particular clusters out of this collection: cluster 11 (the longest), cluster 23 (the shortest), and cluster 5, the cluster I had been doing timing analyses on.

Cluster 23 was completed in slightly less than 7 hours. Considering that most of the clusters at this point were taking over 12 hours to complete, this was a surprising result. The timing values are displayed here:

[Figure 2.8-1: Library Cluster 23 Timing Graph (50,000 Particles)]

The log-log plot of this timing data shows the power law changing slightly as the cluster grows.

[Figure 2.8-2: Library Cluster 23 Timing Log Graph (50,000 Particles)]

In this plot, the power law that dominates the last part of the cluster's growth appears to manifest at about 3,200 particles. This portion of the growth has a power law value of about 2.4. The total time to grow this cluster, as given in the table above, taken with the particle count of 50,000 particles, gives a growth coefficient of k = 3.6 × 10⁻¹¹ hours/particle^2.4.
The comparison of this cluster size and algorithm to the previous algorithms is done by extending the original timing analysis cluster done for the GUI to 50,000 particles. This refers to cluster 5. The timing analysis for this cluster is given in Figure 2.8-3:

[Figure 2.8-3: Library Cluster 5 Timing Graph (50,000 Particles)]

The power law is apparent for this cluster, and is better highlighted in the log-log graph:

[Figure 2.8-4: Library Cluster 5 Timing Log Graph (50,000 Particles)]

The power law dominating this cluster becomes noticeable around particle 16,000 and rules the growth for the remainder of the sample. This power law appears to be about 2.9, with a coefficient of 3.5 × 10⁻¹³ hours/particle^2.9. The exponent for this cluster's growth is greater than the exponent for the previous cluster, which is attributed to the slower growth time for this cluster.

The slowest cluster grown in the library was cluster 11. This cluster took almost an entire day to reach 50,000 particles. It would seem that this particular sample must have had a larger number of random walks that either ended with the moving particle leaving, or took longer to eventually attach to the cluster. Since I didn't have any software available to actually view the cluster, I do not know if perhaps this cluster was more loosely packed than the rest of the clusters grown. Larger voids within the inner radius could also account for a longer growth time.
The graph for the timing of this cluster is given here:

[Figure 2.8-5: Library Cluster 11 Timing Graph (50,000 Particles)]

This graph is plotted in terms of days simply because the cluster took almost an entire day to grow. The log-log plot given in Figure 2.8-6 shows the power law more clearly.

[Figure 2.8-6: Library Cluster 11 Timing Log Graph (50,000 Particles)]

With this cluster, the dominating power law takes over at about 7,600 particles, and stays very smooth for the remainder of the cluster's growth. The exponent demonstrated by this power law is 3.0. This is the largest exponent of the three, which makes sense, as this was the slowest cluster to grow. The timing coefficient for this run is k = 7.8 × 10⁻¹⁵ days/particle^3.0, which translates to 1.9 × 10⁻¹³ hours/particle^3.0.

It is an interesting observation that, as the cluster's growth time increases, the exponent gets larger and the coefficient gets smaller. How much this relates to the algorithm used, or to the location within the RNG's cycle, is beyond the scope of this investigation.

2.9 Fractal Analysis

As I previously mentioned, I also measured particle-particle separation distances for each of the 30 clusters grown for the library. I was able to incorporate this into the simulation so that it was done as the cluster was grown; it did not significantly increase the run time, as it required a fixed set of operations for each new particle added.
Once the moving particle was added to the cluster, and its position was finalized using the backtrack method that I described earlier, I calculated the distance between this new particle and every particle already in the cluster. I sorted all of these distances into bins so that I could construct a histogram of the frequency with which these distances occurred. I binned the values into units of 0.01 diameters. (Recall that the base unit for all measurements has been the cluster particle diameter. I will discuss a physical value for this unit later.)

The plot of this correlation data shows a dramatic peak in the number of distances between particles at a distance that is near the breaking point of the fractal nature of the cluster. For example, cluster 23 has a correlation plot that looks like this:

[Figure 2.9-1: Library Cluster 23 Auto-Correlation Graph]

Notice that the peak for this cluster occurs at approximately 60 diameters. There is also an expected singularity, which you can see as a data point with approximately 50,000 counts at one diameter on this graph. There is another singularity at two diameters, which is not clearly visible in this graph, and a third at three diameters, which is only visible when examining the log-log plot. These singularities are expected, and appear in the correlation plots for all of the clusters grown.

The singularity at one diameter occurs because every cluster particle, by definition of the growth algorithm, is exactly one diameter from at least one other particle in the cluster. There are some occasions where a cluster
particle would be adjacent to two other cluster particles that were already in the cluster when it joined. The number of hits in this bin is 50,213. The singularity at two diameters comes from most of the particles having contact with at least one particle that is already in contact with at least one other particle in the cluster. Because of the angles that any of these combinations of three particles can make, there are many hits within the bins between one and two diameters. The singularity at two diameters itself occurs primarily when three particles are lined up in what is essentially a perfect line (there is some wiggle room here, as the bin accepts values slightly larger and slightly smaller than the exact distance). These two singularities become much more visible in a log-log plot of this data, given here:

[Figure 2.9-2: Library Cluster 23 Auto-Correlation Log Graph]

In this graph, the singularity at one diameter stands alone, noticeably separated from the rest of the graph. The singularity at two diameters also stands alone, sandwiched between the segment for distances less than two and the segment for distances greater than two. Closer inspection of this graph also shows the singularity at three diameters. These singularities have all been noted by others6 and will not be further discussed here.

6 A. Hasmy, E. Anglaret, M. Foret, J. Pelous, R. Jullien, "Small-angle neutron-scattering investigation of long-range correlations in silica aerogels: Simulations and experiments," Physical Review B, 50(9), 6006-6016 (1994)
Also, it becomes clear that the number of hits for the larger bins drops dramatically before even getting to 100 diameters, as expected, due to the increased spacing between particles and the smaller number of particles at the outer fringes of the cluster.

The slope of this graph in the region after the singularities noted above gives an estimate of the fractal dimension of the cluster. To extract this from the data, I applied a weighting function C(r) = n / r^(d-1) to the data. Here, C(r) is the weighted count in the bin at distance r, n is the original count in the bin, r is the distance between cluster particles, and d is the fractal dimension of the cluster. Using this weighted data, I was able to find a region where the weighted correlation function became essentially flat. The value of d that causes this effect is the fractal dimension for the cluster. The plot of the weighted correlation data for cluster 23 is given here.

[Figure 2.9-3: Library Cluster 23 Full Weighted Correlation Function]

This cluster is shown here with a dimension of d = 2.48. Closer examination of the graph shows the detail of the flat region much more clearly.

[Figure 2.9-4: Library Cluster 23 Partial Weighted Correlation Graph]

Here, the region with fractal behavior breaks down at approximately 25 diameters between particles. Beyond this, the cluster demonstrates less fractal behavior until it reaches a point towards its extremities where there are essentially no particles available to interact with. I performed this same analysis on all 30 clusters in the library.
The fractal dimensions that I came up with are given in Table 2.9-1.

Seed  Fractal Dimension   Seed  Fractal Dimension   Seed  Fractal Dimension
 1    2.50                 11   2.54                 21   2.49
 2    2.47                 12   2.50                 22   2.48
 3    2.50                 13   2.53                 23   2.48
 4    2.50                 14   2.50                 24   2.48
 5    2.45                 15   2.52                 25   2.48
 6    2.55                 16   2.52                 26   2.49
 7    2.50                 17   2.52                 27   2.49
 8    2.49                 18   2.54                 28   2.50
 9    2.54                 19   2.52                 29   2.49
10    2.48                 20   2.48                 30   2.41

Table 2.9-1: Fractal Dimensions for Cluster Library

The maximum value for the fractal dimension in my clusters was 2.55, found in cluster 6, and the minimum value was 2.41, in cluster 30. The average fractal dimension for the set is 2.50, which matches the average fractal dimension of 2.5 that others have predicted and observed.7 At this point, I was ready to begin looking at the free path distributions within my clusters, which I discuss in the next section.

2.10 Free Path Analysis

Since the intent of this investigation was to look at the internal structure of the aerogel clusters grown with my simulation, the free path distribution is the tool that others have used to examine this same problem. It was at this point that sizes for the cluster particles, and for the sampling ³He quasi-particle, were defined in physical dimensions. I determined that the cluster particle was approximately 30 Angstroms in diameter. Likewise, the ³He quasi-particle was set at 3 Angstroms in diameter. In this section, I will discuss the approach I took for each of the analysis methods. I will discuss my findings in the Results section at the end of this work.

7 S. Tolman, P. Meakin, "Off-lattice and hypercubic-lattice models for diffusion-limited aggregation in dimensionalities 2-8," Physical Review A, 40(1), 428-437 (1989)
2.10.1 Concentric Spheres

Initially, I needed to determine how I was going to actually sample the free path distributions. The first method I came up with was to work with concentric spheres, centered on the origin. These spheres increase in radius by one particle diameter, meaning that I would be sampling the free path distributions at one diameter from the center, then at two diameters, and so on, out to some limit. The limit I chose was halfway from the origin to the farthest particle in the cluster. By this point, the fractal nature of the cluster would be gone, based on the fractal analysis I did on each cluster in the previous section.

At the surface of each of these radial spheres, I sampled 20,000 points. Each point was first checked against the cluster to make sure that there were no intersections with cluster particles. This guarantees that the point is at some mid-point of an actual free path. Next, I selected a trajectory along which the sample particle moves through this point. This trajectory is extended in both the "positive" and "negative" directions.

2.10.2 Repeated Cluster

I carefully analyzed each cluster to determine the size of a cube, centered on the origin for that cluster, that would contain a cluster of 98% porosity. I defined this number as the cube-radius. It is the radius of the sphere that will exactly fit inside the cube that contains the proper porosity, which means that the cube-radius is one-half the length of the cube side. This table gives the cube-radius for each cluster.
Seed  Cube-Radius  Particles     Seed  Cube-Radius  Particles
 1    49.56        37,192         16   48.69        35,280
 2    49.48        37,024         17   51.48        41,689
 3    50.18        38,613         18   50.05        38,308
 4    48.05        33,896         19   51.82        42,523
 5    48.05        33,903         20   47.67        33,105
 6    50.60        39,592         21   51.19        40,988
 7    49.79        37,714         22   49.44        36,926
 8    48.74        35,377         23   49.63        37,358
 9    50.35        38,999         24   49.65        37,407
10    50.41        39,140         25   47.85        33,480
11    48.23        34,284         26   50.92        40,339
12    51.00        40,528         27   50.73        39,901
13    46.24        30,213         28   49.39        36,822
14    50.61        39,611         29   49.06        36,077
15    51.04        40,632         30   49.31        36,641

Table 2.10-1: Cube-Radius Measurements for 98% Porosity Clusters

These values represent the maximum radius from the cluster center that matches the minimum cube side half-length holding a 98% porosity version of the cluster. Any particle with coordinates beyond this cube wall is discarded when the cluster is loaded into the analysis program.

Once this step was done, I had to repeat this cube in neighboring cubes out to some finite distance. For simplicity, I chose 10 cubes along each of the axial directions from the center cube, which makes what I labeled a "Megacube" of side length 21 cubes, each containing an exact replica of the cluster. In reality, I did not reproduce the coordinates of the cluster into each of these cubes, as that would have been too intensive a coding task for tracking the movements and placements of the sampling particle, and also unnecessary. Instead, I only populated the central cube, and then adjusted the position along the trajectory that the sampling particle was moving whenever it hit the wall of the "current" cube. If the sampling particle moved all the way out to the far side of one of the edge cubes without intersecting a cluster particle, the path was labeled as an infinite path.
It turns out that the number of infinite paths I found for each cluster was extremely small.

I chose to sample each cluster for 100,000 paths. The sampling occurred in a similar fashion to the Concentric Spheres analysis, but with one simple difference: the sample point was placed at a random location within the center cube, so the only constraint on the sample point was the size of the cube. Once the sample point was determined, I examined it to make sure that it was not intersecting any of the cluster particles. From this point, the sampling continued identically to the Concentric Spheres analysis. I picked a trajectory and followed it until it either connected with the closest cluster particle on each side somewhere in the Megacube, or until it escaped.

I also ran this analysis on cluster 29a. This cluster has a cube-radius of 67.78 diameters (2033.40 Angstroms), and had 95,166 particles contained in the cube. On this cluster, I sampled one million paths.

Another approach for measuring the free path lengths has already been published.8 The work done in [8] also examined the free path distribution in an uncorrelated collection of the particles that were grown into the cluster examined. Since this was not data I had previously generated, I did not have anything to compare that finding with; at this point, I did not have any such collection of particles. In order to do that comparison, I created a volume of space equal to the size of the cube for cluster 29a and populated it with the same number of particles as the 98% porosity cluster. In doing this population, I followed the same concept as others had used when populating the lattice structures of their pre-populated clusters9: when a new particle was being placed into the random distribution, it was checked against all other particles, and was discarded if its position intersected a particle already placed. These particle coordinates were never constrained to lie only on lattice sites. I ran the Repeated Cluster analysis on these uncorrelated particle collections for one million paths so that I could contrast my work with [8]. I will discuss this more in the Results section.

8 T. M. Haard, G. Gervais, R. Nomura, W. P. Halperin, "The Pathlength Distribution of Simulated Aerogels," http://spindry.phys.nwu.edu/DLCA/pathfength.html, Physica A, 284-288 (2000) 289-290
9 P. Meakin, "Formation of Fractal Clusters and Networks of Irreversible Diffusion-Limited Aggregation," Physical Review Letters, 51(13), 1119-1122 (1983)

2.10.3 Comparison to Published Work

After examining my results from the Repeated Cluster analysis, I realized that they were very different from what I was seeing in [8]. Upon reviewing the work done there, I constructed a modified version of my analysis program in order to compare directly to that work. First, a random point in the center cube is found, such that it intersects no cluster particles. This is the starting point for the search for a cluster particle. A trajectory starting from this point is followed in one direction until it intersects a cluster particle, or until it leaves the center cube. When a cluster particle is found, I calculate the normal vector defined by the sampling point's contact position with that particle. This allows me to find a trajectory that will take the sampling point away from this cluster particle along a free path that will intersect another cluster particle. There is no reason to look in the "negative" direction, as the cluster particle prevents the sampling particle from moving back along the trajectory.
Once a cluster particle is found along the trajectory, the path length is noted and another search begins. For comparison, I analyzed cluster 29a and its uncorrelated counterpart. I will discuss this more in my Results.

3 Random Walk: Skin Method (Abandoned)

Before implementing the Binary Tree approach for storing the cluster's particle data, I considered implementing a "skin" method as a secondary test for the random walk. I investigated this approach because cluster growth, even after completing the Inner Radius method, was still too slow to grow reasonably large clusters. As the title of this chapter implies, this approach was abandoned after significant effort because its benefits were minimal for a considerable increase in complexity.

At that point in time, I had only one test available to eliminate unnecessary comparisons: if the moving particle was farther from the center of mass than the cluster particle that was farthest from the center of mass, then the random walk continued. It was not until the moving particle crossed this boundary (the circle or sphere that completely enclosed the cluster) that the extensive testing of position relative to all cluster particles began.

However, I realized that there were still opportunities to increase the speed of the simulation. Because there is a "farthest" particle that defines the radius of the smallest area or volume that completely contains the cluster, that particle is unique to the cluster: statistically, only one particle should fall upon this boundary. This means that there existed another boundary between points in the cluster that could be used to bypass testing.
If the moving particle wanders into the region defined by this first boundary, it could still be located outside the cluster. It was this understanding that led to the idea of the cluster having a "skin" and the realization that I could implement a test to speed up the algorithm.

In the Skin Method, a perimeter is established, defined by all particles that are "farthest" out from the center of mass of the cluster and that can be joined in two dimensions by a line, or in three dimensions by a plane. The important characteristic of this line or plane is that there are no particles farther from the center of mass of the cluster. Together, these lines or planes form the "skin" that fully contains the cluster. The particles that define this skin are referred to here as "vertex" particles. The skin itself cannot exist until there is a third cluster particle for a two-dimensional cluster, or a fourth particle for a three-dimensional cluster. In other words, there has to be a clear "inside" and "outside" to work with.

3.1 Two-Dimensional Skin Growth

In two dimensions, there is a unique, non-parametric form for the equation of a line. This is the standard equation,

y = mx + b.

If I were only working in two dimensions, this equation would be sufficient. However, knowing that any method I develop for the two-dimensional cluster would need to be implemented for the three-dimensional model as well, I decided to work with the parametric form of the equation instead. The equations take the following form:

x = at + x0 and y = bt + y0.

In these equations, t is the parameter value for the particle at (x, y), with a and b as the coefficients of the parametric equations. By equating the value of t in both equations, the standard equation can be recovered.
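For a ≠ 0, eliminating t gives y = (b/a)x + (y0 - (b/a)x0), so the slope of the recovered standard form is m = b/a. A small numerical check of this equivalence, with arbitrary illustrative coefficients:

```python
def param_point(a, b, x0, y0, t):
    """Point on the parametric line x = a*t + x0, y = b*t + y0."""
    return (a * t + x0, b * t + y0)

# Recover the standard form y = m*x + c from the parametric form (a != 0).
a, b, x0, y0 = 2.0, 3.0, 1.0, -1.0   # arbitrary illustrative values
m = b / a                             # slope of the standard equation
c = y0 - m * x0                       # intercept of the standard equation
for t in (-1.0, 0.0, 0.5, 2.0):
    x, y = param_point(a, b, x0, y0, t)
    assert abs(y - (m * x + c)) < 1e-12
```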
As I stated, no skin can be defined until there are at least three particles in the two-dimensional cluster. The algorithm, as implemented for the Inner Radius method, remains unchanged until the third particle is added. Beginning with the fourth particle, I can check two specific quantities: the distance between the moving particle and the center of mass, and the position of the closest vertex particle. Each vertex particle is contained within a doubly linked list data structure that knows the "left" and "right" neighboring vertex particles.

The equation of the line that connects the moving particle to the center of mass is easily determined at every point on the moving particle's path. Also, knowing the coordinates of the particles that make up the skin, and knowing the neighboring vertex particles for each skin particle, I also know the lines that define the skin.

Figure 3.1-1: 2-D Perimeter with Moving Particle P

In Figure 3.1-1, the moving particle, P, is approaching the cluster. It has just completed its most recent random walk movement, and comparisons are now being performed using the Skin method. For the line from the center of mass (CM) to the moving particle (P), I have xP = at + xCM and yP = bt + yCM. Likewise, for the line connecting V1 to V3, I have x3 = a13·s + x1 and y3 = b13·s + y1. For the line connecting V2 to V3, I have x3 = a23·u + x2 and y3 = b23·u + y2. The point where P→CM intersects V1→V3 is labeled in the figure as point A. The point where P→CM intersects V2→V3 is labeled as point B.

One look at the picture clearly shows that these are two very different kinds of intersections. Point A lies "within" the region of the skin that is between the points that define that part of the skin.
Contrary to this, however, is point B, which lies "outside" the region of the skin that is between the points defining the line upon which it lies. This information is exploited as follows: the intersection point A has a unique distance from each of the two vertex points that define the line it is part of. The distance A→V1 is added to the distance A→V3. If this sum is equal (within some appropriately small round-off tolerance) to the distance between points V1 and V3, then I know that this line is the skin segment that intersects the line from the moving particle to the center of mass. If, however, the sum of these two distances is greater than the distance between the vertex particles, then the intersection point does not lie on the portion of the skin line that serves as part of the skin, and the corresponding pair of vertex particles can be removed from consideration in the current set of comparisons.

There is one unique case that also needs to be considered here. If points A and B coincide, then the vertex particle that both skin segments share lies along the line from the moving particle to the center of mass. In this case, that vertex particle will be the intersection point of the moving particle's line to the center of mass and the skin segments.

From here, only one more test is needed to determine whether there is a chance of collision. If the parameter for the point A along the moving particle's line with the center of mass is less than the value of the parameter for the point P along that same line (here, I am using the position of the center of mass as the zero value for the parametric equations), then the moving particle is still outside the cluster, and only the vertex particles need to be examined for collision.
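The containment test above, in which an intersection point lies on a skin segment exactly when its distances to the two defining vertices sum to the vertex-to-vertex separation (within round-off), can be sketched as a minimal helper. The tolerance value is an illustrative assumption:

```python
import math

def on_segment(p, v1, v2, tol=1e-9):
    """True if point p lies on the segment v1-v2, i.e. the distances
    p->v1 and p->v2 sum to |v1 - v2| within a round-off tolerance.
    Points on the same infinite line but past either vertex fail, since
    the distance sum then exceeds the vertex separation."""
    return abs(math.dist(p, v1) + math.dist(p, v2) - math.dist(v1, v2)) <= tol

# Point A inside the segment, point B on the same line but outside it:
assert on_segment((1.0, 1.0), (0.0, 0.0), (2.0, 2.0))
assert not on_segment((3.0, 3.0), (0.0, 0.0), (2.0, 2.0))
```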
(The vertex particle, like all particles in the cluster, is defined by the coordinates of its physical center, which means that a portion of the particle is "outside" the cluster relative to the skin segment. These portions could intersect the moving particle, so they need to be examined.)

As the cluster grows larger, the number of skin segments grows at a significantly slower pace, as most particles will have a final location within the confines of the cluster. This means that as the cluster grows, there will be significantly fewer calculations occurring while the moving particle goes along its random walk. The skin grows only when a new particle is added at a position outside the existing skin.

Only once the particle actually crosses the skin does the complete set of comparisons get evaluated. This occurs when the parameter for point A is greater than or equal to the parameter for the moving particle on the line connecting it to the center of mass. The particle actually crosses the skin while it is still a distance of one radius from the skin, but could contact an interior cluster particle as soon as it is one diameter from the skin. This is factored into the parameter comparisons to allow for the possibility that the particle could dip into the skin, even though that cluster particle's coordinate point never crossed the skin. It also covers the possibility that a cluster particle has attached inside the skin, but its position is such that a portion of the particle actually projects out of the skin.

When a moving particle joins the cluster, it can do so in one of two ways: it can become a new vertex particle, or it can become an interior cluster particle. The addition of a new particle requires some new tests to be performed. The first test is to check its final position against all skin segments.
This is done by the same test that occurs while the particle is actually moving outside the cluster: comparison between the line connecting the particle to the center of mass and all skin segment lines. If the new particle is located "inside" when compared with all skin segments, then the second case mentioned above has occurred, and nothing more happens with this particle.

However, if the particle is now located outside the current skin of the cluster, then the vertex particles must be reorganized. This could result in one of three conditions: the new particle replaces an existing vertex particle, the number of vertex particles in the cluster increases, or, rarely, the new particle is so placed as to "engulf" more than one vertex particle back into the cluster. In two dimensions, this last case should not ordinarily affect more than one particle at a time, but it could.

The two closest vertices are determined, again, by comparison between the skin and the center-of-mass line for the new particle. This reveals which skin segment is being broken, and which vertices might be replaced. The process for determining which of the three scenarios is present begins identically in all cases. A line is "drawn" between the new particle and the neighbors of both of the vertices defining the skin segment that the new particle breaks. If either of the two vertices of that skin segment is now closer to the center of mass than the line, then that vertex is removed from the list of vertices, and the new particle takes its place. If both of the vertices in question are removed, then the number of vertices will decrease by one. This event will usually not occur until the cluster has reached some size where the angle between adjacent skin segments is almost 180°.
(It will be very unusual in two dimensions for more than one vertex to be replaced at a time.) Finally, if neither of the neighboring vertices can be removed from the list, then the new particle is added into the linked list as a new vertex, and the skin grows by one vertex. Initially, the growth of the cluster will actually promote this event more often than the other two.

This process is fairly simple to code for a two-dimensional cluster. It is the addition of the third dimension that adds a level of complexity to the skin that makes it more challenging to implement.

3.2 Three-Dimensional Skin Growth

With the addition of the third dimension, the problem takes on a different nature altogether. Now, planes form the skin. It is a simple piece of geometry to recall that three non-collinear points define a plane.

In the two-dimensional case, there was a method to ordering the vertices. Since each vertex could have only two neighbors, any one vertex could be considered the starting vertex, and I could walk around the cluster going from vertex to vertex, similar to a fence surrounding a piece of property. In the three-dimensional case, this is no longer possible. Now, any vertex can be included in any number of skin segments; the minimum for any vertex point is three skin segments.

To demonstrate the complexity of this structure, an n-sided polygon can be constructed on paper. Pick any interior point in this polygon, and call that a vertex point. Now, draw a line from this vertex to every corner of the polygon.
Notice that by adding a new vertex at any location outside this polygon, any number of vertices could be removed and a new one added, but the number of skin segments that the central vertex point is part of changes in unpredictable ways. Expanding this to a third dimension can be done by placing the vertices at different heights relative to the paper. When the new vertex is added, some of the existing vertices will combine with it to form new triangles, while others are removed from future consideration. However, there is no easy way to order the arrangement of the vertices, and any one vertex can be part of any number of the triangular planes in this construction. Given this detail, it becomes extremely difficult to grow the skin quickly. Once a vertex is removed from the skin, how many new surfaces need to be investigated before the new skin can be complete?

How do I grow the skin? Again, I have to wait until an "inside" and an "outside" can be clearly defined. This occurs as soon as the fourth particle is added to the cluster. At this point, the first four planes are defined. From this point on, the growth of the cluster is worked based on the skin.

As a new particle moves past the boundary determined by the Inner Radius method, the Skin method takes over. As with the two-dimensional Skin method, I draw an imaginary line from the moving particle to the cluster's center of mass. This line is compared with all the planes that comprise the skin of the cluster, looking for an intersection. Determining an intersection is more challenging in three dimensions than it was in two. The first step is to find the vertex that is closest to the moving particle. As long as these particles are not actually in contact, the comparisons against the skin can begin.
Having found the nearest vertex, the task at hand is to find which skin segment the center-of-mass line intersects. There is no set rule for how many planes any one vertex can be attached to, although it is safe to assume that it will always be greater than or equal to three. The equation for the plane is easily determined by use of vector methods. The equation for a plane is given in standard form as

Ax + By + Cz = D, where D = Ax0 + By0 + Cz0.

The values of A, B, and C are determined by the cross product of the vectors that make up two sides of the plane. It turns out, through vector algebra, that the value of D is uniquely determined as the determinant of the 3x3 matrix comprised of the coordinates of the three vertices.

Knowing the vertex that is closest to the moving particle, there is a collection of planes in the skin that needs to be examined. For convenience, I call this vertex V1. For each of these planes, I use the remaining two vertices (V2 and V3) to define the line that connects them. I define the point where the center-of-mass line intersects the plane as point P. There is then a line, starting at the common vertex, which goes through P and intersects the line defined by V2 and V3; this intersection is point Q.

Now, the determination of whether this point P is inside the area defined by the three vertices is based upon the method used in the two-dimensional Skin method. First, the parameter of P along the three-dimensional line starting at V1 must be smaller than the parameter of Q along that same line. In addition, the distance from V2 to Q added to the distance from V3 to Q must give the distance from V2 to V3. If both of these conditions are met, then this skin segment is the one that is between the moving particle and the center of mass.
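The plane construction described above can be sketched as follows. The normal (A, B, C) is the cross product of two edge vectors, and D can be obtained either as the determinant of the 3x3 matrix of vertex coordinates or, equivalently, by substituting any vertex (x0, y0, z0) into Ax0 + By0 + Cz0; the sketch below uses the substitution form.

```python
def plane_from_vertices(v1, v2, v3):
    """Plane A*x + B*y + C*z = D through three non-collinear points.
    (A, B, C) is the cross product of the edge vectors v2 - v1 and v3 - v1;
    D follows from substituting any of the three vertices."""
    ux, uy, uz = (v2[i] - v1[i] for i in range(3))
    wx, wy, wz = (v3[i] - v1[i] for i in range(3))
    A = uy * wz - uz * wy
    B = uz * wx - ux * wz
    C = ux * wy - uy * wx
    D = A * v1[0] + B * v1[1] + C * v1[2]
    return A, B, C, D

A, B, C, D = plane_from_vertices((1, 0, 0), (0, 1, 0), (0, 0, 1))
# All three vertices must satisfy A*x + B*y + C*z = D.
for v in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
    assert abs(A * v[0] + B * v[1] + C * v[2] - D) < 1e-12
```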
If the moving particle is more than one diameter from the skin (on the outside of the cluster), and it is not intersecting any of the vertices, then it moves again. If it intersects a vertex, then it needs to be examined to see if there are any new skin segments, and to see if any of the vertices need to be removed.

This becomes very complicated, because each vertex particle shares each of its planes with two other vertices. There is no easy way to organize these particles, nor is there any easy way to organize the planes. This makes for very difficult and messy programming. After many weeks attempting to solve this problem, I realized that I was not able to program this in a form that would work for the two-dimensional problem and be easily converted into a three-dimensional solution as well. I also could not come up with a simple method for growing the skin in three dimensions that was analogous to the two-dimensional method I had developed. With that realization, I decided to look for other methods that might increase the speed of the simulation. Since I could not think of any other way to remove comparisons, I looked into the use of more efficient data structures, leading me to the Binary Tree method.

4 Direct Placement Method (Abandoned)

Once I implemented the AVL tree as a data structure for storing the cluster data, I ran out of ideas as to how to directly improve the random walk method. In the course of looking at the physical process, I realized that the random walk method was really an RNG that somehow managed to randomly pick a particle in the cluster to which it attached a new particle. From this, I was able to come up with the ballistic approach that I refer to as the Direct Placement Method.
As with the previous chapter, this chapter's title notes that this approach was also ultimately abandoned, as it did not reproduce the results of the DLA simulation.

This method is completely different from the random walk method. I realized that it would remove the increasingly large number of comparisons used in the random walk method by using one set of comparisons throughout the entire cluster in order to place the new particle. The first order of time savings became obvious once I realized that, for each new particle, there is exactly one more comparison done relative to the addition of the previous particle.

The Direct Placement Method works in the following manner: the simulation "randomly" picks a particle in the cluster to serve as the anchor for the new particle. This anchor is the intended particle to which the new one will attach. If the new particle is blocked by other cluster particles on its way in from outside the cluster, it will not actually make contact with the anchor. (The reason I put "randomly" in quotes above is that the random act itself requires the definition of a distribution for the cluster. At the time that I began working on this method, there were no reasonably sized clusters available from the random walk method from which I could extract a good distribution function, so I used a simple distribution in order to develop the algorithm. This was because the computer I was working on was a 200-MHz machine which, as demonstrated by the timing results for the 1.5-GHz computer, would take far too long to grow any clusters beyond a few thousand particles. The best cluster available for this was grown, using the AVL method, to just over 20,000 particles in just over a week. This cluster came after this method was developed.)
Once the anchor particle is determined, the simulation generates a random trajectory in three-space, in the identical manner as described for the random walk method. This trajectory is actually two angles: phi (φ) in the xy-plane and theta (θ) against the z-axis. The trajectory represents a three-dimensional line that uses the anchor as its origin point and extends parametrically away from the anchor in both directions towards infinity. The new particle is assumed to be coming from somewhere outside the cluster's radius. Along that path, however, the new particle may find other particles "in the way." Because of this detail, the outermost cluster particle that lies within one radius of the trajectory path is the real contact point.

This stipulation was added to keep this method as real and physical as possible. The concern here is that if the anchor lies behind a solid wall of cluster particles, then any particle coming along the trajectory cannot pass through that wall in any physical sense. It would make contact with the wall and stick there. If the particle were to attach to the anchor, it would essentially be as if the new particle had been created from nothing, as there would be no physical way for it to get there. Since this does not happen in the real world, I had to prevent it from happening in the simulation. Initially, of course, there are few particles, and the likelihood of a new particle not attaching to the anchor is small. However, as the cluster grows, it begins to manifest layers of a sort that impede the new particle along its path to the anchor. Given this understanding, the particle must attach to the first particle it makes contact with on its way into the cluster.

The point of this method was explicitly to increase the speed at which the random walk simulation ran.
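The blocking rule just described, in which the incoming particle sticks to the outermost cluster particle whose center lies close enough to the trajectory line rather than necessarily reaching the anchor, can be sketched as below. This is an illustrative reconstruction, not the thesis code: the isotropic direction recipe (uniform φ, uniform cos θ), the contact threshold, and the toy cluster are all assumptions.

```python
import math, random

def random_direction(rng):
    """Isotropic unit vector: phi uniform in [0, 2*pi), cos(theta) uniform in [-1, 1]."""
    phi = rng.uniform(0.0, 2.0 * math.pi)
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def first_contact(anchor, direction, cluster, contact_dist=1.0):
    """Outermost cluster particle whose center lies within contact_dist of the
    trajectory line through `anchor`: an incoming particle sticks there first,
    so an anchor hidden behind other particles is never reached."""
    best, best_s = None, -math.inf
    for p in cluster:
        rel = tuple(p[i] - anchor[i] for i in range(3))
        s = sum(r * d for r, d in zip(rel, direction))  # projection along the path
        perp2 = sum(r * r for r in rel) - s * s         # squared distance to the line
        if perp2 <= contact_dist ** 2 and s > best_s:
            best, best_s = p, s
    return best

# Toy cluster: the anchor at the origin plus a blocker farther out on the line.
cluster = [(0.0, 0.0, 0.0), (0.0, 0.0, 3.0)]
assert first_contact((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), cluster) == (0.0, 0.0, 3.0)
```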
Without knowing the distribution of particles from the random walk method, I realized that I would not get identical or even similar data. My main task was to see how quickly I could grow a large cluster, and what its fractal dimension would be.

What I found was that in less than two days, I could grow a cluster to one million particles. The timing analysis of this cluster is given here.

Figure 3.2-1: Direct Placement Method Timing Analysis (1,000,000 Particles)

By measuring the time of the cluster growth, I saw that it had a power law exponent of 2.26. As can be seen in the following log-log graph of the timing data, this time exponent stays very constant throughout most of the collection of particles. It seemed that I had beaten the timing problem after all.

Figure 3.2-2: Direct Placement Method Log Timing Analysis (1,000,000 Particles)

At this point, I set this method aside to begin growing a 20,000-particle cluster using the random walk method. Considering the type of distribution that I needed to work with for the Direct Placement Method, I looked at the ratio of the new particle's distance from the center of the coordinate system at the time of addition to the distance from the origin of the particle that was farthest. Since I was not making reference to the center of mass in the Direct Placement Method, I had to have a distribution that did not rely on that concept either. I found that this random walk distribution had a peak at a ratio of approximately 0.55, as well as an expected singularity at 1.00.
Figure 3.2-3: Distribution Graph - Random Walk Cluster (20,000 Particles)

An estimated function representing this distribution was placed into the Direct Placement Method code for testing. What I saw was a cluster that, under the same distribution analysis, had a peak at 0.80. The singularity was still present at 1, but that was where the similarities between the two methods ended.

Figure 3.2-4: Distribution Graph - Direct Placement Cluster (20,000 Particles)

It was clear that the Direct Placement Method could not directly reproduce a cluster distribution that matched the target particle distribution included in the algorithm. There was never an expectation that this method would identically reproduce the clusters, but there was an expectation that it would reproduce the distribution function it was given. The fractal dimensions for the clusters grown using the random walk distribution in the Direct Placement Method were also much higher than those for the random walk cluster: these new clusters had fractal dimensions as high as 2.75, while the random walk clusters ranged from 2.40 to 2.55.

In light of this, I continued trying to find ways to make this method more closely match the distributions the random walk method was producing. After many weeks of effort, I finally abandoned this method when I developed the Windows GUI for the random walk.
I had developed this new code for the sole purpose of improving the interface between the user and the simulation. Since the GUI removed the DOS-environment console support that Windows required to run the simulation, the unexpected increase in speed was sufficient to no longer consider the Direct Placement Method for investigation.

5 Results

Having finally been able to grow my simulated aerogel clusters to a size where the fractal region was clearly defined and large enough not to be obscured by the finite size limits of the cluster, I was able to begin the actual analysis of the free path distributions. The analysis began with the Concentric Spheres analysis, and moved on to examine the cluster when placed in a three-dimensional matrix of cubes, all containing copies of the cluster. This concept was used in both the Repeated Cluster analysis and my comparison to [8].

To accommodate the physical dimensions of the aerogel particles in the clusters used in [8], I took the particle size to be 30 Angstroms. Likewise, for the purposes of all the free path measurements, the 3He quasi-particles are given an effective diameter of 3 Angstroms, based on the Fermi wave-vector. Although the data is in terms of Diameters of cluster particles, it is a simple matter to convert the numbers to Angstroms.

5.1 Concentric Spheres

This was the first free path analysis done on any of my clusters. The samplings were done by starting rays from random locations on the surface of a spherical shell of a specific radius from the center of the cluster. I did this analysis to determine where the fractal region within the cluster occurred, as well as to examine the finite size effects on the clusters.
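The measurement primitive behind this analysis, the distance from a starting point along a ray to the first particle surface it hits, reduces to a ray-sphere intersection. A minimal sketch, with an illustrative particle radius of 0.5 Diameters and a brute-force loop over particles in place of the thesis's own data structures:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Smallest positive distance t at which origin + t*direction hits the
    sphere, or None if the ray misses. `direction` must be a unit vector."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    root = math.sqrt(disc)
    for t in (-b - root, -b + root):
        if t > 0.0:
            return t
    return None

def free_path(origin, direction, particles, radius=0.5):
    """Distance to the nearest particle surface along the ray, or None (escape)."""
    hits = [t for p in particles
            if (t := ray_sphere(origin, direction, p, radius)) is not None]
    return min(hits) if hits else None

# A ray along +x from the origin toward a particle centered at x = 10
# first touches the particle's surface at a distance of 10 - 0.5 = 9.5:
assert abs(free_path((0, 0, 0), (1, 0, 0), [(10.0, 0.0, 0.0)]) - 9.5) < 1e-12
```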
I examined several clusters in the cluster library and found that at very small and very large radii, no distribution function fit the data well, but, after moving just a few radii from the center, the data matched a Poisson distribution, as shown here for cluster 29a.

Figure 5.1-1: Cluster 29a Free Path Sampling at r = 8 Diameters

Here, the actual free path measurements are in gray and the Poisson fit that best matches the data is in black. I moved out from the center of each cluster that I examined like this in increments of one Diameter (30 Angstroms). The radius of the shell from which I did the sampling in Figure 5.1-1 translates to 240 Angstroms in [8].

The Poisson fit function is N = A·exp(-x/λ), where x is the length of the path found, A is the amplitude of the decay graph, and λ is the decay length. In Figure 5.1-1, A and λ are 20.5/Diameter and 9.3 Diameters, respectively.

For cluster 29a, the graphs show the best match between the data and the decay function for radii from approximately 15 Diameters to 50 Diameters, which resolves to 250 Angstroms out to approximately 1,600 Angstroms in [8], as shown here:

Figure 5.1-2: Cluster 29a Free Path Sampling at r = 18 Diameters

In Figure 5.1-2, the values for A and λ are 11.7/Diameter and 12.5 Diameters, respectively. In Figure 5.1-3, the Poisson distribution fits well with the data in the region moving away from the peak, but begins to brush the edge of the data's noise as it gets to longer paths. The values for A and λ in this sampling are 3.1/Diameter and 25.4 Diameters, respectively.
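A decay fit of this form can be extracted by ordinary least squares on the logarithm of the counts, since ln N = ln A - x/λ is linear in x. A sketch on synthetic noiseless data (the amplitude and decay length below are arbitrary illustrations, not the fitted thesis values):

```python
import math

def fit_exponential(xs, ns):
    """Least-squares fit of n = A*exp(-x/lam) via the line ln(n) = ln(A) - x/lam.
    Returns (A, lam). Counts must be strictly positive."""
    ys = [math.log(n) for n in ns]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope

# Synthetic noiseless data with A = 20.0 and lam = 9.0 is recovered exactly:
xs = [float(x) for x in range(1, 60)]
ns = [20.0 * math.exp(-x / 9.0) for x in xs]
A, lam = fit_exponential(xs, ns)
assert abs(A - 20.0) < 1e-9 and abs(lam - 9.0) < 1e-9
```

On real binned counts the logarithm over-weights the noisy tail bins, so a weighted fit or a nonlinear least-squares routine may behave better; this sketch only shows the basic idea.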
Figure 5.1-3: Cluster 29a Free Path Sampling at r = 53 Diameters

As the radius of the sampling sphere increases from this point, the Poisson distribution no longer matches the data for small lengths, and the data continues to show less coherency and greater spread, as shown in Figure 5.1-4.

Figure 5.1-4: Cluster 29a Free Path Sampling at r = 60 Diameters

Here, the decline of the Poisson distribution continues to match the data, but the rise no longer follows it. At about this point, continued examination of the correlation between the Poisson distribution and the raw data makes it clear that the sampling is no longer within the fractal region.

This same result appeared in the analysis of all clusters. The fractal regions within the cluster were highlighted by very strong correlation between the Poisson distribution used and the data. In regions at or near the center of the cluster, there was very little correlation, as those regions lie outside the fractal region.

5.2 Repeated Fractal Cluster

The next step in examining the clusters was to model a physical aerogel by building a repeating structure that would house an exact replica of the cluster in question in each of its cells. The clusters were cut into cubic volumes that would house a 98% porosity cluster, and the cubic volumes were repeated ten times out in each of the three coordinate directions, both positive and negative. This resulted in a Megacube of 21 cubes per side. I examined most of the clusters grown by sampling them with 100,000 free paths. As expected, I found no infinite paths in this extended structure.
All paths began in the central box, and were extended into the outer boxes only when a cluster particle could not be found along the quasi-particle’s trajectory before leaving the central box. This applied to both the “ positive” and “ negative” sides of the trajectory. As a contrast to this cluster data, I also generated a “ before” image of what the cluster could have looked like before it was allowed to aggregate by a injecting a 2% density of non-intersecting, but otherwise uncorrelated particles. I used the particle counts from Table 2.10-1 for the 50,000-particle clusters and also determined that the value I referred to in that table as ‘Cube Radius” is 67.78 Diameters (2033.40 Angstroms) for cluster 29a. This means that this cube has a side length of twice this value (135.56 Diameters 98 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. = 4066.80 Angstroms). This cluster was populated with 95,166 particles to give 98% porosity. The graph of the uncorrelated particles corresponding to cluster 29a shows a Poisson distribution to the path lengths found, as seen here: 8 7 6 5 4 £ 3 2 1 0 250 300 350 450 150 200 400 500 50 100 0 Path Length (Diam eters) Figure 5.2-1: Cluster 2»g UncorreHfrd Pirticto* Ft— Path Ptotribution This collection of data fits a first-order Poisson distribution, given by P(x) = A(x/A)exp[-x/A]. Here, x is the path length measured, A is the decay length, and A is a dimensionless scale factor to match the Poisson distribution to the actual data. For this data, A = 3729, and A = 26.82 Diameters. The match between this data and the Poisson distribution given here is, as expected, excellent throughout the entire data set. 99 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. All of the uncorrelated particle collections made similar graphs. 
As expected, these particle collections will be well spread out over the full 4k steradians available, and have a high probability of a particle lying in the path of any quasi-particle along any trajectory. For the sampling displayed above, there were one million paths found. This was set up in the repeated cluster structure so that it was within a Megacube of 21 collections per side. In this sampling, I found no infinite paths, but did have just over 27,000 intersections with existing particles for the starting points of the trajectories. This number is about 2% of the number of samples taken, and fits well with a randomly filled 2% density after taking into account the 10% increase in effective radius/diameter, owing to the assumed size of the quasi-particle. Also, with this uncorrelated collection of particles, the longest path found was just over 450 Diameters in length. Given that the length of one cube in the repeating structure is just over 135 Diameters, there was no path that extended further than 4 cubes in length. Considering that the uncorrelated particles are expected to be distributed throughout the center cube with some kind of uniform, random placement, this makes sense. It won’t take long going in any direction to find a particle. When I ran the same sampling method on the actual cluster, I got the following free path graph: 100 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 0 100 200 300 400 500 600 700 800 900 1000 Path Length (ptem etere) Figure 8.2-2: C lu fr 2— Fr— Path Dirtribution One major difference visible between the duster and the uncorrelated partides is at the beginning. Small path lengths occur in roughly equal numbers, as long as the path length is less than the size of one cube width. Once the path lengths reach a greater size, the path lengths decrease in count steadily. 
Just as important is the observation that the exponential tail for this sampling has a much longer tail than in the uncorrelated particle sampling. Because the partides are more ordered than in the uncorrelated collection, there were now voids present which allow sampling paths to extend further and possibly even escape the Megacube completely in one or both 101 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. directions. For the measurement of one million path lengths, I found 46 such infinite paths. The number was more than expected in theory, and is a by product of the order imposed on the structure by repeating it exactly within each cube. This particular analysis on the cluster yielded fewer than 27,000 intersections for the sampling point on the quasi-particle’s path. Again, for one million paths sampled, this fits the 2% density. Although the decline of the path lengths beyond the cube side length can be easily matched to first-order Poisson distributions, there is no obvious distribution to work with the smaller paths. Again, this is characteristic of the clusters when analyzed in this manner. There seems to be no readily available mathematical structure to describe the structure within the cluster when modeled in this way. 5.3 Comparison to Published W ork I chose to do the method in [8] to see if I could reproduce the results obtained in that publication, as a way to better understand what I have already learned, and to understand why my results differ from those published results. That approach is similar, in the beginning, to my analysis method. It locates a position within the empty space defined by the 98% porosity duster and selects a three-dimensional random trajectory from that point. In my 102 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. analysis, the trajectory is the path for the 3 He quasi-particle. 
In the method published in [8], there is no quasi-particle involved at this point. The trajectory is used solely for the purpose of isolating a random cluster particle. Once this cluster particle is found, it is used as the base for the free path measurement. In my method, I was looking along the free path trajectory in both the “ positiven and “ negative” directions. With this method, only the “ positive” direction will have any meaning, as the cluster particle already found will block all travel into the “ negative” direction. The contact of the quasi-particle with the cluster particle defines a vector normal to the surface of the cluster particle. A new random trajectory is determined, such that it creates an angle less than 90 degrees relative to this normal vector. The quasi-particle’s free path is now based upon this trajectory from the contact point with the cluster particle. The combined graph of the free paths that I measured in this manner for cluster 29a is given here: 103 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 1000 900 800 700 600 Cluster Random 500 400 300 200 100 80 90 40 60 70 100 0 10 20 30 50 Path Length (D iam eters) Figure 8.3-1: C lusf r 29« Combined Free Path Anatvsie Comparison This graph combines the free path data generated by using the fully formed 98% porosity cluster and the uncorrelated particle collection, also of 2% density. The cluster has an extremely large number of very small path lengths available from this method. Also, the uncorrelated particle collection shows a definite peak at relatively small distances. This compares well with the graphs displayed in [8] for the path length distributions. In contrast, the free path method that I initially used on these same data sets generated the following combined graph: 104 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 
1600 1400 1200 1000 Cluster Random 800 600 400 200 400 500 600 100 200 300 0 Path Length (D im e te r*) Figure 8.3-2; Clutter 29a Combined Ft— Path Anglv«i> This graph shows no discernible peak for the path lengths measured using the cluster, and the uncorrelated particle collection has essentially no paths found for small distances. By initiating the free path measurement at a point along the path itself, and tracing back to where the quasi-particle began its trip, I believe I have been able to get a better view of the region inside the cluster than when I start at the surface of a cluster particle and follow that path. Starting at the surface of a cluster particle brings into immediate focus the fact that each particle has, by definition of the growth method, at least one other cluster particle in direct contact. The deeper into the cluster this 105 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. particle is, the more particles that it could be in contact with. This results in a heavy bias towards very small paths, as those neighboring particles will very likely lie within contact of the trajectory selected for the quasi-particle’s path. This is noticeable in Figure 5.3-1. By starting the trace of the quasi-particle’s path out in the void of the cluster’s volume, I removed this bias from the analysis. I still found very small paths, but they appeared in much smaller numbers, as is seen in Figure 5.3-2. Also, the uncorrelated particles show a difference as well. Using the particles, themselves, as anchors showed a sharp peak for the free paths. Using a point in the free space within the collection showed that the particles were pretty uniformly distributed within the cube, defined by the cluster for its porosity. Another difference with the method performed in [8] comes out of the nature of free path measurements. 
With that method, the actual measured quantity is the nearest-neighbor partide-particle separation distance, not the free path. It is a small sampling of the auto-correlation that I did as part of the growth of the cluster. Essentially, each line found counts only once. The method I used was motivated by modeling electrons in a metal with scattered impurities. Starting at some electron within the metal, how far can it move until it reaches an impurity? This is identical to the concept of 106 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. starting at a 3 He quasi-particle and seeing how far it has to go before reaching a cluster particle. The shorter paths will be found less frequently in this manner because the likelihood of “ grabbing” a quasi-particle that is currently on a short path is small. Likewise, the probability of getting a quasi-particle currently on a long path is proportionately larger, relative to the path length. Therefore, it is possible to sample the same longer path more than once by finding different quasi-particles along that path at different positions. Since I am sampling over some number of quasi-particles, as opposed to some number of cluster particles, every quasi-particle counts, as opposed to every line in [8]. 107 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 6 Conclusion The difference between my method of free path analysis and the method used in [8] should not be confused with experimental measurements of the structure of aerogels. In both simulations, the cluster particles were idealized as perfect spheres. The true aerogel particles are not spherical in shape. Overall, there doesn't seem to be any easily determined analytic distribution function for the free paths within the clusters that I grew. 
Although the randomly generated particle collections match a first-order Poisson distribution, once they are collected together as an aggregate, that disappears for small path lengths. In contrast to the free path analysis done in [8], it appears that both methods identify a fractal region within the cluster itself. The method used in [8] does not identify a large fractal region. The analysis method that I used seems to indicate a fractal region large enough to include a sizeable portion of the cluster itself, given the position of the peaks in the free path analysis graphs. Finally, because I was able to see these peaks reproduced similarly for all clusters that I examined, I believe that it shows that there is a noticeable fractal region within each cluster, as opposed to the nominal ones demonstrated in [8]. The presence of a large fractal region within the cluster 108 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. may be one piece of the puzzle as to why 3 He behaves so oddly with aerogels present. 109 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. Bibliography G. M. Adelson-Velskii, E. M. Landis, “ An Algorithm for the Organization of Information”, Soviet Mathematics Doklady, 3,1259-1263 (1962) G. Gervais, T. M. Haard, R. Nomura, N. Mulders, and W. P. Halperin, “ Modification of the Superfluid 3 He Phase Diagram by Impurity Scattering”, Physica B, 2 8 0 .134-139 (2000) T. M. Haard, G. Gervais, R. Nomura, W. P. Halperin, “ The Pathlength Distribution of Simulated Aerogels”, http://spindry.phys.nwu.edu/DLCA/pathlength.html, Physica B, 284-288 (2000) 289-290 A. Hasmy, E. Anglaret, M. Foret, J. Pelous, R. Jullien, “Small-angle neutron- scattering investigation of long-range correlations in silica aerogels; Simulations and experiments”, Physica Review B, 50(91. 6006-6016, (1994) P. 
Meakin, “ Formation of Fractal Clusters and Networks of Irreversible Diffusion-Limited Aggregation”, Physical Review Letters, 51(131.1119- 1122, (1983) W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in C. 2n d Edition, Cambridge University Press, New York NY, 1997 M. Tokayama, K. Kawasaki, Physical Review Letters, 100A. 337 (1984) S. Tolman, P. Meakin, “ Off-lattice and hypercubic-lattice models for diffusion- limited aggregation in dimensionalities 2-8”, Physical Review A, 40(11. 428-437, (1989) T. A. Witten, Jr., L. M. Sander, “ Diffusion-Limited Aggregation, a Kinetic Critical Phenomenon", Physical Review Letters, 47(191. 1400-1403, (1981) Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
PDF
A case study comparing measured and simulated acoustical environments: Acoustical improvements of Verle Annis Gallery
PDF
An evaluation of boat wake energy attenuation by a tule stand on the Sacramento River
PDF
A moving boundary model of concrete sewer pipe corrosion: Theory and simulation
PDF
Imputation methods for missing items in the Vitality scale of the MOS SF-36 Quality of Life (QOL) Questionnaire
PDF
Active deformation at Canyonlands National Park: Distribution of displacements across the grabens using spaceborne geodesy
PDF
Determination of paleoearthquake age and slip per event data, and Late Pleistocene-Holocene slip rates on a blind-thrust fault: Application of a new methodology to the Puente Hills thrust fault,...
PDF
A proposed plan of group guidance in a home room
PDF
Cross-cultural analysis in anxiety, positive mood states and performance of male cricket players
PDF
Bayesian estimation using Markov chain Monte Carlo methods in pharmacokinetic system analysis
PDF
Economic valuation of impacts to beneficial uses of water quality in California: Proposed methodology
PDF
Grain-size and Fourier grain-shape sorting of ooids from the Lee Stocking Island area, Exuma Cays, Bahamas
PDF
A model of student performance in principles of macroeconomics
PDF
Distribution of travel distances with randomly distributed demand
PDF
Design and synthesis of a new phosphine pincer porphyrin
PDF
Dual functions of Vav in Ras-related small GTPases signaling regulation
PDF
A Kalman filter approach for ionospheric data analysis
PDF
Debt reduction by way of inflation: The case of Lebanon
PDF
Derivatization chemistry of mono-carboranes
PDF
Fourier grain-shape analysis of quartz sand from the Santa Monica Bay Littoral Cell, Southern California
PDF
Brouwer domain invariance approach to boundary behavior of Nyquist maps for uncertain systems
Asset Metadata
Creator
McElroy, Kenneth James
(author)
Core Title
Analysis of free path distributions in simulated aerogels
School
Graduate School
Degree
Master of Science
Degree Program
Physics
Publisher
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
OAI-PMH Harvest,Physics, General
Language
English
Contributor
Digitized by ProQuest
(provenance)
Advisor
Gould, Christopher M. (
committee chair
), [illegible] (
committee member
), Haas, Stephan (
committee member
)
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c16-295596
Unique identifier
UC11341510
Identifier
1411799.pdf (filename),usctheses-c16-295596 (legacy record id)
Legacy Identifier
1411799.pdf
Dmrecord
295596
Document Type
Thesis
Rights
McElroy, Kenneth James
Type
texts
Source
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the au...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus, Los Angeles, California 90089, USA
Tags
Physics, General