ENHANCED QUASI-STATIC PARTICLE-IN-CELL SIMULATION OF ELECTRON
CLOUD INSTABILITIES IN CIRCULAR ACCELERATORS
by
Bing Feng
_________________________________
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PHYSICS)
May 2009
Copyright 2008 Bing Feng
Dedication
To my parents
Acknowledgements
I wish to thank a number of people who made this thesis possible.
First and foremost I want to express my deepest gratitude to my Ph.D. supervisor,
Tom Katsouleas. I owe most of my development as a student and as a scientist to him. He
taught me the right way to approach challenging problems, and how to overcome the
many difficulties research entails. His guidance and understanding throughout this long
journey were more than I could have asked for. He, more than anyone I have ever met,
always sees the sunny side of life and manages to see the silver lining in any situation.
His unabashed optimism, even during discouraging setbacks, not only helped me to
make progress but taught me the value of perseverance in the face of adversity. He was
supportive of me professionally and personally, and I will always appreciate the kindness
he showed me while I was his student.
I would also like to thank Chengkun Huang and Victor Decyk for being available
to discuss many aspects of this work, especially during the implementation of the
pipelining algorithm. When I ran into a difficult problem I could always turn to them to
help find a solution.
I would also like to thank Panagiotis Spentzouris from Fermilab and Georg
Hoffstaetter from the ERL at Cornell. It was through the collaboration with them that I
gained experience relating my simulations to real experiments.
Also, I would like to thank all the members of my group: Patric Muggli, Ali
Ghalam, Suzhi Deng, Themos Kallos, Reza Gholizadeh, Xiaodong Wang, Brian Allen,
and Xiaoying Li. With their knowledge and expertise I always knew I was part of a great team
but their friendship and support made me feel like I was part of a family. Thank you all
so much.
I would also like to acknowledge the computing and technical support from
HPCC at USC and NERSC.
I would also like to thank our collaborators at UCLA, Prof. Warren Mori,
Miaomiao Zhou, Wei Lu, Frank Tsung. They were always available to offer a fresh
perspective on difficult problems that allowed me to find a way forward.
I would also like to thank my committee members, Prof. Werner Dappen, Prof.
Stephan Haas, and Prof. Aiichiro Nakano, for spending their time on my dissertation and
giving me their advice.
I would also like to acknowledge the patient assistance of Betty Byers, Taylor
Nakamura, Noreen Tamanaha, and Kim Reid, for helping to keep me on the right track
towards graduation.
Finally, I would like to thank my parents and my fiancé Michael Kavic. They
have always believed in me, even at times when I had doubts about myself. I would have
never achieved this without their encouragement and love.
Table of Contents
Dedication
Acknowledgements
List of Figures
List of Tables
Abstract
Chapter 1: Introduction
1.1 Particle accelerators and computational modeling
1.2 The electron cloud build-up and the importance of its effect
1.3 Experimental evidence of electron cloud effect on the beam dynamics
1.4 Existing simulation models and simulation results
Chapter 2: QuickPIC algorithm for modeling beam-electron cloud interaction
2.1 PIC simulation
2.2 Quasi-static approximation
2.3 Adaptation for modeling circular accelerators
2.4 QuickPIC domain decomposition
Chapter 3: Enhancing QuickPIC simulation with a pipelining algorithm
3.1 Limitation of computing efficiency of QuickPIC code
3.2 Advantage of pipelining algorithm
3.3 Domain decomposition of the pipelining algorithm
3.4 Algorithm of pipelining
3.5 Implementation of the pipelining algorithm
3.5.1 Pipelining environment setup
3.5.2 Communication among processors in the pipelining algorithm
3.6 Improved QuickPIC simulation with pipelining algorithm
3.6.1 Fidelity of the pipelining algorithm
3.6.2 Efficiency of the pipelining algorithm
Chapter 4: Enhancing fidelity of the physics model of QuickPIC
4.1 Beam space charge effect
4.2 Dispersion effect
Chapter 5: Simulation results of FNAL main injector
5.1 Fermi lab main injector
5.2 Parametric study of the beam instabilities
5.3 The beam spot size growth before and after the upgrade
Chapter 6: Electron cloud effect on electron beam at ERL
6.1 Review of observed electron cloud effect on electron beam
6.2 Electron cloud effect of the Cornell Energy Recovery Linac
Chapter 7: Future work
7.1 Modeling real pipe shape in QuickPIC
7.2 Benchmark with WARP
7.3 Implementation of adaptive mesh refinement into QuickPIC
7.4 Simulating electron cloud build-up in QuickPIC model
Reference
Appendix A Pipeswitch Subroutine
Appendix B Code correction
Appendix C Input Deck for the Simulations
List of Figures
Figure 1.1: Electron line density as a function of time during the passage of a batch in an LHC dipole for different bunch intensities
Figure 1.2: Electron cloud induced coast beam instability in the INP PSR
Figure 1.3: Vertical beam size along the train with a trailing test bunch versus the bunch number at KEKB LER
Figure 1.4: Snapshots of the vertical beam size in the head and the tail of the train at KEKB. The vertical axis is along the bunch; the horizontal axis shows the transverse vertical direction. Consecutive bunches are separated and displayed horizontally
Figure 1.5: Schematic of HEADTAIL simulation algorithm
Figure 2.1: Superparticles and mesh schematic implemented in a 2D PIC simulation. The red dots represent the superparticles. The blue diamonds represent the grid points on which the electromagnetic field is defined
Figure 2.2: Outer loop of PIC code
Figure 2.3: The flow chart of QuickPIC
Figure 2.4: Domain decomposition of QuickPIC in 2D and 3D
Figure 3.1: Computational time of QuickPIC without pipelining algorithm
Figure 3.2: Domain decomposition used for the pipelining algorithm (2 subgroups and 4 processors per subgroup)
Figure 3.3: Algorithm of pipelining implemented into QuickPIC. (The beam moves from left to right. Four groups are used in this simulation. The boxes represent the domain for each subgroup, and are labeled with the group ID. The blue block is the part of the Gaussian beam in the domain of the subgroup.)
Figure 3.4: The pipelining environment setup. Eight processors and four subgroups are used. Each light blue brick represents a processor. The subgroups are represented by separating the processors with the solid blue line.
Figure 3.5: The inter-group communication for transferring the 2D electron cloud slab between adjacent subgroups
Figure 3.6: Moving the beam particles after the 3D push in the pipelining algorithm
Figure 3.7: Snapshot of the center x-z slice of the electron cloud and the beam at 3D time step 1000. 1(a) and 2(a) are the results from the original QuickPIC. 1(b) and 2(b) are the results from QuickPIC with the pipelining algorithm using two subgroups.
Figure 3.8: Horizontal and vertical spot size of the beam from simulation of the original QuickPIC and the QuickPIC with a pipelining algorithm
Figure 3.9: Comparison of the computational speed between the pipelining and the original QuickPIC
Figure 4.1: Horizontal spot size of the beam at CERN-LHC with/without space charge using the ten-kick method
Figure 4.2: Horizontal spot sizes with and without matching initial conditions (dispersion effect) included
Figure 4.3: Horizontal spot size of the beam at CERN-LHC with and without dispersion using the ten-kick method
Figure 4.4: Vertical spot size of the beam at CERN-LHC with and without dispersion using the ten-kick method
Figure 5.1: Main Injector at FNAL
Figure 5.2: Horizontal spot size of Main Injector at FNAL with different electron cloud densities
Figure 5.3: Vertical spot size of Main Injector at FNAL with different electron cloud densities
Figure 5.4: Horizontal spot size of Main Injector at FNAL with different synchrotron tunes
Figure 5.5: Vertical spot size of Main Injector at FNAL with different synchrotron tunes
Figure 5.6: Horizontal spot size of the Main Injector at FNAL with constant $\rho_e Q_s$
Figure 5.7: Vertical spot size of the Main Injector at FNAL with constant $\rho_e Q_s$
Figure 5.8: Motion of electron cloud particles in the presence of the dipole magnetic field in the y direction
Figure 5.9: Horizontal spot size of the Main Injector at FNAL with and without a dipole magnetic field
Figure 5.10: Vertical spot size of the Main Injector at FNAL with and without a dipole magnetic field
Figure 5.11: Horizontal spot size of the Main Injector at FNAL with different beam charge. (The blue curve is behind the red curve.)
Figure 5.12: Vertical spot size of the Main Injector at FNAL with different beam charge
Figure 5.13: Horizontal beam spot size before and after the upgrade
Figure 5.14: Vertical beam spot size before and after the upgrade
Figure 5.15: Horizontal spot size of proton beam with $N_b = 9.5\times10^{10}$
Figure 5.16: Vertical spot size of proton beam with $N_b = 9.5\times10^{10}$
Figure 5.17: Horizontal spot size of proton beam with $N_b = 3\times10^{11}$
Figure 5.18: Vertical spot size of proton beam with $N_b = 3\times10^{11}$
Figure 6.1: (a) The measured RFA data of ten electron bunches with varied spacing; (b) Electron cloud buildup and saturation
Figure 6.2: Proposed design of ERL
Figure 6.3: Vertical tune shifts of e+ and e- beams in ERL. (a) Electron beam; (b) Positron beam
Figure 6.4: Electron beam centroid motion and spot size in the transverse plane with simulation resolution of 256:256:256 cells. (a) Horizontal beam centroid motion; (b) Vertical beam centroid motion; (c) Horizontal spot size; (d) Vertical spot size (Beam center positions are determined by the distance between the beam center and the boundary of the simulation box. The centers of the beam oscillate at the betatron frequencies, which is not visible in these graphs)
Figure 6.5: Electron beam centroid motion and spot size in the transverse plane with simulation resolution of 128:128:128 cells. (a) Horizontal beam centroid motion; (b) Vertical beam centroid motion; (c) Horizontal spot size; (d) Vertical spot size (Beam center positions are determined by the distance between the beam center and the boundary of the simulation box. The centers of the beam oscillate at the betatron frequencies, which is not visible in these graphs)
Figure 6.6: Snapshot of the electron beam in the x-z plane in the first three time steps with an electron cloud density of $\rho_e = 2\times10^{11}\,\mathrm{cm}^{-3}$
Figure 7.1: Results of transverse spot size growth from QuickPIC and Warp. (a) Horizontal spot size growth from QuickPIC; (b) Vertical spot size growth from QuickPIC; (c) Horizontal spot size growth from Warp; (d) Vertical spot size growth from Warp
Figure 7.2: A typical snapshot of beam and electron cloud distributions in the x-z plane. Left: beam; right: electron cloud
Figure 7.3: Grid structure of the adaptive mesh refinement method
List of Tables
Table 3.1: Simulation parameters of LHC
Table 3.2: Simulation parameters used for test of pipelining efficiency
Table 4.1: Parameters used for LHC simulation
Table 4.2: Parameters and results for dispersion effect evaluation of LHC and SPS
Table 5.1: Simulation parameters for the FNAL Main Injector
Table 6.1: Simulation parameter for the ERL
Table 7.1: Simulation parameter for comparison of Warp vs QuickPIC
Abstract
Electron cloud instabilities have been observed in many circular accelerators
around the world and have raised concerns for future accelerators and possible upgrades. In this
thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC)
code QuickPIC.
Modeling in three dimensions the long-timescale propagation of a beam through electron
clouds in circular accelerators requires faster and more efficient simulation codes.
Thousands of processors are easily available for parallel computations. However, it is not
straightforward to increase the effective speed of the simulation by running the same
problem size on an increasing number of processors because there is a limit to domain
size in the decomposition of the two-dimensional part of the code. A pipelining algorithm
applied on the fully parallelized particle-in-cell code QuickPIC is implemented to
overcome this limit. The pipelining algorithm uses multiple groups of processors and
optimizes the job allocation on the processors in parallel computing. With this novel
algorithm, it is possible to use on the order of $10^2$ processors, and to expand the scale and
the speed of the simulation with QuickPIC by a similar factor.
In addition to the efficiency improvement with the pipelining algorithm, the
fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge
effect and the dispersion effect.
Simulation of two specific circular machines is performed with the enhanced
QuickPIC.
First, the proposed upgrade to the Fermilab Main Injector is studied with an eye
toward guiding the design of the upgrade and validating the code. Moderate emittance growth
is observed for the upgrade, which increases the bunch population by a factor of 5. But the
simulation also shows that increasing the beam energy from 8GeV to 20GeV or above
can effectively limit the emittance growth.
Then the enhanced QuickPIC is used to simulate the electron cloud effect on the
electron beam in the Cornell Energy Recovery Linac (ERL), motivated by the extremely small
emittance and high peak currents anticipated in that machine. A tune shift is observed
in the simulation; however, emittance growth of the electron beam in the electron cloud is
not observed for ERL parameters.
CHAPTER 1: INTRODUCTION
1.1 Particle accelerators and computational modeling
Particle accelerators play a central role in high energy physics research. These
machines accelerate charged particles to near the speed of light, and allow them to
collide. Such collisions reveal the structure of matter, the fundamental forces of nature and
the origin of the universe. Aside from these large scale accelerators, there are much
smaller so-called “tabletop” accelerators which are attractive for other applications. They
are used for X-ray lithography, structural biology, radioisotopes, medicine, fusion
research, food sterilization, transmutation of nuclear waste, and cancer therapy [1].
The conventional particle accelerators that have existed for decades make use of
microwave cavities to accelerate particles. An electric field is generated by powerful
microwave radiation and moves along with the particle beam. Since the accelerating field
is limited to about 20~50MV/m due to electrical break down, in order to achieve a higher
energy gain, a longer accelerating path is required. There are two general types of the
particle accelerators: linear accelerators and circular accelerators. The advantage of
circular accelerators is that the charged particle beams can circulate in the pipe repeatedly
before they gain their final energy. Bending magnets are installed along the pipe to keep
the beam in a circular path. This provides an efficient way to reuse the RF power and
increase the number of collisions. The world’s largest particle accelerator is the Large
Hadron Collider (LHC) at CERN that sits on the French-Swiss border. The LHC is a
circular accelerator. It has a circumference of 27 km and is designed to accelerate
counter-rotating proton beams to 7 TeV and bring them into collision.
As the microwave technology based particle accelerators are reaching their
technical and economic limits, a great deal of attention has shifted to a novel
approach, plasma-based accelerators. Plasma contains equal amounts of negative charge
from electrons and positive charge from ions and is electrically neutral as a whole. The
particles are accelerated by surfing on the plasma wakefield. There are typically two
ways to excite the plasma wakefield: using a short pulse from an intense laser which
leads to the laser wakefield accelerator (LWFA), or a short pulse of intense particle beam
which leads to the plasma wakefield accelerator (PWFA). When the laser pulse or the
electron beam is sent through the plasma, the light electrons in the plasma are pushed
away. The heavier ions do not move much and form a region with excess positive
charges. The ions later pull back the electrons after the drive pulse has passed by. The
electrons overshoot and oscillate to form the wake. This results in a strong longitudinal
electric field with the phase velocity approximately equal to the speed of light, c. The
relativistic charged particle beams can surf on the wake of the plasma and gain energy
through the wakefield.
There have been great breakthroughs in the field of plasma-based accelerators. In
2006, energy doubling of a 42 GeV electron beam was realized at the Stanford
Linear Accelerator Center (SLAC) [2]. It takes the conventional particle accelerator at
SLAC about 3 km to achieve a 42 GeV energy gain, but the plasma wakefield accelerator,
with an accelerating gradient of around 52 GeV/m, achieved the same energy gain for some
particles within 85cm.
Particle accelerators are giant and complex systems and are very costly and time
consuming to build. Consider the LHC at CERN mentioned above. The construction
began in 1995 and operation was expected to begin in 2008, and even this was facilitated by
using the pre-existing tunnel from the Large Electron Positron Collider (LEP). The total
cost is estimated to be 5 to 10 billion US dollars. The world's largest linear accelerator,
SLAC, took about 4 years to design and construct. Its original cost for the preconstruction
research and development was $18 million back in 1962; later design and construction
cost another $114 million. Because these experiments require such an enormous
investment, it is critical to fully understand the physics involved in particle accelerators
and the evolution of a charged particle beam. However, it is not an easy task due to the
complex nature of the particle accelerators, in particular plasma based accelerators. The
interaction between the beam and the plasma can be highly non-linear, and analytic
description for the physics involved may be impossible to derive. Numerical methods
become necessary to deal with this situation. Computer simulations have been widely
used to understand the physics involved in the plasma accelerator, to predict the behavior
of the beam and to help come up with the optimal design. Moreover computer
simulations provide a cost effective way to facilitate this type of research. It is also
possible to have access to thousands of processors for a given simulation simultaneously,
which can potentially increase the computing speed by a similar factor. For example, for
the E167 energy doubling experiment at SLAC, the particle-in-cell (PIC) code has been
extensively used to model the acceleration process of charged particles and played an
important role in the development of the experiment [2]. Because the electron cloud can
be treated as a non-neutral plasma, PIC simulation has also been applied to study the
beam instability phenomena in many circular machines, such as the LHC, that are caused by
the spurious electron cloud generated within the pipe [3].
Despite the advantages of using computer simulation, modeling of the accelerator
physics also faces a lot of challenges. One of the challenges is to resolve different time
scales. Consider the interaction between the beam and the electron cloud. The electron
cloud evolves at the scale of the corresponding electron cloud wavelength, while the
beam evolves at the scale of the betatron wavelength and travels thousands of turns in the
pipe with a circumference on the order of kilometers. This is much larger than the electron
cloud wavelength. The electron cloud cannot be accurately modeled if the time step is
chosen according to the scale of the beam. However if the time step is chosen based on
the wavelength of the electron cloud, the enormous amount of time required to simulate
the propagation of the beam will make the simulation impossible. The solution to this
problem is to use the quasi-static approximation and use two different time steps for
modeling the beam and the electron cloud.
1.2 The electron cloud build-up and the importance of its effect
Ever since the first observation of the beam instability induced by the electron
cloud at the Budker Institute of Nuclear Physics (BINP) in the 1960s [4], it has become a
critical issue in the design, construction and operation of circular machines.
The build-up of the spurious electron cloud has two stages: primary electron
generation and secondary electron emission.
There are three main sources of primary electron formation: photoelectrons
from synchrotron radiation, residual gas ionization, and electrons generated when
stray beam particles hit the chamber wall. Photoelectrons are the dominant source of the
primary electrons in the circular machines with very high beam energy such as the LHC.
It is expected to generate about $10^{-3}$ photoelectrons per proton per meter in the arcs. In
most circular machines, the electron yield from positively charged particles hitting
the chamber wall or from residual gas ionization is the dominant source. Heavy ions usually
have a higher electron yield than protons when striking the wall.
Once the primary electrons are produced, they are kicked by the passing bunch
towards the chamber wall. As they strike the wall with higher energy, more electrons are
produced. These secondary electrons are pulled by the passing bunch and strike against
the wall to produce yet more electrons. This is the process of secondary electron
buildup. In most circular accelerators, secondary electron production makes the
dominant contribution to the total electron cloud density. The secondary emission yield
(SEY) is often used to quantitatively describe the secondary electron emission. The
SEY is the average number of electrons produced per incident electron, and it is a
function of the energy and angle of the incident electrons, the surface material of the
chamber wall, and the surface conditioning. When the effective secondary emission yield
is greater than 1, more electrons are ejected than absorbed, and the overall electron
cloud density can grow rapidly inside the pipe. If the effective SEY is less than 1, fewer
electrons are produced than absorbed, and the density of the electron cloud decreases.
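To make the role of the effective SEY concrete, the short sketch below (a toy model written purely for illustration, not part of any build-up code such as ECLOUD or POSINST; the per-bunch multiplication-and-seeding rule and its numbers are assumptions) shows how the cloud density grows or decays geometrically over successive bunch passages depending on whether the effective SEY is above or below unity.

# Toy model of electron cloud build-up over successive bunch passages.
# Assumption: each passage multiplies the existing cloud density by the
# effective SEY and adds a fixed number of primary (seed) electrons.
def cloud_density_history(sey_eff, n_bunches, seed_per_bunch=1.0e6, initial=0.0):
    """Return the cloud density (arbitrary units) after each bunch passage."""
    history, density = [], initial
    for _ in range(n_bunches):
        density = sey_eff * density + seed_per_bunch
        history.append(density)
    return history

if __name__ == "__main__":
    growing = cloud_density_history(sey_eff=1.3, n_bunches=20)   # SEY > 1: rapid growth
    decaying = cloud_density_history(sey_eff=0.7, n_bunches=20)  # SEY < 1: low saturation
    print(f"after 20 bunches, SEY = 1.3: {growing[-1]:.3e}")
    print(f"after 20 bunches, SEY = 0.7: {decaying[-1]:.3e}")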
It is crucial to understand the electron cloud formation and distribution to further
study its effect on the beam dynamics. Several computer programs have been developed,
such as ECLOUD [5] and POSINST [6], to model the electron cloud build-up. The
results of these codes can be imported to the codes that simulate beam and electron cloud
interaction. Figure 1.1 gives an example of the resulting electron cloud line density from
code ECLOUD [7].
Figure 1.1 Electron line density as a function of time during the passage of a batch in an
LHC dipole for different bunch intensities [7]
The beam instability due to the electron cloud has been widely observed in
circular machines around the world, including the SPS at CERN, RHIC at Brookhaven
National Laboratory, and the KEK B-factory in Japan. It is also expected to be an important
issue for the LHC at CERN. The spurious electron cloud increases the pressure inside the
pipe and causes beam spot size and emittance growth, turn-by-turn tune shift, beam loss,
heating of the vacuum pipe and interference with the beam diagnostics. There are two types of beam
instabilities: coupled-bunch instability and the single-bunch instability. In the case of the
coupled-bunch instability, the leading bunch excites the long range wakefield through the
interaction with the electron cloud. The wake in turn exerts force on the trailing bunch,
couples the motion of the leading and the trailing bunches and causes the beam
instability. Single-bunch instability occurs when the electron cloud density is high, and
the wake field is short range or the bunch length is long enough. The head of the beam
interacts with the electron cloud and generates the wake field. The motion of the head and
the tail of the beam is coupled by the wake field and causes head-tail instability.
1.3 Experimental evidence of electron cloud effect on the beam dynamics
The first experimental observation of electron cloud effect can be traced back to
1965 at the Budker Institute of Nuclear Physics (BINP). In the small-scale storage ring
with a circumference of 2.5 m, electrons were injected for space charge neutralization
(compensation). The compensating particles drove a transverse instability of the
bunched proton beam and led to a fast loss of the bunched beam. This effect is mainly a
two-stream instability. At the time, the instability was damped by a negative feedback
system.
Ever since then, more and more electron cloud instability observations have been
reported from storage rings and damping rings at numerous experiments, including the
CERN ISR in 1971, the synchrotron radiation source ALADDIN, the INP PSR and the
Fermilab antiproton accumulators. Figure 1.2 shows the measured beam loss and the beam
current reduction in the INP PSR [8]. Other storage rings have had similar observations.
Figure 1.2 Electron cloud induced coast beam instability in the INP PSR [8]
The first observation of the electron cloud effect for a positron beam was at the
Photon Factory storage ring (PF ring) at KEK in Japan back in the early 1990s. The storage
ring is mainly used for synchrotron radiation experiments. The machine is operated in the
multibunch mode with the energy of the positron or electron beams at 2.5GeV. The
vertical beam size growth was observed for the positron beam and was not observed in
the electron beam case. It showed features of coupled oscillation, and broad distribution
of the betatron oscillation sidebands, which indicates a vertical instability.
The positron beam size blow up was observed at the KEKB above a certain
threshold current in 1999. The positron beam consists of many closely spaced bunches,
and later this instability was identified as a single-bunch electron-cloud instability. After
this observation, further research was carried out at KEKB LER by injecting a test bunch
right after a train of bunches. Figure 1.3 shows the vertical beam spot size with different
test bunch currents. The spot size increases along with the increase of the current of the
test bunch.
Figure 1.3 Vertical beam size along the train with a trailing test bunch versus the bunch
number at KEKB LER [9].
The single bunch instability is induced by coupling the head and the tail of the
beam by the wake field of the electron cloud. The tail undergoes amplified oscillation and
results in the head-tail instability. Figure 1.4 shows the snap shot of the head and tail
vertical beam size taken with a streak camera in dual sweep mode. There are two features
of instability shown in figure 1.4: vertical beam size blow up in the tail and head-tail tilt
in some bunches.
Figure 1.4 Snapshots of the vertical beam size in the head and the tail of the train at
KEKB. The vertical axis is along the bunch (z); the horizontal axis shows the transverse
vertical direction (y). Consecutive bunches are separated and displayed horizontally [10]
The examples that have been presented here are just a few among many that have
been observed over the past several decades. Electron cloud instability also has become
an important issue of consideration when designing and operating new machines, such as
LHC at CERN. The instability of the LHC beam in the CERN SPS was observed in 2000.
It exhibits the special characteristic of being mainly a coupled-bunch instability in the
horizontal plane and a single-bunch instability in the vertical plane.
Measures have also been taken to reduce the electron cloud formation and its
effect. Common ones include transverse feedback systems; changing the chromaticity to
mitigate the beam dynamics; improving the vacuum pressure; applying TiN coating to
reduce secondary emission; and using solenoidal windings on the beam pipe to inhibit
electrons moving from the walls to the center [11]. These methods have proved effective
to some degree, but more effort is required to better understand the electron cloud effect
and to discover more efficient ways to protect the beam quality.
1.4 Existing simulation models and simulation results
Beam interaction with the electron cloud is a complex non-linear process. Various
analytical and simulation models have been developed to model this process through
different approaches. The two-macro-particle model is one of the analytical ways to analyze
the linear effect of the electron cloud. In this model, the beam is simplified into two
macroparticles representing the head and the tail respectively. The macroparticles undergo
betatron oscillation and synchrotron oscillation [12]. A brief summary will be presented
of some of the simulation models, including the BEST code [13], which is based on solving
the Vlasov-Maxwell equations using the $\delta f$ method, and the particle-in-cell (PIC) codes
HEADTAIL [14] and QuickPIC [15], which represent the beam and the electron cloud by
macroparticles with mass, charge, velocities and positions.
The Beam Equilibrium, Stability and Transport (BEST) code uses the nonlinear
$\delta f$ formalism [13] to solve the Vlasov-Maxwell equations that describe the electron-
proton (e-p) two-stream instability in the Proton Storage Ring (PSR) [16]. The intense
charged particle beam propagates through the electron cloud in the longitudinal z
direction. The distribution of the particles in each species is described by
$f_j(\vec{x},\vec{p},t)$, where $j$ denotes either the beam or the electron cloud species. Including the
electric field $\vec{E}_s = -\nabla\phi(\vec{x},t)$, the magnetic field $\vec{B}_s = \nabla\times\bigl(A_z(\vec{x},t)\,\hat{z}\bigr)$, and the focusing
force on the beam $-\gamma m \omega_\beta^2\,\vec{x}_\perp$, where $\omega_\beta$ is the effective betatron frequency and $\vec{x}_\perp$ is
the displacement from the beam axis in the x-y plane, the nonlinear Vlasov-Maxwell
equations can be rewritten as equation (1-1) [13]
\[
\left\{\frac{\partial}{\partial t} + \vec{v}\cdot\frac{\partial}{\partial\vec{x}}
- \left[\gamma_j m_j \omega_{\beta j}^2\,\vec{x}_\perp
+ e_j\left(\nabla\phi - \frac{v_z}{c}\nabla_\perp A_z\right)\right]\cdot\frac{\partial}{\partial\vec{p}}\right\}
f_j(\vec{x},\vec{p},t) = 0
\]
\[
\nabla^2\phi = -4\pi\sum_j e_j\int d^3p\, f_j(\vec{x},\vec{p},t)
\]
\[
\nabla^2 A_z = -\frac{4\pi}{c}\sum_j e_j\int d^3p\, v_z\, f_j(\vec{x},\vec{p},t)
\tag{1-1}
\]
In order to solve the above equations, the $\delta f$ method is used and $f_j$ is decomposed into
two parts,
\[
f_j = f_{j0} + \delta f_j \tag{1-2}
\]
where $f_{j0}$ is the known solution at the equilibrium state and $\delta f_j$ is the perturbed term,
which is determined by updating the ratio $\delta f_j / f_j$ together with the positions and momenta
of the particles. The advantage of this method is the significantly reduced noise level
compared to other methods. The disadvantage is computational time, because enough
particles are needed at each spatial cell to give a distribution of momenta, hence many
more than in a particle-in-cell code. $\delta f$ methods are generally limited to one dimension
as a result.
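The noise-reduction argument can be illustrated with a small numerical sketch (a schematic example, not the BEST implementation; the Gaussian equilibrium, the 1% perturbation and the marker counts are arbitrary choices). Both estimators below use markers drawn from the equilibrium $f_0$ and estimate the same perturbed moment; the statistical error of the $\delta f$ estimator is set by $|\delta f|$ rather than $|f|$.

import numpy as np

# Equilibrium f0 = unit Gaussian; perturbation delta_f = eps * x * f0(x), so the
# perturbed mean <x> equals eps.  The full-f estimator weights markers by
# f/f0 = 1 + eps*x (noise ~ |f|); the delta-f estimator weights them by
# delta_f/f0 = eps*x (noise ~ |delta_f|).
rng = np.random.default_rng(0)
eps, n_markers, n_trials = 0.01, 10_000, 200

full_f, delta_f = [], []
for _ in range(n_trials):
    x = rng.standard_normal(n_markers)
    full_f.append(np.mean((1.0 + eps * x) * x))
    delta_f.append(np.mean((eps * x) * x))

print("exact perturbed mean       :", eps)
print("full-f  relative std. error:", np.std(full_f) / eps)
print("delta-f relative std. error:", np.std(delta_f) / eps)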
In the particle-in-cell (PIC) code, the simulation space, which is contained in the
simulation box, is divided into multiple cells in three spatial directions. The beam and the
electron cloud particles are represented by macroparticles residing in the cells. Based on
the number of interaction points between the beam and the electron cloud, there are two types of PIC
code in use for beam-cloud interaction modeling: discrete and continuous.
HEADTAIL is an example of discrete PIC code. The electron cloud instead of
being distributed continuously is concentrated at a number of locations (interaction
points) along the ring as shown in figure 1.5 [14].
Figure 1.5 Schematic of HEADTAIL simulation algorithm
The blue points in the figure represent electrons concentrated at one interaction
point. The density is determined so that the average integrated electron cloud density is
preserved. The beam is sliced in the longitudinal direction and interacts with the electron
cloud slice by slice. Between the interaction points, the beam particles are advanced with
a transformation matrix based on the lattice structure of the ring. For simplicity, we
assume a constant betatron oscillation frequency,
€
ω
β
, in the horizontal direction. The
transformation matrix derived from the betatron harmonic oscillator equation for
transverse phase space is
€
x
x'
=
cosω
β
t
1
ω
β
sinω
β
t
−ω
β
sinω
β
t cosω
β
t
x
0
x
0
'
(1-4)
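To illustrate how a discrete code transports particles between interaction points with the matrix of equation (1-4), the following sketch (a minimal example with arbitrary tune, spacing and kick model; it is not the HEADTAIL code) rotates the transverse phase-space coordinates of a set of macroparticles and applies a localized electron-cloud kick at each point.

import numpy as np

# Discrete (kick-point) transport after equation (1-4): between interaction
# points the coordinates (x, x') are advanced with the betatron rotation
# matrix; at each point a toy electron-cloud kick is applied.  omega_beta, dt
# and the kick strength are illustrative choices only.
omega_beta = 2.0 * np.pi * 0.31      # betatron angular frequency (arbitrary units)
dt = 1.0                             # time between interaction points

c, s = np.cos(omega_beta * dt), np.sin(omega_beta * dt)
transfer = np.array([[c, s / omega_beta],
                     [-omega_beta * s, c]])   # the matrix of equation (1-4)

rng = np.random.default_rng(1)
coords = rng.normal(scale=[1.0e-3, 1.0e-6], size=(10_000, 2))   # columns: x, x'

def cloud_kick(xp, x, strength=1.0e-7):
    """Toy linear electron-cloud kick applied to x' at an interaction point."""
    return xp - strength * x

for _ in range(100):                     # one hundred interaction points
    coords = coords @ transfer.T         # betatron transport, eq. (1-4)
    coords[:, 1] = cloud_kick(coords[:, 1], coords[:, 0])

print("rms x after 100 kick points:", coords[:, 0].std())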
Although the discrete PIC code has the advantage of low computational cost and
requires less computing time, the assumption that the electron cloud and the beam
interact at one or several kick points instead of continuously along the ring captures only
the first order linear effect of the electron cloud [17]. In fact, the wakefield of the electron
cloud is highly nonlinear, and the discrete PIC code cannot model this effect.
In order to capture the nonlinear physics of the electron cloud perturbation, a
quasi-static continuous PIC code was developed [15]. In the continuous model, the
electron cloud is spread all over the ring and the beam interacts with the electron cloud
continuously. In order to overcome the incurred high computational cost, the quasi-static
or the frozen field approximation is used for the electromagnetic field calculation and the
PIC code is highly parallelized, which enables the program to be executed on multiple
processors at the same time. One example of such codes is QuickPIC, developed at UCLA-
USC. A detailed description of QuickPIC will be presented in chapter two.
CHAPTER 2: QUICKPIC ALGORITHM FOR MODELLING
BEAM-ELECTRON CLOUD INTERACTION
QuickPIC is the primary simulation tool used to study the dynamics
of the beam interacting with the spurious electron cloud generated in circular
accelerators. QuickPIC is a particle-in-cell (PIC) code based on a quasi-static or
frozen field approximation.
2.1 PIC simulation
In the PIC simulation of beam and electron cloud interaction, the real electron
cloud particles and beam particles are modeled by “superparticles”. Each superparticle
represents many real charged particles and has proportionally larger mass and charge.
Because the charge-to-mass ratio of a superparticle is unchanged, its trajectory is the same
as that of the real particles.
The simulation box is the domain in which the electron cloud and the beam
reside. This is divided into multiple cells in all three dimensions. A 2D illustration is
shown in figure 2.1 [1]. The electromagnetic fields are defined only on the grid points,
and must be interpolated to determine the field at the location of a particle that is not
on a grid point. The charge and current densities are likewise defined only on the grid points,
by weighting the charge and current of each particle to the neighboring grid points.
Figure 2.1: Superparticles and mesh schematic implemented in a 2D PIC simulation. The
red dots represent the superparticles. The blue diamonds represent the grid points on
which the electromagnetic field is defined [1].
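The grid weighting just described can be sketched as follows (a one-dimensional linear, cloud-in-cell weighting with periodic wrap-around, written only for illustration; QuickPIC's own deposition and interpolation routines are more elaborate and are not reproduced here).

import numpy as np

# 1D linear (cloud-in-cell) weighting: deposit each particle's charge onto the
# two nearest grid points, and gather a grid field back to the particle
# position with the same weights.  Periodic boundaries are used for simplicity.
def deposit(positions, charges, n_grid, dx):
    rho = np.zeros(n_grid)
    cell = np.floor(positions / dx).astype(int)
    frac = positions / dx - cell
    np.add.at(rho, cell % n_grid, charges * (1.0 - frac))
    np.add.at(rho, (cell + 1) % n_grid, charges * frac)
    return rho / dx                       # charge density on the grid

def gather(field, positions, dx):
    n_grid = field.size
    cell = np.floor(positions / dx).astype(int)
    frac = positions / dx - cell
    return field[cell % n_grid] * (1.0 - frac) + field[(cell + 1) % n_grid] * frac

# Example: two unit charges deposited on an 8-cell grid of spacing 0.5.
print(deposit(np.array([0.6, 2.3]), np.array([1.0, 1.0]), n_grid=8, dx=0.5))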
The PIC simulation starts with the initialization of superparticles with the desired
distribution of positions and velocities. The dynamic particle interaction is calculated
through a loop. For each step of the loop, the computational cycle is broken up into four
parts as shown in figure 2.2 [2]. From the particle positions and velocities, the current and
charge density on the grid are deposited. The electromagnetic fields on the grid are
determined by a field solver. The field is then used to calculate the force on the beam
particles. In the end, the beam particles are pushed, and the updated positions and
velocities are deposited. The cycle repeats until the desired number of time steps has been
reached.
Figure 2.2 outer loop of PIC code [2]
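The four stages of figure 2.2 can be written out as a minimal, self-contained one-dimensional electrostatic example (normalized units, periodic boundaries and nearest-grid-point weighting are simplifications chosen for brevity; they are not the choices made in QuickPIC or any other production code).

import numpy as np

# Minimal 1D electrostatic PIC loop following the four stages of figure 2.2:
# deposit, field solve, force gather, particle push.
n_grid, n_part, n_steps, dt = 64, 4096, 50, 0.1
length = 2.0 * np.pi
dx = length / n_grid
k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)     # wavenumbers for the solver

rng = np.random.default_rng(2)
x = rng.uniform(0.0, length, n_part)
v = rng.normal(0.0, 0.1, n_part)
charge = -length / n_part          # electrons on a uniform neutralizing background

for _ in range(n_steps):
    # 1. deposit the charge density on the grid (nearest grid point)
    cells = np.rint(x / dx).astype(int) % n_grid
    rho = np.bincount(cells, minlength=n_grid) * charge / dx + 1.0
    # 2. field solve: laplacian(phi) = -rho, E = -d(phi)/dx, via FFT
    rho_k = np.fft.fft(rho)
    e_k = np.zeros_like(rho_k)
    nz = k != 0.0
    e_k[nz] = -1j * rho_k[nz] / k[nz]
    e_grid = np.real(np.fft.ifft(e_k))
    # 3. gather the field at each particle position
    e_part = e_grid[cells]
    # 4. push: advance velocities and positions (q/m = -1 for electrons)
    v += -e_part * dt
    x = (x + v * dt) % length

print("final rms velocity:", v.std())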
The electric and magnetic field solver of a full PIC code solves Maxwell's
equations by discretizing them into a set of finite difference equations. When modeling
long-term beam-electron cloud interaction, the time step Δt is required to resolve the
electron cloud oscillation and to satisfy the Courant condition [3]. Δt ends up being much
smaller than the real time span of the problem of interest. This makes the full PIC code
very computationally intensive. In order to overcome this limit, the quasi-static or frozen
field approximation is introduced so that the beam pusher can use a much larger time step
than the electromagnetic field solver, which greatly reduces the computing time.
2.2 Quasi-static approximation
QuickPIC is based on the quasi-static equations. For the beam and electron cloud
interaction, the electron cloud evolves on the scale of the wavelength of the wake, which
is much shorter than the scale over which the beam evolves. During the time scale it takes
the electron cloud to pass through the beam, the beam does not evolve significantly. In
other words,
$\beta \gg \sigma_z$, where $\beta$ is the average beta function and $\sigma_z$ is the bunch length of
the beam [4].
Starting from the Maxwell equations in the Lorentz gauge,
\[
\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2\right)\vec{A} = \frac{4\pi}{c}\,\vec{j} \tag{2-1}
\]
\[
\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2\right)\phi = 4\pi\rho \tag{2-2}
\]
where $\vec{j}$ is the total axial current density of the beam and the electron cloud,
\[
\vec{j} = \vec{j}_b + \vec{j}_c \tag{2-3}
\]
It is assumed that the motion of the beam is highly relativistic, and the electron cloud
particles are non-relativistic. The contribution to the total current from the electron
cloud is negligible compared to that of the beam. Thus the total current can be written as
\[
\vec{j} \approx \vec{j}_b = c\rho_b\,\hat{z} \tag{2-4}
\]
where $c$ is the speed of light and $\rho_b$ is the beam charge density.
The quasi-static or the frozen field approximation assumes that the wakes are
functions of $(z - ct)$ only, in which case the full three dimensional (3D) Maxwell
equations are reduced to the two dimensional (2D) quasi-static equations [1]. Within the
quasi-static approximation the 3D Maxwell equation solver is reduced to a series of 2D
Poisson solvers. The full quasi-static approximation takes into account the axial plasma
current that is of importance when the plasma or cloud electrons move at relativistic
speeds. This axial current is typically important when modeling plasma wakefield
problems. When the axial current is included, the field solve is done iteratively within a
predictor-corrector loop. For the electron cloud problem, however, the axial current is typically
small enough to be ignored since the electron cloud particles are non-relativistic. In this
case the fields and particle trajectories can be advanced using a leapfrog algorithm where
the fields are solved using 2D Poisson equations. This latter case is referred to as the basic
quasi-static mode of QuickPIC and is the mode used for the electron cloud simulation in
this thesis. Ignoring the current density of the electron cloud as in equation (2-4), the full
quasi-static equations become scalar Poisson equations as equation (2-5) and (2-6)
\[
\nabla_\perp^2 \vec{A} = -\frac{4\pi}{c}\,\vec{j} \tag{2-5}
\]
\[
\nabla_\perp^2 \phi = -4\pi\rho \tag{2-6}
\]
Currently, the generation of the electron cloud is not included in the model, and
the cloud is assumed to be generated by the preceding bunch and it is initially uniform
and cold. The Poisson equations are solved on a 2D slab of electron cloud with
conducting boundary conditions to obtain the electromagnetic field. Once the wake field
is obtained and the electron cloud particles are updated, the slab is pushed back a small
step. After the 2D slab has pushed through the whole beam from head to tail, a 3D beam
pusher is entered. The force on the beam
$\vec{F}_{b\perp}$ can be obtained as follows:
\[
\psi = \phi - A_{\parallel} \tag{2-7}
\]
\[
\vec{F}_{b\perp} = -e\,\nabla_\perp\psi \tag{2-8}
\]
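A sketch of this transverse field solve is given below (written with periodic boundaries and a spectral Poisson solver for compactness; the actual QuickPIC solver uses conducting-wall boundary conditions on the pipe, which this simplification does not reproduce).

import numpy as np

# Given the charge density rho and axial current j_z on a transverse slab,
# solve the Poisson equations (2-5) and (2-6) spectrally, form
# psi = phi - A_parallel as in (2-7), and take the transverse gradient to get
# the force per unit charge on the beam, equation (2-8).  c = 1 in these units.
def transverse_fields(rho, j_z, dx, dy):
    ny, nx = rho.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # placeholder; the zero mode is set below

    def poisson(source):                 # solves  laplacian(u) = -4*pi*source
        u_k = 4.0 * np.pi * np.fft.fft2(source) / k2
        u_k[0, 0] = 0.0
        return np.real(np.fft.ifft2(u_k))

    phi = poisson(rho)                   # equation (2-6)
    a_par = poisson(j_z)                 # equation (2-5), with c = 1
    psi = phi - a_par                    # equation (2-7)
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)
    return -dpsi_dx, -dpsi_dy            # equation (2-8), per unit charge e

if __name__ == "__main__":
    ny = nx = 64
    yy, xx = np.mgrid[0:ny, 0:nx]
    rho = np.exp(-((xx - nx / 2)**2 + (yy - ny / 2)**2) / 50.0)   # Gaussian beam slice
    fx, fy = transverse_fields(rho, rho, dx=1.0, dy=1.0)
    print("peak |F_x| per unit charge:", np.abs(fx).max())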
The time saving of QuickPIC compared to a full PIC code is 2 to 3 orders of
magnitude, which comes from two sources. One is the larger time step used in the
3D routine.
The time step of the 3D beam pusher is chosen to resolve the beam betatron
oscillation. This usually requires about 30 time steps per betatron oscillation. In other
words,
$\Delta t = \beta/30$. In the full PIC code, the time step is about the cell size, $\Delta z$, divided by
the speed of light c. QuickPIC uses a 3D time step that is 2 to 3 orders of magnitude
larger than used by the full PIC code [4]. In order to simulate the same propagation
distance of the beam through the electron cloud, many fewer time steps are required
using QuickPIC. But simply increasing the size of the 3D time step improves the
efficiency by no more than 10 times [5] because most of the computing time is spent on
FFT solvers for the field calculation. So the major contribution of time saving is from
reducing the 3D field calculation to a 2D one with the quasi-static approximation. It
reduces the number of particles being pushed by an amount that is proportional to the
ratio of the 3D to 2D time steps, which is the number of grid cells in the z direction, $N_z$.
For a typical simulation with $N_z$ of 256, the time saving will be two orders of magnitude [3].
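The time-step bookkeeping above can be made concrete with a short calculation (the beta function, cell size and grid count below are illustrative placeholders, not the parameters of any machine simulated in this thesis).

# Rough estimate of the QuickPIC time-step advantage; all numbers are
# illustrative placeholders.
c = 3.0e8                  # speed of light, m/s
beta_avg = 100.0           # average beta function, m
dz = 1.0e-3                # longitudinal cell size of a full PIC code, m
n_z = 256                  # number of cells in z (2D steps per 3D step)

dt_full_pic = dz / c                      # Courant-limited step of a full PIC code
dt_quickpic_3d = (beta_avg / c) / 30.0    # about 30 steps per betatron oscillation

print(f"full PIC time step        : {dt_full_pic:.2e} s")
print(f"QuickPIC 3D time step     : {dt_quickpic_3d:.2e} s")
print(f"ratio of time steps       : {dt_quickpic_3d / dt_full_pic:.0f}x")
print(f"quasi-static 2D reduction : ~{n_z}x fewer particle pushes per 3D step")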
2.3 Adaptation for modeling circular accelerators
QuickPIC was first developed to model particle acceleration in plasma
wakefields. In order to enable it to model the beam and electron cloud interaction inside
the circular machine, betatron and synchrotron oscillations were added into the model.
These oscillations are introduced by the external fields of the magnets and RF power in
the accelerator. The betatron oscillation occurs in the transverse plane (x-y plane, this
plane is perpendicular to the direction in which the beam travels). The frequency of the
betatron oscillation of a single beam particle is made to depend on its longitudinal
momentum offset in order to take into account chromatic effects. Synchrotron oscillation
occurs in the longitudinal direction (z direction, the direction in which the beam
propagates). Including the effect of betatron and synchrotron oscillation, the equations of
motion are as follows [4]:
In the transverse direction,
\[
\frac{d^2x}{dt^2} + \left(Q_x + \Delta Q_x\,\frac{\delta p}{p_0}\right)^2\omega_0^2\,x = \frac{1}{m_p\gamma}F_{cl\_x} \tag{2-9}
\]
\[
\frac{d^2y}{dt^2} + \left(Q_y + \Delta Q_y\,\frac{\delta p}{p_0}\right)^2\omega_0^2\,y = \frac{1}{m_p\gamma}F_{cl\_y} \tag{2-10}
\]
In the longitudinal direction,
\[
\frac{dz}{dt} = -\eta c\,\frac{\delta p}{p_0} \tag{2-11}
\]
\[
\frac{d}{dt}\left(\frac{\delta p}{p_0}\right) = \frac{Q_s^2\,\omega_0^2}{\eta c}\,z + \frac{1}{p_0}F_{cl\_z} \tag{2-12}
\]
where $Q_x$ and $Q_y$ are the horizontal and vertical tunes respectively; $\Delta Q_x$ and $\Delta Q_y$ are
the chromatic shifts; $\delta p / p_0$ is the longitudinal momentum spread of the beam;
$\vec{F}_{cl} = (F_{cl\_x}, F_{cl\_y}, F_{cl\_z})$ is the total force exerted on the beam particles by the electron
cloud; $\omega_0$ is the angular revolution frequency of the beam in the circular accelerator
(since the beam in our model is ultra-relativistic, $\omega_0 = c/R_0$, with $R_0$ the average radius
of the circular machine); $\eta$ is the slippage factor; and $Q_s$ is the longitudinal synchrotron
tune. Equations (2-9)-(2-12) are non-symplectic because they ignore the terms that are
functions of x and y in equation (2-11) due to their small values.
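A schematic of how equations (2-9)-(2-12) can be advanced for a single beam macroparticle is shown below (a simple semi-implicit update with placeholder tunes, frequencies and time step; this is not the QuickPIC beam pusher, and the electron-cloud force is left as a stub).

import numpy as np

# Advance one beam macroparticle through equations (2-9)-(2-12).  All
# parameter values are illustrative placeholders.
q_x, q_y, q_s = 26.18, 26.22, 0.0059       # betatron and synchrotron tunes
dq_x, dq_y = 2.0, 2.0                      # chromatic tune shifts
eta = 3.0e-4                               # slippage factor
omega_0 = 2.0 * np.pi * 1.1e4              # angular revolution frequency, rad/s
c = 3.0e8
dt = 1.0e-7

def cloud_force(x, y, z):
    """Stub for the electron-cloud terms of (2-9)-(2-12), already divided by
    m_p*gamma (transverse) and p_0 (longitudinal)."""
    return 0.0, 0.0, 0.0

x, vx, y, vy, z, dp = 1.0e-3, 0.0, 1.0e-3, 0.0, 0.0, 1.0e-4
for _ in range(1000):
    fx, fy, fz = cloud_force(x, y, z)
    # transverse kicks, equations (2-9) and (2-10); chromaticity enters via dp
    vx += (-(q_x + dq_x * dp)**2 * omega_0**2 * x + fx) * dt
    vy += (-(q_y + dq_y * dp)**2 * omega_0**2 * y + fy) * dt
    x += vx * dt
    y += vy * dt
    # longitudinal synchrotron motion, equations (2-11) and (2-12)
    dp += (q_s**2 * omega_0**2 / (eta * c) * z + fz) * dt
    z += -eta * c * dp * dt

print("x, y, z, dp after 1000 steps:", x, y, z, dp)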
Comparing equation (2-9) and the equation of motion of a charged particle in
electric and magnetic field as shown in equation (2-13),
\[
\frac{d^2x}{dt^2} = \frac{q}{\gamma m c^2}\left(E_x - \frac{V_z B_y}{c}\right) \tag{2-13}
\]
The effect of chromaticity and betatron focusing force in the horizontal direction
can be modeled by equation (2-13) with an external magnetic field
$B_y$ and electric field $E_x$ with the following forms:
\[
B_y = \frac{2Q_x\,\Delta Q_x\,(\delta p/p)\,\omega_0^2\,x\,\gamma m_p c}{q v_z} \tag{2-14}
\]
\[
E_x = -\frac{\gamma m c^2\,Q_x^2\,\omega_0^2\,x}{q} \tag{2-15}
\]
The equations for $B_x$ and $E_y$ can be derived in a similar way for the equation of
motion in the y-direction.
2.4 QuickPIC domain decomposition
In QuickPIC, both the 2D inner layer and the 3D outer layer are fully parallelized.
Figure 2.3 shows the structure of 3D routines and 2D routines [3].
Figure 2.3 The flow chart of QuickPIC
Different domain decomposition methods are used for the beam and the electron
cloud to enable the parallel computing using multiple processors. As shown in figure 2.4
[1], the beam is divided into equally spaced domains in the longitudinal or axial (z)
direction. The 2D electromagnetic field calculation is carried out on a slab in the
transverse plane sweeping through the beam from head to tail. The electron cloud is
decomposed evenly in one of the transverse directions (vertical y direction in our model)
on the slab.
Figure 2.4. Domain decomposition of QuickPIC in 2D and 3D
As shown in Figure 2.4, the calculation within each partition is done by one
processor. The necessary communications among the processors are realized by the
Message-Passing Interface (MPI) [1]. The output of QuickPIC is the combination of the
results from all the processors used. Parallel computing provides an enormous
computational time savings when compared to single processor computing.
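The even decomposition of the grid among processors can be pictured with a small bookkeeping helper (a generic block-partition function written for illustration; it is not the MPI decomposition routine used in QuickPIC).

def partition_rows(n_cells, n_procs):
    """Return (start, stop) index ranges assigning n_cells grid rows to n_procs
    processors as evenly as possible, mimicking a 1D domain decomposition."""
    base, extra = divmod(n_cells, n_procs)
    bounds, start = [], 0
    for rank in range(n_procs):
        stop = start + base + (1 if rank < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

# Example: a 64-cell y direction split over 8 processors gives 8 rows each.
print(partition_rows(64, 8))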
CHAPTER 3: ENHANCING QUICKPIC SIMULATIONS WITH A
PIPELINING ALGORITHM
3.1 Limitation of computing efficiency of QuickPIC code
With the quasi-static approximation applied in the 2D routine of the fully
parallelized code, QuickPIC has achieved a computational time saving of 2 to 3 orders of
magnitude, with reasonable accuracy, compared to a full PIC code such as OSIRIS [1].
Although this improvement in computational efficiency has made it possible to
simulate the long-term effects of the electron cloud in circular machines, modeling beam
propagation on a timescale of the same order as the real beam lifetime requires a faster
and more efficient simulation code. For example, it takes 16
processors about five days to simulate LHC beam propagation of 500 turns on the current
HPCC clusters at USC. 500 turns propagation is only 44ms in real time compared to the
beam lifetime of 30 minutes [2].
The benefit of parallel computing is that using more processors reduces real
computing time. Ideally, the reduction in computing time is linearly proportional to the
number of processors used.
For quasi-static particle-in-cell (PIC) codes, such as QuickPIC, a two-dimensional
PIC code is embedded in a three-dimensional PIC code and most of the computation time
is spent in the two-dimensional part of the code. With the rapid development of the high
performance parallel computers, thousands of processors are easily available for parallel
computations. However, it is not straightforward to increase the effective speed of the
simulation by running the same problem size on an increasing number of processors
because there is a limit to the domain size in the decomposition of the two-dimensional part
of the code. This is especially true in the decomposition direction of the fast Fourier
transform (FFT) operation for the field solver (i.e., the y-direction) [pipelining paper].
For QuickPIC, the time savings of the 3D outer layer is proportional to the
number of processors used, because the workload on each processor is inversely
proportional to the number of processors used. Each processor only needs to
communicate with its two adjacent processors (the previous one and the next one), which
means that as the number of processors increases, the communication cost for each processor
remains the same.
For the 2D inner layer of QuickPIC, the relationship is not linear. As mentioned
before, the 2D field routine is carried out on a slab in the x-y plane. The slab is
decomposed in the y direction. The partitions are evenly distributed to all the processors
during the parallel computing. A fast Fourier transform (FFT) solver is used to solve
the Poisson equations. Since the electron cloud particles can move freely within the 2D
x-y slab, one processor is potentially required to exchange information with all other
processors. As more processors are used for the computation, the communication cost of
the 2D routine increases on the order of N, the number of processors used. This prevents
QuickPIC from achieving the ideal linear computational speed-up. In fact, simply
increasing the number of processors can decrease the efficiency of the program, as shown
in Figure 3.1. The simulation used here has 64 : 64 : 1024 grid cells in the three directions.
Since there are only 64 cells in the y direction, the maximum number of processors that
can be used is limited to 64, in order to ensure there is at least one cell per processor. But
even before reaching 64 processors, we have already observed that the increased cost of
communication exceeds the advantage of having all processors working in parallel. As
shown in figure 3.1, the time required to complete the same
simulation with 32 processors is 20% larger than that with 4 processors, and about 45%
larger than that with 8 processors.
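This behavior is what a simple cost model predicts when the parallel work shrinks as 1/N while the all-to-all communication of the FFT solver grows with N; the sketch below (with made-up constants chosen only to reproduce the shape of the curve in figure 3.1, not its actual values) shows how such a model develops a minimum at a modest processor count.

# Toy cost model for the 2D routine: compute time falls as 1/N while the
# all-to-all communication of the FFT solver grows roughly linearly with N.
# The constants are arbitrary and only illustrate the trend.
t_compute = 40000.0        # serial compute cost, arbitrary units
t_comm_per_proc = 300.0    # communication cost added per extra processor

def total_time(n_procs):
    return t_compute / n_procs + t_comm_per_proc * n_procs

for n in (4, 8, 16, 32, 64):
    print(f"{n:3d} processors -> modeled time {total_time(n):8.1f}")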
[Plot: computing time in seconds versus number of processors]
Figure 3.1 computational time of QuickPIC without pipelining algorithm
3.2 Advantage of pipelining algorithm
In order to overcome the limitation on the performance of QuickPIC, we must
implement an algorithm that will allow more processors to be used and keep the
efficiency growing proportionally. To achieve this goal, a pipelining algorithm is
implemented, similar in spirit to pipelining on an assembly line or the instruction
pipeline in a CPU.
In a QuickPIC simulation, the moving window travels with the beam at the speed of
light, so in most cases information is passed backward, with only a few exceptions.
In QuickPIC, the beam particles are pushed after the 2D field calculation is completely
done in the whole simulation domain, that is, after the 2D slab has passed through the
entire beam. However, in order to push a particular beam particle, particle i, only the force
exerted on that particle is needed. If the slab has passed through the partition in
which particle i resides, enough information about the force has already been gathered to
push the particle, and there is no reason to wait for the 2D routine to be finished on the
rest of the beam before pushing particle i. So a way to implement this idea is to divide up
all the
processors used into multiple subgroups. Each subgroup takes care of one portion of the
domain along the longitudinal direction. When the 2D routine is done in a given
subgroup, the processors of the subgroup can move onto the 3D routine to update the
beam particles in its domain. It does not have to wait for the 2D routine to finish in the
rest of the simulation box. This is the central idea of pipelining algorithm.
There are two primary reasons why this approach is able to overcome the limits of
QuickPIC. First, more processors can be used compared to QuickPIC without the pipelining
algorithm. Second, the communication cost during the 2D routine is well managed so that
it does not cancel out the benefit of parallel computing. To ensure there is at
least one cell per processor for the 2D and 3D routines, the maximum number of
processors used cannot exceed the number of grid cells in either the y or the z direction.
In addition, for the 2D field calculation, the FFT solver requires that each processor
communicate with all the other processors. If too many processors are used, the time saved
by using more processors is soon cancelled out by the cost of the communication. So the
number of processors must be carefully chosen to make sure that the communication
does not degrade the overall performance too severely. From the above we can conclude
that the stricter constraint on the number of processors is usually imposed by the 2D
routine, which is associated with the decomposition in the transverse y
direction. Using pipelining, the field calculation on the slab at a particular 2D time step is
not performed by all the processors. Instead, it is done by the processors of a single
subgroup. We only need to choose the number of processors in each subgroup to satisfy the
constraints of the 2D routine; this is usually the same as the total number of processors
that can be used in QuickPIC without the pipelining algorithm.
However, with the pipelining algorithm, multiple subgroups can be used. Thus the total
number of processors used in the pipelining approach can be increased multiple times,
determined by the number of subgroups, which can potentially be of the
order of 10² or 10³. Overall, not only can we make use of more processors for the
simulation, but we can also keep the efficiency of using more processors by limiting the
cost of communication among them.
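As a concrete reading of these constraints, the illustrative helper below (not part of QuickPIC, and based only on the assumptions stated above) computes the processor ceiling for a given grid: the per-subgroup count is bounded by the number of y cells, and the total count is bounded by the number of z cells.

def max_procs_without_pipelining(ny, nz):
    # Each processor needs at least one y cell (2D slab) and one z cell (3D beam).
    return min(ny, nz)

def max_procs_with_pipelining(ny, nz, procs_per_subgroup):
    # The 2D (y) constraint now applies only inside a subgroup;
    # the total is still limited by the z cells of the 3D domain.
    assert procs_per_subgroup <= ny
    return procs_per_subgroup * (nz // procs_per_subgroup)

print(max_procs_without_pipelining(64, 1024))   # 64
print(max_procs_with_pipelining(64, 1024, 4))   # 1024 (256 subgroups of 4)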
3.3 Domain decomposition of the pipelining algorithm
The domain decomposition utilized by the original QuickPIC was illustrated in
Figure 2.4. The pipelining algorithm rearranges how the processors work together to
perform the simulation. Instead of working as a single group, all the processors are
divided evenly into multiple subgroups. The domain decomposition also needs to be
changed accordingly to support the new way in which the processors work together. The
domain decomposition for the pipelining algorithm is shown below in Figure 3.2.
Understanding the domain decomposition is also critical for understanding the pipelining
algorithm.
Figure 3.2 Domain decomposition used for the pipelining algorithm (2 subgroups and 4
processors per subgroup)
The beam is evenly divided up in the longitudinal direction and allocated to the
subgroups. In the particular example shown in Figure 3.2, the beam is divided into halves:
subgroup 1 has the head half and subgroup 2 has the tail half. Within each subgroup, the
beam is further decomposed evenly and assigned to each processor in the group. In the
example considered here, there are 4 processors in each subgroup, so each half of the
beam is further divided into 4 parts, one for each processor. In the 2D
electromagnetic field computation, the decomposition of the 2D slab is still in the
transverse y direction. The difference is that the number of partitions is determined by the
number of processors in each subgroup instead of the total number of processors. In the
example considered here, the total number of processors used is eight,
while in each subgroup there are four. So the slab is distributed evenly among the four
processors instead of eight. Each subgroup performs the 2D routine on a slab only
within its own 3D domain instead of over the whole simulation box.
Compared to what was used for the original QuickPIC (Figure 2.4), the same number
of processors still performs the 2D field calculation, so the efficiency of the FFT
solver is not affected. However, with pipelining, we have managed to use twice the number
of processors for the whole simulation, which leads to a doubling of the computational
speed. In general, even more subgroups can be used and the increase in computational
speed can potentially be of order 10²-10³.
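To make the decomposition concrete, the sketch below (a hypothetical helper, not QuickPIC code) maps a global processor rank to its subgroup, its y-partition of the 2D slab, and its z-range of the beam for the eight-processor, two-subgroup example of Figure 3.2.

def decompose(rank, n_subgroups, procs_per_subgroup, ny, nz):
    # Which longitudinal block (subgroup) and which rank inside it.
    subgroup = rank // procs_per_subgroup
    local = rank % procs_per_subgroup
    # 2D slab: the ny cells are split among the processors of one subgroup.
    ycells = ny // procs_per_subgroup
    y_range = (local * ycells, (local + 1) * ycells)
    # 3D beam: each subgroup owns nz / n_subgroups cells in z,
    # further split among its processors.
    zcells_group = nz // n_subgroups
    zcells_proc = zcells_group // procs_per_subgroup
    z_lo = subgroup * zcells_group + local * zcells_proc
    z_range = (z_lo, z_lo + zcells_proc)
    return subgroup, y_range, z_range

# Eight processors, two subgroups of four, on a 64 x 64 x 1024 grid:
for rank in range(8):
    print(rank, decompose(rank, 2, 4, ny=64, nz=1024))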
3.4 Pipelining algorithm
The pipelining algorithm begins with the initialization of the parallel computing
environment. All the processors to be used are divided into a certain number of
subgroups. Each processor is labeled with the group id representing which subgroup it
belongs to and a rank within the subgroup.
The algorithm of pipelining is illustrated in Figure 3.3. The calculation starts with
the beam initialization on all the processors of all the subgroups. The positions and the
velocities of each simulation beam particle are initialized based on the parameters set in
the input deck. Then it moves into the main iteration loop of the QuickPIC program
starting from the first subgroup.
The first subgroup begins the 2D field calculation routine with the initialization of
a uniformly distributed electron cloud on a 2D slab. This 2D slab is advanced through the
part of the beam in the domain of the first group. Let nz be the number of grid cells in the
longitudinal direction and N be the total number of subgroups used; then the number of
2D time steps performed by each subgroup is nz/N. During the time that the first subgroup
is doing the 2D calculation, the other subgroups are idling. After the 2D calculation is
done, the first subgroup sends the results of the last slab to the second subgroup (Figure
3.3 (a) and (b)). This information is necessary for the second subgroup to start its 2D
calculation. Next, the first subgroup moves into the 3D push of the beam particles within
its domain and deposits the beam charge density. Meanwhile, the second subgroup is
working on the 2D electromagnetic field calculation. At t=2, the first subgroup finishes a
full cycle of 2D and 3D routines and is ready to start the next time step. The second
subgroup, having just finished the 2D field calculation, is about to start the 3D beam
particle push. The third subgroup stops idling: it receives its initial 2D electron
cloud slab from the second subgroup, in the same way that the second subgroup received
it from the first, and enters the 2D routine. At t=3, the fourth and final subgroup
joins the other subgroups. The whole pipe is filled up and the simulation enters a steady
state, where none of the subgroups are idling and each subgroup leads the one behind it by
half a time step.
Figure 3.3. Algorithm of pipelining implemented in QuickPIC. (The beam moves from
left to right. Four subgroups are used in this simulation. The boxes represent the domain
of each subgroup and are labeled with the group ID. The blue block is the part of the
Gaussian beam in the domain of the subgroup.)
In summary, if N subgroups are used, it takes about (N-1) time steps to fill up the
pipe. After that, during the computation, the Nth subgroup is always approximately half a
time step ahead of the (N+1)th subgroup (Figure 3.3(d)). A reverse emptying sequence
occurs at the end of the simulation. Since in general the total number of time steps for a
given simulation is much larger than N, the filling and emptying take only a very
small fraction of the total simulation time.
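A rough estimate of this overhead (back-of-the-envelope arithmetic only, not a measurement from the code): with N subgroups, roughly N-1 steps are spent filling the pipe and a similar number emptying it.

def pipeline_fill_overhead(n_subgroups, n_steps):
    # Fraction of the run spent filling and emptying the pipe,
    # assuming roughly (N-1) extra steps for each.
    extra = 2 * (n_subgroups - 1)
    return extra / (n_steps + extra)

# For example, 16 subgroups and a 2000-step run (as in Section 3.6):
print(f"{pipeline_fill_overhead(16, 2000):.1%}")   # about 1.5%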
3.5 Implementation of the pipelining algorithm
The pipelining algorithm does not change the physics in the QuickPIC model. The
same numerical algorithm is used by every subgroup. What is changed, however, is
the way in which all the processors are organized to work together, and the timing with
which the 2D and 3D routines are called by the different processors. Thus the subroutines
added to the QuickPIC code mainly deal with grouping the processors and managing the
communications among subgroups.
3.5.1 Pipelining environment setup
Setting up the pipelining environment deals with dividing the processors into
the desired number of subgroups. Fortunately, the Message-Passing Interface (MPI) [3]
provides a function, “MPI_COMM_SPLIT”, that can create multiple subgroups
simultaneously. The total number of processors to be used is 2ⁿ, where n is an integer.
The number of subgroups is 2ᵐ, where m is an integer smaller than n. The number of
processors in each group is then 2ⁿ⁻ᵐ. Because n > m, there are at least two processors in
each subgroup. Each processor has two identification numbers attached to it in a
simulation using subgroups: a pipekey and an ID within its subgroup. All the subgroups are
labeled with a number from 0 to (2ᵐ - 1). The pipekey indicates to which subgroup a
processor belongs, and all the processors in the same subgroup have the same
pipekey. Within each subgroup, different processors are labeled with different numbers
ranging from 0 to (2ⁿ⁻ᵐ - 1). In the pipelining algorithm, every processor has a unique
(pipekey, subgroup ID) pair. After the pipekey and the subgroup ID are determined, the
function “MPI_COMM_SPLIT” is called to create the subgroup communicators.
Although most of the computation is done within subgroups, there are certain routines
that execute more efficiently if all the processors are treated as a single group.
These routines are mainly those called at the beginning of the simulation, such as the
initialization of the beam particles and the broadcast of the information from the input
deck. It is therefore useful to keep the single-group communicator. Thus every processor
also has a third ID, its ID in the large group, which ranges from 0 to (2ⁿ - 1). Subroutine
PIPESWITCH (see Appendix A) is implemented to enable switching between the large-
single-group mode and the multiple-subgroup mode. Figure 3.4 gives an example of how
the subgroups are structured and how the three IDs of each processor are assigned.
Figure 3.4 The pipelining environment setup. Eight processors and four subgroups are
used. Each light blue brick represents a processor. The subgroups are separated by the
solid blue lines.
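The same setup can be sketched in a few lines with mpi4py (an illustration of the pattern only; QuickPIC itself calls MPI_COMM_SPLIT from its own source, and the variable names here are ours):

from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()                   # ID in the large single group, 0 .. 2^n - 1
size = world.Get_size()                   # 2^n processors in total

n_subgroups = 4                           # 2^m, chosen at setup
procs_per_group = size // n_subgroups     # 2^(n-m)

pipekey = rank // procs_per_group         # which subgroup this processor belongs to
subcomm = world.Split(color=pipekey, key=rank)   # MPI_COMM_SPLIT creates the subgroup
sub_id = subcomm.Get_rank()               # ID within the subgroup, 0 .. 2^(n-m) - 1

# Every processor now carries the three labels of Figure 3.4:
print(f"global rank {rank}: pipekey {pipekey}, subgroup rank {sub_id}")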
3.5.2 Communication among processors in the pipelining algorithm
In the QuickPIC algorithm, both in the 2D and the 3D routines, there is
information passed among processors. An example is the reallocation of the particles that
are pushed outside of the original domain. With the implementation of the pipelining
algorithm, the processors are divided into subgroups. Consequently, the communication
among the processors has two levels: intra-subgroup and inter-subgroup. The intra-
subgroup communication is the communication among the processors from the same
subgroup. The inter-subgroup communication is the communication among processors
that are from different subgroups.
The intra-subgroup communication behaves very much like the communication
performed in the large group in the original QuickPIC. The only difference is that in
the original QuickPIC the communicator is the large group, while in the pipelining
algorithm the communicator is a subgroup.
Inter-subgroup communication is much more complex than intra-subgroup
communication. For the intra-subgroup communication, all the processors are
synchronized and are at the same stage of the simulation at all times, so there is no
timing issue. But for the inter-subgroup communication, as described in the pipelining
algorithm section, the subgroups are always at different stages of the simulation. This
staggering is critical for keeping the flow of the computation smooth, but it makes the
communication among the subgroups complex.
There are two types of inter-group communication classified by the direction in
which the information is passed: the forward (from beam tail to beam head)
communication and the backward (from beam head to beam tail) communication.
During the pipelining process, group N is always approximately a half step ahead
of group (N+1). If the communication is passed backward, from group N to group (N+1),
then after group N sends out the information, the information can be stored in a buffer and
group N can move on to the next part of the computation; group (N+1) picks up the
information whenever it is ready to receive it. Obviously, backward communication does
not interrupt the computational flow and preserves the time saving benefit of the
pipelining algorithm.
The other direction of communication is forward, that is, the information is
sent from group (N+1) to group N. Since group N is ahead of group (N+1), when group
N is ready to receive it has to stop and wait for group (N+1) to be ready to send. This
essentially synchronizes the computational progress of groups N and (N+1). The waiting
time involved diminishes the time saving benefit of the pipelining algorithm.
In the simulation of the beam and electron cloud interaction, there are two main
inter-subgroup communications: a) In the 2D field calculation, the last electron slab
needs to be passed to the next group, and the next group starts the 2D field calculation
from the received slab. b) After the 3D beam push, beam particles that move out of the
original domain due to synchrotron oscillation or acceleration/deceleration from the wake
need to be sent to the other groups. Scenario (a) involves only backward communication,
which is easy to handle. Scenario (b) involves communication in both directions, and
some special treatment is needed to handle it. We give a detailed discussion of both
below:
a) Passing electron cloud slab among subgroups
Figure 3.5 illustrates the communication for passing the electron cloud slab from
one subgroup to another. There are four subgroups used. The subgroups are separated by
the red dotted line and labeled as subgroup 0, 1, 2, and 3 with subgroup 0 being the head
of the beam and subgroup 3 being the tail of the beam. The light blue bars represent the
2D slabs; for simplicity, only two slabs are calculated by each subgroup. The arrows at the
top of the figure indicate the direction of the communication.
Subgroup 0 initializes the first 2D slab of the uniformly distributed electron cloud,
so it does not need to receive the slab from other subgroups. When the 2D routine is done
in subgroup 0, it sends the last slab to subgroup 1. Since subgroup 1 is about a half time
step behind subgroup 0, when subgroup 0 tries to send the slab, subgroup 1 is ready to
receive. Upon receiving the slab, subgroup 1 starts the 2D routine. Similar message
passing occurs for the subgroups behind. However, after the last subgroup, which is
subgroup 3 in this example, is done with the 2D routine, it does not send its last slab to
subgroup 0, because the first subgroup initializes the first slab by itself.
Figure 3.5 The inter-group communication for transferring the 2D electron cloud slab
between adjacent subgroups
Since the communication of the 2D electron cloud slab is always backward, from the Nth
subgroup to the (N+1)th subgroup, there is no waiting time between sending and
receiving, so the flow of the computation is not disrupted.
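A minimal mpi4py sketch of this backward hand-off (illustrative only; in the real code the buffer holds the electron cloud particle and field data of the last slab, and the partner ranks follow the actual decomposition):

from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
rank = world.Get_rank()
procs_per_group = 4
pipekey = rank // procs_per_group
n_subgroups = world.Get_size() // procs_per_group

slab = np.zeros(1024)                     # stand-in for the last 2D slab data
if pipekey > 0:
    # All subgroups except the first receive their starting slab from the
    # corresponding processor one subgroup ahead (toward the beam head).
    world.Recv(slab, source=rank - procs_per_group, tag=7)
# ... 2D routine over this subgroup's portion of the beam, updating `slab` ...
if pipekey < n_subgroups - 1:
    # Non-blocking send: hand the last slab backward and move straight on
    # to the 3D beam push without waiting.
    req = world.Isend(slab, dest=rank + procs_per_group, tag=7)
    # ... 3D push of the beam particles in this subgroup's domain ...
    req.Wait()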
b) Reallocate the beam particles into the right domain
Due to the RF power of the circular accelerator, the beam particles oscillate back
and forth in the longitudinal direction. This is the synchrotron oscillation. When this
phenomenon is translated into the simulation model of QuickPIC, the simulation particles
undergo longitudinal oscillations within the simulation box. It is quite possible for the
simulation particles to move out of their old partition into either the previous or
the next partition in the z direction. As they move to a different partition, they come under
the control of a different processor in the same subgroup, or even of a processor in a
different subgroup. Inter-subgroup communication therefore takes place, and it can be in
either the forward or the backward direction.
As discussed above, the backward communication is easy to handle and does not cause
any difficulties for the pipelining, so we focus on the forward communication here.
In the forward communication, the information is passed from subgroup (N+1) to
subgroup N. But the progress in subgroup N is ahead of that in subgroup (N+1). After
subgroup N is done with the 3D routine of a given time step, before it moves to the 2D
routine of the next step, it has to collect the new particles that move in from other
domains, for example, the domain of subgroup (N+1). On the other hand, subgroup
(N+1), which is half a time step behind, has just begun the 3D routine. So subgroup N needs to
wait for subgroup (N+1) to finish the 3D routine and pick up the particles that move to
the domain of subgroup N. Only then can the send and receive take place. By this time, the
progress in subgroups N and (N+1) is synchronized, and subgroup (N+1) will then idle
until subgroup N finishes the 2D routine of the next time step. The waiting time
eventually causes the pipelining process to lose its advantage.
In order to solve the problem described above, the timing of the communication is
changed. In the QuickPIC simulation, the 3D time step is chosen to resolve the betatron
oscillation, typically λ_β/30, where λ_β is the betatron wavelength. In most cases, the
frequency of the synchrotron oscillation is much smaller than that of the betatron
oscillation. Within the time scale of a 3D push, the displacement of the beam particles
due to the synchrotron oscillation is small and satisfies equation (3-1),
(v_th + ω_s L_B) Δt < Δz/2        (3-1)
where v_th is the thermal velocity of the beam, ω_s is the frequency of the
synchrotron oscillation, L_B is the length of the simulation box in z, Δt is the 3D time step,
and Δz is the cell size in the longitudinal (z) direction. Equation (3-1) guarantees that only
the particles within a 2Δz distance from the boundary of the subgroup domain can
possibly travel to the domain of another subgroup. So if there are particles from group
(N+1) that need to be sent to group N, there is no need for group N to wait until the 3D
beam pusher has finished with all the particles in the domain of group (N+1). All the
particles to be sent can be gathered after the 3D push of the particles within the 2Δz
distance from the boundary between subgroup N and subgroup (N+1). After the
communication, group N moves on to the 2D field calculation of the next step and group
(N+1) continues with the rest of the beam push. As illustrated in Figure 3.6, subgroup 1
is done with the 3D routine, while subgroup 2 has only done the 3D push within the first
2Δz range (for simplicity, the slab separated by the red dotted line is 2Δz long in the z
direction). Subgroup 2 then sends the particles that move out to subgroup 1. Because it
takes much less time to compute the beam push in the 2Δz range than in the whole
subgroup domain, this method keeps the computational flow of the pipelining algorithm
and effectively diminishes the waiting time.
Figure 3.6 Moving the beam particles after the 3D push in the pipelining algorithm
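Condition (3-1) is easy to check numerically before a run; the sketch below uses made-up normalized values purely for illustration.

def satisfies_eq_3_1(v_th, omega_s, L_B, dt, dz):
    # (v_th + omega_s * L_B) * dt < dz / 2, all quantities in consistent units.
    return (v_th + omega_s * L_B) * dt < 0.5 * dz

# Hypothetical normalized numbers, for illustration only:
print(satisfies_eq_3_1(v_th=1e-4, omega_s=1e-5, L_B=2000.0, dt=1.9, dz=2000.0 / 1024))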
3.6 Improved QuickPIC simulation with pipelining algorithm
The goal of the pipelining algorithm is to speed up the simulation in real time
without losing the accuracy of the original QuickPIC. So we will check two aspects of the
performance of the pipelining algorithm: fidelity and efficiency.
3.6.1 Fidelity of the pipelining algorithm
During the implementation of the pipelining algorithm, the physics model and the
numerical methods are not changed. We only change the way in which all the processors
work together. So the results of QuickPIC with and without the pipelining algorithm
should be the same.
In order to test the fidelity of the pipelining algorithm, a long-term simulation of
12000 3D time steps has been performed by QuickPIC for the electron cloud problem
with and without the pipelining algorithm. This simulation typically takes about a week
on 16 processors with the original QuickPIC, so it is a demanding test. Both
runs used the same set of LHC parameters, shown in the table below.
Table 3.1 Simulation parameters of LHC
Horizontal Spot Size (mm) 0.884
Vertical Spot Size (mm) 0.884
Bunch Length (m) 0.115
Horizontal Box Size (mm) 18
Vertical Box Size (mm) 18
Bunch Population 1.1×10¹¹
Momentum Spread 4.68×10⁻⁴
Beam Momentum (GeV/c) 4.796×10⁸
Circumference (km) 26.659
Horizontal Betatron Tune 64.28
Vertical Betatron Tune 59.31
Synchrotron Tune 0.0059
Horizontal, Vertical Chromaticity 2, 2
Electron Cloud Density (cm⁻³) 6×10⁵
# of cells in each direction (x, y, z) 64, 64, 64
# of beam particles 64×64×128
# of ecloud particles per cell 4
3D time step (1/ω_p) 1.90059894561
Figure 3.7 shows snapshots of the center x-z slices of the beam and the electron cloud
at 3D time step 1000. The results from both the original QuickPIC and QuickPIC with
the pipelining algorithm are presented. The pipelining simulation uses two subgroups, and
each subgroup generates a snapshot of the electron cloud and the beam within its 3D
domain. The results from the two subgroups are combined to create the complete image
of the electron cloud and the beam. As shown in Figure 3.7, the results from the two
versions of QuickPIC are in very good agreement.
1(a) 1(b)
2(a) 2(b)
Figure 3.7 Snapshots of the center x-z slice of the electron cloud and the beam at 3D
time step 1000. 1(a) and 2(a) are the results from the original QuickPIC. 1(b) and 2(b) are the
results from QuickPIC with the pipelining algorithm using two subgroups.
The simulation results of the horizontal (x) and vertical (y) beam spot size growth
are of great interest in electron cloud effect studies. The spurious electron cloud causes
beam instability as it interacts with the beam during its propagation, and one of the
important pieces of evidence of beam instability is the growth of the spot size in the
horizontal and/or vertical direction. For the fidelity check of the pipelining algorithm, the
results for the spot size are shown in Figure 3.8. Each plot contains two curves: the blue
curve is from the pipelining QuickPIC simulation; the red curve is from the original
QuickPIC. The two curves overlap in both graphs, indicating that the
pipelining algorithm maintains the accuracy of the original QuickPIC.
Figure 3.8 Horizontal and vertical spot size of the beam from simulation of the original
QuickPIC and the QuickPIC with a pipelining algorithm
3.6.2 Efficiency of the pipelining algorithm
The biggest benefit of using the pipelining algorithm is that the increase in speed
grows proportionally with the increase of the total number of processors used. Figure 3.9
shows the comparison of the computational speed of the pipelining and the original
QuickPIC.
The simulation is performed on the HPCC computer cluster at USC. Each node
has dual Intel P4 3.0 GHz processors, and the nodes are interconnected with Ethernet and
Myrinet. Both the pipelining and the original versions perform a simulation of 2000 3D
time steps using the same set of parameters, listed in Table 3.2. The number of grid cells
in the three dimensions is 64 × 64 × 1024.
Table 3.2 Simulation parameters used for test of pipelining efficiency
Horizontal spot size (mm) 0.884
Vertical spot size (mm) 0.884
Bunch length (mm) 115
Bunch population 11×10¹⁰
Horizontal emittance 21.70
Vertical emittance 20.02
Momentum spread 4.68×10⁻⁴
Relativistic factor 480
Radius (km) 4.24291
Horizontal Betatron tune 64.28
Vertical Betatron tune 59.31
Synchrotron tune 0.0059
Electron cloud density (cm⁻³) 6×10⁵
Chromaticity x and y 2, 2
Phase slip factor 3.47×10⁻⁴
Horizontal box size (mm) 18
Vertical box size (mm) 18
Box length (mm) 2000
Total cell number 64×64×1024
Total beam particles 64×64×2048
Ecloud particles per cell 2×2
3D time step (1/ω_p) 1.90059894561
Total simulated 3D time steps 2000
With the original QuickPIC, the computing time decreases as the number of
processors is increased, but this trend is not sustained when the number of processors
reaches 32. As the communication costs of the 2D field calculation increase
dramatically when the number of processors goes beyond 32, the time saved by
using more processors is cancelled out. This sets a limit on the total number of processors
that can be used to achieve the optimal computational efficiency of the original
QuickPIC.
With the pipelining algorithm, on the other hand, when 32 processors are used there
is a great amount of time saving compared to the original QuickPIC. All the
processors are divided into subgroups such that each subgroup contains four
processors. For the 2D calculation, instead of using 32 processors for each slab, only 4
processors are used. The communication among 4 processors is much more time-
efficient than that among 32 processors. Furthermore, we can increase the
number of subgroups used, which proportionally increases the total number of processors,
and the time saving advantage continues. The increase in computational speed is
proportional to the total number of processors used for the pipelining code up to 256
processors. Further increasing the total number of processors to 512 does not improve the
efficiency proportionally. This is because with more subgroups and processors, the
3D communication due to the synchrotron oscillation increases, and this reduces the
computational speedup.
Figure 3.9 Comparison of the computational speed between the pipelining and the
original QuickPIC.
Chapter 4: Enhancing fidelity of the physics model of QuickPIC
Enhancing fidelity with additional physics is crucial for assessing the importance of
physical effects and for enabling detailed comparisons with experiments and other codes. In
this chapter two effects, the beam space charge effect and the dispersion effect, are added
to the QuickPIC model.
4.1 Beam space charge effect
In the previous model of QuickPIC, the effect of the space charge of the electron
cloud on the beam and on itself was self-consistently modeled. However, the effect of the
beam's own space charge on itself was neglected due to the approximation that the speed
of the beam is equal to the speed of light. Under this approximation, the forces from the E
field and the B field induced by the beam space charge cancel each other out, so the beam
space charge does not affect the beam dynamics.
In the previous model of QuickPIC, the electric potential and the pseudo
potential are calculated as
(4-1)
where and are the density of the electron cloud and the beam respectively.
Define the vector potential along the direction of motion as , which satisfies the
following equation,
(4-2)
In equations (4-1) and (4-2), the potentials have been normalized by a factor of .
Then the transverse electric field and magnetic field are derived as follows:
(4-3)
The transverse force on the electron cloud is
(4-4)
With the approximation that , the transverse force on the beam can be
expressed as
(4-5)
However, in reality, high energy beams have a speed that is smaller than, but very
close to, the speed of light. For example, the LHC proton beam has a relativistic factor of
479.6. To include the effect of the beam space charge on itself, we re-derive the field and
force equations used in QuickPIC for a beam velocity less than c [1].
For the transverse force on the beam, equation (4-5) can be written as
(4-6)
The beam space charge force on the electron cloud, equation (4-4), remains
unchanged. The extra term appears in equation (4-6) and can be expressed in terms of
potentials,
(4-7)
where is the relativistic factor. Now, if we define a new pseudo
potential as
(4-8)
Then the expression of the force on the beam keeps the same form as (4-5) with
instead of . In this way, most parts of the code calculating the field and the force do
not need to be changed. As indicated by (4-8), the only part that needs to be adjusted in
the code would be the computation of . Instead of being determined only from the
electron cloud density as in (4-1), some function of the beam density also plays a role as
in (4-9). The contribution from the beam density can be derived from the extra term
in (4-8)
(4-9)
Because the relativistic factor of the high energy beam inside the pipe is large, the beam's
own space charge effect is very small, as we had expected with the approximation in the
previous model; this is the case for the LHC beam. For the computation of the pseudo
potential, the contribution from the space charge is of the order of . In order to check the
effect of the space charge more accurately, the simulation parameters are given in
Table 4.1 and the results for the horizontal spot size of the LHC beam are shown in Figure
4.1.
Table 4.1: Parameters used for LHC simulation [2]
Electron cloud density
Bunch population
Beta function 100 m
rms bunch length 0.115 m
rms beam size 0.884 mm
rms momentum spread 4.68×10⁻⁴
Synchrotron tune 0.0059
Momentum compaction factor
Circumference 26.659 km
Nominal tunes 64.28, 59.31
Chromaticity 2, 2
Space charge yes/no
Magnetic field no
Linear coupling no
Dispersion 0 m
Relativistic factor 479.6
The simulation models the beam propagation inside the LHC pipe for 6000 turns,
which corresponds to 0.5 s in real time. We have chosen to use the multi-kick method
instead of the continuous method for a quick check, with ten interaction points set
along the pipe. The multi-kick method may not be quantitatively accurate, but it gives good
qualitative insight into the physics. As shown in Figure 4.1, the space charge has a very
small effect (less than 3%) on the horizontal spot size growth of the LHC beam, which is in
good agreement with what the previous approximation had assumed.
Figure 4.1 Horizontal spot size of the beam at CERN-LHC with/without space charge
using the ten-kick method
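A quick back-of-the-envelope check of the expected size of the correction (assuming the usual 1/γ² scaling of a relativistic beam's residual self-force, which is our reading of the derivation above):

gamma = 479.6                               # LHC relativistic factor (Table 4.1)
print(f"1/gamma^2 = {1.0 / gamma**2:.2e}")  # ~4.3e-06, i.e. a very small correction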
4.2 Dispersion effect
The equation of motion for a single beam particle in the x direction previously
implemented in QuickPIC is as follows:
(4-10)
In this equation is the horizontal tune, represents the chromatic shift
proportional to the particle’s relative momentum offset ; is the angular
revolution frequency of the beam in the circular accelerator, which for the ultra-
relativistic beams can be written as , being the average machine radius; is
the force exerted by the cloud on each beam particle in the x direction; is the nominal
beam particle momentum.
However, because of the momentum spread , not all the beam particles
have the momentum of exactly . As the beam particles propagate in a circular pipe, the
particles with momentum have a radius of (corresponding to x=0). The lower
momentum particles tend to fall toward the center of the ring, and have a radius less than
(x<0); the higher momentum particles tend to move away from the center and result in
a radius greater than (x>0). The equation of motion (4-10) indicates that all particles
oscillate about x=0. Thus we neglected a term due to dispersion in the equation governing
the motion in the x direction.
When the beam is moving in the circular ring, there are three forces exerted on the
beam: the quadrupole focusing force or the betatron force , the average dipole magnet
force and the centrifugal force . If the beam particle has a momentum
of , the resulting radius of the trajectory is . Then the three forces
are computed as
(4-11)
where v is the particle velocity. At the position , the above forces balance
and satisfy the following equation,
(4-12)
With the energy spread, we have by inserting (4-11) into (4-12),
(4-13)
For the particles with momentum exactly equal to , and are zero, and
equation (4-14) is satisfied,
(4-14)
Now we combine equations (4-13) and (4-14) and take the Taylor expansion,
dropping the higher order terms. The displacement due to the dispersion is calculated as:
(4-15)
Adding the extra term due to the dispersion, the equation of motion in the x
direction becomes,
(4-16)
where,
(4-17)
Another way to adjust the model is to keep the original form of the x equation (4-
10), which can be solved as in the previous model. The effect of dispersion can then be
added to that solution via the following expression,
x = x_0 + δx = x_0 + η_d (δp/p)        (4-18)
where η_d is the dispersion factor and takes the value given by (4-15).
Dispersion also affects the beam matching condition at initialization. For
example, in the case of a Gaussian beam, the position in the horizontal direction x and the
velocity in the z direction are initialized independently, each satisfying a Gaussian
distribution. Including the dispersion effect, the position x should be adjusted with a term
depending on v_z as follows,
x_matched = x + η_d γ v_z        (4-19)
Figure 4.2 shows the effect of matching the initial condition with dispersion. After
adjusting the position x using equation (4-19) at beam initialization, the amplitude of the
betatron oscillation in the horizontal direction is greatly decreased.
Figure 4.2 Horizontal spot sizes with and without matching initial conditions (dispersion
effect) included.
We now evaluate the dispersion displacement δx for two cases – the LHC at injection
and the SPS (parameters and results are shown in Table 4.2). δx is on the order of 10⁻⁴ m
for the LHC case and on the order of 10⁻³ m for the SPS. Since the beam size is on the
order of a millimeter, we conclude that the dispersion effect is significant for both the LHC
at injection and the SPS.
Table 4.2 Parameters and results for dispersion effect evaluation of LHC and SPS
                                    LHC at injection    SPS
Nominal tunes Q_x                   64.28               26.185
rms momentum spread                                     0.002
Circumference C                     26.659 km           6.9 km
Radius R_0                          4.25 km             1.10 km
Velocity v                          m/s                 m/s
Angular revolution frequency
Relativistic factor                 479.6               27.728
rms horizontal spot size            0.884 mm            2 mm
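These orders of magnitude can be reproduced with a short calculation, assuming the smooth-approximation dispersion η_d ≈ R_0/Q_x², so that δx = (R_0/Q_x²)(δp/p); this form of η_d is an assumption on our part, used here only as a sanity check.

def dispersion_displacement(R0, Qx, dp_over_p):
    # delta_x = eta_d * (dp/p), with the smooth-approximation eta_d ~ R0 / Qx^2.
    return (R0 / Qx**2) * dp_over_p

print(dispersion_displacement(R0=4.25e3, Qx=64.28, dp_over_p=4.68e-4))   # ~5e-4 m (LHC)
print(dispersion_displacement(R0=1.10e3, Qx=26.185, dp_over_p=0.002))    # ~3e-3 m (SPS)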
A long-term simulation of the LHC beam propagation of 6000 turns is performed
with QuickPIC with and without the dispersion effect. The simulation is done using the
ten-kick method. The results are presented in Figure 4.3 and 4.4.
Figure 4.3 Horizontal spot size of the beam at CERN-LHC with and without dispersion
using the ten-kick method
Figure 4.4 Vertical spot size of the beam at CERN-LHC with and without dispersion
using the ten-kick method
In the simulations described in Figures 4.3 and 4.4, dispersion changes the plane of
the instability. More growth is observed in the vertical direction when dispersion is not
included, and more growth is observed in the horizontal direction when dispersion is
included in the model. But the dispersion effect couples the beam dynamics in the
horizontal and vertical directions in such a way that the total spot size growth in the
transverse plane is approximately unchanged.
Chapter 5: Simulation results of the FNAL Main Injector
Since the construction and maintenance of an accelerator is very costly, there is
great interest in using high performance simulations to perform model validation against
experimental results. These simulations can then be used to test possible upgrades of
existing machines and potential designs of future machines. With this motivation, in this
chapter and the next we present simulations of specific circular machines.
In this chapter, we first simulate the current Main Injector with varied
parameters, and then we simulate the proposed upgrade to the Fermilab Main Injector.
We expect the simulations of the current machine to provide a basis for code validation,
in collaboration with Fermilab, by comparing to experiments. The upgrade simulations are
performed with an eye toward guiding the design of the upgrade.
5.1 Fermilab Main Injector
Fermilab has the highest-energy particle accelerator in the US, the Tevatron (Figure
5.1). The Tevatron is a circular proton-antiproton collider four miles in circumference
that collides protons and antiprotons at an energy of nearly 2 TeV. In order to fully exploit
the Tevatron, it is crucial to maintain low emittance and high brightness during this
acceleration; for this purpose the Main Injector was added (Figure 5.1) in 1999. It increases
the luminosity (the number of collisions per second) by a factor of five and allows the two
types of Fermilab experiments, “fixed target” and “collider”, to run at the same time.
Proton and antiproton beams are transferred into the Main Injector at an energy of 8 GeV,
and are further accelerated to 150 GeV.
Figure 5.1 Main Injector at FNAL
Recently, an upgrade to the Main Injector has been proposed. It will increase the
bunch intensity, N_b, by a factor of 5 from its current value of 6×10¹⁰ to about 3×10¹¹. A
significant electron cloud effect has been observed in other hadron machines such as
PSR, SPS and RHIC when the bunch intensity reached the order of 10¹¹. It would
therefore be helpful to use computer simulations to study the electron cloud effect
induced by a possible upgrade to the Main Injector.
Electron cloud build-up processes have been studied with POSINST [1]. This
simulation assumes a beam energy at injection of 8 GeV and a peak value, δ_max, of
the secondary emission yield (SEY) of the vacuum chamber of 1.3. The results from the
simulation show that when the bunch intensity is greater than N_b,th ≅ 1.25×10¹¹, the
electron cloud density, ρ_e, grows exponentially in time and reaches a steady-state value,
which is 10⁴ times larger than when the bunch intensity is less than this threshold.
In the following sections, we investigate the beam instabilities induced by the
electron cloud and the effects of other parameters, which can be controlled during the
experiment, on those instabilities.
5.2 Parametric study of the beam instabilities
The numerical parameters for the FNAL Main Injector are summarized in Table
5.1. The simulation parameters, such as the size of the simulation system, are presented in
Appendix B. Starting from these parameters, changes to individual parameters made
later in this study will be pointed out and addressed.
Table 5.1 Simulation parameters for the FNAL Main Injector
Ring circumference (m) 3319.419
Beam energy (GeV) 8
Relativistic beam factor 8.526312
Bunch population 3×10¹¹
RMS bunch length (m) 0.75
Average beta function (m) 25
Normalized tr. Emittance (95%) (m-rad) 40π
RMS relative momentum spread 10⁻³
Transverse RMS bunch size (σ_x, σ_y) (mm) (5, 5)
Dipole magnet field (T) 0.1
Synchrotron tune 9.7×10⁻³
Momentum compaction factor 8.9×10⁻³
Horizontal tune 26.42
Vertical tune 25.41
Horizontal chromaticity -5
Vertical chromaticity -8
Electron cloud density (cm⁻³) 7.252×10⁶
We performed a parameter study of the dependence of the beam spot size growth on
(a) the electron cloud density; (b) the synchrotron tune; (c) the ratio of electron cloud
density to synchrotron tune; (d) the dipole magnetic field B; and (e) the beam charge.
(a) Electron cloud density ρ_e
The beam dynamics is directly related to the wakefield generated by the beam and
electron cloud interaction. The amplitude of the wakefield is proportional to the electron
cloud density. Figures 5.2 and 5.3 show the transverse proton beam spot size evolution
for different electron cloud densities. In both the horizontal and vertical
planes, there is an electron cloud density threshold for spot size growth, which is around
10⁶ cm⁻³. The spot size curves for the electron cloud densities of 7.25×10⁴ cm⁻³ and
7.25×10⁵ cm⁻³ show no obvious growth in either plane. When the electron cloud density
is above the threshold, as illustrated by the blue curve in the figures with an electron cloud
density of 7.25×10⁶ cm⁻³, spot size growth is observed in both the horizontal and vertical
planes.
Figure 5.2 Horizontal spot size of the Main Injector at FNAL with different electron cloud
densities
Figure 5.3 Vertical spot size of Main Injector at FNAL with different electron cloud
densities
Oscillations are seen in Figure 5.2 because the parameter set in Table 5.1 is not
perfectly matched. The matching condition balances transverse pressure (i.e., emittance)
with external focusing (i.e., betatron wave number) via σ_x,y = √(ε_x,y β_x,y), where σ_x,y is the
beam transverse spot size, ε_x,y is the emittance and β_x,y is the betatron function [2].
(b) Synchrotron tune Qs
The synchrotron tune Qs describes the number of synchrotron oscillations during the
time that the beam propagates through one turn of the circular machine. It can be
controlled by adjusting the RF power in the ring. The synchrotron oscillation enables the
beam particles to travel back and forth in the longitudinal direction. When the beam
enters the electron cloud, the head of the beam interacts with the electron cloud and
produces a wakefield. Then the tail of the beam interacts with the wake and becomes
unstable. But with the existence of synchrotron oscillations, particles at the tail can travel
to the head, preventing further increases in the transverse oscillations of the tail. In summary,
synchrotron oscillations stabilize the beam size through head and tail coupling. The
higher the synchrotron oscillation frequency (in other words, the higher Qs), the more
phase coupling there is between head and tail. This results in more stable beams and less
spot size growth, as shown in Figures 5.4 and 5.5. The Qs of the Main Injector is 0.0097.
We vary the value from 0.00005 to 0.005. Despite the oscillation in the spot size growth
due to beam mismatch, as Qs increases, the spot size growth decreases in both the
horizontal and vertical planes.
Figure 5.4 Horizontal spot size of Main Injector at FNAL with different synchrotron
tunes
Figure 5.5 Vertical spot size of Main Injector at FNAL with different synchrotron tunes
(c) Ratio of electron cloud density to synchrotron tune: ρ_e/Q_s
In the simulations for Figures 5.6 and 5.7, the combined effect of the electron
cloud density and the synchrotron tune is examined by varying these parameters while
keeping their ratio constant. As discussed before, the electron cloud and the synchrotron
tune have opposite effects on the beam dynamics. The two-macro-particle model for the
strong head-tail instability predicts that if ρ_e/Q_s is constant then the beam spot size
growth rate should be unchanged. This model captures some but not all of the physics
involved. Our simulation shows that the combined effect is actually quite complex. The
growth rate g, instead of being a function of only ρ_e/Q_s, is g = g(ρ_e, ρ_e/Q_s). If the electron
cloud density is low enough, spot size growth will not be observed even if Qs is small. In
our case, ρ_e ~ 10⁴ cm⁻³ is low enough and does not produce a large enough wake to excite
the tail instability. On the other hand, with the increase of the electron cloud density, we
start to observe spot size growth. Qs also has an effect, as shown in the red and blue
curves. Although the blue curve's ρ_e is 10 times higher than that of the red curve, with a 10
times higher Qs the spot size growth is suppressed. Qs damps the spot size growth through
phase mixing of the head and the tail of the beam. It takes about τ_s/2 for the effect of the
synchrotron oscillation to show up, where τ_s is the period of the synchrotron oscillation.
The spot size in the blue curve initially increases faster than in the red curve due to the
higher electron cloud density, and is later damped by the synchrotron oscillations.
Figure 5.6 Horizontal spot size of the Main Injector at FNAL with constant ρ_e/Q_s
Figure 5.7 Vertical spot size of the Main Injector at FNAL with constant ρ_e/Q_s
(d) Dipole magnetic field B
Adding a dipole magnetic field perpendicular to the beam propagation direction is
an efficient way to control the beam spot size growth. The electron cloud particles move
toward the positively charged beam as it passes through the cloud. For the electron cloud
to be sucked in by the beam, there exists a maximum radius within which the electron
cloud particles have enough time to reach the center of the beam before the beam passes
by. The maximum suck-in radius is [2]
r_0 = ω_pb σ_r σ_z / c        (5-1)
where ω_pb is the electron plasma frequency, σ_r is the radius of the beam, σ_z is the
bunch length, and c is the speed of light.
Since the electron cloud particles have a drift velocity in the z direction, in the
presence of the dipole magnetic field in the y direction the electrons not only feel an
electric field toward the center of the beam but also experience a strong Lorentz force and
perform cyclotron motion in the x-z plane, as shown in Figure 5.8. The Larmor radius,
which is the radius of the cyclotron motion, is
r_l = m v / (e B)        (5-2)
where v is the velocity of the electrons and B is the dipole magnetic field.
If the Larmor radius is much smaller than the maximum suck-in radius, the
electron cloud particles will not reach the beam, and the line density of the electron cloud
at the beam center will be lowered. This results in weaker electron cloud induced beam
instability effects.
Figure 5.8 Motion of electron cloud particles in the presence of the dipole magnetic field
in y direction [2].
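The two radii can be compared with a few lines of arithmetic. In the sketch below, the electron plasma frequency is computed from the peak density of a tri-Gaussian bunch and the electron velocity is a guessed few-eV value; both of these are our assumptions, used only to illustrate that r_l << r_0 for Main Injector-like parameters.

import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

N_b, sigma_r, sigma_z = 3e11, 5e-3, 0.75     # Table 5.1 values in SI units
B, v_e = 0.1, 1e6                            # dipole field (T); assumed electron speed (m/s)

n_b = N_b / ((2.0 * np.pi)**1.5 * sigma_r**2 * sigma_z)   # peak bunch density (assumed model)
omega_pb = np.sqrt(n_b * e**2 / (eps0 * m_e))             # electron plasma frequency in the beam

r0 = omega_pb * sigma_r * sigma_z / c        # maximum suck-in radius, equation (5-1)
rl = m_e * v_e / (e * B)                     # Larmor radius, equation (5-2)
print(f"r0 ~ {r0 * 1e3:.1f} mm,  r_l ~ {rl * 1e3:.3f} mm")   # r_l << r0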
Figures 5.9 and 5.10 illustrate the beam spot size growth in the Main
Injector with and without the dipole magnetic field. In the simulation we assume that
there are 72 dipole magnets along the ring with a magnetic field of 0.1 T. Each bending
magnet is 12 m long and is separated from the next by a straight section of 34.1 m. In the
presence of B_y, the growth in the horizontal spot size is smaller compared to that without
B_y. In the vertical direction, since there is not much growth even without a dipole magnet,
its effect is not obvious; the spot size evolution is the same as without the magnet.
Figure 5.9 Horizontal spot size of the Main Injector at FNAL with and without a dipole
magnetic field
Figure 5.10 Vertical spot size of the Main Injector at FNAL with and without a dipole
magnetic field
(e) Beam charge: N_b
The beam charge is also an important factor for spot size growth. Higher bunch
intensities excite larger wakes, and in turn cause the beam to be more unstable. Figures
5.11 and 5.12 show the spot size growth for different values of the beam charge. 6×10¹⁰
is the current bunch population at the Main Injector, and 3×10¹¹ is the proposed new
bunch population. We also pick a value of 1×10¹¹ for comparison. For the horizontal spot
size, the bunch population of 3×10¹¹ reaches the largest final spot size. For the other two
cases with lower bunch populations, there is no obvious spot size growth. In the vertical
plane, there is no obvious spot size growth for any of the three cases.
Figure 5.11 Horizontal spot size of the Main Injector at FNAL with different beam
charge. (The blue curve is behind the red curve.)
Figure 5.12 Vertical spot size of the Main Injector at FNAL with different beam charge.
We can summarize the results of the parameter study as follows. The beam bunch
population has an important effect on the spot size growth; a higher bunch population also
leads to a higher electron cloud density, which further increases the spot size growth.
Synchrotron oscillation mitigates the electron cloud instability to some degree by coupling
the head and tail beam particles. The dipole magnetic field is crucial for keeping the beam
on its curved trajectory, and at the same time has the effect of suppressing the spot size
growth.
5.3 Beam spot size growth before and after the upgrade
Finally, simulations based on the parameters of the beam and the electron cloud
before and after the upgrade are performed to test the effect of increasing the bunch
population. Before the upgrade, the bunch population is 6×10¹⁰ and the corresponding
electron cloud density is 5×10⁻⁵ nC/m. After the upgrade, the bunch population is
increased to 3×10¹¹ and the resulting electron cloud density is estimated to be 8 nC/m
[1][3]. Currently the machine is operated with a bunch population of 6×10¹⁰ and no
instabilities are observed in the experiment. The simulation results illustrated in Figures
5.13 and 5.14 show no spot size growth at N_b of 6×10¹⁰, which is in agreement with the
experimental observation. After the upgrade, with a higher beam charge and the resulting
higher electron cloud density, there is spot size growth in both the horizontal and vertical
directions.
Figure 5.13 Horizontal beam spot size before and after the upgrade
Figure 5.14 Vertical beam spot size before and after the upgrade
Next we model the beam for other energy conditions available in the Main
Injector ring. Figures 5.15 and 5.16 show the evolution of the transverse spot size. For
these cases, there are N_b = 9.5×10¹⁰ protons per bunch. The cases compared here [3] are:
(1) bunch length of 5×10⁻⁹ s and momentum of 10 GeV; (2) bunch length of 2×10⁻⁹ s
and momentum of 20 GeV; (3) bunch length of 3×10⁻⁹ s and momentum of 60 GeV. The
bunch length is the full width of 95% of the beam. The electron cloud density is 10¹² m⁻³.
For all three cases, the transverse spot size evolutions are very similar and no obvious
spot size growth is observed. After the upgrade, the bunch population will be increased to
3×10¹¹. Figures 5.17 and 5.18 show the effect of the increased bunch population on the
beam spot size evolution for the above three cases with the same electron cloud density as
used before. The beam of case (1) has the highest spot size growth in both planes. Case
(2) has moderate spot size growth in the horizontal plane and no growth in the vertical
plane. For the case (3) parameters with the upgraded bunch population, there is no spot
size growth in either plane. In conclusion, increased bunch population, longer bunch
length and lower beam energy induce more spot size growth. In general the emittance
growth appears to be modest for all upgrade cases except when the energy of the beam is
below 20 GeV.
In this chapter, we have presented results for the beam spot size evolution with
different parameters, and possible explanations of the trends have been provided. In the
future at Fermilab, upgraded beams with different bunch populations, bunch lengths,
energy levels and synchrotron tunes will be used in operation. The emittance of the beam
will be measured, which will provide an opportunity for benchmarking simulation results
on beam dynamics against experimental results, using the connection between beam spot
size and emittance.
Figure 5.15 Horizontal spot size of proton beam with N_b = 9.5×10¹⁰
Figure 5.16 Vertical spot size of proton beam with N_b = 9.5×10¹⁰
Figure 5.17 Horizontal spot size of proton beam with N_b = 3×10¹¹
Figure 5.18 Vertical spot size of proton beam with N_b = 3×10¹¹
Chapter 6: Electron cloud effects on the electron beam at the ERL
6.1 Review of observed electron cloud effects on electron beam
The electron cloud effect was first discovered decades ago on proton beams [1].
Ever since then many circular accelerators around the world have also reported electron
cloud induced instabilities on positively charged beams, both positron and proton beams.
In 1997 J. Galayda proposed the idea that the electron cloud can also affect an
electron beam’s quality under the right conditions [2]. There are some experimental
observations that supported this idea. In the early 1990s, vertical beam size growth due to
the electron cloud effect was observed on the positron beam at the Photon Factory storage
ring at KEK. A similar experiment was conducted on the electron beam; coupled-bunch
instabilities of betatron sidebands were observed, but no beam size increase [3]. In
1999 at the Advanced Photon Source (APS) at Argonne National Laboratory, electron
cloud build-up and amplification were observed with the electron beam, but at a more
modest level than with the positron beam. The electron cloud was measured with
compact electron energy retarding field analyzers (RFAs). The results are shown in Figure
6.1 [4]. Part (a) shows the electron cloud generated from ten electron bunches with
various spacings. Compared to the positron beam case the peak is less sharp. A possible
explanation is that when the low energy secondary electrons are created, instead of being
pulled to the center of the beam as with a positron beam, they are immediately pushed
back to the chamber wall so they have much less energy when they hit the wall. Part (b)
shows the build up and saturation of the electron cloud. An increase in the vacuum
chamber pressure was also observed.
Figure 6.1 (a) The measured RFA data of ten electron bunches with varied spacing; (b)
Electron cloud buildup and saturation [4]
As for the electron cloud effect on the beam, although the horizontal coupled-
bunch instability observed for the positron beam was not detected for the electron beam,
a reduced beam lifetime occurred for some bunch fill patterns. For instance, when
injecting nine bunch trains of 85 mA composed of four bunches each, with 2λ_rf spacing
between the bunch trains the beam lifetime was reduced by half and the vacuum pressure
doubled compared to that with 12λ_rf spacing (λ_rf is the rf wavelength).
In summary, there have been far fewer experimental observations and analytical
or simulation studies of the electron cloud effect on negatively charged beams. But under
the right circumstances, an electron cloud can build up with the passage of an electron
beam and adversely affect the beam quality. In the rest of this chapter a detailed analysis,
using QuickPIC, of the electron beam and electron cloud interaction at the ERL is
presented.
6.2 Electron cloud effect of the Cornell Energy Recovery Linac
An Energy Recovery Linac (ERL) based synchrotron light facility has been
proposed at Cornell University. In contrast to the prevailing X-ray sources based on
electron storage rings, where the high energy electron beam circulates multiple times in
the ring with increasing emittance and decreasing X-ray brightness, the ERL is able to
produce a narrow electron beam with extremely low emittance and brighter X-rays [5].
The proposed configuration is presented in Figure 6.2 [5]. Component (6) is the
existing Cornell Electron Storage Ring (CESR). Two linacs (components (2) and (4)) will
be constructed and connected to CESR with arcs (5) and (7) to import and export the beam,
respectively. The two linacs share the same pipe and are connected by a return loop to
reduce the project cost.
Figure 6.2 Proposed design of the ERL (the labeled tunnel is the existing Cornell Electron
Storage Ring)
The 10 MeV electron beam is injected at (1) into the first linac and accelerated to
approximately 2.5 GeV. After circulating through the turn-around loop (3) the beam enters
the second linac and is further accelerated to 5 GeV. The high-energy, low-emittance
beam circulates clockwise in CESR, and approximately 0.1% of its energy is given
up to the production of X-rays. Upon exiting the storage ring the beam goes back to the
two linacs to be decelerated to 10 MeV. The energy is captured by the electromagnetic
field of the linac and will be reused.
The low emittance and high peak current of the electron beam raise concerns about
the impact of possible electron clouds on the beam quality. The ERL is not a circular
machine, but it completes a loop. It has no synchrotron tune or chromaticity, so it behaves
more like a linac, and we treat it as such. Table 6.1 shows the simulation parameters of
the ERL with QuickPIC. In the simulation model [6], the simulation box is rectangular
even though the real pipe is round in the experiment. There have been several
design sets for the ERL upgrade with a range of emittance choices. For this study we
used normalized emittances of 2.9304 mm·mrad; more recent plans call for the
emittances to be lowered to 0.3 mm·mrad.
Table 6.1 Simulation parameters for the ERL
Horizontal spot size (µm) σ_x 47.43
Vertical spot size (µm) σ_y 47.43
Bunch length (µm) σ_z 600
Horizontal box size (mm) 0.5
Vertical box size (mm) 0.5
Simulation box length (mm) 6
Electron bunch population N_b 5×10⁸
Horizontal emittance (mm·mrad) ε_x 2.9304
Vertical emittance (mm·mrad) ε_y 2.9304
Momentum spread δp/p 2×10⁻⁴
Horizontal betatron tune Q_x 63.66
Vertical betatron tune Q_y 63.66
Synchrotron tune Q_s 0
Chromaticity x and y ΔQ_x, ΔQ_y 0, 0
Phase slip factor η -1.048×10⁻⁸
E-cloud density (cm⁻³) ρ_e 2×10⁷
Total beam particles 256³
Total ecloud particles/cell 2×2
Lorentz factor γ 9768
Radius (km) R 0.47
The major impact of the electron cloud on the ERL electron beam discovered
from the QuickPIC simulation is the coherent betatron tune shift and tune spread. Here
we examine the tune shift of e+ and e- beams together, perhaps for the first time. The
result of the vertical tune shift is shown in figure 6.3 for both e- (a) and e+ (b) beams. In
both graphs, the solid blue curves correspond to an electron cloud density of 1×10⁻² cm⁻³ and the dashed red curves correspond to an electron cloud density of 2×10⁷ cm⁻³. An electron cloud density of 1×10⁻² cm⁻³ is used as a base case since this density is low enough that the electron cloud has almost no effect on the beam. The electron beam and positron beam cases use exactly the same parameters except that the charge of the beam is reversed. The tune curves are obtained by taking the fast Fourier transform (FFT) of the
centroid motion of the beam over 50 turns with 2000 simulation time steps per turn. Fifty
turns are simulated in order to obtain enough data to achieve sufficient tune shift
resolution. The resolution of the simulation is 256 cells in all three spatial directions of
the simulation box.
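The tune-extraction step just described can be illustrated with a short post-processing sketch. It is not part of QuickPIC; it is a minimal Python example, assuming the vertical centroid history has been written out by the beam-centroid diagnostic, and it simply reads the betatron frequency off the peak of the FFT spectrum.
# Minimal post-processing sketch (not QuickPIC code): estimate the betatron
# tune from a simulated beam-centroid history by locating the peak of its
# FFT spectrum.  In the thesis the vertical centroid is recorded for 50
# turns with 2000 3D time steps per turn; the synthetic data below only
# checks that a known tune is recovered.
import numpy as np

def estimate_tune(centroid, steps_per_turn):
    # Return the dominant oscillation frequency in units of 1/turn.
    y = centroid - np.mean(centroid)             # remove the DC offset
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / steps_per_turn)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the zero-frequency bin

steps_per_turn, n_turns, Q = 2000, 50, 63.66
t = np.arange(n_turns * steps_per_turn) / steps_per_turn   # time in turns
fake_centroid = 1e-3 * np.cos(2 * np.pi * Q * t)
print(estimate_tune(fake_centroid, steps_per_turn))        # ~63.66
With 50 turns of data the FFT bins are spaced 1/50 = 0.02 apart in tune, which is what sets the tune-shift resolution mentioned above.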
Figure 6.3 Vertical tune shifts of e+ and e- beams in the ERL. (a) Electron beam; (b) Positron beam
As the betatron tune of the positron beam is shifted in the positive direction under
the influence of the electron cloud, the tune shift of the electron beam occurs in the
opposite direction. But the absolute values of the tune shifts in both cases are
approximately the same, about 0.06. The tune shift can be estimated with equation (6-1)
assuming the cloud size is much larger than the beam size [7],
ΔQ = e² n_cl R₀² / (4 ε₀ m c² Q γ)          (6-1)
Here n_cl is the electron cloud density; R₀ is the radius of the circumference, which for the ERL case becomes the beam path length divided by 2π; Q is the betatron tune; γ is the Lorentz factor; and m is the rest mass of the beam particle. The tune shifts in both cases are around 0.065 from equation (6-1). The simulation result of 0.06 is in good agreement with this analytic estimate.
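As a quick numerical check of equation (6-1), the ERL parameters of table 6.1 can be substituted directly. The short Python sketch below does this; it is only an illustrative evaluation with SI constants, using the effective radius R₀ ≈ 477.5 m implied by the ERL input deck in Appendix C, and is not QuickPIC output.
# Evaluate equation (6-1) with the ERL parameters of table 6.1.
e     = 1.602176634e-19     # elementary charge, C
eps0  = 8.8541878128e-12    # vacuum permittivity, F/m
me_c2 = 8.1871057769e-14    # electron rest energy, J
n_cl  = 2e7 * 1e6           # cloud density, 2e7 cm^-3 converted to m^-3
R0    = 477.5               # effective radius = beam path length / (2*pi), m
Q     = 63.66               # betatron tune
gamma = 9768.0              # Lorentz factor
dQ = e**2 * n_cl * R0**2 / (4.0 * eps0 * me_c2 * Q * gamma)
print(dQ)                   # approximately 0.065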
In order to examine the beam dynamics of the ERL electron beam, the centroid motion and the spot size in the transverse plane with an electron cloud density of 2×10⁷ cm⁻³ are plotted in figure 6.4. Despite the spot size oscillation at the beginning of the simulation due to the mismatch of beam conditions, there is no obvious spot size growth. In fact, the evolution of the spot size of the electron beam is quite similar to that of the positron beam with the same parameters. In the QuickPIC simulations that generated figure 6.3 and figure 6.4, the resolution of the simulation box is 256 cells in each spatial direction. With a different resolution, the beam centroid oscillations are not exactly the same, as shown in figure 6.5, but the tune shift converges to the theoretical value. One possible explanation is that the beam center shifts are very small, much less than a cell, whereas the particle oscillations are large compared to a cell and thus well resolved in the simulation. In spite of the tune shift, no beam instability is observed with the ERL parameters, for either the electron or the positron beam. If a beam instability existed, the centroid shift would be large enough to be resolved by the cell size.
Figure 6.4 Electron beam centroid motion and spot size in the transverse plane with a simulation resolution of 256:256:256 cells. (a) Horizontal beam centroid motion; (b) Vertical beam centroid motion; (c) Horizontal spot size; (d) Vertical spot size. (Beam center positions are measured as the distance between the beam center and the boundary of the simulation box. The beam center oscillates at the betatron frequency, which is not visible in these graphs.)
Figure 6.5 Electron beam centroid motion and spot size in the transverse plane with a simulation resolution of 128:128:128 cells. (a) Horizontal beam centroid motion; (b) Vertical beam centroid motion; (c) Horizontal spot size; (d) Vertical spot size. (Beam center positions are measured as the distance between the beam center and the boundary of the simulation box. The beam center oscillates at the betatron frequency, which is not visible in these graphs.)
In order to study the electron beam instability induced by the electron cloud, simulations with increased electron cloud density were performed. When the density is below 7×10⁹ cm⁻³, there is no obvious spot size growth. When the cloud density is increased to 10¹⁰ cm⁻³, beam size growth and beam particle loss occur. This is therefore approximately the density threshold of the instability. Figure 6.6 shows the beam evolution in an electron cloud of ρ_e = 2×10¹¹ cm⁻³. With the same simulation parameters but for a positron beam, the electron cloud density threshold of the instability is 10¹¹ cm⁻³. It is interesting that in this case the positron beam is more stable than the electron beam.
Figure 6.6 Snapshots of the electron beam in the x-z plane at the first three time steps (0 m, 1.5 m and 3 m of propagation) with an electron cloud density of ρ_e = 2×10¹¹ cm⁻³
Chapter 7: Future Work
7.1 Modeling real pipe shape in QuickPIC
The current beam pipe geometry implemented in QuickPIC has rectangular
boundaries. The conducting boundary condition is applied to the rectangular pipe. The
electric space charge field inside the pipe is closely related to the shape of the pipe,
especially at the places where the beam is close to the boundary. So when the beam size
is comparable to the beam pipe size, the pipe shape implemented in the simulation can
greatly affect the accuracy of the resulting beam dynamics.
The rectangular boundaries may not be the optimal approximation for the real beam pipe shape. Many circular machines have an elliptical cross section, such as the SPS at CERN and the Main Injector at FNAL. Taking the FNAL Main Injector as an example, the semi-axes of the beam pipe are 6.15 cm and 2.45 cm respectively, and the RMS bunch sizes are 5 mm in each of the transverse directions [1]. Because the cross section of the pipe is not much larger than that of the beam, implementing a more realistic elliptical boundary condition will lead to more accurate modeling of beam size and emittance evolution.
The elliptical boundary should be included in both the 2D field solver and 3D
particle pusher. For the 2D field solver, only the field on the grid points inside the ellipse
is calculated and the conducting boundary condition is applied at the edge of the ellipse.
For the 3D particle pusher, the particles are absorbed when they hit the edge of the ellipse
instead of the outer simulation box.
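The geometric bookkeeping such an elliptical boundary would require can be sketched in a few lines. The following is only an illustrative Python sketch, not QuickPIC code; the grid size and cell spacings are placeholder numbers, while the semi-axes are the Main Injector values quoted above.
# Sketch of the masks an elliptical boundary would need: interior grid points
# for the 2D field solver and an absorption test for the 3D particle pusher.
import numpy as np

def elliptical_mask(nx, ny, a, b, dx, dy):
    # Boolean mask of grid points strictly inside the ellipse
    # (x/a)^2 + (y/b)^2 < 1, with the ellipse centered in the box.
    x = (np.arange(nx) - 0.5 * (nx - 1)) * dx
    y = (np.arange(ny) - 0.5 * (ny - 1)) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    return (X / a)**2 + (Y / b)**2 < 1.0

def absorbed(x, y, a, b):
    # True for particles that have reached or crossed the elliptical wall.
    return (x / a)**2 + (y / b)**2 >= 1.0

mask = elliptical_mask(nx=64, ny=64, a=0.0615, b=0.0245, dx=0.002, dy=0.001)
print(mask.sum(), "of", mask.size, "grid points lie inside the pipe")
The conducting boundary condition would then be imposed just outside this mask, and particles flagged by the absorption test would be removed from the push.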
7.2 Benchmark with WARP
Benchmarking among simulation codes is very important for code validation and future improvement. In the past, QuickPIC has been benchmarked against the HEAD-TAIL code developed at CERN [2]. HEAD-TAIL uses a multi-kick method to model the interaction between the beam and the electron cloud at discrete points, whereas QuickPIC models the interaction continuously along the circular pipe. Thus HEAD-TAIL may not be the best code for QuickPIC to benchmark against.
Recently, a benchmark between the QuickPIC and Warp codes has been initiated. In comparison to HEAD-TAIL, the Warp code shares more similarities with QuickPIC. Warp was originally developed for heavy-ion driven inertial fusion energy studies [3] and is an explicit, three-dimensional particle-in-cell code [4]. Both QuickPIC and Warp have implemented the quasi-static approximation and use an FFT solver for the electric field calculation. The leap-frog method is used for beam particle updates in both codes (a minimal sketch of such an update is given below). Based on all these similarities, the results from the two codes with identical initial conditions and simulation parameters should be very similar.
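For reference, the leap-frog (kick-drift-kick) update shared by the two beam pushers has the following generic form. This is a minimal Python sketch with an illustrative linear focusing force, not an excerpt from either code.
# Generic leap-frog step of the kind both codes use for the beam macroparticles.
def leapfrog_step(x, v, accel, dt):
    # Kick-drift-kick form: half-step velocity kick, full-step position drift,
    # then the remaining half-step velocity kick.
    v_half = v + 0.5 * dt * accel(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * accel(x_new)
    return x_new, v_new

# Example: a simple harmonic (betatron-like) oscillation with unit frequency.
x, v = 1.0, 0.0
for _ in range(628):                      # about one period with dt = 0.01
    x, v = leapfrog_step(x, v, lambda q: -q, 0.01)
print(x, v)                               # returns close to (1, 0)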
A preliminary comparison of the results from the two codes is presented below, with the parameters used listed in table 7.1.
The results from Warp and QuickPIC for the spot size growth of the first 650 time
steps in both transverse planes are presented in figure 7.1. Both simulations were
performed with 64, 128 and 256 cells in three spatial directions of the simulation box.
QuickPIC also ran with 512 cells. A higher number of cells gives higher resolution, and
leads to more accurate results with more computational cost. As we increase the number
of cells of the simulation domain, the results tend to converge for both codes.
For the purpose of the benchmark, Warp was executed in its quasi-static mode. The initial beam particle profile, including the position and velocity in the x, y and z directions of every simulation beam particle, was identical for QuickPIC and Warp.
As shown in figure 7.1, overall, the trend of the spot size evolution is similar for
both codes. Spot size growth is greater in the vertical plane than in the horizontal plane.
Table 7.1 Simulation parameters for the comparison of Warp and QuickPIC
RMS horizontal spot size (mm): 0.623
RMS vertical spot size (mm): 0.164
RMS bunch length (mm): 13
Horizontal box size (mm): 20
Vertical box size (mm): 10
Bunch population: 10¹¹
Horizontal emittance (mm.mrad): 141
Vertical emittance (mm.mrad): 8.806
Momentum spread: 0
Radius (km): 0.3501408
Horizontal betatron tune: 21.649
Vertical betatron tune: 19.564
Synchrotron tune: 0
Electron cloud density (cm⁻³): 10⁸
Simulation box length (mm): 78
Total beam particles:
Total e-cloud particles in each cell:
Relativistic factor: 5871
Chromaticity x and y: 0, 0
Phase slip factor:
3D time step (second):
The spot size grows more as the resolution is increased. Quantitatively, the results from QuickPIC and Warp do not exactly converge. In the x direction, QuickPIC gives a greater spot size increase. In the y direction, QuickPIC shows more growth at the beginning (e.g., the first 100 time steps or so), but later on the Warp result reaches a higher plateau. A further simulation was performed using a 3D time step that is ¼ of that listed in table 7.1. For both codes the results changed by only a few percent, which is not enough to explain the difference between the two codes.
Figure 7.1 Results of transverse spot size growth from QuickPIC and Warp. (a) Horizontal spot size growth from QuickPIC; (b) Vertical spot size growth from QuickPIC; (c) Horizontal spot size growth from Warp; (d) Vertical spot size growth from Warp
The reason for the discrepancy between the results from QuickPIC and Warp is not completely clear. Part of it may be due to differences between the physical models and the corresponding algorithms implemented in the two codes. For example, the forces and motion in the longitudinal z direction are ignored in Warp, and the beam particle pushers in the transverse plane differ.
In order to fully understand the differences and provide information for future code improvement and upgrades, further benchmark simulations will be performed. The discrepancies occurred at the very beginning of the run. Thus a proposed way to investigate them is to run the simulation for just a few time steps with a relatively small system containing fewer beam particles and electron cloud particles. After each major subroutine, such as the field solver and the beam pusher, a profile of the updated field and particle distribution would be dumped. Comparing these outputs should allow the subroutine that introduces the difference to be identified.
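The comparison step itself is straightforward. Below is a minimal Python sketch, under the assumption that each code can dump the array produced by a named stage into a .npy file; the stage names and file names are hypothetical.
# Compare stage-by-stage dumps from the two codes and report the largest
# relative difference for each stage.
import numpy as np

STAGES = ["field_solver", "beam_pusher"]      # hypothetical stage names

def max_rel_diff(a, b, eps=1e-30):
    return np.max(np.abs(a - b) / (np.maximum(np.abs(a), np.abs(b)) + eps))

for stage in STAGES:
    quickpic = np.load("quickpic_" + stage + ".npy")
    warp = np.load("warp_" + stage + ".npy")
    print(stage, "max relative difference:", max_rel_diff(quickpic, warp))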
7.3 Implementation of adaptive mesh refinement into QuickPIC
Computer simulation is of great importance for studying the beam-electron cloud instability and for exploring various ways to reduce its effect. Particle-in-cell codes such as QuickPIC are good candidates among the available simulation methods for modeling the interaction between the beam particles and the electron cloud with relatively few assumptions. However, in order to accurately simulate thousands of turns of high energy beam propagation inside the pipe, the time and memory cost of a PIC code is usually high. One of the reasons is the challenge of resolving different scales in space and time. The quasi-static approximation was implemented to resolve the disparity between the time scales of the evolution of the beam and of the electron cloud. In addition, the pipelining method was used to enable the program to use more processors at increased resolution without the wall-clock time growing proportionally. Here we propose another approach, adaptive mesh refinement, to be added to QuickPIC to improve both the efficiency and the accuracy of the simulation.
As shown in figure 7.1, increasing the number of grid cells uniformly in the simulation domain gives higher resolution and more accurate simulation results, but it also requires more memory and more computation time. A close look at the distribution of the beam particles and the electron cloud indicates that it is not necessary to use finer grids everywhere in the simulation box. A typical snapshot of the beam and the electron cloud is shown below in figure 7.2. The beam is usually positioned at the center of the simulation box, and the electron cloud is pulled in toward the center by the beam as well. Since most of the detailed physics occurs near the center of the simulation box, a fine grid should be used there. Near the boundary of the box, a coarse mesh can be used, as there are very few beam particles there. This technique of using higher resolution where more detail is needed is called mesh refinement. As time evolves, the distribution of the particles and fields changes as well, so the region where we want to “zoom in” within the simulation domain should be adjusted correspondingly. This is the method of adaptive mesh refinement.
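One simple way to choose the refinement region at each 3D time step is to flag cells whose deposited charge density exceeds some fraction of the peak and refine the bounding box of the flagged cells. The following Python sketch illustrates this idea; the density array, threshold and grid size are illustrative assumptions, not QuickPIC data.
# Flag high-density cells in a 2D slice and return the bounding box of the
# region that the fine grid should cover.
import numpy as np

def refinement_patch(rho, fraction=0.05):
    flagged = np.abs(rho) > fraction * np.abs(rho).max()
    i, j = np.nonzero(flagged)
    return i.min(), i.max(), j.min(), j.max()

# Example: a Gaussian "beam" centered in a 64x64 slice.
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2) / (2.0 * 0.1**2))
print(refinement_patch(rho))      # a small box around the center of the slice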
Figure 7.2 A typical snapshot of the beam and electron cloud distributions in the x-z plane. Left: beam; right: electron cloud
Figure 7.3 gives an example of the grid structure of adaptive mesh refinement [5]. A finer grid is patched inside the coarse grid where more detail exists, and different levels of resolution can be used at different places within the simulation domain. In the process of implementing adaptive mesh refinement, some special issues need to be addressed. For the field calculation, when the finer grid is patched inside the coarse grid, a spurious image charge can be created at the interface of the two grid systems. This leads to a spurious self-force on the macroparticles near the boundary of the fine-grid patch [5]. Implementing mesh refinement in a PIC code can also create violations of Gauss' law, in that the total charge is not conserved, and the effects may become quite large after accumulating over many time steps. Methods exist to mitigate the spurious charge effect, e.g., defining a transition region around the finer patch in which the field is obtained from the underlying coarse patch [6].
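The transition-region idea of reference [6] amounts to discarding the fine-grid solution in a band of cells around the edge of the patch and using the coarse-grid solution, interpolated onto the fine grid, there instead. A minimal Python sketch, with an assumed band width and stand-in arrays, is:
# Replace the outer `band` cells of a fine-patch field with the coarse-grid
# solution interpolated onto the same fine-patch points.
import numpy as np

def blend_with_coarse(fine_field, coarse_on_fine, band):
    out = fine_field.copy()
    out[:band, :] = coarse_on_fine[:band, :]
    out[-band:, :] = coarse_on_fine[-band:, :]
    out[:, :band] = coarse_on_fine[:, :band]
    out[:, -band:] = coarse_on_fine[:, -band:]
    return out

# Example with random stand-in fields on a 32x32 patch and a 2-cell band.
fine = np.random.rand(32, 32)
coarse = np.random.rand(32, 32)
print(blend_with_coarse(fine, coarse, band=2).shape)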
Figure 7.3 Grid structure of the adaptive mesh refinement method [7]
7.4 Simulating electron cloud build-up in the QuickPIC model
In the current version of QuickPIC, the electron cloud formation process is not modeled. Instead, for each 3D time step the beam interacts with a uniformly distributed electron cloud, whose density is either the average or the maximum value from experimental observations or from another simulation code. In reality, due to the complex mechanisms of primary and secondary electron formation and the electromagnetic fields, the distribution of the electron cloud particles is known to be non-uniform. In order to achieve more accurate results, the formation of the electron cloud should be added to QuickPIC.
A more realistic e-cloud build-up module should be used instead of the uniform-distribution initialization. In such a module, the electron cloud particles are represented by macroparticles, and the cloud particles and the beam dynamics are coupled. Primary electrons are created by the interaction of the beam with the medium, and the secondary emission is affected by the beam field, the electron space charge field, the external magnetic field, the secondary emission yield, etc. Since several electron cloud formation models already exist, such as ECLOUD [8] and POSINST [9], one could also couple QuickPIC to one of these existing models through an interface.
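To illustrate the role the secondary emission yield would play in such a module, a toy bunch-by-bunch build-up model is sketched below in Python. The production rate, effective yield per bunch passage and saturation level are made-up numbers for illustration only, not measured or simulated values.
# Toy bunch-by-bunch electron cloud build-up with an effective secondary yield.
def cloud_buildup(n_bunches, primaries_per_bunch, effective_yield, saturation):
    n, history = 0.0, []
    for _ in range(n_bunches):
        n = n * effective_yield + primaries_per_bunch   # secondaries + primaries
        n = min(n, saturation)                          # space charge caps growth
        history.append(n)
    return history

history = cloud_buildup(n_bunches=50, primaries_per_bunch=1e6,
                        effective_yield=1.3, saturation=1e9)
print(history[0], history[-1])    # grows roughly exponentially until it saturates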
References
Chapter 1
[1] C. Joshi, et al., “Plasma Accelerators”, Scientific American, 2006
[2] I. Blumenfeld, et al., “Energy doubling of 42GeV electrons in a metre-scale plasma
wakefield accelerator”, Nature 445, 741-744 (15 February 2007)
[3] A. Ghalam, et al., “Simulation of electron-cloud instability in circular accelerators using plasma models”, in Proceedings of the Advanced Accelerator Concepts: 10th Workshop, 2002
[4] G. Budker, et al., in Proceedings of the International Symposium on Electron and
Positron Storage Rings, Saclay, France, 1966 (Saclay, Paris, 1966), Article No. VIII-6-1
[5] G. Rumolo and F. Zimmermann, “Practical User Guide for ECloud”, CERN-SL-
Note-2002-016 (AP).
[6] M. Venturini, “Modelling of E-Cloud Build-up in Grooved Vacuum Chambers Using
POSINST”, in Proceedings of PAC07, Albuquerque, NM.
[7] G. Arduini, “Present understanding of electron cloud effects in the large hadron collider”, in Proceedings of the 2003 Particle Accelerator Conference.
[8] V. Dudnikov, “Some features of transverse instability of partly compensated proton beams”, in Proceedings of the 2001 Particle Accelerator Conference, Chicago
[9] H. Fukuma, “Electron cloud effects at KEKB”, in Proceedings of E’Cloud02
[10] F. Zimmermann, “Review of single bunch instabilities driven by an electron cloud”,
Physical Review Special Topics-Accelerators and Beams 7, 124801 (2004)
[11] J. Wei, et al., “Electron-cloud mitigation in the spallation neutron source ring”, in
Proceedings of the 2003 Particle Accelerator Conference.
[12] A. Chao, “Physics of collective beam instabilities in high energy accelerators”,
Wiley & Sons, New York, 1993
[13] H. Qin, et al., “Nonlinear perturbative particle simulation studies of the electron-
proton two-stream instability in high intensity proton beams”, Physical Review Special
Topics-Accelerators and Beams, Volume 6, 014401 (2003)
[14] G. Rumolo, et al., “Electron cloud simulations: beam instabilities and wakefields”, Phys. Rev. ST Accel. Beams 5, 121002 (2002)
[15] C. Huang, et al., “QuickPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas”, Journal of Computational Physics, 217 (2006) 658-679
[16] R. Macek, AIP Conference Proceedings, 448, 116 (1998)
[17] A. Ghalam, et al., “Three-dimensional continuous modeling of beam-electron cloud
interaction: Comparison with analytic models and predictions for the present and future
circular machines”, Physics of Plasmas 13, 056710 (2006)
Chapter 2
[1] C. Huang, Master's thesis, “Development of a Novel PIC code for studying Beam-Plasma Interaction”, UCLA, 2003
[2] A. Ghalam, Ph.D. thesis, “High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators”, USC, 2006
[3] C. Huang, et al., “QuickPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas”, Journal of Computational Physics, 217 (2006) 658-679
[4] G. Rumolo, et al., “Electron cloud effects on beam evolution in a circular accelerator”,
Phys. Rev. ST Accel. Beams 6, 081002 (2003)
[5] S. Deng, “Developing a multi-timescale PIC code for plasma accelerators”, in
Proceedings of 2005 PAC, Knoxville, Tennessee
Chapter 3
[1] C. Huang, et al., “QuickPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas”, Journal of Computational Physics, 217 (2006) 658-679
[2] B. Feng, et al., “Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm”, Advanced Accelerator Concepts: 12th Workshop, 2006
[3] MPI standard user's manual
Chapter 4
[1] B. Feng, et al., “Long time simulation of LHC beam propagation in electron clouds”,
in Proceedings of 2005 Particle Accelerator Conference, Knoxville, Tennessee
[2] E. Benedetto, et al., “Simulation of Transverse Single Bunch Instabilities and
Emittance Growth Caused by Electron Cloud in LHC and SPS”, 2004
Chapter 5
[1] M. Furman, “A preliminary assessment of the electron cloud effect for the FNAL main injector upgrade”, LBNL-57634/CBP-Note-xxx
[2] A. Ghalam, Ph.D. thesis, “High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators”, USC, 2006
[3] private communication with Panagiotis Spentzouris from FNAL
Chapter 6
[1] G. Budker, et al., in Proceedings of the International Symposium on Electron and
Positron Storage Rings, Saclay, France, 1966 (Saclay, Paris, 1966), Article No. VIII-6-1
[2] K. Harkay, et al., “Electron cloud observations: A retrospective”, in Proceedings of
ECLOUD04
[3] M. Izawa, et al., “The vertical instability in a positron bunched beam”, Physical Review Letters, Volume 74, Number 25, 1995
[4] K. C. Harkay, et al., “Properties of the electron cloud in a high-energy positron and
electron storage ring”, Phys. Rev. ST Accel. Beams 6, 034402 (2003)
[5] G. H. Hoffstaetter, “Toward the ERL, a Brighter X-ray source”
[6] G. H. Hoffstaetter, “ERL upgrade of an existing X-ray facility: CHESS at CESR”,
ERL-05-08
[7] G. Rumolo, et al., “Electron cloud effects on beam evolution in a circular
accelerator”, Physical Review Special Topics – Accelerators and Beams, Volume 6,
081002 (2003)
Chapter 7
[1] M. A. Furman, “A preliminary assessment of the electron cloud effect for the FNAL main injector upgrade”, LBNL-57634/CBP-Note-xxx
[2] A. Ghalam, Ph.D. thesis, “High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators”, USC, 2006
[3] A. Friedman, “Overview of the WARP code and studies of transverse resonance effects”
[4] D. Grote, A. Friedman, “The WARP code: modeling high intensity ion beams”
[5] J.-L. Vay, et al., “Mesh refinement for particle-in-cell plasma simulations:
Applications to and benefits for heavy ion fusion”, Laser and Particle Beams (2002), 20,
569-575
[6] J.-L. Vay, et al., “Application of adaptive mesh refinement to PIC simulations in inertial fusion”, presentation at the 15th International Symposium on Heavy Ion Inertial Fusion, Princeton, New Jersey, July 2004
[7] Parallel implicit adaptive mesh refinement (AMR),
http://utias.utoronto.ca/~groth/research_amr.html
[8] G. Rumolo and F. Zimmermann, “Practical User Guide for ECloud”, CERN-SL-
Note-2002-016 (AP).
[9] M. Venturini, “Modeling of E-Cloud Build-up in Grooved Vacuum Chambers Using
POSINST”, in Proceedings of PAC07, Albuquerque, NM.
Appendix A Pipeswitch Subroutine
With the implementation of the pipelining algorithm, the computation is done by multiple subgroups, except for the initialization part, which includes broadcasting the information from the input deck and initializing the simulation box and the beam. The initialization is done with all the processors functioning as one big group. The subroutine PIPESWITCH in file p2lib.f switches between the single-big-group mode and the multiple-subgroup mode. Every time the subroutine is called, it switches from the current mode to the other. The code of PIPESWITCH is included here:
subroutine PIPESWITCH
c this subroutine switch big group and subgroup information
c Written by Bing Feng
c Modified by Chengkun Huang
c Date: 08/31/05
implicit none
c common block for parallel processing
integer nproc, lgrp, mreal, mint, mcplx, mdouble, lworld
common /pparms/ nproc, lgrp, mreal, mint, mcplx, mdouble, lworld
c common block for pipelining processing
c lnproc = number of processors in original lgrp
c idlproc = processor id in original lgrp
c llgrp = store the original lgrp
c pipekey = processors are divided into subgroups according to the pipekey value
c gflag = (.true.,.false.) is subgroup or not
integer lnproc, llgrp, idlproc, pipekey
logical gflag
common /pipeparms/ lnproc, llgrp, idlproc, pipekey, gflag
c local variables
integer tnproc, tlgrp
tnproc = lnproc
lnproc = nproc
nproc = tnproc
tlgrp = llgrp
llgrp = lgrp
lgrp = tlgrp
gflag = .not.(gflag)
return
end
Appendix B Code Correction
Betatron oscillation and chromaticity effects have been added to QuickPIC to
model circular accelerators. In the code, the betatron oscillation effect was treated as
an external electric field, and the chromaticity effects were equivalent to an external
B field. The functions implemented in the code are in the simulation units.
During this Ph.D. research, bugs were found in this part of the code and corrections have been made. Previously, only positively charged beams were considered. In the correction, the terms “sign(1.0, qtmh)” and “sign(1.0, qme)” were added to include the sign of the beam charge, so the updated code is valid for both positively and negatively charged beams. Also, in the previous code, the products of the betatron tunes (Q_x, Q_y) and the chromaticities (Chrmx, Chrmy) in x and y were reversed in the external B field (bxyze(:,:,:,:,:)) equation. When the betatron tune and chromaticity are not equal in the x and y directions, this introduces an error into the field calculation. In addition, a minus sign was missing in the equation for B_x. Finally, the equation needs to be written in simulation units when implemented in the code; by not doing so, the previous version left out a factor of “m_ratio**(-2)”, where “m_ratio” is defined as the beam particle mass over the electron mass. For positron and electron beams this term has no effect because it is equal to one, but for proton beams, missing this term results in a much larger B field in the x and y directions. The original piece of code and the corrections are listed below:
Original (pbpush3lib.f90; new_pbeps3.f90):
dx = qtmh*(dx - kx*(part(1,j,m)-nx/2))
dy = qtmh*(dy - ky*(part(2,j,m)-ny/2))
bxyze(1,j,k,:,:) = 2*Qx*Chrmx/R0**2*gamma**3*(real(k-1)-nyh)*dz
bxyze(2,j,k,:,:) = 2*Qy*Chrmy/R0**2*gamma**3*(real(j-1)-nxh)*dz+gamma**3*dz/(R0*dx)
After correction (pbpush3lib.f90; new_pbeps3.f90):
dx = qtmh*(dx - sign(1.0,qtmh)*kx*(part(1,j,m)-nx/2))
dy = qtmh*(dy - sign(1.0,qtmh)*ky*(part(2,j,m)-ny/2))
bxyze(1,j,k,:,:)=-2*sign(1.0,qme)*Qy*Chrmy/R0**2*gamma**3*(real(k-1)-nyh)*dz/(m_ratio**2)
bxyze(2,j,k,:,:)=2*sign(1.0,qme)*Qx*Chrmx/R0**2*gamma**3*(real(j-1)-nxh)*dz/(m_ratio**2)+dispersion*sign(1.0,qme)*gamma**3*dz/(R0*dx)/(m_ratio**2)
Appendix C Input Deck for the Simulations
1) Pipelining algorithm fidelity check
Inputdeck Description------------------
&Input_File
Version = 040202
/
--------------Simulation System-----------------------
Simulation system (in unit of micron) = BOX_X * BOX_Y * BOX_Z
Total grids = (2^INDX) * (2^INDY) * (2^INDZ)
Total beam particles = NPX * NPY * NPZ
Total plasma particles in each slice = NP2 * NP2
&Simulation_Sys
BOX_X=18000, BOX_Y=18000, BOX_Z=2000000,
INDX = 6, INDY = 6, INDZ = 6
NPX = 64, NPY = 64, NPZ = 128
NP2 = 128
/
--------------Boundary Condition----------------------
Choose between 'periodic' and 'conducting'.
&Boundary
SBOUNDARY = 'conducting'
/
-------------Beam Parameters------------------------
(Center_X, Center_Y, Center_Z) = Position of the
center of the beam
(SIGMA_X, SIGMA_Y, SIGMA_Z) = Spot Size of the beam,
in unit of micron
BEAM_CHARGE = Number of electrons in the beam, peak
charge density is BEAM_CHARGE/[(2*PI)^(3/2)]/(SIGMA_X*
SIGMA_Y*SIGMA_Z)
(EMITTANCE_X, EMITTANCE_Y) = Normalized emittance of the
beam in unit of mm*mrad, thermal velocity of the beam =
emittance/(gamma*sigma)
ENERGY_DIFF = DELTA_GAMMA/GAMMA, logitudinal thermal
velocity of the beam is DELTA_BETA_Z = ENERGY_DIFF/
(GAMMA*GAMMA)
Beam centroid is described by parabolic function
Centroid_C2*(Z-Z0)^2+Centroid_C1*(Z-Z0)+Centroid_C0
Here Z and Z0 are in unit of micron, the code wil convert
Centroid_C2(1&0) into the unit in the slmulation
GAMMMA = Lorentz factor
VDX(Y&Z) = drift velocity of the beam, in unit of c
FST = Time for free-streaming before the beam enters the plasma.
In unit of 1/Omega_p.
&Beam
Center_X = 9000, Center_Y = 9000, Center_Z =1000000
SIGMA_X = 884, SIGMA_Y = 884, SIGMA_Z = 115000
BEAM_CHARGE = 11E10,
EMITTANCE_X = 10415,EMITTANCE_Y = 9610,ENERGY_DIFF = 4.68E-4
Centroid_C2X = 0, Centroid_C1X = 0.0, Centroid_C0X = 0
Centroid_C2Y = 0, Centroid_C1Y = 0.0, Centroid_C0Y = 0
GAMMA = 881025
VDX = 0.0, VDY = 0.0, VDZ = 0.0
/
------------Twiss Parameters-------------------------
Twiss parameters of the beam, if USE_TWISS_PARAS is
set to true, SIGMA_X, SIGMA_Y, Centroid_C[2/1/0][X/Y] ,
FST are ignored.
unit of BETA is meter.
&Twiss
USE_TWISS_PARAS=.false.
ALPHA_X = -0.71, ALPHA_Y = -0.71
BETA_X = 4.21, BETA_Y = 4.21
/
------------Plasma Parameters-----------------------
Plasma density in unit of cm-3
VT2X(Y) = thermal velocity of the plasma electrons, in unit of c
VD2X(Y) = drift velocity of the plasma electrons, in unit of c
Non_Neutral_Factor = - Ion density/electron density,
Non_Neutral_Factor = 1 for neutral plasma
Non_Neutral_Factor = 0 for pure electron cloud
Effective only when conducting boundary condition
is set.
&Plasma
PLASMA_DENSITY=6e5
VT2X=0.0, VT2Y=0.0
Non_Neutral_Factor = 0.0
/
------------Simulation time---------------------------
TEND = Total time, DT = TimeStep
In unit of 1/Omega_p.
&Simulation_time
TEND =1.946213320304640e+06, DT = 1.90059894561,
/
------------ Diagnostic -------------------------------
DFPSI, DFPHI, DFB, DFP, DFBC are the intevals in unit of timestep
for dumping PSI, PHI, beam and plasma density, beam centroid,
respectively.
PHISLICE, PSISLICE, QEBSLICE, QEPSLICE specify whether to dump
data of the whole 3D space or just one slice of it.
PHI(PSI,QEB,QEP)X0, if not zero, specify which Y-Z slice to dump.
PHI(PSI,QEB,QEP)Y0, if not zero, specify which X-Z slice to dump.
PHI(PSI,QEB,QEP)Z0, if not zero, specify which X-Y slice to dump.
BC_DIAG_RES specify the number of slices along Z direction for
beam centroid calculation.
&Potential_Diag
DFPHI=100000, PHISLICE=.true. , PHIX0=0 , PHIY0=0, PHIZ0=500000,
DFPSI=100000, PSISLICE=.true. , PSIX0=0 , PSIY0=0, PSIZ0=500000
/
&Beam_Diag
DFB=4000000, QEBSLICE=.true. , QEBX0=9000 , QEBY0=9000, QEBZ0=0,
DFBC=1000000000, BC_DIAG_RES=128
/
------------ Diagnostic -------------------------------
DUMP_PHA: switch to turn on phase space diagnostics
DFPHA: intevals in unit of timestep for dumping phase space
DSAMPLE : spacing of sampling
&Beam_Phase_Space_Diag
DUMP_PHA_BEAM = .true. , DFPHA_BEAM=300000, DSAMPLE_BEAM =
1
/
&Plasma_Diag
DFP=4000000, QEPSLICE=.true. , QEPX0= 0, QEPY0=9000, QEPZ0=0
/
&Plasma_Phase_Space_Diag
DUMP_PHA_PLASMA = .true. , DFPHA_PLASMA =100000,
DSAMPLE_PLASMA = 1
/
------------ Corrections -------------------------------
To turn on a specific correction, set it to .true.
DOVPAR: include parallel velocity of plasma electron
DOREL: use relativistic pusher for plasma electron
DOCURRENT: in parallel current of plasma electron
DOPARPUSH : push beam in Z.
&Corrections
DOVPAR=.false., DOREL=.false., DOCURRENT=.false.
DOPARPUSH =.false.
emf=1
/
------------ Restart file -------------------------------
READ_RST_FILE specify a restart run and RST_TIMESTEP
which timestep to begin the restart run
DUMP_RST_FILE control restart file dumping and DFRST
is the dumping frequency
&Restart_File
READ_RST_FILE = .true., RST_TIMESTEP = 0
DUMP_RST_FILE = .true., DFRST=5000
/
------------Optimization Coefficencies-------------
INTERNAL DATA. DO NOT CHANGE!
&Optimization
INORDER = 1, POPT = 1, DOPT = 2, DJOPT = 1
SORTIME_2D = 25, SORTIME_3D = 25
/
----------Circular Accelerator Parameters-----------------------------------
Q is the tune shift of the Accelerator
Chrm is the chromaticity of the acclerator
R0 is the circumference of the accelerator(in Km)
&Circular_Accelerator_Parameters
Qx=64.28
Qy=59.31
Chrmx=2
Chrmy=2
R0=4.24291
gamt=23
eta=3.47E-4
Qs=0.0059
ecl = 1
omx = 0
omy = 0
omz = 0
LS = 83.3
LB = 45
single_kick = 0
num_kick = 1
cent_dump = 50
/
2) Fermi lab Main Injector
Inputdeck Description------------------
&Input_File
Version = 040202
/
--------------Simulation System-----------------------
Simulation system (in unit of micron) = BOX_X * BOX_Y * BOX_Z
Total grids = (2^INDX) * (2^INDY) * (2^INDZ)
Total beam particles = NPX * NPY * NPZ
Total plasma particles in each slice = NP2 * NP2
&Simulation_Sys
indexgroups = 2
BOX_X=60000, BOX_Y=49000, BOX_Z=8000000,
INDX = 6, INDY = 6, INDZ = 6
NPX = 64, NPY = 64, NPZ = 128
NP2 = 128
/
--------------Boundary Condition----------------------
Choose between 'periodic' and 'conducting'.
&Boundary
SBOUNDARY = 'conducting'
/
-------------Beam Parameters------------------------
(Center_X, Center_Y, Center_Z) = Position of the
center of the beam
(SIGMA_X, SIGMA_Y, SIGMA_Z) = Spot Size of the beam,
in unit of micron
BEAM_CHARGE = Number of electrons in the beam, peak
charge density is BEAM_CHARGE/[(2*PI)^(3/2)]/(SIGMA_X*
SIGMA_Y*SIGMA_Z)
(EMITTANCE_X, EMITTANCE_Y) = Normalized emittance of the
beam in unit of mm*mrad, thermal velocity of the beam =
emittance/(gamma*sigma)
ENERGY_DIFF = DELTA_GAMMA/GAMMA, logitudinal thermal
velocity of the beam is DELTA_BETA_Z = ENERGY_DIFF/
(GAMMA*GAMMA)
Beam centroid is described by parabolic function
Centroid_C2*(Z-Z0)^2+Centroid_C1*(Z-Z0)+Centroid_C0
Here Z and Z0 are in unit of micron, the code wil convert
Centroid_C2(1&0) into the unit in the slmulation
GAMMMA = Lorentz factor
VDX(Y&Z) = drift velocity of the beam, in unit of c
FST = Time for free-streaming before the beam enters the plasma.
In unit of 1/Omega_p.
&Beam
Center_X = 30000, Center_Y = 24500, Center_Z =4000000
SIGMA_X = 5000, SIGMA_Y = 5000, SIGMA_Z = 750000
BEAM_CHARGE = 3E11,
EMITTANCE_X = 17478.72,EMITTANCE_Y = 17478.72,ENERGY_DIFF =
1E-3
Centroid_C2X = 0, Centroid_C1X = 0.0, Centroid_C0X = 0
Centroid_C2Y = 0, Centroid_C1Y = 0.0, Centroid_C0Y = 0
GAMMA = 17478.72, m_ratio = 1836,
VDX = 0.0, VDY = 0.0, VDZ = 0.0
/
------------Twiss Parameters-------------------------
Twiss parameters of the beam, if USE_TWISS_PARAS is
set to true, SIGMA_X, SIGMA_Y, Centroid_C[2/1/0][X/Y] ,
FST are ignored.
unit of BETA is meter.
&Twiss
USE_TWISS_PARAS=.false.
ALPHA_X = -0.71, ALPHA_Y = -0.71
BETA_X = 4.21, BETA_Y = 4.21
/
------------Plasma Parameters-----------------------
Plasma density in unit of cm-3
VT2X(Y) = thermal velocity of the plasma electrons, in unit of c
VD2X(Y) = drift velocity of the plasma electrons, in unit of c
Non_Neutral_Factor = - Ion density/electron density,
Non_Neutral_Factor = 1 for neutral plasma
Non_Neutral_Factor = 0 for pure electron cloud
Effective only when conducting boundary condition
is set.
&Plasma
PLASMA_DENSITY=1.055e7
VT2X=0.0, VT2Y=0.0
Non_Neutral_Factor = 0.0
/
------------Simulation time---------------------------
TEND = Total time, DT = TimeStep
In unit of 1/Omega_p.
&Simulation_time
TEND = 2.032307539949392e+06, DT = 2.39658908012900,
/
------------ Diagnostic -------------------------------
DFPSI, DFPHI, DFB, DFP, DFBC are the intevals in unit of timestep
for dumping PSI, PHI, beam and plasma density, beam centroid,
respectively.
PHISLICE, PSISLICE, QEBSLICE, QEPSLICE specify whether to dump
data of the whole 3D space or just one slice of it.
PHI(PSI,QEB,QEP)X0, if not zero, specify which Y-Z slice to dump.
PHI(PSI,QEB,QEP)Y0, if not zero, specify which X-Z slice to dump.
PHI(PSI,QEB,QEP)Z0, if not zero, specify which X-Y slice to dump.
BC_DIAG_RES specify the number of slices along Z direction for
beam centroid calculation.
&Potential_Diag
DFPHI=1000000, PHISLICE=.true. , PHIX0=0 , PHIY0=0, PHIZ0=4000000,
DFPSI=1000000, PSISLICE=.false. , PSIX0=0 , PSIY0=0, PSIZ0=0
/
&Beam_Diag
DFB=1000000, QEBSLICE=.true. , QEBX0=30000 , QEBY0=24500,
QEBZ0=0,
DFBC=1000000000, BC_DIAG_RES=128
/
------------ Diagnostic -------------------------------
DUMP_PHA: switch to turn on phase space diagnostics
DFPHA: intevals in unit of timestep for dumping phase space
DSAMPLE : spacing of sampling
&Beam_Phase_Space_Diag
DUMP_PHA_BEAM = .true. , DFPHA_BEAM=300000, DSAMPLE_BEAM =
1
/
&Plasma_Diag
DFP=100000, QEPSLICE=.true. , QEPX0= 30000, QEPY0=24500, QEPZ0=0
/
&Plasma_Phase_Space_Diag
DUMP_PHA_PLASMA = .true. , DFPHA_PLASMA =100000,
DSAMPLE_PLASMA = 1
/
------------ Corrections -------------------------------
To turn on a specific correction, set it to .true.
DOVPAR: include parallel velocity of plasma electron
DOREL: use relativistic pusher for plasma electron
DOCURRENT: in parallel current of plasma electron
DOPARPUSH : push beam in Z.
&Corrections
DOVPAR=.false., DOREL=.false., DOCURRENT=.false.
DOPARPUSH =.false.
emf=1
spacecharge = 1, dispersion = 1
/
------------ Restart file -------------------------------
READ_RST_FILE specify a restart run and RST_TIMESTEP
which timestep to begin the restart run
DUMP_RST_FILE control restart file dumping and DFRST
is the dumping frequency
&Restart_File
READ_RST_FILE = .true., RST_TIMESTEP = 0
DUMP_RST_FILE = .true., DFRST=5000
/
------------Optimization Coefficencies-------------
INTERNAL DATA. DO NOT CHANGE!
&Optimization
INORDER = 1, POPT = 1, DOPT = 2, DJOPT = 1
SORTIME_2D = 25, SORTIME_3D = 25
/
----------Circular Accelerator Parameters-----------------------------------
Q is the tune shift of the Accelerator
Chrm is the chromaticity of the acclerator
R0 is the circumference of the accelerator(in Km)
&Circular_Accelerator_Parameters
Qx=26.42
Qy=25.41
Chrmx=-5
Chrmy=-8
R0=0.52830194
gamt=23
eta=-2.13E-3
Qs=0.0097
ecl = 1
omx = 0
omy = 577.5
omz = 0
LS = 46.10
LB = 12
single_kick = 0
num_kick = 1
cent_dump = 60
/
(3) ERL at Cornell
Inputdeck Description------------------
&Input_File
Version = 040202
/
--------------Simulation System-----------------------
Simulation system (in unit of micron) = BOX_X * BOX_Y * BOX_Z
Total grids = (2^INDX) * (2^INDY) * (2^INDZ)
Total beam particles = NPX * NPY * NPZ
Total plasma particles in each slice = NP2 * NP2
&Simulation_Sys
indexgroups = 3
BOX_X=500, BOX_Y=500, BOX_Z=6000,
INDX = 8, INDY = 8, INDZ = 8
NPX = 128, NPY = 64, NPZ = 128
NP2 = 512
/
--------------Boundary Condition----------------------
Choose between 'periodic' and 'conducting'.
&Boundary
SBOUNDARY = 'conducting'
/
-------------Beam Parameters------------------------
(Center_X, Center_Y, Center_Z) = Position of the
center of the beam
(SIGMA_X, SIGMA_Y, SIGMA_Z) = Spot Size of the beam,
in unit of micron
BEAM_CHARGE = Number of electrons in the beam, peak
charge density is BEAM_CHARGE/[(2*PI)^(3/2)]/(SIGMA_X*
SIGMA_Y*SIGMA_Z)
(EMITTANCE_X, EMITTANCE_Y) = Normalized emittance of the
beam in unit of mm*mrad, thermal velocity of the beam =
emittance/(gamma*sigma)
ENERGY_DIFF = DELTA_GAMMA/GAMMA, logitudinal thermal
velocity of the beam is DELTA_BETA_Z = ENERGY_DIFF/
(GAMMA*GAMMA)
Beam centroid is described by parabolic function
Centroid_C2*(Z-Z0)^2+Centroid_C1*(Z-Z0)+Centroid_C0
Here Z and Z0 are in unit of micron, the code wil convert
Centroid_C2(1&0) into the unit in the slmulation
GAMMMA = Lorentz factor
m_ratio = mass_particle/mass_electron
VDX(Y&Z) = drift velocity of the beam, in unit of c
FST = Time for free-streaming before the beam enters the plasma.
In unit of 1/Omega_p.
&Beam
Center_X = 250, Center_Y = 250, Center_Z =3000
SIGMA_X = 47.43, SIGMA_Y = 47.43, SIGMA_Z = 600
BEAM_CHARGE = -5E8,
EMITTANCE_X = 2.9304,EMITTANCE_Y = 2.9304,ENERGY_DIFF = 2E-4
Centroid_C2X = 0, Centroid_C1X = 0.0, Centroid_C0X = 0
Centroid_C2Y = 0, Centroid_C1Y = 0.0, Centroid_C0Y = 0
GAMMA = 9768, m_ratio = 1,
VDX = 0.0, VDY = 0.0, VDZ = 0.0
/
------------Twiss Parameters-------------------------
Twiss parameters of the beam, if USE_TWISS_PARAS is
set to true, SIGMA_X, SIGMA_Y, Centroid_C[2/1/0][X/Y] ,
FST are ignored.
unit of BETA is meter.
&Twiss
USE_TWISS_PARAS=.false.
ALPHA_X = -0.71, ALPHA_Y = -0.71
BETA_X = 4.21, BETA_Y = 4.21
/
------------Plasma Parameters-----------------------
Plasma density in unit of cm-3
VT2X(Y) = thermal velocity of the plasma electrons, in unit of c
VD2X(Y) = drift velocity of the plasma electrons, in unit of c
Non_Neutral_Factor = - Ion density/electron density,
Non_Neutral_Factor = 1 for neutral plasma
Non_Neutral_Factor = 0 for pure electron cloud
Effective only when conducting boundary condition
is set.
&Plasma
PLASMA_DENSITY=2e7
VT2X=0.0, VT2Y=0.0
Non_Neutral_Factor = 0.0
/
------------Simulation time---------------------------
TEND = Total time, DT = TimeStep
In unit of 1/Omega_p.
&Simulation_time
TEND = 1.264466651587330e+05, DT = 1.26446665158733,
/
------------ Diagnostic -------------------------------
DFPSI, DFPHI, DFB, DFP, DFBC are the intevals in unit of timestep
for dumping PSI, PHI, beam and plasma density, beam centroid,
respectively.
PHISLICE, PSISLICE, QEBSLICE, QEPSLICE specify whether to dump
data of the whole 3D space or just one slice of it.
PHI(PSI,QEB,QEP)X0, if not zero, specify which Y-Z slice to dump.
PHI(PSI,QEB,QEP)Y0, if not zero, specify which X-Z slice to dump.
PHI(PSI,QEB,QEP)Z0, if not zero, specify which X-Y slice to dump.
BC_DIAG_RES specify the number of slices along Z direction for
beam centroid calculation.
&Potential_Diag
DFPHI=10000, PHISLICE=.true. , PHIX0=250 , PHIY0=250, PHIZ0=0,
DFPSI=10000, PSISLICE=.true. , PSIX0=250 , PSIY0=250, PSIZ0=0
/
&Beam_Diag
DFB=10000, QEBSLICE=.true. , QEBX0=250 , QEBY0=250, QEBZ0=3000,
DFBC=1000000000, BC_DIAG_RES=128
/
------------ Diagnostic -------------------------------
DUMP_PHA: switch to turn on phase space diagnostics
DFPHA: intevals in unit of timestep for dumping phase space
DSAMPLE : spacing of sampling
&Beam_Phase_Space_Diag
DUMP_PHA_BEAM = .true. , DFPHA_BEAM=300000, DSAMPLE_BEAM = 1
/
&Plasma_Diag
DFP=1000, QEPSLICE=.true. , QEPX0= 0, QEPY0=0, QEPZ0=0
/
&Plasma_Phase_Space_Diag
DUMP_PHA_PLASMA = .true. , DFPHA_PLASMA =100000, DSAMPLE_PLASMA
= 1
/
------------ Corrections -------------------------------
To turn on a specific correction, set it to .true.
DOVPAR: include parallel velocity of plasma electron
DOREL: use relativistic pusher for plasma electron
DOCURRENT: in parallel current of plasma electron
DOPARPUSH : push beam in Z.
&Corrections
DOVPAR=.false., DOREL=.false., DOCURRENT=.false.
DOPARPUSH =.false.
emf=1
spacecharge = 0, dispersion = 0
/
------------ Restart file -------------------------------
READ_RST_FILE specify a restart run and RST_TIMESTEP
which timestep to begin the restart run
DUMP_RST_FILE control restart file dumping and DFRST
is the dumping frequency
&Restart_File
READ_RST_FILE = .true., RST_TIMESTEP = 0
DUMP_RST_FILE = .true., DFRST=5000
/
------------Optimization Coefficencies-------------
INTERNAL DATA. DO NOT CHANGE!
&Optimization
INORDER = 1, POPT = 1, DOPT = 2, DJOPT = 1
SORTIME_2D = 25, SORTIME_3D = 25
/
----------Circular Accelerator Parameters-----------------------------------
Q is the tune shift of the Accelerator
Chrm is the chromaticity of the acclerator
R0 is the circumference of the accelerator(in Km)
&Circular_Accelerator_Parameters
Qx=63.66
Qy=63.66
Chrmx=0
Chrmy=0
R0=0.47746482927569
gamt=23
eta=-1.048e-8
Qs=0
ecl = 1
omx = 0
omy = 0
omz = 0
LS = 83.3
LB = 45
single_kick = 0
num_kick = 1
cent_dump = 1
/
(4) Input deck for benchmarking with WARP
Inputdeck Description------------------
&Input_File
Version = 040202
/
--------------Simulation System-----------------------
Simulation system (in unit of micron) = BOX_X * BOX_Y * BOX_Z
Total grids = (2^INDX) * (2^INDY) * (2^INDZ)
Total beam particles = NPX * NPY * NPZ
Total plasma particles in each slice = NP2 * NP2
&Simulation_Sys
BOX_X=20000, BOX_Y=10000, BOX_Z=78000,
INDX = 8, INDY = 8, INDZ = 8
NPX = 128, NPY = 128, NPZ = 128
NP2 = 512
/
--------------Boundary Condition----------------------
Choose between 'periodic' and 'conducting'.
&Boundary
SBOUNDARY = 'conducting'
/
-------------Beam Parameters------------------------
(Center_X, Center_Y, Center_Z) = Position of the
center of the beam
(SIGMA_X, SIGMA_Y, SIGMA_Z) = Spot Size of the beam,
in unit of micron
BEAM_CHARGE = Number of electrons in the beam, peak
charge density is BEAM_CHARGE/[(2*PI)^(3/2)]/(SIGMA_X*
SIGMA_Y*SIGMA_Z)
(EMITTANCE_X, EMITTANCE_Y) = Normalized emittance of the
beam in unit of mm*mrad, thermal velocity of the beam =
emittance/(gamma*sigma)
ENERGY_DIFF = DELTA_GAMMA/GAMMA, logitudinal thermal
velocity of the beam is DELTA_BETA_Z = ENERGY_DIFF/
(GAMMA*GAMMA)
Beam centroid is described by parabolic function
Centroid_C2*(Z-Z0)^2+Centroid_C1*(Z-Z0)+Centroid_C0
Here Z and Z0 are in unit of micron, the code wil convert
Centroid_C2(1&0) into the unit in the slmulation
GAMMMA = Lorentz factor
VDX(Y&Z) = drift velocity of the beam, in unit of c
FST = Time for free-streaming before the beam enters the plasma.
In unit of 1/Omega_p.
&Beam
Center_X = 10000, Center_Y = 5000, Center_Z =39000
SIGMA_X = 623, SIGMA_Y = 164, SIGMA_Z = 13000
BEAM_CHARGE = 10E10,
EMITTANCE_X = 141,EMITTANCE_Y = 8.806,ENERGY_DIFF = 0
Centroid_C2X = 0, Centroid_C1X = 0.0, Centroid_C0X = 0
Centroid_C2Y = 0, Centroid_C1Y = 0.0, Centroid_C0Y = 0
GAMMA = 5871,
VDX = 0.0, VDY = 0.0, VDZ = 0.0
/
------------Twiss Parameters-------------------------
Twiss parameters of the beam, if USE_TWISS_PARAS is
set to true, SIGMA_X, SIGMA_Y, Centroid_C[2/1/0][X/Y] ,
FST are ignored.
unit of BETA is meter.
&Twiss
USE_TWISS_PARAS=.false.
ALPHA_X = -0.71, ALPHA_Y = -0.71
BETA_X = 4.21, BETA_Y = 4.21
/
------------Plasma Parameters-----------------------
Plasma density in unit of cm-3
VT2X(Y) = thermal velocity of the plasma electrons, in unit of c
VD2X(Y) = drift velocity of the plasma electrons, in unit of c
Non_Neutral_Factor = - Ion density/electron density,
Non_Neutral_Factor = 1 for neutral plasma
Non_Neutral_Factor = 0 for pure electron cloud
Effective only when conducting boundary condition
is set.
&Plasma
PLASMA_DENSITY=1e8
VT2X=0.0, VT2Y=0.0
Non_Neutral_Factor = 0.0
/
------------Simulation time---------------------------
TEND = Total time, DT = TimeStep
In unit of 1/Omega_p.
&Simulation_time
TEND =4.146901416817140e+08, DT = 6.37984833356483,
/
------------ Diagnostic -------------------------------
DFPSI, DFPHI, DFB, DFP, DFBC are the intevals in unit of timestep
for dumping PSI, PHI, beam and plasma density, beam centroid,
respectively.
PHISLICE, PSISLICE, QEBSLICE, QEPSLICE specify whether to dump
data of the whole 3D space or just one slice of it.
PHI(PSI,QEB,QEP)X0, if not zero, specify which Y-Z slice to dump.
PHI(PSI,QEB,QEP)Y0, if not zero, specify which X-Z slice to dump.
PHI(PSI,QEB,QEP)Z0, if not zero, specify which X-Y slice to dump.
BC_DIAG_RES specify the number of slices along Z direction for
beam centroid calculation.
&Potential_Diag
DFPHI=100000, PHISLICE=.true. , PHIX0=0 , PHIY0=0, PHIZ0=0,
DFPSI=100000, PSISLICE=.true. , PSIX0=0 , PSIY0=0, PSIZ0=0
/
&Beam_Diag
DFB=32000, QEBSLICE=.true. , QEBX0=10000 , QEBY0=5000, QEBZ0=0,
DFBC=1000000000, BC_DIAG_RES=128
/
------------ Diagnostic -------------------------------
DUMP_PHA: switch to turn on phase space diagnostics
DFPHA: intevals in unit of timestep for dumping phase space
DSAMPLE : spacing of sampling
&Beam_Phase_Space_Diag
DUMP_PHA_BEAM = .true. , DFPHA_BEAM=300000, DSAMPLE_BEAM = 1
/
&Plasma_Diag
DFP=32000, QEPSLICE=.true. , QEPX0= 10000, QEPY0=5000, QEPZ0=0
/
&Plasma_Phase_Space_Diag
DUMP_PHA_PLASMA = .true. , DFPHA_PLASMA =100000, DSAMPLE_PLASMA
= 1
/
------------ Corrections -------------------------------
To turn on a specific correction, set it to .true.
DOVPAR: include parallel velocity of plasma electron
DOREL: use relativistic pusher for plasma electron
DOCURRENT: in parallel current of plasma electron
DOPARPUSH : push beam in Z.
&Corrections
DOVPAR=.false., DOREL=.false., DOCURRENT=.false.
DOPARPUSH =.false.
emf=0
/
------------ Restart file -------------------------------
READ_RST_FILE specify a restart run and RST_TIMESTEP
which timestep to begin the restart run
DUMP_RST_FILE control restart file dumping and DFRST
is the dumping frequency
&Restart_File
READ_RST_FILE = .true., RST_TIMESTEP = 0
DUMP_RST_FILE = .true., DFRST=5000
/
------------Optimization Coefficencies-------------
INTERNAL DATA. DO NOT CHANGE!
&Optimization
INORDER = 1, POPT = 1, DOPT = 2, DJOPT = 1
SORTIME_2D = 25, SORTIME_3D = 25
/
----------Circular Accelerator Parameters-----------------------------------
Q is the tune shift of the Accelerator
Chrm is the chromaticity of the acclerator
R0 is the circumference of the accelerator(in Km)
&Circular_Accelerator_Parameters
Qx=21.649
Qy=19.564
Chrmx=0
Chrmy=0
R0=0.3501408
gamt=23
eta=1.24E-3
Qs=0
ecl = 1
omx = 0
omy = 0
omz = 0
LS = 83.3
LB = 45
single_kick = 0
num_kick = 1
cent_dump = 10
/
Abstract
Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC.