INTEGRATED RESERVOIR CHARACTERIZATION FOR
UNCONVENTIONAL RESERVOIRS USING SEISMIC, MICROSEISMIC
AND WELL LOG DATA
By
Debotyam Maity
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PETROLEUM ENGINEERING)
August 2013
Copyright 2013 Debotyam Maity
Dedication
I dedicate my dissertation work to my beloved parents, Malabika and
Nabendu Bikash Maity, whose encouragement and support helped me every
step of the way.
Acknowledgements
First and foremost, I would like to thank my advisor Prof. Fred Aminzadeh. His deep
insights and understanding of exploration geophysics helped me immensely in my
research. He offered valuable ideas and suggestions which provided critical
insights and direction towards the completion of my research work. I would like to thank him
for his valuable time and resources that have helped make my Ph.D. experience highly
productive. He has served as an excellent example to emulate and has shown the path to
success, both in academia and in industry.
I am also grateful to Prof. Iraj Ershaghi for his support and advice throughout my
academic program. I would also like to acknowledge Prof. Charles Sammis, Prof.
Behnam Jafarpour and Prof. Muhammad Sahimi for their valuable inputs at various
stages of my research. In addition, I would like to acknowledge other graduate students
for their valuable ideas and suggestions that have been instrumental in helping me with
my research. In particular, I would like to thank Hamid Reza Jahangiri, Reza Rastegar,
Parham Ghods, Tayeb Ayatollahy Tafti, Mohammad Javaheri, Mohammad Evazi,
Shahram Farhadnia, Hasan Shojaie, Asal Rahimi Zeynal, Arman Khodabakhshnejad,
Qiaosi Chen and Kartik Suma Kumar. I would also like to thank all the staff members for
their support during my studies, in particular: Aimee Barnard, Idania Takimoto, Martin
Olekszyk and Juli Legat.
This work was supported in part by the Department of Energy (ARRA
Grant 10379400 to the University of Southern California). In addition, valuable support
was provided by Ormat Inc., SR2020 Inc. and Gas Technology Institute (GTI). I would
like to acknowledge the developers of Matlab, Visual Studio, SGeMS and OpendTect for
access to their software platforms or for providing us with the necessary academic licenses. I
would like to particularly thank Martin Karrenbach from SR2020 Inc. for his expert
suggestions and support for my research, including access to in-house software packages
and expertise. I would like to acknowledge the support extended by Leon Thomsen in
helping me understand microseismic data processing techniques, particularly phase detection
and rotation as precursors to improved autopicking workflows. I would like to thank Ezra
Zemach and Patrick Walsh from Ormat Inc. for providing access to their valuable
geophysical and well log datasets and helping me understand the underlying framework for
characterization particular to the study area. Finally, I would like to thank Iraj
Salehi for providing valuable resources to improve our understanding of monitoring array
designs.
Publications produced from this work
1. Maity, D., Aminzadeh, F., Salehi, I. & Karrenbach, M. 2013. Integrated approach
towards multi-array microseismic survey design. To be presented at the SEG Annual
International Meeting, Houston, Texas, 26 September.
2. Maity, D. & Aminzadeh, F. 2013. Fracture Characterization in Unconventional
Reservoirs Using Active and Passive Seismic Data with Uncertainty Analysis through
Geostatistical Simulation. Paper SPE 166307-MS to be presented at the Annual
Technical Conference and Exhibition, New Orleans, Louisiana, 30 September.
3. Maity, D. & Aminzadeh, F. 2013. Novel Fracture Zone Identifier Attribute using
Geophysical and Well Log Data for Unconventional Reservoirs. Interpretation
(submitted).
4. Maity, D. & Aminzadeh, F. 2013. Novel Hybrid ANN Autopicking Workflow for
Passive Seismic Data. Geophys. Prospect. (Submitted).
5. Maity, D. & Aminzadeh, F. 2013. A New Approach towards Optimized Passive
Seismic Survey Design with Simultaneous Borehole and Surface Measurements. Oral
presentation given at Joint PSAAPG / SPE / PSSEPM / PCS-SEG Conference,
Monterey, California, 19 - 25 April.
6. Maity, D., Chen, Q. & Aminzadeh, F. 2013. Integrated fracture characterization and
associated error evaluation using geophysical data for unconventional reservoirs, Oral
presentation given at Joint PSAAPG / SPE / PSSEPM / PCS-SEG Conference,
Monterey, California, 19 - 25 April.
7. Aminzadeh, F., Tafti, T. A. & Maity, D. 2012. An Integrated Methodology for
Subsurface Fracture Characterization Using Microseismic Data: A Case Study at the
NW Geysers. Computers & Geosciences 54: 39 – 49.
http://dx.doi.org/10.1016/j.cageo.2012.10.015.
8. Maity, D., Aminzadeh, F. and Karrenbach, M. 2012. A novel hybrid ANN autopicker
for hydrofrac data: a comparative study. SEG Annual International Meeting, Las
Vegas, NV, 1 – 5. http://dx.doi.org/10.1190/segam2012-0805.1.
9. Maity, D. and Aminzadeh, F. 2012. Framework for Time Lapse Fracture
Characterization Using Seismic, Microseismic & Well Log Data. SEG Annual
International Meeting, Las Vegas, NV, 1 – 6. http://dx.doi.org/10.1190/segam2012-0387.1.
10. Maity, D. and Aminzadeh, F. 2012. Reservoir Characterization of an Unconventional
Reservoir by Integrating Microseismic, Seismic and Well-log Data. Paper SPE
154339-MS presented at the SPE Western Regional Meeting, Bakersfield, California,
21 - 23 March. http://dx.doi.org/10.2118/154339-MS.
11. Aminzadeh, F., Maity, D. and Tafti, T. A. 2011. Artificial neural network based
autopicker for micro-earthquake data. Expanded Abstracts, SEG Annual International
Meeting, San Antonio, Texas, 1623 – 1626. http://dx.doi.org/10.1190/1.3627514.
12. Maity, D. 2011. An integrated approach towards fracture characterization using
microseismic, seismic and well log data. Presented at the SPE Western Regional
Meeting Student Paper Contest (Ph.D. division), Anchorage, Alaska, 7 - 11 May.
(The paper was adjudged 2nd best at the contest.)
13. Aminzadeh, F., Tafti, T. A., Maity, D. et al. 2010. Analysis of microseismicity using
fuzzy logic and fractals for fracture network characterization. Presented at the AGU
fall meeting, San Francisco, California, 13 - 17 December.
14. Aminzadeh, F., Tafti, T. A. and Maity, D. 2010. Characterizing Fractures in Geysers
Geothermal Field Using Soft Computing. Transactions, GRC Annual Meeting,
Sacramento, California, 1193 – 1198.
Abbreviations
The following list describes the abbreviations and acronyms used
throughout this document. Non-standard acronyms, if used, are not part of this list.
AI: Artificial Intelligence
AIC: Akaike Information Criterion
ANN: Artificial Neural Network
ART: Approximate Ray Tracer
AVO: Amplitude vs. Offset
cdf: cumulative distribution function
CFM: Continuous Fracture Modeling
COSGSIM: Sequential Gaussian Co-Simulation
CSP: Cross-well Seismic Profile
DFN: Discrete Fracture Network
DWT: Discrete Wavelet Transform
EGS: Enhanced Geothermal Systems
EPS: Edge Preserving Smoothening
FFT: Fast Fourier Transform
FMI: Formation Micro Imager
FZI: Fracture Zone Identifier
GA: Genetic Algorithm
HP: High Pass
HTI: Horizontal Transverse Isotropy
LBNL: Lawrence Berkeley National Laboratory
LP: Low Pass
LSE: Least Squared Error
LVQ: Learning Vector Quantization
MEQ: Micro Earthquake
Pa: Pascal
pdf: probability density function
RBF: Radial Basis Function
RBFNN: Radial Basis Function Neural Network
S1: fast shear arrival
SCEC: Southern California Earthquake Center
SGeMS: Stanford Geostatistical Modeling Software
SGSIM: Sequential Gaussian Simulation
SNR: Signal to Noise Ratio
SRV: Stimulated Reservoir Volume
STA/LTA: ratio of Short Term Average to Long Term Average
SUDS: Seismic Unified Data System
USGS: U.S. Geological Survey
VSP: Vertical Seismic Profile
Table of Contents
Dedication ii
Acknowledgements iii
Publications produced from this work v
Abbreviations viii
List of Tables xiii
List of Figures xiv
Abstract xxv
Chapter 1: Introduction 1
1.1 Background ....................................................................................................3
1.2 Motivation ......................................................................................................7
1.3 Research objectives ........................................................................................9
1.4 Organization of manuscript ..........................................................................11
Chapter 2: Test beds and datasets 13
2.1 Study areas and objectives ...........................................................................13
2.2 The Geysers geothermal field.......................................................................13
2.3 Brawley geothermal field .............................................................................15
2.4 Data ..............................................................................................................18
2.5 Application of datasets .................................................................................19
Chapter 3: MEQ - Preliminary Data Analysis 21
3.1 Introduction ..................................................................................................21
3.2 Microseismic dataset ....................................................................................24
3.3 Available autopicking algorithms - STA/LTA & AIC .................................26
3.4 Results with STA/LTA & AIC .....................................................................28
3.5 Identified Bottlenecks ..................................................................................30
Chapter 4: Automated Phase Arrival Detection Using Artificial Neural Networks 32
4.1 Background ..................................................................................................32
4.2 Data preprocessing .......................................................................................36
4.3 Attribute analysis & ANN design for P phase autopicking..........................37
4.4 Mathematical description of attributes .........................................................43
4.5 S phase autopicking ......................................................................................47
4.6 Autopicker results: sample comparative analysis ........................................52
4.7 Summary ......................................................................................................54
Chapter 5: Hypocenters and Preliminary Velocity Modeling 59
5.1 Introduction ..................................................................................................59
5.2 Hypoinverse: data preparation & background.............................................60
5.3 Hypoinverse results ......................................................................................66
5.4 Comparative analysis with contemporary picking algorithms .....................70
5.5 Limitations with algorithm & possible improvements .................................71
5.6 Optimization of MEQ survey design based on Rays and Moments ............73
5.7 SimulPS for velocity model generation .......................................................91
5.8 Summary ......................................................................................................98
Chapter 6: Conventional Seismic Data Analysis 100
6.1 Introduction ................................................................................................100
6.2 Data preparation .........................................................................................104
6.3 Attribute mapping ......................................................................................107
6.4 Log-seismic derived property prediction ...................................................112
6.5 Integrated attribute analysis .......................................................................117
6.6 Discontinuity mapping using filters ...........................................................119
6.7 Attribute limitations & ANN derived discontinuity mapping ....................122
6.8 Results & discussion ..................................................................................126
Chapter 7: Microseismic Derived Property Models 131
7.1 Introduction ................................................................................................131
7.2 Cokriged velocity models ...........................................................................132
7.3 Sequential Gaussian Co-Simulation ...........................................................140
7.4 Property estimation ....................................................................................149
7.5 Results & discussion ..................................................................................167
Chapter 8: Hybrid FZI Attributes 169
8.1 Introduction ................................................................................................169
8.2 Theoretical Background .............................................................................173
8.3 Mapped properties ......................................................................................177
8.4 Conclusion ..................................................................................................180
Chapter 9: Integrated Analysis 182
9.1 Introduction ................................................................................................182
9.2 3D FZI mapping and interpretation ............................................................184
9.3 Integrated analysis ......................................................................................191
9.4 Summary ....................................................................................................207
Chapter 10: Summary and Conclusions 209
Chapter 11: Future Work 212
References 214
Appendix A 225
List of Tables
5-1: Issues with microseismic data collection scheme used in this study
and possible solutions ...................................................................................72
6-1: Examples of seismic attributes including their mathematical
description and possible use in reservoir characterization workflows .......109
7-1: Typical shear and compressional wave travel times (μs/m) .......................166
List of Figures
1-1: (a) MEQ event cloud mapping for fracture characterization and (b)
complex interactions with preexisting fractures leading to complex MEQ
cloud shapes (Maxwell et al., 2005) ....................................................................... 5
1-2: NW Geysers map view at reference depth (normal temperature zone)
showing (a) VP, (b) VS, (c) VP/VS and the properties at a second reference
depth (high temperature zone) with (d) VP, (e) VS and (f) VP/VS estimates
(Aminzadeh et al., 2012) ......................................................................................... 6
1-3: Integrated workflow for fracture characterization in unconventional
reservoir settings using two different geophysical data types (3D surface
seismic and microseismic) as well as well logs .................................................... 10
2-1: Current operational wells drilled by Ormat (Ormat Inc.) ..................................... 16
2-2: Integrated display of survey area with location of the study area (surface
footprint) including MEQ recording stations, well locations and 3D
surface seismic survey (receivers) patch used ...................................................... 18
3-1: Seismic sensor used for the local passive seismic monitoring array
(Source: www.eentec.com) ................................................................................... 24
3-2: Sample "xyz.dmx" file showing specified format and stored data ....................... 26
3-3: Sample single event file from dataset used in comparative analysis of
contemporary autopickers ..................................................................................... 27
3-4: AIC and STA/LTA based autopicker - preliminary results for a sample
triggered file (tagged as occurring on 12th April, 2009 at 12:48 hours).
Results over a data subset indicate possible issues with obtaining the
minimum number of picks required for inversion for location/ velocity
unique to this particular dataset ............................................................................ 29
3-5: Comparison between manual (green) and AIC (red) picks from vertical
components with inserts showing magnified view close to the pick. Larger
inserts indicate higher phase arrival errors and consequently more severe
effects on inversion algorithms. The traces are normalized for ease of
evaluation .............................................................................................................. 31
4-1: Integrated workflow for P and S phase arrival detection using ANN
autopicker ............................................................................................................ ..32
4-2: Full waveform modeling of MEQ event in complex velocity field showing
wave propagation effects ...................................................................................... 35
4-3: Pre-processing steps to improve SNR and remove other acquisition
artifacts .................................................................................................................. 37
4-4: Sample attribute maps used to choose ANN input nodes including the
original synthetic gather from one of the datasets used in our study .................... 38
4-5: Comparative analysis of actual trace with evaluated attributes for
improved attribute design and selection (A complete list of attributes
studied is provided in the accompanying text) ..................................................... 39
4-6: Sample supervised ANN implementation to design an autopicker. Design
elements include the network topology, activation functions and inputs. ............ 40
4-7: Sample ANN output classification and seismic trace with arrival match for
(a) high SNR and (b) low SNR scenarios ............................................................. 42
4-8: (a) Seismic stacked section overlaid on training attribute (A4:
AbsSumVar) and (b) ANN output probability map (autopicker
characteristic function) highlighting successful S phase detections ..................... 49
4-9: Sample composite gather with multiple vertical component signals and
ANN derived P phase arrivals (blue inserts) for reference. The traces are
stacked at arrivals for visualization (red inserts indicate actual arrival time
w.r.t. trigger) ......................................................................................................... 50
4-10: Sample composite gather with multiple horizontal component signals and
picked S phase arrivals (blue inserts). The traces are stacked at arrivals for
visualization (red inserts indicate actual arrival time w.r.t. trigger) ..................... 50
4-11: Sample synthetic gather with 3 component signals and picked P & S phase
arrivals. The noise increases from (a) SNR = 100, (b) SNR = 50, (c) SNR
= 30, (d) SNR = 10 and (e) SNR = 5 .................................................................... 51
4-12: Window definitions for QC computation (both P and S phase data) .................... 52
4-13: Results from the validation dataset. Plot (a) shows the variations in QF's
for P phase picks and plot (b) shows the variation in QF's for S phase
picks. QF's are combined using cut-offs to select picks ....................................... 56
4-14: Match between ANN and contemporary (LBNL) autopicker for both P and
S phase arrivals falling within specific window as per discussion. 1
implies perfect match while 0 implies no match. Plot (a) shows the
composite results for P phase picks (~ 136 traces) and plot (b) shows the
results for S phase detections (~221 traces) .......................................................... 57
4-15: (a) Picks based on ANN autopicking workflow and (b) comparative
analysis with manual picks for sample teleseismic dataset .................................. 58
5-1: Sample phase card selection (manual override window to bypass auto-
select for improved performance). Black traces are the vertical
components and blue ones are the horizontal components ................................... 61
5-2: Sample phase card used as input for hypocentral inversion for location .............. 62
5-3: Sample hypoinverse batch file with necessary commands for location
inversion. Three letter commands are based on hypoinverse syntax .................... 63
5-4: Event catalog generated from Hypoinverse inversion algorithm ......................... 65
5-5: MEQ spread generated using supplied catalog (Zapata Inc.) with both
hypocenters and epicenters mapped for reference. (STA/LTA picking
routine and Hypo71 location inversion algorithm) ............................................... 68
5-6: (a) MEQ event cloud with sample injection and production well locations
for reference. (b) Sample event subset with RMS errors plotted on the X-Y
axis ........................................................................................................................ 69
5-7: MEQ event cloud with compressional velocity (km/s) overlays at (a)
reference depth of 2.5 km and (b) reference longitude of -115.56. Event
clouds are concentrated around velocity anomalies indicative of triggered
events due to injection induced perturbations. ...................................................... 70
5-8: Relocation results on the data subset used for testing. Old events used the
AIC based autopicker while the new events use the ANN
autopicker .............................................................................................................. 71
5-9: Background on selected design elements with (1) showing workflow for
optimized ray coverage and (2) showing the workflow for optimized
moment tensor inversion. ...................................................................................... 74
5-10: Setup with surface, subsurface vertical & horizontal arrays with pseudo
sources (green inserts)........................................................................................... 77
5-11: Passive seismic survey optimization workflow design ......................................... 78
5-12: Sample runs with non-variant velocity model, surface receiver array (grey/
red dots) and source at central vertical (green dot). Subplot (a) shows
results with full weight for ray-trace focusing, (b) shows results for full
weight on moment tensor inversion and (c) shows results with equal
weight assigned to the two components. Red dotted locations are those
finally selected based on parameter.. .................................................................... 80
5-13: Sample runs with complex velocity model, surface receiver array and
sources at lateral offset. Subplot (a) shows results with full weight for ray-
trace focusing, (b) shows results for full weight on moment tensor
inversion and (c) shows results with equal weight assigned to the two
components. Red dotted locations are those finally selected based on
parameter............................................................................................................... 81
5-14: Sample runs with complex velocity model, vertical wellbore receiver array
and sources at lateral offset. Subplot (a) shows results with full weight for
ray-trace focusing, (b) shows results for full weight on moment tensor
inversion and (c) shows results with equal weight assigned to the two
components. Red dotted locations are those finally selected based on
parameter............................................................................................................... 84
5-15: Sample runs with complex velocity model, surface + vertical wellbore
receiver array and sources at lateral offset. Subplot (a) shows results with
full weight for ray-trace focusing, (b) shows results for full weight on
moment tensor inversion and (c) shows results with equal weight assigned
to the two components. Red dotted locations are those finally selected
based on parameter ............................................................................................... 85
5-16: Sample runs with complex velocity model, surface + horizontal wellbore
receiver array and sources at lateral offset. Subplot (a) shows results with
full weight for ray-trace focusing, (b) shows results for full weight on
moment tensor inversion and (c) shows results with equal weight assigned
to the two components. Red dotted locations are those finally selected
based on parameter ............................................................................................... 86
5-17: Sample runs with complex velocity model, vertical wellbore receiver array
and sources at lateral offset surrounding wellbore. Subplot (a) shows
results with full weight for ray-trace focusing, (b) shows results for full
weight on moment tensor inversion and (c) shows results with equal
weight assigned to the two components. Red dotted locations are those
finally selected based on parameter.. .................................................................... 87
5-18: Perturbations in the radii and dip of the plane of arcs to sample the entire
volume of interest (Thurber, 1983) ....................................................................... 94
5-19: Sample data point from Earthquake input file for SimulPS run ........................... 96
5-20: Spatial (X-Y) expanse of SimulPS 'finite node grid' with stations/ events
for reference .......................................................................................................... 96
5-21: Original sparse models (a) showing VP and (b) showing VS at a depth of
1 km. Dotted inserts show the actual spatial zone of interest based on 3D
surface seismic acquisition spread ........................................................................ 97
6-1: (a) 3D fracture porosity model using ANN and (b) Comparison of FMI log
and fracture ......................................................................................................... 101
6-2: (a) Most positive curvature co-rendered with coherence with identified
Karst, (b) Coherence and amplitude with identified faults, (c) vector
reflector convergence, Sobel filter similarity and amplitude showing
opposite convergence on sides of rotated fault blocks and (d) reflector
rotation showing faults and branches (Gupta et al. 2011) .................................. 102
6-3: Workflow for 3D seismic attribute studies and interpretation ............................ 103
6-4: Sample sectional view of seismic data with available well control and
sample seismic attribute maps ............................................................................ 104
6-5: Workflow for well to seismic tie and Time/ Depth modeling ............................ 107
6-6: Seismic attribute list with properties they measure (Barnes 2006)..................... 110
6-7: (a) Q factor, (b) instantaneous frequency and (c) coherency maps at 1.2
km depth for reference ........................................................................................ 111
6-8: Impedance maps generated using (a) colored inversion with 2 sonic logs
(actual) and (b) with all sonic logs (pseudo logs). The figure shows
appreciable match obtained between the two indicating the validity of the
pseudo logs generated ..................................................................................... …114
6-9: Sample cross-plots showing extracted log properties with seismic
attributes close to the well tracks. Attributes showing perceptible
relationships are used for property prediction workflows .................................. 114
6-10: (a) Impedance map from colored inversion, (b) density map and (c)
porosity map at selected depth horizon of 1.24 km. Density and porosity
maps are obtained using ANN property prediction workflow ............................ 115
6-11: Comparison showing actual log and seismic derived log property
estimates at two well locations which were used as test cases for validation
of results .............................................................................................................. 116
6-12: Color blended multi-attribute map including Laplacian, edge preserving,
similarity and curvature attributes ...................................................................... 118
6-13: (a) shows the input step function, (b) is the noise added signal, (c) is the
result with regular smoothening and (d) is the result of EPS application
(Luo et al. 2001) .................................................................................................. 121
6-14: (a) Original seismic, (b) Laplacian edge enhancement and (c) Edge
preserving smoothening results. Note that these filters are run after
running the data through dip steered median filter ............................................. 121
6-15: (a) Sample attribute map with selected zones of interest (identified
discontinuities) for ANN training inputs and (b) Final ANN trained
discontinuity map indicating identifiable faults not seen easily on the
individual attribute maps..................................................................................... 123
6-16: (a) FZI1 and (b) FZI2 maps at a reference depth level. Note the subtle
differences where one map shows higher values and the other shows lower
values and vice versa ........................................................................................... 125
6-17: ANN derived discontinuity maps at (a) 800 m and (b) 1220 m. Confidence
levels (darker regions indicate higher confidence) of the trained ANN
networks at (a) 800 m and (b) 1220 m ................................................................ 128
6-18: Four sample production well tracks (a) Well A at 1220 m, (b) Well B at
1240 m, (c) Well C at 1420 m and (d) Well D at 760 m with discontinuity
maps at specified depths. Depths for display are selected based on known
high flow throughput observations (from perforated intervals) .......................... 129
6-19: Log derived (a) instantaneous frequency & (b) porosity (depth ≈ 1226 m) ....... 130
6-20: Seismic derived (a) FZI1 and (b) FZI2 attribute maps with well A
co-located (depth ≈ 1226 m) ................................................................................ 130
7-1: MEQ based property prediction workflow ......................................................... 132
7-2: Inversion uncertainty mapped at reference depths (0.5 km, 1.0 km & 1.5
km). Also shown are the relative well locations (green inserts represent
injection wells and black inserts represent production wells) ............................ 133
7-3: (a) VP and (b) VS estimates at a randomly selected depth level (1 km) for
reference. The data is extracted from the baseline velocity maps (section
5.7) using latitude/ longitude information and converted into In-line/
Cross-line format .................................................................................................. 134
7-4: (a) VP and (b) VS estimates at selected depth level (1 km) for reference.
The results are obtained by cokriging impedance map with velocity map
using impedance as the secondary variable ......................................................... 135
7-5: Velocity (VP) vs. Impedance cross-plots at (a) 0.5 km and (b) 1.0 km
validating its potential use for co-kriging or co-simulation ................................. 136
7-6: (a) VP model using ordinary kriging & (b) VP model using cokriging ............... 138
7-7: Variogram models including those for (a) VP estimation (primary
variable), (b) VS estimates (primary variable) and (c) co-variogram (VP vs.
Impedance) ........................................................................................................... 139
7-8: pdf & cdf of normal score transformed VP ......................................................... 142
7-9: pdf & cdf of normal score transformed VS ......................................................... 142
7-10: Workflow used for stochastic phase velocity modeling using impedance as
secondary data ..................................................................................................... 143
7-11: Final VP realization selected for analysis at reference depths (0.5 km, 1.0
km, 1.5 km, 2.0 km, 2.5 km & 3.0 km). Also shown are the estimated
simulation errors based on the standard deviation observed at each node
over all realizations .............................................................................................. 146
7-12: Final VS realization selected for analysis at reference depths (0.5 km, 1.0
km, 1.5 km, 2.0 km, 2.5 km & 3.0 km). Also shown are the estimated
simulation errors based on the standard deviation observed at each node
over all realizations .............................................................................................. 147
7-13: Normalized error plots showing variations across the 100 (a) VP and (b)
VS realizations selected for analysis. The realization showing minimum
variation compared with the original (sparse) velocity model is finally
selected for analysis ............................................................................................. 148
7-14: (a) VP/VS ratio, (b) Lame's parameter ‘μ’ and (c) Lame's parameter ‘λ’ at
three reference depths (0.5 km, 1.0 km & 1.5 km). Also shown are the
relative well locations (green inserts represent injection wells and black
inserts represent production wells) ...................................................................... 150
7-15: (a) Bulk Modulus, (b) Young's Modulus and (c) Poisson’s Ratio maps
at three reference depths (0.5 km, 1.0 km & 1.5 km). Also shown are the
relative well locations (green inserts represent injection wells and black
inserts represent production wells) ..................................................................... 151
7-16: (a) ϕ1, (b) ϕ2 and (c) ϕ3 as per discussion and (d) shows 3D seismic
derived porosity map (log-seismic mapping). We can easily observe major
features similar in the maps indicative of valid and usable inversion results ..... 155
7-17: (a) Relative extensional (VE) and (b) hydrostatic (VK) stress maps at
selected depth interval. Also shown are the relative well locations (green
inserts represent injection wells and black inserts represent production
wells) .................................................................................................................... 157
7-18: Graphical representation of fractures/ cracks using (a) Hudson's model and
(b) linear-slip model (infinite planes of weakness) ............................................. 159
7-19: (a) Tangential (ΔT) and (b) Normal (ΔN) weakness maps obtained for
sample reference depths of 0.5 km, 1.0 km & 1.5 km outlining structural
features not identifiable with velocity or Poisson’s ratios. We can clearly
observe anomalous weakness estimates in many zones of interest. Also
shown are the relative well locations (green inserts represent injection
wells and black inserts represent production wells) ............................................ 160
7-20: Estimated normalized overburden evaluated for three depths of reference
(0.5 km, 1.0 km and 1.5 km) including well locations (green inserts
represent injection wells and black inserts represent production wells) ............. 163
7-21: Estimated normalized pore pressure evaluated for three depths of
reference (0.5 km, 1.0 km and 1.5 km) including well locations (green
inserts represent injection wells and black inserts represent production
wells) ................................................................................................................... 163
7-22: Estimated normalized Minimum Principal Stress evaluated for three
depths of reference (0.5 km, 1.0 km and 1.5 km) including well locations
(green inserts represent injection wells and black inserts represent
production wells) ................................................................................................ 164
7-23: Estimated normalized fracture aperture evaluated for three depths of
reference (0.5 km, 1.0 km and 1.5 km) including well locations (green
inserts represent injection wells and black inserts represent production
wells) ................................................................................................................... 165
7-24: VP/VS cross-plot as a possible tool for fluid detection (dots). The lines
indicate the water lines for sandstone (black), limestone (red) and
dolomite (blue). Zones are selected based on VP anomalies ............................... 166
8-1: Sample FMI log for a producer providing well control and fracture control
data for modeling purposes. Blue sinusoids are interpreted as conductive
fractures and associated depths are used for training (Source: internally
obtained from Ormat Inc. Original log and initial interpretations by
Schlumberger Ltd.) .............................................................................................. 170
8-2: Two control wells (used for fracture zone identification for ANN model
design) along with identified fractured and conductive intervals (blue
inserts) using FMI logs and well data. Also shown are impedance map
projections showing structural discontinuities and dipping beds close to
fractured intervals ............................................................................................... 171
8-3: (a) Sample training data, (b) Input properties, (c) LVQ training and (d)
RBF training results ............................................................................................ 172
8-4: A typical LVQ representation with 3 predefined classes for mapping
showing final prototype vector locations based on training as well as
originally selected input data vectors .................................................................. 174
8-5: A typical supervised ANN topology using back-propagation algorithm
where error computed at each iteration is propagated back in order to
update network properties (w, c & α). The final output is a linear
combination and provides the desired model...................................................... 176
8-6: (a) FZI3 and (b) FZI4 property maps at reference depths (0.5 km, 1.0 km &
1.5 km) including pruning based on effective normalized uncertainty
cutoffs. FZI3 provides indication of fracture presence while FZI4 provides
probability of fracture presence ........................................................................... 177
8-7: FZI5 property maps at reference depths including pruning based on
effective normalized uncertainty cutoffs ............................................................. 178
8-8: Normalized fracture permeability (kFi) property maps at reference depths
of 0.5 km, 1.0 km and 1.5 km .............................................................................. 179
9-1: Normalized velocities from MEQ and 3D seismic data compared at
reference in-line, cross-line and depth (1 km). Dotted inserts indicate
zones showing good match between the two separate data sets ......................... 183
9-2: FZI4 map projected at (a) in-line and (b) cross-line associated with training
well (fracture log data from FMI logs used in ANN training) location.
Highlighted sections indicate locations of high FZI values which match
with known intervals showing fractures (blue inserts over well tracks) ............. 186
9-3: FZI4 map projected at (a) in-line and (b) cross-line associated with testing
well (fracture log data from FMI logs used for model validation) location.
Highlighted sections indicate locations of high FZI values which match
with known intervals showing fractures (blue inserts over well tracks) ............. 187
9-4: FZI4 map projected in 3D along with well locations to map out zones of
interest along wells of interest. Two horizons of interest located at ~ 0.6
km and ~ 1.1 km (with variable depth ranging) can be identified by
mapping individual wells of interest ................................................................... 188
9-5: Extensional stress (VE), normalized tangential weakness (∆T) and FZI4
mapped along well 1 (used for training) at depth intervals of (a) 0.5 km,
(b) 0.6 km and (c) 1.2 km to validate mapped properties and indicate
fracture presence along wellbore at different depths .......................................... 190
9-6: Integrated maps showing FZI4 and discontinuities (using Laplacian filter)
at depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km.
Blue inserts show the producer locations and the green inserts show the
injection well locations ........................................................................................ 192
9-7: Integrated maps showing VE and discontinuities (using Laplacian filter) at
depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue
inserts show the producer locations and the green inserts show the
injection well locations ........................................................................................ 195
9-8: Integrated maps showing VK and discontinuities (using Laplacian filter) at
depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue
inserts show the producer locations and the green inserts show the
injection well locations ........................................................................................ 196
9-9: Integrated maps showing ∆T and discontinuities (using Laplacian filter) at
depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue
inserts show the producer locations and the green inserts show the
injection well locations ........................................................................................ 197
9-10: Integrated maps showing ∆N and discontinuities (using Laplacian filter) at
depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue
inserts show the producer locations and the green inserts show the
injection well locations ........................................................................................ 198
9-11: Integrated maps showing FZI4 attribute with discontinuities (using
Laplacian filter) at varying depth horizons. The arrows indicate potential
connectivity within the reservoir based on discontinuities and FZI spread
(green indicates potential connectivity while grey indicates
non-communicating boundaries) ......................................................................... 199
9-12: Integrated display of discontinuity, VE along with effective stress
gradients superimposed at reference depths of (a) 0.5 km, (b) 1.0 km, (c)
1.5 km and (d) 2.0 km with identified feature boundaries (white inserts)
and mapped properties ......................................................................................... 202
9-13: “Zones of interest” mapped at reference depths of (a) 0.5 km, (b) 1.0 km,
(c) 1.5 km and (d) 2.0 km using a combination of VE, VP, VS, ∆T and FZI4
property values .................................................................................................... 203
9-14: VE, discontinuity gradient & edge maps plotted at sample well locations.
Purple arrows indicate major discontinuity boundaries with rapid gradient
change indicating presence of faults or bed boundaries. In cases (a), (b) &
(c), we observe high VE values at or close to the wellbore indicating
contribution of connected open fractures to the observed well throughputs.
Case (d) indicates minimal contribution from fractures with flow
dominated by associated faults ............................................................................ 205
9-15: FZI4, discontinuity gradient & edge maps plotted at sample well locations.
Dotted inserts indicate possible connecting flow pathways close to sample
wells which show high flow throughputs. It should be noted that in case
(b), the flow regime seems to be fracture dominated with no prominent
faulting close to wellbore .................................................................................... 206
Abstract
This study is aimed at an improved understanding of unconventional reservoirs,
including tight reservoirs (such as shale oil and gas plays) and geothermal developments.
We provide a framework for improved fracture zone identification and mapping of the
subsurface for a geothermal system by integrating data from different sources. The
proposed ideas and methods were tested primarily on data obtained from the North
Brawley and Geysers geothermal fields, along with synthetic datasets which were used to
test new algorithms before application to the real datasets. The study has resulted in
novel or improved algorithms for use at specific stages of data acquisition and analysis,
including an improved phase detection technique for passive seismic (and teleseismic)
data as well as optimization of passive seismic surveys for the best possible processing
results. The proposed workflow makes use of novel integration methods as a means of
making the best use of the available geophysical data for fracture characterization. The
methodology incorporates soft computing tools such as hybrid neural networks
(neuro-evolutionary algorithms) as well as geostatistical simulation techniques to
improve the property estimates as well as overall characterization efficacy. The basic
elements of the proposed characterization workflow involve using seismic and
microseismic data to characterize structural and geomechanical features within the
subsurface. We use passive seismic data to model geomechanical properties, which are
combined with other properties evaluated from seismic and well logs to derive both
qualitative and quantitative fracture zone identifiers. The study has resulted in a broad
framework highlighting a new technique for utilizing geophysical data (seismic and
microseismic) for unconventional reservoir characterization. It provides an opportunity to
optimally develop the resources in question by incorporating data from different sources
and using their temporal and spatial variability as a means to better understand the
reservoir behavior. As part of this study, we have developed the following elements,
which are discussed in the subsequent chapters:
1. An integrated characterization framework for unconventional settings with adaptable
workflows for all stages of data processing, interpretation and analysis.
2. A novel autopicking workflow for noisy passive seismic data used for improved
accuracy in event picking as well as for improved velocity model building.
3. Improved passive seismic survey design optimization framework for better data
collection and improved property estimation.
4. Extensive post-stack seismic attribute studies incorporating robust schemes applicable
in complex reservoir settings.
5. Uncertainty quantification and analysis to supplement the qualitative interpretations
with quantified property estimates and to independently validate observations with
quantified uncertainties, preventing erroneous interpretations.
6. Property mapping from microseismic data including stress and anisotropic weakness
estimates for integrated reservoir characterization and analysis.
7. Integration of results (seismic, microseismic and well logs) from analysis of
individual data sets for integrated interpretation using predefined integration
framework and soft computing tools.
Chapter 1: Introduction
With growing stress on conventional oil and gas resources, the world is increasingly
looking at unconventional energy resources, including tight oil and gas reservoirs and
geothermal systems, as a means to meet today’s energy demand. Since most
unconventional reservoirs are harder to exploit, many novel techniques have been
developed or are being developed to tap into these resources. This growing interest in
unconventional reservoirs around the world has given impetus to the acquisition and use
of microseismic data as a vital tool in reservoir development as well as for better
reservoir management over their entire development and production lifecycle. The
development of tight reservoirs such as shale gas and tight sands, along with geothermal
exploitation and CO2 sequestration, offers a few prominent examples where microseismic
data is routinely collected as part of either injection monitoring (such as hydraulic
fracturing programs) or broader reservoir imaging (such as in geothermal fields). In each
case, the
aim of microseismic monitoring is to detect and utilize induced seismicity resulting from
perturbations in the field, either for injection monitoring or for reservoir characterization
and development. The lower costs and progressively more sophisticated post-processing
and analysis techniques available for 3D seismic data have allowed improved reservoir
characterization techniques, providing better understanding of the subsurface through
improved imaging. Such techniques include seismic meta-attribute studies, full
waveform inversion and prestack analysis, all of which have given engineers vital
tools for improved reservoir imaging.
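As context for the event detection step mentioned above, the classic trigger used in microseismic monitoring, and revisited in Chapter 3, is the STA/LTA ratio: the ratio of a short-term average of signal energy to a long-term average, which rises sharply at a phase arrival. The following is a minimal illustrative sketch, not the implementation used in this work; the window lengths and trigger threshold are placeholder values.

    import numpy as np

    def sta_lta(trace, fs, sta_win=0.05, lta_win=1.0):
        # Characteristic function: short-term over long-term average energy.
        # trace: 1D array of samples; fs: sampling rate (Hz); windows in seconds.
        x = np.asarray(trace, dtype=float) ** 2
        n_sta = max(int(sta_win * fs), 1)
        n_lta = max(int(lta_win * fs), 1)
        csum = np.concatenate(([0.0], np.cumsum(x)))
        sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # average over trailing short window
        lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # average over trailing long window
        m = min(len(sta), len(lta))                    # align windows ending at the same sample
        return sta[-m:] / np.maximum(lta[-m:], 1e-12)

    # A tentative pick is declared where the ratio first exceeds a trigger
    # threshold (commonly of order 3-5), subject to de-trigger and duration checks.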
However, not much work has been done on an integrated approach to
reservoir characterization that combines the seismic data acquisition and analysis
techniques already developed within the oil and gas industry with the use of
microseismic data as a tool for geomechanical property prediction, which has seen
considerable use in geothermal developments and, more generally, in the mining
industry. There is therefore tremendous scope for integrating techniques in use
within both conventional and unconventional oil and gas reservoirs as well as geothermal
fields to make the best possible use of the available data. This becomes particularly
important as many unconventional developments do not find full 3D seismic imaging to
be feasible and may have limited subsurface understanding through available geophysical
and well-log data. Moreover, while seismic attributes (mathematically evaluated from
seismic traces) are known to work very well in conventional oil and gas reservoirs, they
have shown suboptimal results in unconventional settings due to both reservoir
complexity and resolution issues. Finally, while operators of big oil and gas reservoirs can conduct
multiple time lapse seismic surveys to better understand the reservoir behavior with time,
it is extremely difficult to conduct multiple surveys in smaller unconventional settings
due to resource constraints. Our aim is to develop an integrated reservoir characterization
workflow which uses seismic as well as microseismic data in unison. The available
data from the two test reservoirs (Chapter 2) highlighted a need for improved
microseismic data acquisition and analysis. Moreover, improved interpretations require
novel integration techniques with 3D surface seismic/ well log data. Consequently, some
improvements have been designed and incorporated within our workflows as deemed
necessary.
1.1 Background
When it comes to geophysical tools, surface, shallow and deep surveys can all be used
either separately or in an integrated manner as they each have their own strengths and
weaknesses. Both active and passive seismic surveys have been used extensively in the
past for improved reservoir characterization (Vinegar et al. 1992). 3D high resolution
VSP and CSP surveys can help provide valuable insight into the reservoir
characteristics close to the area of interest (Ramkhelawan et al. 2008), help monitor
injection programs (Cheng et al. 2009) and provide a more detailed look into stimulation
quality (Willis et al. 2009). They also provide a baseline map which allows for improved
reservoir characterization near the wellbores of interest. If conducted in a time lapse
fashion (pre and post hydrofrac or continuous monitoring of injection programs), it can
provide an insight into changes within the stimulated zone and how they interact with the
broader reservoir framework including interaction with other horizons of interest. Well-
designed surveys can provide high resolution velocity models necessary for hypocentral
inversion schemes used for microseismic data analysis. Time lapse VSP has also been
studied in combination with microseismic data to characterize induced fracture planes by
making use of the scattered energy estimates (Willis et al. 2009). Time lapse VSP has
also been used in an integrated manner with microseismic data to evaluate fracture
quality in tight gas reservoir stimulation programs (Willis et al. 2007). In general,
microseismic data acquisition schemes are used as a means of monitoring injection
programs (including hydraulic fracturing jobs) as well as geothermal reservoirs, and are
now being increasingly used to understand and validate the stimulation processes and
characterize the stimulated zones (SRV) to better understand the overall efficacy of the
stimulation or injection program in question. In recent years, microseismic data
monitoring has been used extensively in a number of applications including improved
methods to monitor hypocenters and their 4D development (Pettitt et al. 2009), fracture
mapping using joint inversion schemes involving microseismic and telemetric data (Du
et al. 2008), completion diagnostics (Ajayi et al. 2011), discontinuity mapping using
MEQ event maps (Craig et al. 2010), hydrofrac diagnostics (Abbott et al. 2007), fracture
azimuth characterization (Sorrells et al. 1986), etc. There have been some attempts to
look at using passive seismic monitoring for time lapse reservoir characterization
(Maxwell et al. 2005). Figure 1-1 shows the results of fracture stimulation treatment
through interpretation of MEQ event cloud distribution. Considering the lower cost of
deployment, both surface and shallow borehole surveys are very cost effective solutions
for unconventional reservoir monitoring programs which have constraints on available
resources. While the use of complete waveforms as well as time lapse monitoring is well
understood, we see the possibility of using MEQ data as a potential tool for better
definition of the elastic rock properties. Moreover, the ability to tie them with other
observations, particularly those from 3D seismic data analysis and well logs, presents us
with an exciting opportunity to better understand and classify the fractures and potentially
other reservoir properties.
Figure 1-1: (a) MEQ event cloud mapping for fracture characterization and (b) complex interactions with
preexisting fractures leading to complex MEQ cloud shapes (Maxwell et al., 2005)
Microseismic data has been used as a means to characterize the elastic properties
of reservoirs particularly in geothermal fields (Aminzadeh et al. 2012). Figure 1-2 shows
examples of such geomechanical properties derived for fracture zone characterization for
the Geysers geothermal field as a reference. Use of conventional seismic data (including
shallow and deep VSP and CSP surveys) along with well logs for reservoir
characterization is also well understood (Lu et al. 2003, Britze et al. 2004, etc.). We can
use the seismic and log data to validate and constrain these properties, and use temporal
MEQ analysis to understand the time-variant effects of hydraulic fracturing, injection, or
other processes at work in unconventional reservoirs on the rock property estimates. This
provides a way to better understand the regional elastic properties, and tracking their
variation with time should allow us to monitor the changes with a better understanding
of the overall process in general and also help identify possible improvements in future
development of the field. All these
tools hold the potential for improved characterization of the stimulated and surrounding
zones of interest in unconventional reservoir systems and can allow us to model the
observed variations in seismicity as a function of the observed stress regime and other
properties of interest.
Figure 1-2: NW Geysers map view at reference depth (normal temperature zone) showing (a) VP, (b) VS, (c) VP/VS and the properties at a second reference depth (high temperature zone) with (d) VP, (e) VS and (f) VP/VS estimates (Aminzadeh et al., 2012)
1.2 Motivation
The development of unconventional reservoirs such as shale gas has, in recent years, mandated the use of hydraulic fracturing to stimulate these tight formations due to the low matrix porosity and permeability conditions which prevail in those environments. Moreover, geothermal field developments including EGS reservoirs
depend on either stimulating new fractures to create necessary permeability in the
subsurface or using naturally available fracture networks as fluid flow pathways. Most
geothermal fields use geophysical data as a resource for monitoring injection processes to
check for induced seismicity as well as possible use of information from the data to
characterize the subsurface, particularly the fracture network model. Microseismic data
acquisition and analysis is frequently used as a tool to monitor such processes.
While some work has been done in the past to try and combine seismic as well as
microseismic data for interpretation of hydraulically induced fracturing in oil and gas
reservoirs (refer preceding discussion), the methods are not well developed and deal
primarily with using the MEQ locations as an indicator of fracture location. Information
such as fracture orientation, focal mechanism, etc. have also been obtained as a means for
fracture characterization (Rial et al. 2005). In recent years, some attempts have been
made at integrating information from seismic and microseismic data to try and better
understand the subsurface. One recent example involves integration of surface seismic
and microseismic for a hydraulic fracturing program (Maxwell et al. 2011) where the
authors make use of edge detection algorithm applied on seismic data to detect fault
activations during hydraulic fracturing by using magnitude, b-value and focal
mechanisms derived from microseismic data as well as demonstrate an improved
understanding of fracture activation through inverted Poisson’s ratio and well ISIP data.
However, such attempts are nascent and a holistic integrated approach towards making
use of geomechanical information from microseismic (instead of 3D surface seismic)
within integrated characterization workflows still eludes the industry. Moreover, for
small operators, available microseismic data is generally not very well constrained with
small surface arrays providing insufficient coverage leading to noise and bias issues.
The motivation for this study is to develop a workflow incorporating soft computing techniques to bring seismic as well as microseismic data into a single integrated analysis. Important tools such as advanced autopicker design and development,
improved survey designs for optimized data quality and usability, soft computing and
geostatistical data integration schemes and finally property prediction methods including
uncertainty quantification, etc. have been developed and demonstrated to better
characterize the unconventional reservoirs in question. Unique fracture zone
identification attributes have been designed and developed to better characterize the
fracture swarms within the study area and potentially monitor their temporal and spatial
development with microseismic data. Moreover, these attributes provide vital clues
indicating reservoir connectivity and compartmentalization. The aim is to provide small
operators with the necessary interpretation workflow and associated tools to better
characterize and manage their reservoirs with the limited resources at their disposal. The
available data for this study faced many of the challenges that small operators face with respect to
data, resources and analysis capabilities. The observed limitations are as follows:
1. Limited MEQ recording stations (5 available, with one station not generating usable results due to a sensor issue).
2. Poor quality of microseismic records including both high and low frequency noise
artifacts and many dead or near dead traces.
3. Limited well logging suite (such as few available sonic and FMI logs) and many
shallow completions leading to poor well control below depth of 1.5 km.
4. Highly complex regional geologic setting and highly compartmentalized reservoir.
5. Limited 3D seismic data (no pre-stack data) and non-availability of depth migrated
data.
With these inherent challenges in mind, a standardized workflow becomes necessary
which makes optimal use of all available data and gets usable results for improved
interpretations. Our study is aimed at formulating such a workflow and showing its
potential through implementation with test data from two geothermal reservoirs.
1.3 Research objectives
The objective of this research is to develop an integrated interpretation framework for
unconventional reservoirs that uses relatively low cost yet effective geophysical data sets, leveraging the strengths of each technique to provide usable results. This includes, but is not limited to, integrated fracture
zone characterization of the reservoirs. Figure 1-3 provides a broad outline of the said
integrated workflow used in our analysis.
Figure 1-3: Integrated workflow for fracture characterization in unconventional reservoir settings using two
different geophysical data types (3D surface seismic and microseismic) as well as well logs
Though this workflow is specific to the problem at hand involving a “middle
temperature” geothermal reservoir, it is possible to modify it for application in other
unconventional reservoir settings such as shale gas, tight sands, CO2 flooding, etc., as
discussed in the preceding sections. Major advances in sensor design and declining costs
associated with many geophysical tools now allow operators to have relatively
inexpensive geophone based monitoring arrays available for reservoir monitoring. In this
work, we hope to demonstrate ways of integrating the data from baseline conventional
3D surface seismic surveys with geomechanical information that is derived from
microseismic data analysis as a means of constraining and improving the eventual
reservoir “property” estimations. This workflow also provides the framework which
should allow us the possibility of using the microseismic arrays for 4D monitoring and
characterization of the reservoir and to better understand the effects and interaction of the
injection programs with the reservoir. Moreover, apart from the qualitative interpretations
based on the baseline property estimation, we also demonstrate quantitative uncertainty
analysis for the mapped reservoir properties through simulation studies.
1.4 Organization of manuscript
The next few chapters are organized to provide an understanding of the work completed as a part of this research. Chapter 2 shares
background information on the two geothermal test sites and associated datasets used in
this study. Chapters 3 and 4 describe the issues with the data and with contemporary autopicking
algorithms. They also introduce a novel autopicking workflow which can be used with
the available dataset taking data limitations into consideration. An improved autopicker
has been designed and implemented with adequate efficiency and robustness and its
effectiveness has been compared with some contemporary autopicking algorithms.
Chapter 5 deals with using the information from the autopicking process to obtain
earthquake locations and simultaneous P and S phase inversion for velocity estimates
which are necessary for further geomechanical analysis. Chapter 6 deals with
conventional seismic data evaluation including attribute studies. Chapter 7 shares
the workflow for geomechanical property estimation from velocity estimates, including stress
mapping. It also shares simulation results and uncertainty quantification to qualify the
interpretations made later. Chapter 8 introduces the hybrid fracture zone identifier
attribute including the framework for fracture detection used in this study based on ANN
modeling. Chapter 9 develops upon the shared work and introduces a new framework for
fractured zone characterization and modeling based on the integrated analysis and
interpretations made over the preceding chapters. It also provides examples of integrated
geophysical data evaluation and interpretation for reference. And finally, Chapter 10 and
Chapter 11 provide summarizing comments and guidelines for possible future work to
expand on the ideas proposed in our study.
Chapter 2: Test beds and datasets
2.1 Study areas and objectives
At different points of our study, tests were conducted and methods applied on real data
made available from two different geothermal fields. The first is the Geysers geothermal
field from Northern California and the second is the Brawley geothermal field from
Southern California. In this chapter, we provide a brief overview of the two fields in
question and provide some preliminary understanding of the available datasets from the
two fields, both geophysical and otherwise, which are used in our study.
2.2 The Geysers geothermal field
The Geysers geothermal field is located northeast of the San Andreas fault in Northern
California about 150 km from San Francisco. Steam from the Geysers is produced from a
network of fractures in crystalline and metamorphic rocks. Some of the fracture networks
are a result of injection of cold water into the hot rock while others are naturally
occurring tectonic fractures associated with the nearby San Andreas Fault plate boundary.
Due to very low permeability of the formation matrix within the main reservoir rocks,
production performance depends heavily on the presence of these natural and induced
fractures. This makes it vital to tap these fractured reservoir sections to enhance
productivity which makes identifying such zones commensurately important.
The Geysers geothermal development spans an area of around 30 square miles in
Sonoma, Lake and Mendocino counties in California, located in the Mayacamas
Mountains. Power from The Geysers provides electricity to Sonoma, Lake, Mendocino,
Marin, and Napa counties. Steam used at The Geysers is produced from a sandstone
reservoir where the reservoir rock is a Jurassic-Cretaceous Franciscan Greywacke. The
reservoir is capped by a heterogeneous mix of low permeability rocks which is a mélange
of metamorphic Franciscan rocks. It is underlain by a silicic intrusion composed
primarily of Pleistocene felsite (Berge et al. 2001). Gravity and seismic studies suggest
that the source of heat for the steam reservoir is a large magma chamber over 4 miles
beneath the ground, and greater than 8 miles in diameter. Unlike most geothermal
resources (including North Brawley geothermal field), the Geysers is a dry steam field
(producing superheated steam). Due to major reduction in the steam resource over the
years of production at the field, The Geysers complex is now recharged by injecting
treated sewage effluent from the City of Santa Rosa and the Lake County sewage
treatment plant. This effluent, which used to be discharged into waterways such as the
Laguna de Santa Rosa, is now piped to the geothermal field where it replenishes the
steam reservoir. This injection of cold water into relatively hot reservoir leads to induced
fractures as a result of thermo-elastic cooling shrinkage and other mechanisms (such as
pressure build ups). These fracture networks are vital for the longevity of the resource
and continued production from the field.
2.3 Brawley geothermal field
The Brawley geothermal field currently under development by Ormat is centered about
two miles north of the city of Brawley in the Imperial Valley of Southern California. The
Imperial Valley is part of a larger low lying geologic province called the Salton Trough
with intense geologic activity. Had it not been for sediments carried to the Trough over
millions of years by the Colorado River, it would be part of the Gulf of California today.
It is a major geothermal province in North America, with approximately 1320 MW of
installed geothermal capacity in four fields in the Imperial Valley (the Salton Sea, North Brawley, East Mesa, and Heber), along with Cerro Prieto in Mexico.
The Union Oil Company (Unocal) initiated a geothermal program of exploration
and development in the Brawley area in the mid-1970s. After shallow thermal gradient
well drilling and geophysical surveys which defined a thermal anomaly, Unocal followed
with 14 deep wells between 1975 and 1982. Chevron Oil Company also drilled two deep
wells on the western margin of the resource in the late 1970s. The well depths ranged
from about 5000 ft. near the center of the productive area to over 13,000 ft. on the
marginal non-productive areas. The resource encountered was a fractured reservoir
containing hypersaline brines (>200,000 ppm) with temperatures in excess of 525 ºF. The
central wells were highly productive, typically producing ~ 1,000,000 pounds hourly at
flowing wellhead pressures of ~ 450 psig.
Ormat began investigating the Brawley potential for geothermal electricity
development in 2006. The existing well records and logs indicated that the sands between
1500 and 5000 feet (overlying the deep fractured reservoir pursued earlier by Unocal)
would contain moderate salinity (10,000 to 40,000 ppm) brines in the 300 to 400 ºF
range. Using porosity logs, the net sand footage and permeability was calculated using
an average porosity/permeability correlation developed previously for the older Ormat
owned/operated Imperial Valley fields of East Mesa and Heber. Subsequently, Ormat
confirmed the predicted production potential with five wells drilled and tested
extensively in 2007. This was followed in 2008 & 2009 by full scale development well
drilling and construction of a 49.9 MW (net) binary power plant. At present, there are a
total of 15 producers and 20 injectors with commercial operations since mid-2009 (Figure
2-1). The generated power remains below expectations because of unanticipated injection
capacity issues and requires further investigation.
Figure 2-1: Current operational wells drilled by Ormat (Ormat Inc.)
The deltaic sedimentary sequence at Brawley consists of alternating sand, silts,
and clays to a depth of more than 15,000 ft. according to earlier seismic refraction
studies. From surface to about 2000 feet the sediments are mostly non-marine and
unconsolidated. The deeper formation is mostly marine deposits which become more
consolidated with depth due to compaction and cementation with carbonates and silica.
At depths where the rock (i.e., sandstones and shales) is sufficiently hard to hold open
fractures, there is little or no porosity and permeability left between the sand grains.
Roughly, this conforms to the ~500 ºF temperature level, suggesting that the cementation
sealing the pore space between sand grains is strongly related to the circulation of
geothermal fluids over geologic time. The shallower reservoir (above 5000 ft.) being
developed by Ormat retains porosity and permeability in the sand matrix, but
progressively this decreases with depth. Typically these sands are very fine grained, well
sorted, and relatively clean (with less than 10% clay and silt size particles). Reservoir
continuity of the shallower sands is still a significant question, as in any geologically active deltaic environment with sand pinch-outs displaced by faulting. This lack of
continuity may contribute to the observed poor injectivity for some wells due to
unexpected pressure build-ups.
Figure 2-2 is an integrated display with well locations, seismic survey area as well
as the microseismic station array, shown in order to provide a better understanding of the
study area in question. Detailed interpretation of the available 3D seismic survey and its
integration with the other geophysical data (from passive seismic monitoring program)
and well logs have provided valuable insights into reservoir continuity. Resulting
refinement of cross-sections and isopach maps have provided a better understanding of
how production and injection sand intervals are connected between wells and how the
fracture zones map out within the reservoir. Integrating all of this data/ information for
improved characterization can be helpful in designing well completions and
injection/production patterns to better optimize injection distribution and increase total
injectivity.
Figure 2-2: Integrated display of survey area with location of the study area (surface footprint) including
MEQ recording stations, well locations and 3D surface seismic survey (receivers) patch used
2.4 Data
For the North Brawley geothermal field, the data has been collected by a number of
contractors for the operator developing the resource (Ormat Inc.). Microseismic data is
being continuously collected by a subcontractor. A 3D seismic survey was conducted by
a geophysical company in the 1st quarter of 2011 and the processed data was made available
for evaluation and use. The large number of wells drilled in the area under study provides good well control, and the logs made available by the operator have been used to constrain property estimates within the corresponding workflows in this study. It is important to note that the available data was time migrated and therefore
accurate velocity models were required to tie the wells with the seismic data. More
detailed description of both the seismic as well as microseismic data will be provided in
subsequent chapters wherever appropriate.
Unlike the data from the Brawley field, for the Geysers, no 3D seismic data is
available for reservoir imaging. Instead, historically, both microseismic and gravity
studies have been used to image the subsurface. At the field, a large acquisition array and
high levels of microseismicity allow for testing of advanced processing algorithms that
have been introduced in this study. The available microseismic dataset was collected by
LBNL and spans from 2006 through 2011 in the form of earthquake catalogs.
2.5 Application of datasets
The Geysers dataset serves as a testing bed for the advanced neuro-evolutionary autopicker that has been developed as a part of this study for working with difficult passive seismic datasets. It is very extensive and provides adequately representative
data samples with potential noise and acquisition related issues to test various aspects of
autopicker design. This allows us to have an increased confidence on the applicability of
the designed phase detection workflow for other unconventional settings involving
passive seismic data such as the North Brawley field.
The North Brawley dataset is used to implement and test the interpretation
workflow including the newly proposed FZI attribute set for fracture characterization
apart from geomechanical property predictions. Moreover, a modified phase detection
algorithm (from the one designed for The Geysers dataset) is used during the processing
of the raw seismic records for catalog generation.
While only two sample geothermal datasets have been used and validated in this
study, the workflows are broadly applicable in other unconventional settings as is
discussed in future chapters. It is hoped that this workflow will be adequately modified
and implemented for shale oil/ gas setting for validation as an effective tool for reservoir
characterization as part of broader hydraulic fracturing programs.
Chapter 3: MEQ - Preliminary Data Analysis
3.1 Introduction
Passive seismic as a tool for monitoring reservoirs has become very common in recent years. It finds frequent use in the development of unconventional reservoirs such as geothermal fields and tight reservoir systems which require hydraulic fracturing, and in the monitoring of injection wells (CO2 injection, steam flooding, etc.), among others. Passive seismic
arrays involve a selected number of geophone recorders placed appropriately at
predetermined locations and independently powered over extended periods of time. A
geophone converts ground motion into voltage which acts as the seismic response and is
recorded for further use. Records of seismic data can help provide information regarding
the actual location of the event, the focal mechanism of the rupture, subsurface velocity
field, etc. to name a few. One of the most basic elements of processing required for
passive seismic records involves first break and phase arrival detection. Phase detection
involves identification and classification of the arrival of elastic waves through various
propagation modes. Two important phases used for hypocentral location and velocity
inversion algorithms are P and S waves. With very large and/ or widely distributed arrays
and huge amounts of data being collected, traditional event detection algorithms are
increasingly facing limitations with required processing times (for larger gathers) as well
as noise related issues. Moreover, robust phase detection with improved accuracy helps
reduce location, moment tensor and inverted velocity uncertainties and allows optimized
use of available MEQ data.
Considering the fact that event detection is fundamental to most applications
using seismic data, it is necessary to correctly detect the first break and phase arrivals
with a high degree of confidence in order to limit the errors propagating further in the
workflow and reduce the influence of any such errors in the final interpretations being
made. A number of autopickers have been developed in the past to detect phase arrivals
from both microseismic & teleseismic data. Some of these methods have been shown to
be very effective in certain high SNR environments (though others tend to fail under
similar conditions). Moreover, some algorithms that work very well in some noisy
environments may fail in others. Looking at the available methods, one of the older and
commonly used algorithms for phase arrival detections is the STA/LTA algorithm (Allen
1978) and its modifications (Baer et al. 1987). Methods based on abrupt changes in
attributes of the seismic coda such as energy change as used in Coppens (1985) and
modified Coppens method (Sabbione et al. 2010), statistical attributes such as frequency
and higher order statistics such as skewness & kurtosis (Saragiotis et al. 2002), etc. have
all been proposed over time. Modern autopickers using fairly advanced algorithms such
as cross-correlation attribute analysis, frequency spectrum analysis, multi window filters
or advanced statistical techniques have been used in specific scenarios and have been
instrumental in improving the accuracy and validity of the picks being made. However,
with low SNR, there is always a possibility of the arrivals not being detected by some of
these methods (which could be a result of data or algorithmic limitations). This is
particularly the case with advanced analysis methods such as shear wave splitting studies
which require robust algorithms to detect fast and closely following slow shear arrivals
(considering that we are not dealing with teleseismic events in this study) and the ability
to detect picks where radiation patterns or subsurface signal attenuation lead to indistinct
energy arrivals. The highly non-unique nature of the picking process makes all of these
issues ever more pronounced. This becomes a particularly serious issue when S phase
arrivals are required for tomographic inversion schemes for velocity, etc. and the number of stations available is limited (as was identified to be an important issue in this work). The
quality of the first arrival detection is related to the near- and sub- surface structure,
source type, and prevalent SNR conditions, etc. Any of the available autopicking
methods could be applicable in some specific scenarios and not in others based on the
acquisition conditions as mentioned above. Thus, finding a robust method to work under
most circumstances is a challenging task. Moreover, due to large volumes of seismic
records common in acquisitions involving passive seismic arrays, the arrival detection
can be time consuming. Since finding the first arrival of energy from microseismic events
in a fast and accurate fashion is the key step for additional processing, the aim is to work
with data that is characterized by low SNR and poor data quality issues and to develop an
effective workflow to obtain both P-wave and S-wave first arrivals (P and S phase
arrivals) with relatively high accuracy by using combination of derived attributes within
an ANN modeling framework. We have implemented this workflow on multiple datasets
collected in geothermal environments as well as modifications of the said workflow in
hydraulic fracturing environments with extremely robust and positive results when
compared with contemporary autopickers. The results from said implementations and
tests will be shared in the next chapter.
3.2 Microseismic dataset
Figure 3-1: Seismic sensor used for the local passive seismic monitoring array (Source: www.eentec.com)
The microseismic data used for our study ranges from November, 2008 up to May, 2011
(The array is still in place at the field in question and is being used as a tool to check for
induced seismicity resulting from injection operations). The microseismic data is
collected using a local microseismic array which includes 5 local stations (BRAW1,
BRAW2, BRAW3, BRAW4 and BRAW5). The seismometers are tri-axial (horizontals
and vertical) instruments using a proprietary electro-chemical transducer technology
(Figure 3-1). The operating frequency range of the instrument is 0.065 Hz to 50 Hz and
the dynamic range is 142 dB.
These sensors are positioned in the boreholes using a camera to orient the north
axis as close to the actual north as possible. The continuously collected data is recorded
in digital recorders. Both continuous and triggered data are collected and while the
continuous data files are written in blocks of 2 minutes throughout every hour of every
day, triggered data files are extracted from the continuous data using frequency, threshold
and STA/LTA ratio (either one of the above attributes are used) based triggers. The
seismic files are stored in "xyz.dmx" format (SUDS) with each file storing 3 component
data sequentially as well as requisite header information. Figure 3-2 shows a sample
triggered file with header information and stored data for reference.
With the data stored as is, considerable preprocessing is needed before it can be run through any sophisticated autopicking algorithms. These steps include conversion
from "xyz.dmx" into easily readable and editable “ASCII” format as the entire
autopicking algorithm is implemented in Matlab environment for this project. Before
conversion, event files from different stations require sorting based on the arrival times in
order to generate ordered event file catalogs which have data from all stations in the same
storage location and which belong to the same triggered event based on a time window
(selected by the acquisition company). The next step is to read all the "xyz.dmx" files within
the cataloged database and to sort them based on the filenames and by making use of a
specific naming convention as the basis of sorting. Once the files are sorted as per
triggered events and the recordings from each station (for that event) are placed together,
these files are ready for running through different autopicking workflows for tests and
analysis.
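As an illustrative sketch (not the exact implementation used in this study), the cataloging step can be written in Matlab as follows; the filename pattern and the grouping window are assumed values, since the actual naming convention is set by the acquisition contractor.

% Minimal sketch: group triggered "xyz.dmx" files into event catalogs by
% trigger time parsed from the filename. The pattern yymmdd_HHMMSS* and
% the 10 s grouping window are assumptions for illustration only.
dataDir = 'dmx_catalog';                       % assumed data directory
files   = dir(fullfile(dataDir, '*.dmx'));
names   = {files.name};
trigTimes = zeros(numel(names), 1);
for k = 1:numel(names)
    tok = regexp(names{k}, '^(\d{6}_\d{6})', 'tokens', 'once');
    trigTimes(k) = datenum(strrep(tok{1}, '_', ''), 'yymmddHHMMSS');
end
[trigTimes, order] = sort(trigTimes);          % order records by trigger time
names = names(order);
% Records whose trigger times fall within the grouping window are assigned
% to the same triggered event and cataloged together for autopicking.
eventId = cumsum([1; diff(trigTimes)*86400 > 10]);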
Figure 3-2: Sample "xyz.dmx" file showing specified format and stored data
3.3 Available autopicking algorithms - STA/LTA & AIC
Two commonly used autopickers in earthquake seismology are the STA/LTA algorithm
by Rex Allen (1978) and the AIC based autopicker (Akaike 1973 and St-Onge 2011).
While these two algorithms are by no means complex, they provide robust picking
possibilities under many circumstances and variants are in use among geophysicists to
this day. These two algorithms were tested using a small dataset in order to verify their
efficacy particularly with the kind of data seen from the study area to validate the need
for an improved algorithm. Both algorithms were implemented in Matlab environment
for ease of compatibility and control. The details on how to evaluate these two attributes
can be obtained from the references cited.
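For reference, a minimal Matlab sketch of the two pickers in the form tested here is given below; the window lengths and the trigger threshold are illustrative assumptions rather than the tuned values used in our tests.

% Minimal sketch: STA/LTA and AIC pickers for one filtered trace x at
% sampling rate fs. Window lengths and threshold are assumed values.
fs = 100;  x = randn(6000, 1);                 % placeholder trace (assumed)
nSta = round(0.05*fs);  nLta = round(1.0*fs);  % short/long window lengths
sta = movmean(x.^2, [nSta-1 0]);               % trailing short-term average
lta = movmean(x.^2, [nLta-1 0]);               % trailing long-term average
pickSTALTA = find(sta./max(lta, eps) > 4, 1);  % trigger threshold of 4 (assumed)

% AIC picker: the AIC minimum marks the sample that best splits the trace
% into a noise segment and a signal segment (a Maeda-type formulation).
N = numel(x);  aic = inf(N, 1);
for k = 2:N-2
    aic(k) = k*log(var(x(1:k)) + eps) + (N-k-1)*log(var(x(k+1:N)) + eps);
end
[~, pickAIC] = min(aic);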
Figure 3-3: Sample single event file from dataset used in comparative analysis of contemporary autopickers
Figure 3-3 shows a small section (single event) of the data subset used for
evaluating the available autopickers to test their applicability in our study area. It is
clearly observable that noise artifacts (both high and low frequency) are common and
need to be tackled when picking energy arrivals. Visual inspection does allow us to make
picks with relative ease however, as is clear from the data (particularly for station
BRAW2), random noise, dead or near dead traces, high frequency harmonic noise,
scattered energy artifacts and low frequency noise all influence the final recorded trace
for our study area. In order to remove as much of this noise as possible, a band pass filter
is implemented and the traces (including all 3 components) are run through this filter
before testing the two algorithms (Wilmshurst 1990).
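A minimal Matlab sketch of this filtering stage is shown below; the 2-25 Hz pass band is an assumed illustrative choice and not necessarily the corner frequencies used for the Brawley data.

% Minimal sketch: zero-phase band-pass filtering of a 3C record before
% picking (requires the Signal Processing Toolbox). Pass band is assumed.
fs = 100;                                      % sampling rate, Hz (assumed)
traces = randn(12000, 3);                      % placeholder 3C record
[b, a] = butter(4, [2 25]/(fs/2), 'bandpass'); % 4th order Butterworth
for c = 1:3
    traces(:,c) = filtfilt(b, a, traces(:,c)); % zero-phase to preserve onsets
end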
3.4 Results with STA/LTA & AIC
Figure 3-4 shows two different composite gathers generated with the two different
autopicking algorithms for the same event file. The green dashed lines indicate P phase
picks made on vertical components while the red dashed lines indicate S phase detections
on one of the horizontals (S
1
). We observe that AIC algorithm based autopicker is able to
resolve the P phase picks in a better fashion as compared to the STA/LTA based
autopicker. Moreover, AIC based autopicker is able to provide a good estimate of S phase
arrivals in some situations but fails in some other cases. The STA/LTA algorithm based
autopicker makes a single usable S phase pick for this test case. Moreover, both
algorithms face problems when the SNR levels are comparatively low as is the case with
data from BRAW2 station and SNR is considered an important criterion in determining
whether the autopickers are able to make valid picks or they fail to do so. At this stage, it
is important to understand that a minimum of two S phase picks is necessary along with
a minimum of 4 P phase picks for valid velocity and location inversion (for the specific
inversion algorithms used in this study) which needs to be done at a later stage of data
processing and analysis. For our study where we have data from limited number of
recording stations (4 stations as BRAW3 was not triggered by this event), it is clear that
we require a more robust autopicking algorithm to obtain usable phase arrival data
(meeting with the minimum number of phase detection requirement as specified earlier).
An advanced ANN based autopicking algorithm is looked into in the next chapter as a
means to resolve the phase detection problem.
Figure 3-4: AIC and STA/LTA based autopicker - preliminary results for a sample triggered file (tagged as
occurring on 12th April, 2009 at 12:48 hours). Results over a data subset indicate possible issues with obtaining the minimum number of picks required for inversion for location/velocity, unique to this particular dataset
3.5 Identified Bottlenecks
Preliminary analysis using contemporary autopickers shows limited applicability for many
identified events as discussed in the preceding section. Since the Brawley field has a
small microseismic array with just 5 stations and relatively low MEQ activity, it becomes
essential that adequate number of P and S phase arrivals are detected so that enough data
is available to generate necessary phase files to be used for further evaluation
(tomography, etc.). Another major limitation is observed with the quality of the picks
being made when compared with manual pick evaluation. Figure 3-5 shows the 4 P phase
picks made by AIC autopicker from the event being discussed as well as manual pick
interpretations as a means to highlight the issue with accuracy of the detections and
associated uncertainty when more than one expert is involved or multiple autopicking
algorithms are used. Further analysis of improvements in inversion results from such
seemingly small changes in phase detections will be briefly discussed in Chapter 5. It is
however quite clear that an improved autopicking algorithm should yield marked improvement in the final results. Moreover, such an algorithm could provide a high degree of flexibility to be operationalized under varying environmental settings and should allow
us to improve microseismic data analysis workflows in other monitoring programs as
well (such as hydraulic fracturing jobs or CO2 injection programs).
Figure 3-5: Comparison between manual (green) and AIC (red) picks from vertical components with inserts
showing magnified view close to the pick. Larger inserts indicate higher phase arrival errors and
consequently more severe effects on inversion algorithms. The traces are normalized for ease of evaluation
Chapter 4: Automated Phase Arrival Detection Using
Artificial Neural Networks
4.1 Background
Figure 4-1: Integrated workflow for P and S phase arrival detection using ANN autopicker
As indicated in Chapter 3, limitations with contemporary pickers required developing an
advanced AI based autopicker using an integrated ANN modeling approach aimed at
providing accurate detection and picking of both P and S phase energy arrivals in a rapid
fashion. The aim is also to obtain usable results in situations where traditional
autopickers show a relatively high degree of failure. Such a technique becomes
particularly useful in situations where a high degree of non-uniqueness is involved. The
AI based approach provides the "best possible" pick detection (based on selected
parameters and expert picks) within an environment of appreciable uncertainty created by
the non-uniqueness of the picking process. Some effort has already been made on various
applications of ANN in autopicking and earthquake prediction (McCormack 1990;
Veezhinathan et al. 1991; Aminzadeh et al. 1994; Zhao et al. 1999; Aminzadeh et al.
2011; etc.). This work extends the previous work incorporating many seismic attributes
as the basis of detection and other methods to further enhance the quality and efficiency
of the overall picking process. This hybrid autopicking workflow, as depicted in Figure
4-1, has been tested extensively on two datasets (LBNL's Geysers passive seismic
monitoring array and Ormat Inc.’s North Brawley array). While the original ANN
autopicking workflow was developed with the Geysers dataset in mind (which provided
enough data density to allow for training of the ANN autopicker due to ~25 receiving
stations), the North Brawley array dataset provides a valuable testing platform as it has a
small array (5 receiving stations) creating the need for accurate picks with relatively high
success rates in order to have adequate amount of data for inversion (discussed in Chapter
5). This dataset also poses considerable challenge with very low SNR levels and high
percentage of unusable data due to various factors including ambient noise as well as
lower average MEQ signal amplitudes (or event magnitudes). We compare the results
obtained from the hybrid autopicker with contemporary autopickers in use and manual
picks to validate the advantages of the new proposed autopicking approach. Individual
sections describing each part of the workflow (Figure 4-1) along with results are
discussed in this chapter.
While the workflow and associated algorithms were designed for geothermal
environments, it has also been modified and made applicable for hydraulic fracturing
data. Tests with a synthetic hydrofrac dataset for extremely noisy environments have also
been shared in this chapter for reference. The synthetic dataset was generated for
benchmarking purposes with several noise scenarios added to the base data for exhaustive
autopicker testing. In order to capture all relevant wave propagation effects, an elastic
isotropic modeling algorithm was chosen which allowed for input of arbitrary source
configurations and mechanisms. An efficient high-order finite difference algorithm that
allows capture of displacements, stresses and strains was employed (Karrenbach 2000) to
generate and record seismic wave forms in vertical wells at several lateral offset distances
from the hypocenter. A high resolution complex velocity field (with dipping thin layer
horizons) was generated and used for the modeling process. Target oriented full wave
form modeling was carried out with a high order finite difference algorithm with depth
ranging from 4000 to 12000 ft. and with a lateral distance of 6000 ft. Several hundred
receiver levels were recorded at depth increments of 5 ft. above and below the event
horizon and then decimated for this particular test case to 50 ft. receiver spacing. The
single microseismic double-couple event (3000 ft., 7800 ft.) roughly simulating the zone
of interest gives rise to complicated wave forms due to reflection, diffraction, refraction,
etc. at high and low velocity interfaces in the vicinity of the target region. In real data
these multiple secondary P and S phase arrivals could easily be mistaken as arrivals from
nearby microseismic sources and present challenges to automatically produce reliable and
accurate phase arrival time picks for inversion algorithms. Figure 4-2 shows the synthetic
wave-field and generated seismic data volume for reference. Due to the receiver patch
design implemented, the output microseismic gathers show distinct moveout
characteristics which may not be present in the geothermal datasets we tested, given the sparse passive arrays used in such fields.
Figure 4-2: Full waveform modeling of MEQ event in complex velocity field showing wave propagation
effects
4.2 Data preprocessing
The raw windowed data around detected events is obtained from the recording stations
based on threshold noise using a triggering algorithm as discussed in Chapter 3. In both
the case studies, we have 3C data from each recording station. Once the events have been
detected, the next step is to obtain accurate phase arrivals from data collected at different
stations to be used for further analysis. A small subset is selected from the available
dataset for the autopicker development. Before running any picking algorithms, it is
necessary to remove noise from the collected data and this is done by using a band pass
filter designed to remove both high and low frequency noise artifacts. Since the best
results for P phase arrival are expected from the vertical component and the S phase
arrival from the horizontals, the dataset is divided based on the component type and fed
through a rotation algorithm to maximize the P phase energy on the vertical component
and the S phase energy on the horizontals.
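A minimal Matlab sketch of the horizontal-component rotation is shown below, assuming a back-azimuth estimate is available (e.g., from polarization analysis of the P arrival); sign conventions for the radial/transverse pair vary between implementations, so this is an illustrative sketch rather than the exact rotation used here.

% Minimal sketch: rotate the two horizontals into radial/transverse to
% concentrate the S phase energy. The back-azimuth value is an assumed input.
baz   = 135;                                   % back-azimuth, degrees (assumed)
north = randn(6000, 1);  east = randn(6000, 1);% placeholder horizontals
phi = deg2rad(baz);
radial     =  north*cos(phi) + east*sin(phi);  % horizontal along source azimuth
transverse = -north*sin(phi) + east*cos(phi);  % orthogonal horizontal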
It is important to note that based on the data quality, additional data conditioning
steps may be necessary and are decided on a case-by-case basis. Figure 4-3 gives a
sample case with pre-processing steps applied and the processed signal extracted for use
in the autopicking workflow. These include a bandpass filter to remove very high and
very low frequency noise bands as well as a wavelet threshold based de-noising scheme
(Donoho 1995).
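The wavelet-threshold step can be sketched in Matlab as below (Wavelet Toolbox); the db4 wavelet, five decomposition levels, and the universal threshold are illustrative choices in the spirit of Donoho (1995), not the tuned parameters used in this study.

% Minimal sketch: soft-threshold wavelet de-noising of one trace x.
x      = randn(6000, 1);                       % placeholder trace (assumed)
[c, l] = wavedec(x, 5, 'db4');                 % 5-level DWT (assumed depth)
d1     = detcoef(c, l, 1);                     % finest-scale detail coefficients
sigma  = median(abs(d1)) / 0.6745;             % robust noise level estimate
thr    = sigma * sqrt(2*log(numel(x)));        % universal threshold
c      = wthresh(c, 's', thr);                 % soft thresholding
xDen   = waverec(c, l, 'db4');                 % reconstructed de-noised trace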
Figure 4-3: Pre-processing steps to improve SNR and remove other acquisition artifacts
4.3 Attribute analysis & ANN design for P phase autopicking
In order to test different attributes/ properties to be used for the ANN autopicker design, a
literature review was carried out to understand the different methods/ attributes in use
today and these attributes were computed on a trial basis to select useful attributes for use
in the actual training of the ANN used in our study. The selection criteria used place
maximum emphasis on sharp transitions near arrivals as per manual (expert) picks. The
aim here is to minimize the number of attributes which need to be used in the training
process to reduce run times and to increase the accuracy without dilution in results due to
too many attributes. Moreover, a higher number of input attributes can cause a more
computationally intensive and biased network design.
Figure 4-4: Sample attribute maps used to choose ANN input nodes including the original synthetic gather
from one of the datasets used in our study
Figure 4-4 shows sample attribute maps computed from synthetic seismic trace data
stacked together for easier computation. Many of these are tested as possible ANN input
nodes before deciding on the final set of inputs for testing and validation on an
independent dataset. Figure 4-5 shows blow up of a sample seismic trace (at arrival, ~
0.2s) along with some of the sample attributes that were used as inputs to the ANN. This
comparative framework allows pruning of those attributes which either fail in low SNR
environments (such as “AbsSumMax” for selected window sizes) or those which are too
computationally expensive (such as “MultiFilter”). This is also necessary to check how
computation windows (over which attributes are computed) impact the said transitions in
attribute trends. Also, it helps identify the best computation windows for good ANN
design under given data specific conditions such as sampling rate, dominant frequency,
etc. While our tests with unsupervised networks showed that they can be used for model
design and are particularly effective with larger attribute numbers due to their relatively
short training times, our final implementation involves a supervised back-propagation
algorithm (Figure 4-6). A detailed look at the steps involved with a typical back-
propagation algorithm is provided in Appendix A for reference.
Figure 4-5: Comparative analysis of actual trace with evaluated attributes for improved attribute design and
selection (A complete list of attributes studied is provided in the accompanying text)
Figure 4-6: Sample supervised ANN implementation to design an autopicker. Design elements include the
network topology, activation functions and inputs.
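A minimal Matlab sketch of such a supervised network is given below (Deep Learning Toolbox); the attribute matrix, the labels, and the 10-neuron hidden layer are assumptions for illustration, with attributes computed as in Section 4.4 and labels derived from manual picks.

% Minimal sketch: back-propagation training of a small feed-forward net.
% attrib is nAttr x nSamples; labels is 1 x nSamples (1 near a manual pick,
% 0 elsewhere). Both are assumed to be precomputed from training events.
attrib = rand(6, 5000);                        % placeholder attribute matrix
labels = double(rand(1, 5000) > 0.99);         % placeholder pick labels
net = feedforwardnet(10);                      % one hidden layer (assumed size)
net.divideParam.trainRatio = 0.70;             % train/validation/test split
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net = train(net, attrib, labels);              % supervised back-propagation
cf  = net(attrib);                             % characteristic function output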
The ANN output is conditioned (which can include normalization, application of
Hanning filter, etc.) which generates the necessary characteristic function used for actual
pick selections. The required conditioning depends on the input trace quality and noise
artifacts propagating into output ANN models. The post processing algorithm is designed
to extract the P phase picks based on the matching ANN probability output as well as the
confidence map of the trained ANN output. The picking approach involves time averaged
data which is compared with limiting (predefined) thresholds. If the said average is
beyond this threshold, a possible pick is declared and then a check is made to ensure that
the time averaged value falls below the threshold before a second pick can be declared.
Once a pick is declared, the picking threshold (independent predefined parameter) is used
to make the exact pick (the limiting thresholds are lower than the picking threshold). A
final verification step can be made use of which checks for ratio of statistical measures
41
(mean and maximum) across the pick within the limited window and picks are quality
controlled based on these ratios. Successful P phase detections are important to extract
the necessary localized time windows for possible S phase arrival detection. In this part
of the workflow, a time window is chosen starting from the P phase arrival at each station
and the data is extracted from within this window from the two horizontal components
for use in the S phase autopicker design and implementation. It should be noted that the
window selection is dependent on the maximum source – receiver separation but is
seldom more than a few seconds for local seismicity (in this study, we choose a window
of 2 seconds as all of the data is assumed to be microseismic and therefore, inherently
local).
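The pick-declaration logic described above can be sketched in Matlab as follows; the smoothing window and the two thresholds are assumed illustrative values.

% Minimal sketch: declare picks from the conditioned ANN output cf using a
% limiting threshold (arming) and a higher picking threshold (exact pick).
cf = rand(6000, 1);                            % placeholder ANN output
cfAvg  = movmean(cf, 21);                      % time-averaged output (assumed)
limThr = 0.4;  pickThr = 0.6;                  % limiting < picking threshold
picks = [];  armed = true;
for i = 1:numel(cfAvg)
    if armed && cfAvg(i) > limThr              % possible pick declared
        j = find(cf(i:end) >= pickThr, 1);     % exact pick at picking threshold
        if ~isempty(j), picks(end+1) = i + j - 1; end %#ok<SAGROW>
        armed = false;                         % wait for output to fall again
    elseif ~armed && cfAvg(i) < limThr
        armed = true;                          % re-arm for a second pick
    end
end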
Figure 4-7 shows the actual ANN output classification for two sample traces after
application with the final pruned outputs which are selected at the location of transition in
the ANN classification based on model output. We observe that under most scenarios, the
ANN output models show sharp transitions close to the actual arrivals including test
cases with considerable noise conditions. Moreover, the presence of multiple phases
results in high values in the output characteristic function, introducing the possibility of
identifying all primary and secondary phase arrivals through the use of the ANN models.
Figure 4-7: Sample ANN output classification and seismic trace with arrival match for (a) high SNR and
(b) low SNR scenarios
4.4 Mathematical description of attributes
Next we discuss the mathematical background of the attributes discussed in the preceding
section which form the basis of the ANN nodal input. While other attributes can also
computed and used in the evaluation and selection phase, the attributes shared here have
been extensively tested and applied on both synthetic and real microseismic datasets with
reasonably good results.
a) A1 (AbsSum): A simple statistical attribute that sums the absolute values of the trace
over twice the user defined window length.
A_1(i) = \sum_{j=i-N}^{i+N} |x_j|    (4.1)
Where N is the evaluation window size defined by the user and x is data from seismic
trace.
b) A2 (AbsSumMean): This attribute is computed by taking the average of the A1
attribute computed over the window length around the point of evaluation with
window length “N” defined the same way as previous attributes.
A_2(i) = \frac{1}{N} \sum_{j=i-N/2}^{i+N/2} A_1(j)    (4.2)
c) A3 (AbsSumMax): This attribute is computed by taking the maximum of the A1
attribute computed over the window length around the point of evaluation with
window length “N” defined the same way as previous attributes.
A_3(i) = \max_{i-N/2 \le j \le i+N/2} A_1(j)    (4.3)
d) A4 (AbsSumVar): This attribute is computed by taking the variance of the A1
attribute computed over the window length around the point of evaluation with
window length “N” defined the same way as previous attributes.
A_4(i) = \frac{1}{N} \sum_{j=i-N/2}^{i+N/2} \left( A_1(j) - \overline{A_1} \right)^2    (4.4)
e) A5 (Skewness): This is the skewness attribute computed using the standard definition
for skewness of a distribution (spread over the user defined evaluation window length
N).
A_5(i) = \frac{\frac{1}{N} \sum_{j} (x_j - \bar{x})^3}{\left[ \frac{1}{N} \sum_{j} (x_j - \bar{x})^2 \right]^{3/2}}    (4.5)
f) A6 (Kurtosis): This is the kurtosis attribute computed using the standard definition for
kurtosis measure of a distribution (spread over the user defined window length N).
A_6(i) = \frac{\frac{1}{N} \sum_{j} (x_j - \bar{x})^4}{\left[ \frac{1}{N} \sum_{j} (x_j - \bar{x})^2 \right]^{2}}    (4.6)
g) A7 (BIC): This attribute is based on modified Bayesian Information Criteria (or
Schwartz criteria) and is similar to Akaike Information Criteria (AIC) as far as
implementation goes.
A_7(i) = N \ln\left( \frac{1}{N} \sum_{j} (x_j - \bar{x})^2 \right) + k \ln N    (4.7)
Here, k is defined as the number of evaluation parameters preselected before attribute
calculation and N is the window length as per earlier definition.
h) A8 (InstPha): This is the instantaneous phase attribute computed using the Hilbert
transform for the seismic trace (H).
A_8(t) = \phi(t) = \arctan\left( \frac{H[x(t)]}{x(t)} \right)    (4.8)
i) A9 (InstFreq): This is the instantaneous frequency attribute computed using the
instantaneous phase attribute by computing its time derivative.
A_9(t) = \frac{1}{2\pi} \frac{d\phi(t)}{dt}    (4.9)
j) A10 (Fractal): This attribute is based on adapted fractal dimension calculation over
the defined evaluation window length.
A_{10}(i) = F \left( \frac{1}{\Delta x} \sqrt{ \frac{1}{N} \sum_{j} (x_j - \bar{x})^2 } \right)    (4.10)
Where “F” is a predefined fractal constant which is the same for all evaluations and ∆x
provides the maximum variation in the observed signal value within the defined
window (length N).
k) A11 (MultiFilter): This attribute involves calculating the first differences followed by
use of two 1-pole HP filters and one 1-pole LP filter. Finally, the envelope is
calculated from the filtered output and then normalized to provide the final attribute
for use.
y_i = x_i - x_{i-1}    (4.11)
c_m = \frac{1}{1 + 2\pi \Delta t / T_m}    (4.12)
H^{(1)}_{m,i} = c_m \left( H^{(1)}_{m,i-1} + y_i - y_{i-1} \right)    (4.13)
H^{(2)}_{m,i} = c_m \left( H^{(2)}_{m,i-1} + H^{(1)}_{m,i} - H^{(1)}_{m,i-1} \right)    (4.14)
L_{m,i} = L_{m,i-1} + (1 - c_m) \left( H^{(2)}_{m,i} - L_{m,i-1} \right)    (4.15)
E_{m,i} = L_{m,i}^2    (4.16)
\langle E_m \rangle_i = \frac{1}{i} \sum_{j=1}^{i} E_{m,j}, \quad \langle \sigma_{E_m} \rangle_i = \left[ \frac{1}{i} \sum_{j=1}^{i} \left( E_{m,j} - \langle E_m \rangle_j \right)^2 \right]^{1/2}    (4.17)
A_{11,m}(i) = \frac{E_{m,i} - \langle E_m \rangle_{i-1}}{\langle \sigma_{E_m} \rangle_{i-1}}    (4.18)
Where T_m gives the corner period, \langle E_m \rangle is the time averaged value and \langle \sigma_{E_m} \rangle is the standard deviation till i-1. The band for “m” is chosen such that there is at least one final T_m value which is greater than the dominant period as observed from the data (method derived from Lomax et al. 2011).
l) A12 (Wavelet): This attribute makes use of the Daubechies 2nd order wavelet transform involving simultaneous LP and HP filters. The filter coefficients and a single DWT step are defined as follows:
H = \left\{ \frac{1+\sqrt{3}}{4\sqrt{2}},\ \frac{3+\sqrt{3}}{4\sqrt{2}},\ \frac{3-\sqrt{3}}{4\sqrt{2}},\ \frac{1-\sqrt{3}}{4\sqrt{2}} \right\}    (4.19)
G = \left\{ \frac{1-\sqrt{3}}{4\sqrt{2}},\ -\frac{3-\sqrt{3}}{4\sqrt{2}},\ \frac{3+\sqrt{3}}{4\sqrt{2}},\ -\frac{1+\sqrt{3}}{4\sqrt{2}} \right\}    (4.20)
YL_k = \sum_{j=0}^{3} H_j \, x_{2k+j}    (4.21)
YH_k = \sum_{j=0}^{3} G_j \, x_{2k+j}    (4.22)
A_{12} = \left\{ YL_k \left( x_{(2k+j) \bmod M} \right),\ YH_k \left( x_{(2k+j) \bmod M} \right) \right\}    (4.23)
Where G is the wavelet filter, H is the scaling filter, YH and YL are the high pass and
low pass data, n is the evaluation window and M is the signal length. In actual
implementation, care must be taken to account for spillover variables to avoid failure of the implementation.
m) A13 (BKM): This attribute is based on the method for autopicking proposed by Baer
et al. 1987. The characteristic function is computed using the following equations:
e_i^2 = x_i^2 + \dot{x}_i^2 \cdot \frac{\sum_{j=1}^{i} x_j^2}{\sum_{j=1}^{i} \dot{x}_j^2}    (4.24)
A_{13}(i) = \frac{e_i^4 - \overline{e_i^4}}{\sigma\left( e_i^4 \right)}    (4.25)
n) A14 (ESM): This attribute is based on the method proposed by Earle et al. 1994. The
characteristic function and the subsequent attribute is computed using the signal
envelop by applying moving window averaging method. The size of the smaller
window is based on specified evaluation window size while the size of the larger
window is 5 times the smaller one. The evaluation scheme is as follows:
E(t) = \sqrt{x(t)^2 + \tilde{x}(t)^2}    (4.26)
A_{14}(i) = \frac{\frac{1}{N} \sum_{j=i}^{i+N} E_j}{\frac{1}{5N} \sum_{j=i-5N}^{i} E_j}    (4.27)
Where \tilde{x}(t) is the Hilbert transform of x(t).
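As an illustration, a minimal Matlab sketch evaluating three of the attributes above (A1, A5 and A6) over sliding windows is shown below; skewness and kurtosis here come from the Statistics Toolbox, and the trailing-window convention is an assumption adopted for simplicity.

% Minimal sketch: sliding-window evaluation of attributes A1, A5 and A6
% for one trace x with user-defined window length N.
x = randn(6000, 1);  N = 50;                   % placeholder trace and window
A1 = movsum(abs(x), [N N]);                    % eq. 4.1: abs-sum over 2N+1 samples
A5 = zeros(size(x));  A6 = zeros(size(x));
for i = N+1:numel(x)
    w = x(i-N:i);                              % trailing window (assumed)
    A5(i) = skewness(w);                       % eq. 4.5
    A6(i) = kurtosis(w);                       % eq. 4.6
end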
4.5 S phase autopicking
Once the necessary data for S phase picking has been extracted as described in the
previous section, a new stacked volume is obtained which is expected to include the
distinctive S phase arrivals. It should however be noted that the S phase arrivals can be
directly obtained from the original characteristic function as well. However, in order to
apply a different model (considering differences in the energy patterns observed with the
different phases), a different training run may be necessary. As a precursor to that,
representative manual picks are made in order to create a sample set for ANN training.
Since the distinctive signal response for S phase arrival is difficult to detect in many
scenarios, the ANN output is used to make a preliminary arrival estimate and the final
pick is made by pruning the location based on the observed variation of a chosen attribute
(which shows the best transition over the arrival for the training dataset). The first
priority is always to try and reuse the attributes used for P phase detection, as this lowers the computation time for the ANN node attributes and enables faster S phase picks.
Figure 4-8 shows the ANN output (variable density plot) superimposed on pruned
(windowed) seismic sections used in S phase picking as well as one of the ANN inputs
superimposed on the corresponding seismic section. The high ANN output probability
values and phase arrivals (from seismic trace) can be observed which indicates the
applicability of the trained network for S phase arrival detection. The corresponding
sample attribute values (shown in Figure 4-8 (a)) indicate the usability of the selected
attribute with transitions seen at the detected arrivals. The ANN output match is used to
obtain a preliminary arrival time which is refined further based on localized information
around this initial pick. A small window is selected with adequate offsets and AIC (or
any other preselected attribute) is re-computed within the short window with the defined
optima selected as the arrival location. Figure 4-9 and Figure 4-10 show two sample
windows from the validation dataset showing associated P and S phase arrivals as
detected by the autopicker (at the end of the defined workflow). The validation set over
which both the P and S phase arrival workflows were implemented had 12 event files
(each event file with 12 to 15 traces with 3 component data). A visual analysis has shown
that most of the P and S phase picks are reasonably good.
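The local refinement step described above can be sketched in Matlab as follows; the preliminary pick, the horizontal-component trace, and the 0.5 s half-window are assumed inputs for illustration.

% Minimal sketch: refine a preliminary ANN-based S pick by re-computing
% AIC in a short window and taking its minimum as the final arrival.
fs = 100;  h = randn(6000, 1);  sPrelim = 3000;% placeholders (assumed)
half = round(0.5*fs);                          % half-window (assumed)
i1 = max(1, sPrelim-half);  i2 = min(numel(h), sPrelim+half);
w  = h(i1:i2);  M = numel(w);  aic = inf(M, 1);
for k = 2:M-2
    aic(k) = k*log(var(w(1:k)) + eps) + (M-k-1)*log(var(w(k+1:M)) + eps);
end
[~, kMin] = min(aic);
sFinal = i1 + kMin - 1;                        % refined S arrival (sample index)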
Figure 4-8: (a) Seismic stacked section overlaid on training attribute (A4: AbsSumVar) and (b) ANN
output probability map (autopicker characteristic function) highlighting successful S phase detections
Figure 4-9: Sample composite gather with multiple vertical component signals and ANN derived P phase
arrivals (blue inserts) for reference. The traces are stacked at arrivals for visualization (red inserts indicate
actual arrival time w.r.t. trigger)
Figure 4-10: Sample composite gather with multiple horizontal component signals and picked S phase
arrivals (blue inserts). The traces are stacked at arrivals for visualization (red inserts indicate actual arrival
time w.r.t. trigger)
Figure 4-11 shows results for five different noise scenarios (SNR varying from
100 to 5) with the synthetic data used in our tests. We observe that even under relatively
low SNR conditions, the algorithm is able to discriminate and provide good picks with
near zero false positives. Both the P as well as the S phase arrivals are graded based on
two independent quality control parameters which are used as a basis for selection of
picks and corresponding traces for further analysis. These attributes (QF1 and QF2) are
described in some detail in the next section of this chapter.
Figure 4-11: Sample synthetic gather with 3 component signals and picked P & S phase arrivals. The noise
increases from (a) SNR = 100, (b) SNR = 50, (c) SNR = 30, (d) SNR = 10 and (e) SNR = 5
4.6 Autopicker results: sample comparative analysis
The validation dataset (from the Geysers field) is run through an independent in house
(USC) first break autopicker as well as an advanced autopicking algorithm used by
LBNL. The LBNL autopicker is designed to make P and S phase picks (from vertical and
horizontal components respectively but without rotation) and then store the best P phase
and S phase picks for additional processing schemes such as inversion. The integrated
autopicker developed at USC also scores the P phase and S phase picks made and the
soundness of the scoring and selection criteria has been tested using an independent
dataset used for the autopicker validation. While the scoring criteria are based on the
logarithmic amplitude ratios (decibel scale), they evaluate the ratios over separate
intervals of the trace as explained in Figure 4-12.
Figure 4-12: Window definitions for QC computation (both P and S phase data)
The two QC factors used in the final selection as discussed above are defined as
follows:
QF_1 = 10 \log_{10} \left[ \frac{\langle x^2 \rangle_{W_2}}{\langle x^2 \rangle_{W_1}} \right]    (4.28)
QF_2 = 10 \log_{10} \left[ \frac{\langle x^2 \rangle_{W_3}}{\langle x^2 \rangle_{W_1}} \right]    (4.29)
Where \langle x^2 \rangle_{W} denotes the mean squared amplitude over the corresponding window defined in Figure 4-12.
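A minimal Matlab sketch of one such dB-scale quality factor is shown below; the window placements and lengths are assumptions for illustration, with the actual definitions following Figure 4-12.

% Minimal sketch: a dB-scale quality factor comparing mean squared
% amplitude after the pick (signal) with that before it (noise).
fs = 100;  x = randn(6000, 1);  p = 3000;      % placeholders (assumed)
nw   = round(0.5*fs);                          % window length (assumed)
pre  = x(max(1, p-nw):p-1);                    % pre-pick noise window
post = x(p:min(numel(x), p+nw-1));             % post-pick signal window
QF1  = 10*log10(mean(post.^2) / (mean(pre.^2) + eps));
% QF2 uses a second pair of windows placed per Figure 4-12.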
Based on the devised selection criteria, the total number of usable P phase and S
phase picks from the validation dataset were extracted and stored for tests. Figure 4-13
shows the variation of the quality control factors obtained during independent P and S
phase autopicking workflows over the entire validation dataset. As observed a very high
percentage of successful P phase picks can possibly be made due to relatively high values
but the percentage of successful S phase picks that can be made with high degree of
confidence is comparatively low (low values). We can also observe most values for S
phase picks to be below 5 which is a randomly associated value for S phase picks that
have not been made. This is expected as the detection of distinct S phase arrivals may or
may not be possible contingent upon various factors including phase energy (event size),
instrument errors, left over low frequency noise, and other effects. However, when
compared with contemporary S phase autopicking algorithms under study, the results
were found to be satisfactory with respect to percentage of successful picks being made
as well as perceived accuracy levels. Comparative tests with sample cases confirm the
validity of the P phase picks being made and also corroborate the high degree of accuracy
obtained for P phase arrivals.
A statistical study is also conducted on the Geysers validation dataset to look at
how close the picks being made are when compared with the contemporary autopicker.
This comparison is based on a selection attribute which gives the maximum possible
value (1) for the pick being at the same location and a minimum value (0) for picks being
outside a specified window with the score decreasing normally over the selected
windows. For P phase picks, a window size of 5 data points is chosen (which corresponds
to ± 0.02 seconds for the Geysers dataset) while for S phase picks a window size of 31 (±
0.15 seconds) is used after considering the inaccuracies associated with S phase energy
arrivals as discussed before. Considering only those cases where both the contemporary
autopicker as well as the ANN autopicker are able to make successful picks (as per
defined criteria), Figure 4-14 gives an outline of the match obtained. We observe higher
“time difference” with S phase picks due to higher uncertainty with S phase picks an d
consequently higher ∆ between arrivals from different algorithms. Also, with P phase
picks, we still observe some ∆ between picks (albeit smaller than S) which is a result of
differing picking criteria used by different algorithms and consequent variations in arrival
times.
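The sketch below illustrates one plausible Python implementation of this match attribute, assuming the Gaussian width is tied to the window half-width; the actual parameterization used in the comparison may differ.

```python
import numpy as np

def pick_match(pick_a, pick_b, window):
    """Similarity score for two picks (sample indices).

    Returns 1.0 when the picks coincide, decays as a Gaussian over
    the window and is 0 outside it. Setting the Gaussian width to
    one third of the half-window is an assumed parameterization.
    """
    delta = abs(pick_a - pick_b)
    half = window / 2.0
    if delta > half:
        return 0.0
    sigma = half / 3.0
    return float(np.exp(-0.5 * (delta / sigma) ** 2))

# e.g. P phase: window of 5 samples; S phase: window of 31 samples
score_p = pick_match(1000, 1002, window=5)
score_s = pick_match(1000, 1010, window=31)
```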
Finally, the designed algorithm was tested on some teleseismic events in order to
validate its application over wider arrival windows. Tests with a sample dataset from
Parkfield (California) have shown results comparable with contemporary autopicking
algorithms. Figure 4-15 shows a sample gather for a single teleseismic event from
Parkfield and the detected phase arrivals matched against manual picks made by an
expert for reference.
4.7 Summary
The need for a standard workflow for quick and accurate detection of P and S phase
arrivals was felt and the workflow was implemented for the Geysers dataset and
subsequently tested on a synthetic dataset for exhaustive analysis. Eventually, it was
55
applied on the data obtained from the North Brawley field for this project. The designed
autopicker works using an integrated ANN model design based approach which involves
a number of input attributes and a few manual picks for training. The input attributes are
devised based on those in use by many contemporary autopickers and other useful
properties as identified during the initial stages of this work. A comparative study of the
picks obtained with other available autopickers show a good match with both P and S
phase arrivals in most cases and show better results in many situations (where other
autopickers under rigorous tests failed to make good picks). Improved P and S phase
arrivals are critical in obtaining good location and velocity inversion results as well as
improved applicability in the area under study (North Brawley field) due to the limited
number of sensors in the passive monitoring array and other limitations which will be
discussed in Chapter 5.
Figure 4-13: Results from the validation dataset. Plot (a) shows the variations in QF's for P phase picks and
plot (b) shows the variation in QF's for S phase picks. QF's are combined using cut-offs to select picks
Figure 4-14: Match between ANN and contemporary (LBNL) autopicker for both P and S phase arrivals
falling within specific window as per discussion. 1 implies perfect match while 0 implies no match. Plot (a)
shows the composite results for P phase picks (~ 136 traces) and plot (b) shows the results for S phase
detections (~221 traces)
Figure 4-15: (a) Picks based on ANN autopicking workflow and (b) comparative analysis with manual
picks for sample teleseismic dataset
Chapter 5: Hypocenters and Preliminary Velocity
Modeling
5.1 Introduction
Earthquake hypocentral event location and velocity tomography have been widely used as
tools to better understand local seismicity as well as to characterize subsurface
seismic anomalies in unconventional reservoirs. Boyle et al. (2011) showed how
microseismic data, including its time-lapse analysis, can provide a valuable understanding of
the unconventional reservoir under study (the Geysers geothermal field). MEQ locations
have also been used extensively for an improved understanding of the hydraulic
fracturing process for more than two decades now (Talebi et al. 1991, House et al. 1996,
etc.). We make use of the MEQ analysis methods involving location and velocity as a
means to characterize the geomechanical properties of our study area. In order to invert
the P and S phase arrival data for velocity modeling as well as for hypocentral locations,
we use open source inversion algorithms namely hypoinverse (location) and SimulPS
(velocity) which are discussed in some detail in this chapter. Substantial data preprocessing
and reformatting are required to make the generated data usable with these
programs; those steps are also briefly outlined in this chapter. The methods
used in this study are widely available in open source literature and the software codes
can be accessed from USGS. There are other algorithms available for similar inversion
problems (such as recent work involving double difference tomography). However, for
the purpose of our own analysis, we consider the use of hypoinverse and SimulPS to
obtain preliminary velocity models due both to their availability and extensive
documentation on the concepts as well as their ease of application. Moreover, as will be
observed later in this chapter, the event count available places limitations on the
application of double difference algorithms.
5.2 Hypoinverse: data preparation & background
After running the extracted data files (Chapter 3) through the ANN autopicking workflow
(Chapter 4), we generate output picks for each phase arrival detected. The data is then
run through a format converter implemented in Matlab which calculates
the necessary values from the P and S phase arrival data and stores them in new phase
files. These files follow the specific format required by HYPOINVERSE-2000, the
software used for preliminary location processing as explained in the introduction. Each
phase file is run separately so that parameters can be modified as necessary based on the
type of data available in the file (phases and picks). Figure 5-1 shows a screenshot from the phase file generation
program which we have developed for our processing scheme.
Based on the number of available phase picks (as per the minimum required for
proper working of hypoinverse algorithm), the program automatically decides whether to
select the event under evaluation for phase card output generation or to discard the same.
The program provides the option of visual verification for selection of picks for final
phase cards, or the program can be set to choose the picks on its own. This selection is
based on robust criteria involving signal SNR, a limit on the maximum time differential
(as a means to restrict events to the zone of interest) and the quality factors defined in
Chapter 4.
generated phase arrival data separately and also generates a concatenated phase file with
all the phase card information stored sequentially for final catalog generation. Figure 5-2
shows a sample generated phase card data for a single event for reference. As is clear
from the data stored, hypoinverse uses the station information and the corresponding time
differentials from a reference time (Δt
P
and Δt
S
) which is typically the start of the
extracted window as well as the corresponding phase information.
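A minimal Python sketch of such an auto-select filter is shown below; the field names and cut-off values are hypothetical placeholders, not the criteria actually encoded in the phase file generation program.

```python
def select_event(picks, min_phases=4, min_snr=3.0,
                 max_dt=2.0, qf1_cut=5.0, qf2_cut=5.0):
    """Decide whether an event qualifies for phase card generation.

    `picks` is a list of dicts with keys 'snr', 'dt' (arrival time
    relative to the extracted window start) and the two QC factors.
    All cut-off values here are illustrative only.
    """
    good = [p for p in picks
            if p['snr'] >= min_snr
            and p['dt'] <= max_dt
            and p['qf1'] >= qf1_cut and p['qf2'] >= qf2_cut]
    # keep the event only if enough phases survive for hypoinverse
    return good if len(good) >= min_phases else None
```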
Figure 5-1: Sample phase card selection (manual override window to bypass auto-select for improved
performance). Black traces are the vertical components and blue ones are the horizontal components
Figure 5-2: Sample phase card used as input for hypocentral inversion for location
A complete understanding of the HYPOINVERSE-2000 program including the
underlying hypoinverse algorithm as well as operational elements and variables can be
referenced using the HYPOINVERSE-2000 manual (Klein 2003). As already discussed,
the hypoinverse program determines earthquake locations and magnitudes from seismic
network data (first break P and S phase arrival times for location, amplitudes and coda
duration for magnitude calculation). Since our work at this stage is focused on the final
velocity models based on event locations, magnitude calculations using coda duration
(coda magnitudes) or maximum amplitudes (Lee et al. 1972) are not used in our
evaluation, but could be added with simple modifications within the
software implementation scheme. Figure 5-3 shows a sample hypoinverse batch file used
in our analysis for reference. This file helps specify values for all the variables the
program makes use of. We use an iterative process of making modifications in some
variables set in the batch file while keeping others constant and test different program
runs for the best possible results (location errors) discussed later in this chapter.
Figure 5-3: Sample hypoinverse batch file with necessary commands for location inversion. Three letter
commands are based on hypoinverse syntax
The hypoinverse program is complex and elaborate, with multiple algorithms
embedded within it. An exhaustive discussion of its workings is beyond the scope of this
manuscript; however, we briefly outline the program's operation as a
reference for understanding the methodology involved. The program requires a few basic
inputs for hypocentral inversion. These include the station data to provide the latitudinal
and longitudinal locations of the same, a crustal model (or multiple models) to define the
velocity variation (only vertical variation is allowed) and a phase file generated as per our
discussion earlier in this section. The background calculation involved does not include
any ray tracing scheme (as is the case with SimulPS described later). Instead it relies on
simple numerical weighting of travel times and travel time derivatives. Based on the
epicentral locations, different crustal model may be used (location defines the model
weight, taper within transition zone as well as normalization). A typical hypocentral
64
location algorithm involves the following evaluation scheme: Assuming any earthquake
location, (x, y, z) and origin time (t
0
), the travel time and its derivatives to i
th
station can
be easily defined. The differences between the predicted and observed travel times
(residuals) are calculated as:
(5.1)
The residuals are approximated using a Taylor series expansion to obtain
normalized weighted residuals (∑ w_i = 1), with the weighted residuals given as:

\( w_i r_i = w_i \left( \frac{\partial T_i}{\partial x}\Delta x + \frac{\partial T_i}{\partial y}\Delta y + \frac{\partial T_i}{\partial z}\Delta z + \Delta t_0 \right) \)   (5.2)
In matrix notation, we can redefine the above equation in the following format:

\( \mathbf{r} = A\,\mathbf{m} \)   (5.3)

\( \mathbf{m} = (\Delta x, \Delta y, \Delta z, \Delta t_0)^T \)   (5.4)

\( A_{ij} = \partial T_i / \partial m_j \)   (5.5)

These equations in turn give the following modified formulation of the weighted
residuals for the inversion algorithm:

\( W = \mathrm{diag}(w_1, \ldots, w_n) \)   (5.6)

\( W\mathbf{r} = \left[ W A \right] \mathbf{m} \)   (5.7)

A least squares inverse of the above equation can be calculated to minimize the
sum of squares of the residuals:

\( \mathbf{m} = (A^T W^T W A)^{-1} A^T W^T W\,\mathbf{r} \)   (5.8)
Using an initial guess, this equation can be applied iteratively to obtain a solution to
the hypocentral location problem; this forms the basic linearized least squares solution
to the earthquake location problem as implemented within the hypoinverse algorithm. Some
additional steps follow the basic inversion scheme, including centering (Lee et
al. 1972) and scaling (Smith 1976), which improve the numerical accuracy significantly.
We use the output files to extract necessary information for the catalog (Figure 5-4). We
then use the necessary information (latitudes, longitudes and depths) for analysis as well
as for modifications and generating necessary input files for the SimulPS inversion
program.
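The core update step can be sketched in Python as follows; this is a generic Geiger-style solve written for illustration, not the hypoinverse implementation itself.

```python
import numpy as np

def geiger_step(A, residuals, weights):
    """One linearized least-squares location update (Geiger's method).

    A holds the travel-time partial derivatives w.r.t. (x, y, z, t0),
    one row per observation; the weighted normal equations are solved
    for the hypocentral perturbation (dx, dy, dz, dt0).
    """
    W = np.diag(weights)
    lhs = A.T @ W @ A
    rhs = A.T @ W @ residuals
    return np.linalg.solve(lhs, rhs)

# Iterate: apply the perturbation to (x, y, z, t0), recompute A and
# the residuals, and repeat until the RMS residual change falls
# below a convergence tolerance.
```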
Figure 5-4: Event catalog generated from Hypoinverse inversion algorithm
5.3 Hypoinverse results
Figure 5-5 shows preliminary event locations as well as magnitude distribution
(magnitude is calculated here based on maximum amplitude data). This information is
plotted based on preliminary catalogs provided by the original data acquisition entity.
Most events seem to fall in the 7 km to 10 km depth range and are low in magnitude
(M_W < 1.5), indicating relatively small MEQ's.
plotted out into an MEQ volume for visualization. Figure 5-6 (b) displays possible error
differentials using circular RMS error ellipsoids for a small data subset in order to better
understand the inherent lack of precision within the available data and algorithms. Both
these plots show inversion results from the hypoinverse run. The total number of detected
events extracted for phase data generation, based on the time stamps of the triggered event
files, is about 3200 over the entire study period. However, owing to limitations which
are discussed in some detail later, the final catalog is limited to < 200 events (Figure 5-6
(a)). These events also have a high degree of RMS errors associated with them due to the
said limitations. The data is constrained based on a depth range of 0 to 6 km to limit the
interpretation over the depth range of interest to us (due to shallow nature of the resource
and all of the available drilled wells). Improved location algorithms such as HypoDD
(Waldhauser et al. 2001) can potentially be used to obtain better location estimates with
lower associated errors. However, the inherent limitations posed by the use of a small and
non-optimized passive monitoring array with limited ray path coverage, as is the case
with our study area, places a major limitation on the improvements that may be possible
without addressing the shortcomings mentioned in our discussion. A later section
details ways of optimizing the monitoring array design to improve the overall
results.
Figure 5-7 shows two cross-sectional map extracts with events superimposed on
the background velocity model used to verify locations of possible velocity anomalies
and associated MEQ's as detected. We can clearly observe very small lateral velocity
variations within the available models at the shallower depths of interest (most lateral
variations are at deeper intervals along with tied events). It is quite clear from the results
shared here that the inversion scheme used does have major limitations including
relatively high degree of uncertainty associated with the event locations as well as
limitations with the total number of extractable events within the timeframe of study. A
detailed discussion on the reasons for observed inaccuracy of hypocentral locations and
high associated error ellipsoids is presented in Table 5-1. In the next section, we discuss
possible improvements in the results from the use of the improved autopicker and how it
partially overcomes the otherwise serious challenge of not having enough data to work
with.
Figure 5-5: MEQ spread generated using supplied catalog (Zapata Inc.) with both hypocenters and epicenters mapped for reference. (STA/LTA picking
routine and Hypo71 location inversion algorithm)
Figure 5-6: (a) MEQ event cloud with sample injection and production well locations for reference. (b)
Sample event subset with RMS errors plotted on the X-Y axis
Figure 5-7: MEQ event cloud with compressional velocity (km/s) overlays at (a) reference depth of 2.5 km
and (b) reference longitude of -115.56. Event clouds are concentrated around velocity anomalies indicative
of triggered events due to injection induced perturbations.
5.4 Comparative analysis with contemporary picking algorithms
While many contemporary autopicking algorithms are available for possible use to
validate the efficacy of the ANN based approach towards picking, we use a simple
modified AIC based autopicker developed in-house at USC (based on an autoregressive
AIC attribute described by Sleeman et al. 1999 and others). The comparative analysis is
carried out on a small data subset where the contemporary algorithm gives a
widely distributed event cloud; the ANN autopicking algorithm is found to
improve the locations significantly when used with the same inversion algorithm
(hypoinverse). The RMS location errors also reduce significantly with the use of the ANN
algorithm. This validates the improvements over the small dataset, in line with
the expectation of observable improvement in event locations from a more
robust autopicker. Figure 5-8 shows the selected subset with the old (blue) and new (red)
relocated events. We can clearly see a reduction in the spatial spread of the events which
makes more sense when viewed in relation to the actual survey area, the injection scheme
being followed, and the consequent expected seismicity.
Figure 5-8: Figure indicating relocation results on the data subset used for testing. Old events used AIC
based autopicker and the new events are with ANN autopicker
5.5 Limitations with algorithm & possible improvements
As understood from the preceding discussion, the located hypocenters carry a relatively
large degree of uncertainty which can significantly affect the final results used in our
interpretation. Many factors create this uncertainty, and methods are available to
overcome them. Table 5-1 summarizes the basic issues observed and the available
solutions.
Table 5-1: Issues with microseismic data collection scheme used in this study and possible solutions

| Specific issue | Current status | Possible solutions |
| --- | --- | --- |
| Microseismic array design | Limited number of surface stations leading to inadequate coverage | Increased surface station count and improved array design based on well locations; possible use of GA (Raymer et al. 2004) |
| Inversion algorithm | Use of hypoinverse algorithm which lacks centering, scaling, etc. | Use of more robust relocation algorithms including TomoDD (Zhang et al. 2003) |
| Ray coverage | Suboptimal ray illumination coverage due to widely dispersed areal spread of 5 available stations | Increase station density and use basic concepts of ray tracing to better design the network based on known subsurface features (if applicable) |
It should be noted that while some of these issues may appear resolvable in any
setting, complex interplays between them can make it very difficult to optimize the
results within the limits of the survey
design. Improvements can be introduced with superior autopicking algorithms as well as
better hypocentral inversion schemes. While we have already introduced the former,
improvements in hypocentral location algorithms have not been looked into in this study
due to lack of adequate high resolution catalog data from our analysis. Moreover,
adequate number of recording stations with an improved spatial spread is extremely
critical in such developments to make full use of the available analysis schemes. The next
section deals with the requirement for improved survey design and potential design
elements for such surveys. This is particularly necessary in our case due to the absence of
adequate receiver coverage and sparse receiver count within the study area.
5.6 Optimization of MEQ survey design based on Rays and Moments
MEQ receiver patch design is governed by a myriad of considerations, and design
optimization can quickly become an extremely complex task. Some typical design
considerations include site-specific constraints (surface features, legal restrictions, costs,
noise sources and available locations), available tools and time windows,
tooling design, the desired types of analysis and processing, the local crustal model, zones of
tooling design, types of analysis and processing desired for, local crustal model, zones of
interest, geologic considerations, redundancy and trade-offs, attenuation effects and so
on. While the list can be exhaustive, it is possible to consider those design elements that
are of vital importance and then to design the receiver array taking some data redundancy
considerations into account.
Amongst the many possible design issues that influence array performance,
optimization for ray coverage and moment tensor inversion are selected for the modeling
studies in our optimization workflow. Based on the potential location of the monitoring
arrays and receiver patch design, all potential receiver locations are modeled in 3D.
Similarly, based on the identified zones of interest, artificial sources are placed at depth
of interest to mimic actual events. The first component for optimization is the actual
ray-trace coverage based on all source-receiver pairs. The separation of the ray traces and the actual
ray lengths have a direct bearing on the final results obtained during hypocentral
inversion runs. The second component for optimization is the moment tensor inversion
algorithm (least square inversion for focal mechanisms) normally in use. Here the
stability of the inversion matrix (in the presence of noise and attenuation effects) plays
the most important role as far as design considerations go and is optimized for in the
study. Figure 5-9 shows a schematic view of both these elements and how their
optimization relates to their evaluation methodology.
Figure 5-9: Background on selected design elements with (1) showing workflow for optimized ray
coverage and (2) showing the workflow for optimized moment tensor inversion.
In case of ray trace focusing, a generalized solution for arrival times based on
model slowness is represented as d = AM, where the ij-th element of the A matrix denotes
the ray length within the corresponding model element. We use the concept of singularity,
where zero or near-zero eigenvalues of the matrix occur if rows in A are a linear
combination of other rows, which in turn indicates redundancy in the data (Curtis et al.
2004). The generalized equation for the quality measure used is as follows:

\( QF_{1,t} = \sum_{src} \sum_{o\,Rec} w \left[ \frac{\left| a_{t\,Rec} \cdot a_{o\,Rec} \right|}{\left\| a_{t\,Rec} \right\| \left\| a_{o\,Rec} \right\|} \right] \)   (5.9)
Here “a” gives the partial derivative for the source/receiver pair and “w” gives
the relative weight assigned to each datum based on a ray-length function used as a proxy
for attenuation. The subscript “t Rec.” corresponds to the target receiver,
“o Rec.” to the other receivers in the array, and “src” to the source parameters.
Since each row of A corresponds to a single datum, singularity of A indicates redundancy
in the ray matrix (as per the discussion on "data angle" by Sabatier 1977). We start
iterating with a design which involves all of the potential source/receiver pairs. The
resulting measure shows the weighted angle between each row and the space spanned by
all other rows in the matrix. At the end of each iteration, those receivers which show the
smallest measure are pruned, and the iterative selection process is continued with the
shortened array until an adequate number of receivers have been removed. The final
quality measure for the component associated with the
iteration is calculated as follows:
(5.10)
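A simplified pairwise approximation of this row-angle redundancy measure is sketched below in Python; the full measure uses the angle between each row and the subspace spanned by all other rows, so this is an illustration of the idea rather than the exact computation.

```python
import numpy as np

def row_angle_measure(A, weights):
    """Redundancy measure for each row of the ray matrix A.

    For every datum (row), sums the weighted absolute cosine of the
    angle to every other row, in the spirit of Eq. (5.9): rows nearly
    parallel to others (small angles, high |cos|) score high and are
    pruning candidates. Weights (e.g. ray-length based attenuation
    proxies) are supplied by the caller.
    """
    norms = np.linalg.norm(A, axis=1) + 1e-12
    unit = A / norms[:, None]
    cos = np.abs(unit @ unit.T)             # pairwise |cos(angle)|
    np.fill_diagonal(cos, 0.0)              # exclude self-comparison
    return (cos * weights[None, :]).sum(axis=1)
```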
The second element used for optimization is the moment tensor inversion
component. Generalized least squares inversion of 3C amplitudes of the P and S phase direct
arrivals can be used to retrieve the moment tensor for any event and is one of the standard
methodologies in use today. The relation between observed first arrival data and moment
tensor elements can be written in matrix form as:
\( \mathbf{d} = A\,\mathbf{m} \)   (5.11)

\( \mathbf{d} = \left( d_1, d_2, \ldots, d_N \right)^T \)   (5.12)

\( \mathbf{m} = \left( M_{xx}, M_{yy}, M_{zz}, M_{xy}, M_{xz}, M_{yz} \right)^T \)   (5.13)
Where d gives the observed amplitudes of the direct arrivals and m defines the
components of the moment tensor. The A matrix can be evaluated using the P and S wave
particle motion equations, which make use of direction cosines (γ), travel times (τ),
density (ρ), phase velocities (α and β) and the displacement-time function at the source (w),
to solve for the moment tensor (Aki et al. 2002):
\( u_n^P = \left\{ \frac{\gamma_n \gamma_p \gamma_q}{4\pi\rho\alpha^3 r} \right\} M_{pq}\,\dot{w}(t - \tau^P) \)   (5.14)

\( u_n^S = \left\{ \frac{(\delta_{np} - \gamma_n\gamma_p)\,\gamma_q}{4\pi\rho\beta^3 r} \right\} M_{pq}\,\dot{w}(t - \tau^S) \)   (5.15)
Due to the large number of source-receiver pairs, a least squares solution has to be
obtained for the resulting overdetermined system, which can be given in matrix form as
\( \mathbf{m} = (A^T A)^{-1} A^T \mathbf{d} \). The influence of array design on the
computation of the generalized inverse has been studied extensively by Eaton (2011),
where the stability of the inversion matrix \( B = A^T A \) (the condition number of B)
is tested. The condition number (k) indicates
the stability of the matrix and we use it as a proxy for the degree of instability of the
generalized inverse. Eaton has shown that for the purpose of stable inversion for seismic
moment tensors, receivers located at the perimeter of the designed patch are the most
important. Since numerical tests indicate that the area of receiver patch may in itself not
be a sufficient indicator (it will also depend on the distance of the patch from the source),
solid angle can be used as a good proxy for parameterization of the optimization problem.
The final quality measure for the moment tensor inversion component is given by:
(5.16)
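As an illustration, the stability proxy can be evaluated directly from a candidate design's amplitude kernel matrix; the sketch below assumes A has already been assembled from the particle motion equations (5.14)-(5.15).

```python
import numpy as np

def mt_stability(A):
    """Condition number of B = A^T A as a proxy for moment tensor
    inversion stability (after Eaton, 2011); lower is better."""
    B = A.T @ A
    return np.linalg.cond(B)

# Compare candidate receiver patches: the design whose kernel matrix
# A yields the smallest condition number gives the most stable
# generalized inverse m = (A^T A)^-1 A^T d.
```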
An experimental setup including the necessary observation wells, production
well, potential source locations, receiver spread and adequately representative velocity
models is used for the optimization study. A generalized setup with a rectangular surface array
is created and tested for optimization possibilities (note that tests with downhole arrays
are also shared, even though they are not directly relevant to our study area). Figure
5-10 shows the 3D model as well as the surface/ subsurface setup and some pseudo
sources. A ray tracing algorithm is used to populate the ray-trace matrix for analysis.
Figure 5-10: Setup with surface, subsurface vertical & horizontal arrays with pseudo sources (green inserts)
Figure 5-11 shows a typical optimization framework as implemented for model
testing towards array design optimization. The framework is flexible and allows new
framework elements to be added and unnecessary elements (based on actual study area
and associated conditions) to be removed. While current implementation only involves
two elements as discussed earlier, new elements can be easily accommodated by adding
new framework functions, increasing the number of elements and assigning new
weighting factors to the new elements. It is, however, necessary that the framework
functions provide a quality factor for each receiver (based on all source/receiver pairs
if required) for proper computation of the “cost function” used in iterative array pruning.
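A minimal sketch of this pruning loop is given below, assuming each framework function returns one quality factor per remaining receiver; the function and variable names are illustrative placeholders.

```python
import numpy as np

def prune_array(receivers, qf_funcs, weights, n_keep):
    """Iteratively prune receivers using a weighted cost function.

    `qf_funcs` is a list of framework functions, each returning one
    quality factor per remaining receiver; `weights` are the assigned
    element weights (w1, w2, ...). The receiver with the lowest
    composite score is removed at each iteration.
    """
    active = list(range(len(receivers)))
    while len(active) > n_keep:
        subset = [receivers[i] for i in active]
        score = np.zeros(len(active))
        for w, qf in zip(weights, qf_funcs):
            score += w * np.asarray(qf(subset))  # w1*QF1 + w2*QF2 + ...
        active.pop(int(np.argmin(score)))        # drop worst receiver
    return [receivers[i] for i in active]
```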
Figure 5-11: Passive seismic survey optimization workflow design
Initial test results with a constant velocity model and a single source located on
the central vertical axis are shared in Figure 5-12. This controlled modeling run allows
validation of the final algorithm (as implemented) and allows generalized observations on
the behavior of the optimization workflow for the two selected optimization criteria. The
quality measures QF_1 and QF_2 are used to generate the optimization parameter
(w_1·QF_1 + w_2·QF_2), or cost function. We observe that optimization with full weight
on ray focusing prunes receivers from the periphery and iteratively moves closer towards
the receiver patch center. For moment tensor inversion optimization, the receivers located
on or close to the
radial arms (mutually perpendicular) hold more importance. We also observe that with
equal weight given to both quality measures, the selected sensors are closer to the actual
source and the sensors from the periphery get pruned. However, some sensors along the
radial arms (showing relatively high solid angles) remain important and stay within the
selected patch. In essence, the radial pattern (a common deployment scheme followed by many
service companies in the industry) seems to be an optimal surface array design.
Additional arms may be necessary depending on the complexity of the source mechanism
for the modeled events. It should be noted that while other important considerations such
as physical limitations with surface deployments as well as noise and attenuation effects
can have substantive effect on the design optimization workflow, they have not been
considered in this study explicitly (though an attenuation component has been added
within the first QF component computation shared earlier).
Figure 5-12: Sample runs with non-variant velocity model, surface receiver array (grey/ red dots) and
source at central vertical (green dot). Subplot (a) shows results with full weight for ray-trace focusing, (b)
shows results for full weight on moment tensor inversion and (c) shows results with equal weight assigned
to the two components. Red dotted locations are those finally selected based on parameter
Figure 5-13: Sample runs with complex velocity model, surface receiver array and sources at lateral offset.
Subplot (a) shows results with full weight for ray-trace focusing, (b) shows results for full weight on
moment tensor inversion and (c) shows results with equal weight assigned to the two components. Red
dotted locations are those finally selected based on parameter
Once the algorithm is validated, sources are relocated at lateral offsets (from
central vertical axis), velocity is modified to a more complex layered model and various
receiver configurations are tried as per the baseline configurations introduced in the
earlier section. This is done to simulate a realistic scenario and model the best possible
designs for eventual deployment. As observed in Figure 5-13, again the ray trace
focusing based optimization leads to denser receiver spread close to the actual event
cloud and the pruning moves from outer periphery towards the zone of interest. For
moment tensor inversion optimization, receivers along the radial lengths hold importance
(the perpendicular radial arm is not selected in runs due to very high values being
converted to zeros). The optimization curves also show that the results are relatively
poorly conditioned for ray focusing. We hypothesize this to be an artifact of
source-receiver separation, which is highest for the surface array. Moreover, slight variations
in velocity profiles (dipping layers incorporated in our tests) lead to substantial
perturbations in the ray trace matrix (compared with the scenario with no
velocity variations). It is important to remember that though solid angles can be used as
a good proxy for identifying the best receivers within the patch,
receivers beyond a critical separation must be pruned, as attenuation effects can lead to
significant degradation in SNR.
The next case (Figure 5-14) involves a single vertical array at a lateral offset from
the source locations (zone of interest). We again observe that in all three test cases (ray
trace focusing, moment tensor and a combination of the two), the sensors closest to
the zone of interest remain important and get selected. It is important to note that single
vertical arrays show zero solid angle (and therefore moment tensor optimization does not
yield actual optimized array design). For moment tensor optimization, an optimal
solution could be the presence of a few receivers at the surface and the main vertical
array within the wellbore close to the zone of interest. This becomes necessary when
considering cost/ design issues associated with geophones for wellbore deployments.
Figure 5-15 shows sample runs with both the surface and the vertical arrays
deployed together. This allows analysis of the impact of the first array on the selection of
receiver locations on the second array and vice versa. Again maximum weight on ray
trace focusing based optimization leads to selection of receivers closest to the zone of
interest for both arrays. For optimized moment tensor inversion problem, the surface
array provides the best solution with radial pattern emerging as optimal again (and the
vertical array seems redundant with few receivers selected). However, solid angle
analysis may provide different results as selecting some receivers of the wellbore array
may provide good solid angle projections when considered in conjunction with surface
receiver locations (though such deployments cannot provide backup receivers due to
design limitations). Again the parameter solutions for the three test cases show slightly
unconditioned results (considered to be carried over effect of surface arrays). Another
important scheme tested for and discussed later is the presence of multiple vertical arrays
which provides adequate sensor count for good moment tensor inversion results (through
creation of pseudo surfaces) thereby making surface arrays redundant (specifically when
wellbores used for receiver deployment completely surround the zone of interest).
Figure 5-14: Sample runs with complex velocity model, vertical wellbore receiver array and sources at
lateral offset. Subplot (a) shows results with full weight for ray-trace focusing, (b) shows results for full
weight on moment tensor inversion and (c) shows results with equal weight assigned to the two
components. Red dotted locations are those finally selected based on parameter
Figure 5-15: Sample runs with complex velocity model, surface + vertical wellbore receiver array and
sources at lateral offset. Subplot (a) shows results with full weight for ray-trace focusing, (b) shows results
for full weight on moment tensor inversion and (c) shows results with equal weight assigned to the two
components. Red dotted locations are those finally selected based on parameter
Figure 5-16: Sample runs with complex velocity model, surface + horizontal wellbore receiver array and
sources at lateral offset. Subplot (a) shows results with full weight for ray-trace focusing, (b) shows results
for full weight on moment tensor inversion and (c) shows results with equal weight assigned to the two
components. Red dotted locations are those finally selected based on parameter
Figure 5-17: Sample runs with complex velocity model, vertical wellbore receiver array and sources at
lateral offset surrounding wellbore. Subplot (a) shows results with full weight for ray-trace focusing, (b)
shows results for full weight on moment tensor inversion and (c) shows results with equal weight assigned
to the two components. Red dotted locations are those finally selected based on parameter
The next configuration tested involves both a horizontal wellbore array and a surface
array (Figure 5-16). The horizontal array is configured parallel to the zone of interest
(laterally offset but at the same depth as the reference sources). Again, the receivers in the
wellbore get preference when ray focusing based optimization has the maximum
weighted impact on the design. Within the wellbore, those receivers closest to the zone get
pruned first and the process seems to spread away from the center (zone) towards the
periphery of the receiver array. An optimized run tends to select some radially deployed
receivers at the surface as well (as moment tensor inversion based optimization requires
laterally separated receivers creating a relatively large solid angle, which is not possible
with a single horizontal wellbore array). Thus the presence of a wellbore array again
indicates that a relatively small, well designed surface array may suffice (a few well
positioned receivers based on design considerations).
Finally, Figure 5-17 shows a configuration with multiple vertical wellbore arrays
oriented symmetrically about a single source. Unlike the previous case of a single vertical
wellbore array, where the selected sensors are all located close to the zone of
interest and the sources fall within a narrow field of view, in this case the final
configurations all involve sensors selected close to the zone of interest but spread
over pseudo surfaces. With weight given to the moment tensor inversion parameter, the
results seem to conform to solid angle computations, as the sensors are selected on
inclined pseudo surfaces. This is potentially due to the effect of sensors from multiple
fields of view on the final optimization parameter.
Based on the results obtained for the various configurations discussed, we can
extract receiver locations which correspond to the local optimum for a few test cases in
order to verify their possible use within the candidate receiver configurations. This
exercise is recommended, and is particularly useful for making an optimal choice when
the outputs (optimization curves) are highly irregular and unconditioned. Solid
angle analysis can also be carried out to validate observations made from optimized array
designs, or as one of the selection criteria for choosing the optimal design from multiple
potential candidates. The best method is to encode a maximum solid angle calculation
within the optimization algorithm and use it as an additional element of selection.
While tests on solid angle based selection have not been conducted within the
optimization workflow for this study, observations on maximum receiver patch areas
(within limitations posed by attenuation) can be made either visually or through
rudimentary calculations.
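One such rudimentary calculation is sketched below using the Van Oosterom-Strackee formula for the solid angle subtended at the source by a triangulated receiver patch; the coordinates in the usage example are purely illustrative.

```python
import numpy as np

def tri_solid_angle(r1, r2, r3):
    """Solid angle subtended at the origin (source) by a triangle of
    receiver positions, via the Van Oosterom-Strackee formula. A
    receiver patch can be triangulated and the contributions summed."""
    n1, n2, n3 = (np.linalg.norm(v) for v in (r1, r2, r3))
    num = np.dot(r1, np.cross(r2, r3))
    den = (n1 * n2 * n3 + np.dot(r1, r2) * n3
           + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(num, den)

# e.g. surface patch corners (meters) relative to a source 2 km below
omega = tri_solid_angle(np.array([-500., -500., 2000.]),
                        np.array([ 500., -500., 2000.]),
                        np.array([   0.,  500., 2000.]))
```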
Based on the modeling and design optimization work, we can conclude that the
surface array is highly sensitive to subtle changes in the subsurface structure of the
reservoir. (Sensitivity would normally follow the sequence surface > vertical > horizontal
due to large velocity variations with depth compared with lateral variations). This is
further validated by the unstable design solutions observed for surface arrays. For vertical
arrays, it is best to have the sensors closest to the actual zone of interest (which also
makes intuitive sense as this would reduce estimation uncertainties). However, if vertical/
horizontal wells are the only observation wells, it is necessary to place a few surface
sensors taking solid angle criteria into account (in case moment tensor solutions are
important). The other way is to have multiple wellbore arrays distributed around the zone
of interest. Moment tensor inversions for source characterization should be done with the
boundary elements alone (of the solid angle subtended by the arrays) during final
processing/ analysis and with the minimum number of elements possible (taking noise
issues into account). However, for the array design and deployment, adequate backup is
desired for some degree of redundancy. Actual noise conditions in the field and their
impact on sensors are difficult to predict but can have a substantial degrading effect and
impact design suitability. High noise environments require built-in redundancy in
the designs or multiple “deployment scenarios” based on modeled noise
conditions. The algorithm allows us to assign noise based weighting
coefficients to receiver locations before the optimization runs begin, so that noise is
adequately factored into the optimization workflow. Properly designed well arrays can make large
surface arrays redundant. However, smaller surface arrays are still necessary when the
number of observation wells is limited. For poorly constrained scenarios or badly conditioned
solutions, physical constraints may be imposed or multiple solutions may be compared
based on local optima and conclusions drawn from such comparisons. Constraints can
also provide solutions which are more stable. While some optimal array designs show a
segmented receiver spread in vertical wells (as observed in the optimization runs
discussed earlier for the test case with 4 vertical wells), this is not seen that often in
actual field implementations, as tools have standardized-length interconnects and it makes
more sense to cover the complete acquisition interval instead of using a complex array
design. Finally, while only ray-trace focusing and moment tensor
inversion optimizations have been tested for in this study, other design factors can also be
easily added within the optimization framework. Potential candidate factors include
arrival time differentials (based on move-out), event amplitudes, attenuation pseudo
factors, polarity, noise, etc. Moreover, based on known limitations on the availability of
wells, surface conditions, etc., preset arrays can be designed and included, with only
those arrays which provide such flexibility set for optimization.
This modeling work demonstrates the potential for optimized survey designs
before injection programs can be taken up in unconventional reservoirs. This is of vital
importance as data generated can be valuable for improving the reservoir development
cycle and optimization tends to maximize the benefits by keeping the costs as low as
possible. However, this optimization framework remains a proposal; its results have not
been used elsewhere in our work, as the framework has not been deployed in the actual
reservoir setting.
5.7 SimulPS for velocity model generation
The SimulPS versions are all extensions of the widely used and well tested Simul code family,
first introduced by Thurber (1983) and further developed by Thurber et al.
(1987) and Eberhart-Phillips (1986 & 1990) among others. The simultaneous inversion
in SimulPS is a damped least squares, full matrix inversion algorithm for natural local
earthquakes (the norm of the model perturbations is weighted, combined with the squared
data misfit, and minimized at each iteration). Arrival times are used to produce the
necessary velocity models, namely V_P and V_P/V_S (using t_S − t_P inversion as per
Thurber 1993). The inverted model shows long columnar velocity
anomalies around the periphery of the model due to planes of nodes at the top, bottom
and at the four sides of the original model placed at infinity (high value compared to
model dimensions).
Inadequate spatial sampling and strong dependence of hypocenters on velocity
models can lead to non-linearity within the system. Homogeneous distributions of both
events and stations (within the study area) are ideally required to overcome the problem
of under-sampled (noisy) and over-sampled (biased) volumes.
In this regard, a discussion on array optimization has been shared in the preceding
section. Also, the final events chosen from the catalog for inversion need to have a
relatively high degree of accuracy, with good picks and broad azimuthal coverage around
the hypocenters. The pruning for dataset selection is done based on the quality control
parameters for the picks (implemented within the autopicker used as per discussion in
section 4.6) and the RMS errors for individual catalog entries. The inversion process
itself is iterative and ill-constrained hypocenters are removed with each progressive step.
The process is initialized with 2D gridded velocity models (X & Y) within the volume of
interest. We then progressively iterate to get finer 3-D models based on the hypocentral
dataset.
Even though the tomographic dataset was inadequate to constrain the velocity
model fully, a detailed a-priori velocity model was not used due to the lack of sufficient
information. The ideal choice (in the absence of good raypath coverage,
as is the case here) would have been an improved a-priori 3D velocity model
for the inversion. Instead, we use the coarse model and refine it using seismic
derived impedance data as a structural constraint (discussed in sections 7.2 and 7.3).
This is not possible for the entire model volume (as used in SimulPS) due to the
constraints placed by the distribution of microseismic recorders and the actual 3D seismic
survey location (as seen in Figure 5-20). While a detailed discussion of the algorithm
underlying SimulPS-14 used in our study is beyond the scope of this discussion, the basic
algorithm governing inversion for V_P is shared here. Inversion for V_P/V_S follows a more
complicated algorithm but is similar in its basic theoretical approach.
The earth's structure is represented in 3D by velocity defined at discrete grid
points and the velocity within this volume at any other location is calculated using
interpolation with data from surrounding grid points (8 closest data points in the same
depth interval). The linearized equation for simultaneous inversion in terms of arrival
time residuals is given as follows (Thurber, 1983):
\( r_i = \frac{\partial T_i}{\partial x}\Delta x_e + \frac{\partial T_i}{\partial y}\Delta y_e + \frac{\partial T_i}{\partial z}\Delta z_e + \Delta t_e + \sum_n \frac{\partial T_i}{\partial v_n}\Delta v_n \)   (5.17)
Where Δt_e, Δx_e, Δy_e and Δz_e are perturbations to the hypocentral event origin
time and location, Δv_n are perturbations to the velocity parameters, and the partial
derivatives are those of the arrival times with respect to these parameters. Each arrival
(as defined in the earthquake input file) has one
corresponding residual equation. In order to calculate the arrival time, a ray tracing
algorithm is required. There are many single and multi-dimensional ray tracers available
in the literature for possible use. However, the ray tracer originally used by Thurber (ART)
generates a large number of candidate circular arcs connecting the source and receiver,
and the travel time along each arc (ray path) is computed using the
defined 3D velocity model. Figure 5-18 shows how this ray tracing scheme works in
theory. The true first arrival ray path is selected from the entire set of ray paths based on
shortest calculated travel time. While the use of other ray tracers is possible, the major
advantage of the ART algorithm is the short computation time involved.
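A minimal sketch of the travel time computation along one such discretized arc is given below; the velocity interpolation function is assumed to be supplied by the gridded model.

```python
import numpy as np

def path_travel_time(path_xyz, velocity_at):
    """Travel time along a discretized ray path (as in ART, where
    each candidate arc is sampled and the fastest path is kept).

    `path_xyz` is an (n, 3) array of points along one arc and
    `velocity_at(x, y, z)` interpolates the gridded 3D model.
    """
    t = 0.0
    for p0, p1 in zip(path_xyz[:-1], path_xyz[1:]):
        ds = np.linalg.norm(p1 - p0)        # segment length
        mid = 0.5 * (p0 + p1)               # evaluate V at mid-point
        t += ds / velocity_at(*mid)         # slowness * length
    return t

# first arrival: min(path_travel_time(arc, v) for arc in candidate_arcs)
```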
Figure 5-18: Perturbations in the radii and dip of the plane of arcs to sample the entire volume of interest
(Thurber, 1983)
The hypocentral partial derivatives in the residuals equation can be further
modified as follows:
\( \frac{\partial T_i}{\partial x_e} = -\frac{1}{V_e}\frac{dx}{ds}, \quad \frac{\partial T_i}{\partial y_e} = -\frac{1}{V_e}\frac{dy}{ds}, \quad \frac{\partial T_i}{\partial z_e} = -\frac{1}{V_e}\frac{dz}{ds} \)   (5.18)
Where V_e is the velocity at the hypocenter, s represents the path length, and the
derivatives w.r.t. s are the components of unit vectors tangent to the ray paths at the
hypocenters, pointing in the direction of ray propagation. In order to compute the
variation in velocity parameters in a computationally simple framework, the partial
derivative w.r.t. slowness is utilized (with the ray path divided into segments to
approximate the path integral):

\( \frac{\partial T_i}{\partial u_n} = \sum_m \Delta S_m\, \frac{\partial U(x_m, y_m, z_m)}{\partial u_n} \)   (5.19)
Where u_n = 1/v_n, U(x, y, z) = 1/V(x, y, z), ΔS_m is the length of a path segment and
(x_m, y_m, z_m) is its mid-point. The partial derivative is evaluated using the
interpolation function defined in the control file. For each event, we have a set of
residual equations. Assuming we have L equations in all corresponding to L events, the
system can be defined in matrix notation as follows:
\( \mathbf{r} = A\,\Delta\mathbf{m} \)   (5.20)

\( \Delta\mathbf{m} = (\Delta x_e, \Delta y_e, \Delta z_e, \Delta t_e, \Delta v_1, \ldots, \Delta v_n)^T \)   (5.21)
For computational purposes, the algorithm uses sequential calculation and storage
for each event, giving the following set of equations:

\( A^T A = \sum_e A_e^T A_e, \qquad A^T \mathbf{r} = \sum_e A_e^T \mathbf{r}_e \)   (5.22)
The normal equations can then be represented in the following form:

\( (A^T A)\,\Delta\mathbf{m} = A^T \mathbf{r} \)   (5.23)
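A generic damped least-squares solve of these normal equations can be sketched as follows; the damping value and matrix assembly are placeholders, not the SimulPS internals.

```python
import numpy as np

def damped_lsq_step(A, r, damping):
    """Damped least-squares solve of the normal equations,
    (A^T A + damping * I) dm = A^T r, trading off the model
    perturbation norm against the data misfit at each iteration."""
    n = A.shape[1]
    lhs = A.T @ A + damping * np.eye(n)
    return np.linalg.solve(lhs, A.T @ r)
```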
A damped least squares approach is used to generate the covariance matrix. Velocity
parameter perturbations are applied to the model, the earthquakes are individually
relocated with the new model, and the inversion step is repeated. Stopping
criteria are used to end this iterative process. Figure 5-19 shows a sample earthquake data
file used as an input to the program. There are other necessary input files including
the synthetic velocity model, ray-tracer definition file, station file and controlling
parameter file (with the necessary variables governing the algorithm and its implementation),
which have not been shared here. The earthquake data file needs to be formatted before
input to SimulPS; this is done using the "arc2cnv" code developed internally within
USGS and available as open source. A complete discussion on the format for each of the
files is provided by Evans et al. (1994). In our study, the grid covers a large
spatial expanse defined based on known event locations. Figure 5-21 shows the actual
model used in our analysis. After a few iterative runs involving trial-and-error based
catalog pruning and input parameter changes, we converge on a reasonable model.
Figure 5-19: Sample data point from Earthquake input file for SimulPS run
Figure 5-20: Spatial (X-Y) expanse of SimulPS 'finite node grid' with stations/ events for reference
The final output grid used is an 18×14×11 model, with the actual usable grid defined by
the inner 16×12×9 nodes, which removes the confining boundary layer values. Based on
the defined grid, the results include both inverted V_P and V_P/V_S estimates for the model
as well as relocated events. Figure 5-21 (a, b) shows depth slices from the final output
velocity models as obtained from the tomographic inversion workflow.
Figure 5-21: Original sparse models (a) showing V_P and (b) showing V_S at a depth of 1 km. Dotted inserts
show the actual spatial zone of interest based on 3D surface seismic acquisition spread
At this stage, it should be noted that there are a few issues to be kept in mind
while working with the final models. As can be observed from Figure 5-20, a part of the
area of interest is not covered by any MEQ activity. This sparseness leads to higher levels
of noise towards the right side of the area of interest (dotted insert) and we should be
mindful of the same. Improved array design and a more uniform hypocentral distribution
would be required for better coverage but this was not obtained in our analysis for the
available earthquake data and inversion algorithms used. A closer look at the inversion
uncertainties is shared in Chapter 7 as part of the overall uncertainty evaluation with final
reservoir property estimates.
5.8 Summary
We have used available inversion algorithms to locate hypocenters using the phase data
(as described in Chapter 4) and then used the preliminary locations and a sparse velocity
model (from SCEC) to carry out simultaneous inversion for location and velocity. The
results as well as some of the major limitations with the process have been discussed in
this chapter. While more robust algorithms may be available for use, the shared workflow
provides a broad outline of velocity variations within the area of interest. It is believed
that by integrating this model with the information from conventional seismic data
(within the zone of interest), it should be possible to have an improved and more
representative velocity model for our use. Avenues for further improvement include using
improved hypocentral location algorithms such as TomoDD by Zhang et al. (2003) and
the possible use of an improved and constrained initial velocity model for the inversion,
provided better data is made available.
involve a more comprehensive sensor array design with careful analysis of resulting
raypath coverage. In this regard, we have shared a holistic and integrated approach
towards optimized passive seismic monitoring array design framework which should be
able to improve results significantly and should be used for future deployments. It is
easily expandable with additional design elements as required and should be able to
provide multiple candidate design solutions for possible deployment in most
unconventional settings.
Chapter 6: Conventional Seismic Data Analysis
6.1 Introduction
3D seismic data has been extensively used as a tool to understand the subsurface,
particularly for conventional oil and gas reservoirs. Taner et al. (1994) provides one of
the best references on seismic attributes in frequent use for interpretation. A more recent
and exhaustive look at seismic attributes is provided by Taner (2001), Brown
(1996) and Chopra et al. (2007). Ouenes et al. (2004) showed how both pre-stack and
post-stack 3D seismic data can be used for fracture zone characterization when used with
engineering and geologic data. Figure 6-1 (a) shows an example of an ANN derived 3D
fracture porosity model and Figure 6-1 (b) shows how such models can be verified based
on image logs and other independently available information. Different techniques which
make use of information from available seismic and well log data and which form the
basis for CFM have been well established (Ouenes et al. 2005). In recent years, we have
seen a large number of case studies involving characterization of unconventional
reservoirs using various methods involving the use of 3D seismic data. These include
tight reservoirs (Lewallen et al. 2008; Gupta et al. 2011; etc.) as well as geothermal fields
(Lamb et al. 2010). Moreover, a lot of work has gone into integration of multiple
properties into "meta-attributes". Aminzadeh et al. (1984) was one of the first to look into
ways of combining multiple attributes through application of cluster analysis techniques.
Russell et al. (1997) also looked into multi-attribute seismic analysis primarily through
geostatistical approaches such as co-kriging. Aminzadeh (2005) looked into the use of
complex "meta-attributes" designed using soft-computing techniques (such as ANN) for
more holistic characterization workflows. Aminzadeh (2006) showed the potential use of
hybrid "neuro-fuzzy" algorithms for prospect identification and ranking by manipulating
traditional seismic attributes. Chopra et al. (2004) looked into the issue of non-linearity
that is commonly faced when dealing with multi-attribute seismic analysis. Recent
studies have also looked at combining conventional seismic attributes with geometric and
geomechanical attributes to characterize unconventional shale developments in particular
(Trinchero et al. 2013). Figure 6-2 shows four examples of seismic attributes used for
discontinuity mapping in Woodford shale as an example of how post-stack attribute
studies can be a useful tool for characterizing faults, fractures and other discontinuities.
Figure 6-1: (a) 3D fracture porosity model using ANN and (b) Comparison of FMI log and fracture
Figure 6-2: (a) Most positive curvature co-rendered with coherence with identified Karst, (b) Coherence
and amplitude with identified faults, (c) vector reflector convergence, Sobel filter similarity and amplitude
showing opposite convergence on sides of rotated fault blocks and (d) reflector rotation showing faults and
branches (Gupta et al. 2011)
As discussed in Chapter 2, conventional 3D surface seismic data is available for
the field under study. In order to make maximum use of the available data, we carry out
seismic interpretation based on the available data using many existing methods including
meta-attribute studies as well as exhaustive well to seismic mapping schemes such as
ANN based well log property prediction workflows. Figure 6-3 shows a broad outline of
the methods we have followed in our interpretation and Figure 6-4 shows a seismic cut
section of the data with well track inserts and some evaluated attribute maps at specific
time horizons as a reference.
Figure 6-3: Workflow for 3D seismic attribute studies and interpretation
The aim is to establish baseline reservoir property estimates, such as
porosity and discontinuity maps, that can be used with the geomechanical property
estimates obtained from the microseismic data analysis discussed in Chapter 7.
quality issues and type of seismic data available, some additional preprocessing steps
were deemed necessary and they have been discussed as required in this chapter.
Figure 6-4: Sample sectional view of seismic data with available well control and sample seismic attribute
maps
6.2 Data preparation
The available data required extensive preprocessing before any interpretation in order to
improve discontinuity mapping and more broadly, to make the data usable within the
defined framework of our interpretation workflow. Due to the nature of the reservoir
involving extensive structural discontinuities and compartments, a dip steered median filter
is applied to better resolve the data for subsequent studies. A dip steered filter uses the
local dip-azimuth information to compute the filtered seismic signal. We use an
FFT algorithm which involves transforming the available time domain data into
frequency domain data. Mathematically, for 2D data, DFT can be defined as follows:
\( F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)} \)   (6.1)

\( f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\, e^{j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)} \)   (6.2)
Many FFT algorithms are available in the literature for use in solving the above
DFT equations. The solution typically decomposes the 2D transform into 1D transforms
(M row transforms followed by N column transforms). Each 1D transform involves
breaking the function into two parts (even indexed and odd indexed) and using the
periodicity relationship of DFT's to calculate the final
transform. The designed filter with its defined frequency response (including cut-off
frequency) needs to be provided which is then convolved with frequency domain data.
The output is transformed back into time domain using inverse DFT to get filtered data.
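A minimal sketch of this filtering sequence is given below, assuming a simple boxcar low-pass response in frequency and wavenumber; the actual dip steered median filter response is more involved, so this only illustrates the transform-multiply-inverse pattern.

```python
import numpy as np

def fft_lowpass_2d(data, dt, dx, f_cut, k_cut):
    """Frequency-domain low-pass filter for a 2D seismic panel.

    The panel is transformed with a 2D FFT, multiplied by a boxcar
    frequency response (cut-offs f_cut in Hz and k_cut in 1/m are
    illustrative), and inverse transformed back to the time domain.
    """
    F = np.fft.fft2(data)
    f = np.fft.fftfreq(data.shape[0], d=dt)   # temporal frequencies
    k = np.fft.fftfreq(data.shape[1], d=dx)   # spatial wavenumbers
    mask = (np.abs(f)[:, None] <= f_cut) & (np.abs(k)[None, :] <= k_cut)
    return np.real(np.fft.ifft2(F * mask))
```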
The filtered data cannot be directly used for interpretation as the data still is in the
time domain. Since we do not have access to any seismic velocity models for the survey
area, well log data is used to tie the wells to the seismic data and obtain a desirable
match for log based attribute prediction. Figure 6-5 shows the
workflow followed for the well to seismic ties including an example of synthetic seismic
section showing how the synthetic seismograms are used to match the two. Since we
have sonic velocity logs for only two of the available wells, we resort to generating
synthetic velocity logs from available resistivity logs (Rudman et al. 1975) in order to
make full use of other available well logs and to have a good match over a wider areal
spread within the study area. The process involves using available models by Faust
(1953) which relates resistivity to pseudo sonic data. Faust's relationship is one of the
oldest available and can be used for consolidated sandstones. It is defined as follows:
V = γ (Z F)^{1/6}    (6.3)
where F is the formation factor, Z is the depth and γ is referred to as Faust's parameter. Another method involves extracting available correlations for velocity and using them to identify the best possible log data relationship for pseudo-sonic data generation. This procedure was tried for the study area, but usable relationships could not be obtained over the entire depth range, and hence Faust's relationship was implemented. We then upscale the sonic data to match the seismic scale and extract the reflection coefficients from the logs. A seismic wavelet is selected and convolved with the reflection coefficients to generate a synthetic seismic trace. In order to tie this synthetic trace with actual seismic data, seismic traces around the well track locations are averaged from within the available seismic dataset. The process of synthetic-to-seismic tying involves matching discernible extremes (maxima and minima) that show up on both seismograms as possible reflectors (Figure 6-5) over the entire depth range of interest.
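To make these steps concrete, a hedged Python/NumPy sketch of the pseudo-sonic and synthetic-trace generation follows. The Faust parameter gamma, the wavelet frequency and the layer values are illustrative placeholders, not calibrated values from this study:

    import numpy as np

    def faust_velocity(formation_factor, depth, gamma=2000.0):
        # Faust's relation (eq. 6.3): V = gamma * (Z * F)^(1/6); gamma is a placeholder
        return gamma * (depth * formation_factor) ** (1.0 / 6.0)

    def synthetic_trace(velocity, density, dt=0.004, f0=30.0):
        # Reflection coefficients from the acoustic impedance log
        imp = velocity * density
        rc = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])
        # Ricker wavelet of peak frequency f0, 200 ms long
        t = np.arange(-0.1, 0.1, dt)
        w = (1.0 - 2.0 * (np.pi * f0 * t) ** 2) * np.exp(-((np.pi * f0 * t) ** 2))
        # Convolve reflectivity with the wavelet to obtain the synthetic seismogram
        return np.convolve(rc, w, mode="same")

    vel = np.repeat([2200.0, 2600.0, 2400.0], 100)   # placeholder layered model (m/s)
    rho = np.repeat([2.1, 2.3, 2.2], 100)            # g/cc
    synthetic = synthetic_trace(vel, rho)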
Figure 6-5: Workflow for well to seismic tie and Time/ Depth modeling
6.3 Attribute mapping
Seismic attributes are measurements derived from the original seismic data that help reveal features, relationships or patterns useful in reservoir characterization. They provide information related to structure, stratigraphy and reservoir properties. We use seismic attributes to infer fracture systems: because fracture size is much smaller than the typical seismic wavelength, it becomes essential to use indirect methods of observation through attributes to both qualify and quantify their presence. Attributes have been used in many different characterization problems, and an extensive literature is available in this regard, including Barnes (1999), Chopra et al. (2006; 2007 & 2011), Costa et al. (2007) and Staples (2010), which discuss most of the commonly used attributes and techniques. Some work has also been done on sub-seismic fracture characterization in recent years using attributes (Anjaneyulu et al. 2011). We apply many of these attributes and techniques to the available dataset to check the viability of their use and to make preliminary observations and interpretations so as to better understand reservoir properties. Attributes can be calculated either before or after stacking and/or migration. The initial survey data is in the form of gathers of traces for each receiver; these contain valuable directional (azimuth) and offset related information which can be used to identify fluid content and fracture properties, in addition to the velocity or azimuthal variation of other attributes. Attributes from post-stack data are useful for quick reconnaissance investigations, while pre-stack data is generally used at later stages for in-depth studies. Table 6-1 provides a tabulated list of common post-stack attributes in use today, along with their common uses and their most commonly used mathematical formulations. A large number of attributes are in use within industry and academia today, and this can lead to a high degree of redundancy and extensive computational requirements when doing attribute based analysis. Figure 6-6 shows a list of attributes characterized by the properties they measure. Though this attribute list is not exhaustive, it still provides an idea of the challenge an interpreter may face while doing attribute analysis. Due to the lack of pre-stack seismic gathers, pre-stack attributes were not examined in this study and are not discussed in any detail in this section.
Table 6-1: Examples of seismic attributes including their mathematical description and possible use in
reservoir characterization workflows
Attribute | Mathematical formula | Use
Envelope | E(t) = √[f(t)² + g(t)²], where g(t) is the imaginary part of the complex trace | Bright spots, gas, thin beds, lateral discontinuities
Coherence | Correlation G between neighboring trace signals | Quantitative fault and fracture measurement tool
Curvature | κ = (d²y/dx²) / [1 + (dy/dx)²]^{3/2}, where y and x represent data along in-line/cross-line | Faulting and lineaments; augments coherency observations
Energy | E = (1/N) Σ_t A(t)², where A(t) is the instantaneous amplitude | Stratigraphic features and geo-bodies
Instantaneous frequency | f(t) = (1/2π) d[Ph(x,t)]/dt, where Ph(x,t) represents the instantaneous phase | Fracture zones, bed thickness and edges, shale content
Polar dip | Dip magnitude combining the in-line and cross-line dip components, where α is the azimuth | Characterization of structural features and beddings
Absorption quality factor (Q) | Function of decay(t), the instantaneous decay rate | Absorption characteristics of beds
Relative acoustic impedance (AI) | ∫ f(t) dt; the zero-phase trace is used in evaluation, leading to band-limited AI | Porosity, sequence boundaries, unconformities
Semblance | S = Σ_t [Σ_m f_m(t)]² / [M Σ_t Σ_m f_m(t)²], where m indicates the trace number and N is the computation window size | Laterally continuous zones
Chaotic zone indicator | Function of σ(t), the standard deviation of dip, and S(t), lateral semblance | Zones of chaotic bedding, carbonate indicator
Figure 6-6: Seismic attribute list with properties they measure (Barnes 2006)
As a precursor to property prediction and seismic characterization, we carry out preliminary seismic attribute studies to identify usable properties and to determine any issues that may be identified directly from such studies. Figure 6-7 shows three sample seismic attribute maps for reference: attenuation (absorption quality factor, Q), instantaneous frequency and coherency maps used to identify areas of interest. Basic attribute analysis does indicate possible zones of interest, particularly when analyzed in conjunction with well track information for zonal reference. However, the need for improved analysis techniques is clearly identified, and these are discussed in the subsequent sections of this chapter.
Figure 6-7: (a) Q factor, (b) instantaneous frequency and (c) coherency maps at 1.2 km depth for reference
6.4 Log-seismic derived property prediction
The process of translating well log properties into log-derived property estimate maps over the entire seismic volume involves the use of seismic attribute maps (as discussed in the previous section). An ANN based approach is used to map the non-linear relationship between multiple input attributes and the actual well log data. Seismic attributes can be influenced by rock properties, including porosity and density variations, and these potential influences and relationships can be used to narrow down possible input attributes for the ANN training schemes. While geometric attributes are best suited to lateral changes in reflector impedance and morphology, acoustic impedance and spectral decomposition are sensitive to vertical variations in lithology, porosity and fluid content.
Apart from many of the simple attributes defined in the previous section, colored inversion is used to generate band-limited seismic impedance data (Lancaster et al. 2000). Due to the absence of an adequate number of sonic logs, pseudo-sonic travel times obtained from resistivity logs are used with density logs to generate pseudo acoustic impedance logs, as discussed earlier. The ANN property prediction workflow involves extracting attributes from a small temporal/spatial window around the well track in order to compensate for misalignments. Impedance is first calculated independently using the colored inversion workflow, where a single inversion operator is derived that optimally inverts the data and honors available well data in a global sense. Since this requires sonic travel time data, only those control wells (two in our study) with sonic data are used in this inversion scheme. Independently, impedance is also calculated using data from all the wells by using pseudo-sonic logs generated from resistivity log data via Faust's relationship, assuming consolidated sandstones as discussed earlier. The impedance maps obtained are then matched as a basis for validating the pseudo-sonic logs generated and used for property predictions. This process helps validate both the choice of attributes for mapping and the overall workflow in question (Figure 6-8). We note that Faust's relationship is not directly applicable to our study area, as the field is known to comprise primarily unconsolidated sandstones. Therefore, the workflow can alternatively be implemented using the impedance based on actual log data alone.
For many of the properties in question, namely porosity, SP, gamma ray and saturation logs, acoustic impedance is a good seismic indicator for network training (the ANN property modeling workflow using well log data). The use of acoustic impedance has been shown in previous works for specific areas of interest, including the Gulf of Mexico (Koesoemadinata et al. 2008) and Norway (Zhang et al. 2010), which have validated the use of seismic-derived impedance in different lithological environments for ANN based property prediction workflows. Other attributes for training are selected based on log/attribute cross-plotting to check for the best possible relationships. Figure 6-9 shows two sample attribute cross-plots eventually selected for rock property prediction. The workflow involves careful pruning of the applicable depth range and radius of investigation and generating the necessary property map inputs. In some cases, directional or spatial filtering might be necessary to get good correlations between properties of interest. Moreover, in many situations linear relationships may be absent, and more complex activation functions may be necessary to mimic the non-linear nature of intra-property relations.
Figure 6-8: Impedance maps generated using (a) colored inversion with 2 sonic logs (actual) and (b) with
all sonic logs (pseudo logs). The figure shows appreciable match obtained between the two indicating the
validity of the pseudo logs generated
Figure 6-9: Sample cross-plots showing extracted log properties with seismic attributes close to the well
tracks. Attributes showing perceptible relationships are used for property prediction workflows
A supervised ANN is trained on examples extracted from seismic and well log data along the well tracks. Inputs include acoustic impedance and other selected properties, and the outputs are the property maps. Figure 6-10 (a) shows the impedance map generated using colored inversion, while Figure 6-10 (b, c) show two log property maps obtained using the workflow explained earlier (porosity and density maps) for a selected depth interval of interest (~1.24 km).
Figure 6-10: (a) Impedance map from colored inversion, (b) density map and (c) porosity map at selected
depth horizon of 1.24 km. Density and porosity maps are obtained using ANN property prediction
workflow
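A minimal sketch of this supervised mapping step is given below (Python with scikit-learn assumed). The attribute count, network size and random data are placeholders standing in for the attributes actually extracted along the well tracks:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))    # placeholder attribute samples at well tracks
    y_train = rng.random(500)              # placeholder target log property (e.g. porosity)

    scaler = StandardScaler().fit(X_train)           # condition the attributes
    ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       max_iter=2000, random_state=0)
    ann.fit(scaler.transform(X_train), y_train)      # train on well-track examples

    X_volume = rng.normal(size=(10000, 4))           # placeholder full-volume attributes
    property_map = ann.predict(scaler.transform(X_volume))   # predicted property map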
In order to verify the match of the final maps at the well locations, the properties are compared at the well tracks for a test well not used in the ANN training stage. Figure 6-11 shows examples of two such comparisons made for (a) density and (b) porosity as estimated in the above workflow. Despite the differences in resolution associated with logs and seismic data, we obtain appreciable matches in the overall trends seen in the derived property maps, and they are therefore used for further analysis. The impedance match explained earlier (Figure 6-8) provides further justification for their use. These maps can then be used to better characterize the reservoir and are particularly useful in identifying and validating potential zones of interest, including validating independently obtained reservoir characteristics.
Figure 6-11: Comparison showing actual log and seismic derived log property estimates at two well
locations which were used as test cases for validation of results
It is important to understand that attribute selection is a tricky process, and adequate understanding and care are necessary for the best possible results (Kalkomey 1997). An interesting method to help solve some of the problems related to attribute selection was introduced by Dorrington et al. (2004), which makes use of an integrated GA/ANN approach for attribute selection to solve the problem of well log property prediction. The selection process needs to be exhaustive, and each interpreted or modeled property map needs to be independently validated if possible. This can be done using "test well" data exclusively set aside for said purpose after careful pre-selection, or by using independent data (such as core logs or production data).
6.5 Integrated attribute analysis
It is possible to integrate the multiple seismic attributes derived in the previous sections in order to better understand and characterize the subsurface. While there are many methods available for integrating different attributes, particularly nonlinear mapping schemes (such as ANN or hybrid ANN algorithms) as explained in the following sections, we discuss here some simple mapping techniques and data display methods which allow improved interpretation using attributes without resorting to workflows involving complex computations. Figure 6-12 shows a color blended multi-attribute map combining Laplacian edge enhancement and edge preserving (section 6.6) filters as well as similarity and curvature attributes (section 6.4). It is interesting to note that this integrated map shows dark lateral discontinuities accentuated in green by the similarity attribute. The blue sections indicate edge-preserved smoothening results, while red zones indicate possible smaller fracture swarms, better-defined lateral continuity of faults and possible stratigraphic features.
Another technique involves designing new attributes from the singular attribute maps calculated during preliminary evaluation. Those attributes which are likely to highlight the zones of interest or other features (with either high or low value anomalies) can be combined after they are adequately conditioned (data normalization). While the selection of attributes is of vital importance, it is also necessary to have a good understanding of the limitations of the selected attributes and of how any modification, such as mathematical operations or filtering of the data, can impact the final analysis and results.
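A sketch of the color-blending idea is shown below (Python/NumPy; the attribute maps are random placeholders standing in for the conditioned attribute slices):

    import numpy as np

    def normalize01(a):
        # Condition an attribute map to the [0, 1] range before blending
        return (a - a.min()) / (a.max() - a.min())

    laplacian = np.random.rand(200, 200)    # placeholder attribute slices
    similarity = np.random.rand(200, 200)
    curvature = np.random.rand(200, 200)

    # Stack three conditioned attributes into the red, green and blue channels
    rgb = np.dstack([normalize01(laplacian),
                     normalize01(similarity),
                     normalize01(curvature)])
    # rgb can be rendered directly, e.g. with matplotlib's imshow(rgb)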
Figure 6-12: Color blended multi-attribute map including Laplacian, edge preserving, similarity and
curvature attributes
6.6 Discontinuity mapping using filters
Various attributes and algorithms have been used to map major discontinuities from seismic data in order to better characterize major faults, fractures, bed boundaries, etc. The use of multiple attributes becomes essential because different discontinuity characteristics may be highlighted by different attributes for different zones of interest within the study area, and an integrated approach is therefore required. We first discuss two common filters used for discontinuity mapping and follow up with the results of ANN based fault maps generated from many of the discussed discontinuity attributes in a later section.

The first filter used is the Laplacian filter, an edge enhancement filter whose sharpness is determined by the filter size. The central sample is weighted heavily (positively) while the edge values are weighted negatively:
K = | -1  -1  -1 |
    | -1   8  -1 |
    | -1  -1  -1 |    (6.4)
The process involves defining a small kernel and convolving the actual attribute map with it. The convolution is performed by sliding the kernel over the attribute map, starting at the top left corner of the map, through all positions where the kernel fits entirely within the boundaries of the map. Each kernel position corresponds to a single output pixel, the value of which is calculated by multiplying each kernel value with the underlying attribute value and summing the products. The operator is defined as follows:
O(i,j) = Σ_{k=1}^{m} Σ_{l=1}^{n} I(i+k-1, j+l-1) K(k,l)    (6.5)
where i runs from 1 to M-m+1 and j runs from 1 to N-n+1 for an M×N map and an m×n kernel. The next filter used in our study is the edge preserving smoothing (EPS) filter, implemented following the method defined by Luo et al. (2001). It is useful in detecting sharp edges in seismic data, which often correspond to important geologic features (faults, fractures and channels). Figure 6-13 briefly outlines the steps involved in the algorithm. To clarify how EPS works, consider a five-point EPS operator. In this case, for any output location at index i, we first calculate standard deviations for five shifted windows (Caswell 1982):
σ_n = std{A_{i+n-5}, ..., A_{i+n-1}},  n = 1, ..., 5    (6.6)

where A_i represents the amplitude of the i-th sample in the input data and n varies from 1 to 5. Next, we select the window that has the minimum standard deviation, calculate the average over the selected window, and assign the average as the output at the i-th output location. This general scheme can be extended to any output location in a 2-D or 3-D case. Figure 6-14 shows a sample Laplacian filtering result and an edge-preserved filtering result for a selected depth level of interest (1.24 km).
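Both filters can be sketched as follows (Python with NumPy/SciPy; the kernel and the five-point window length mirror the example above, while the input map is a placeholder):

    import numpy as np
    from scipy.ndimage import convolve

    def laplacian_edge(attribute_map):
        # 3x3 edge-enhancement kernel (eqs. 6.4-6.5): positive center, negative edges
        kernel = np.array([[-1.0, -1.0, -1.0],
                           [-1.0,  8.0, -1.0],
                           [-1.0, -1.0, -1.0]])
        return convolve(attribute_map, kernel, mode="nearest")

    def eps_1d(trace, wlen=5):
        # Five-point edge-preserving smoother (eq. 6.6) applied along one trace
        out = trace.copy()
        for i in range(wlen - 1, len(trace) - wlen + 1):
            # all shifted five-point windows containing sample i
            windows = [trace[i - wlen + 1 + s: i + 1 + s] for s in range(wlen)]
            best = min(windows, key=lambda w: w.std())   # least-variable window
            out[i] = best.mean()                         # its average becomes the output
        return out

    attr = np.random.rand(100, 100)                      # placeholder attribute map
    edges = laplacian_edge(attr)
    smoothed = np.apply_along_axis(eps_1d, 0, attr)      # EPS trace by trace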
Figure 6-13: (a) shows the input step function, (b) is the noise added signal, (c) is the result with regular
smoothening and (d) is the result of EPS application (Luo et al. 2001)
Figure 6-14: (a) Original seismic, (b) Laplacian edge enhancement and (c) Edge preserving smoothening
results. Note that these filters are run after running the data through dip steered median filter
6.7 Attribute limitations & ANN derived discontinuity mapping
As has already been discussed, one of the major issues with seismic attribute studies is the large number of attributes available in the literature and in use today (Barnes 2006). Not only do many of these attributes lead to a high degree of redundancy, but they also have highly non-unique solutions. Moreover, the large number of attributes involved and the complexity of some of them lead to longer evaluation and interpretation times as well as a requirement for domain expertise. Using a mapping technique to combine some of these disparate attributes into coherent results can help overcome some of these challenges. ANN provides a tool to undertake such a mapping in a non-linear fashion, as is expected with these input attributes. Its use also allows better characterization in scenarios where a single attribute, or a small group of attributes, may not be sufficient: unique features that do not show up prominently in individual attributes can still be mapped by the ANN, highlighting hidden and complex relationships.
ANN can be used to obtain improved discontinuity maps and to identify faults and flexures which may otherwise be difficult to identify and interpret. The process involves identifying attributes best suited to showing characteristics specific to discontinuities and training an ANN using sample training data points identified by an expert as zones of interest (identified discontinuities). Figure 6-15 shows a sample attribute map (Laplacian edge detect) with identified discontinuities (training picks), as well as the trained map, which better highlights the discontinuities. The figure also identifies a sample location which the ANN is able to pick and highlight as a possible discontinuity even though we are unable to make that interpretation from the original attribute.
Figure 6-15: (a) Sample attribute map with selected zones of interest (identified discontinuities) for ANN
training inputs and (b) Final ANN trained discontinuity map indicating identifiable faults not seen easily on
the individual attribute maps
The process of optimal attribute selection involves a few trial-and-error runs, but the final discontinuity map obtained is used as the base map for the integrated fracture characterization workflow required in this study. Identifying major faults can also help in characterizing possible fracture zones and fracture mechanisms, which in turn allows for better classification of fractured zones when interpreted in conjunction with available injection/production data. While the method is used here to identify and map discontinuities, it can also be used to identify fractured sections of the reservoir by identifying attributes and properties unique to fractures (e.g., high velocity anomalies, higher sonic-derived porosities, etc.). However, in order to reach a reasonable solution, the ANN requires verifiable and representative training data, which could come from core data, image logs, etc., so that known fracture zones can be used for training, testing and validation of the trained models.
Gomes et al. (2002) have shown the use of multiple pre- and post-stack attributes together with geologic information for fracture zone characterization. We select a few specific attributes which are known to be indicative of possible fracture zones, or which could be useful based on how these attributes or properties should theoretically behave in the presence of fractures. Instantaneous frequency, absorption quality factor and relative acoustic impedance volumes are selected for integration into what is referred to as the fracture zone identifier (FZI) attribute. Figure 6-16 (a, b) shows the integrated attribute maps (FZIs). The integration involves normalization of each of the constituent attributes, and the FZIs are defined as follows:
FZI_1 = F[f_n(t), Q_n(t), AI_n]    (6.7)

FZI_2 = F[f_n(t), Q_n(t), AI_n, Φ_n, ρ_n]    (6.8)

where f_n(t) is the normalized instantaneous frequency, AI_n is the normalized relative acoustic impedance, Q_n(t) is the normalized Q factor, Φ_n is the normalized log derived porosity and ρ_n is the normalized log derived density map. F is a functional representation of the ANN models, which are trained and tested using zones known to have fracture induced permeability from logs, core or production data. A radius of
investigation is defined around the well tracks and average attributes are calculated and
used as training data for this unsupervised classification problem (note: supervised algorithms can also be used). Despite the availability of prior information on known sections with fracture-dominated flow around the wellbore, a preliminary log analysis is still carried out for the perforated section of the well (as per drilling logs), and some sections are identified broadly as "possible" fracture zones (after tallying with image logs) for training. Moreover, a few zones of fracture-dominated flow were identified and provided by the operator for our use (a specific depth interval for sample well A). A small window (the 8 nearest data points at that horizon) around each identified location is used for ANN training.
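A sketch of the attribute conditioning and unsupervised grouping follows (Python with scikit-learn; k-means is used here purely as a simple stand-in for the unsupervised ANN classifier actually employed, and the attribute vectors are random placeholders):

    import numpy as np
    from sklearn.cluster import KMeans

    def normalize01(a):
        return (a - a.min()) / (a.max() - a.min())

    rng = np.random.default_rng(1)
    f_inst = normalize01(rng.random(5000))    # normalized instantaneous frequency
    q_fact = normalize01(rng.random(5000))    # normalized absorption quality factor
    rel_ai = normalize01(rng.random(5000))    # normalized relative acoustic impedance

    features = np.column_stack([f_inst, q_fact, rel_ai])

    # Two-class grouping of the conditioned attributes; samples around the known
    # fracture-flow intervals would be used to label which class is "fractured"
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    fzi_class = km.labels_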
Figure 6-16: (a) FZI_1 and (b) FZI_2 maps at a reference depth level. Note the subtle differences where one map shows higher values and the other shows lower values and vice versa
There are other possible attributes that can be used for fracture zone identification, including coherency and curvature attributes as well as pre-stack attributes such as AVO (not used here due to the lack of formatted pre-stack data). However, the examples shared here are intended to provide a broad idea of the process involved, which can be repeated using the basic attribute selection and definition workflows presented in this work or used independently.
6.8 Results & discussion
Figure 6-17 shows sample discontinuity maps and confidence maps for two arbitrarily extracted layers from the survey area. As can be observed from the figures, we are able to identify major discontinuities from the seismic data for further characterization. Figure 6-18 (a, b, c & d) shows four sample production well tracks with discontinuity maps at depth intervals known to be perforated. These maps clearly indicate the presence of major discontinuities close to the wellbore, which could potentially explain the high levels of hot water production observed at the identified wells. Integration of this data with FZI attributes and the corresponding characterization is discussed in Chapter 9. This can also be taken as a validation of the workflow followed for 3D seismic data interpretation and attribute studies, and it justifies the use of the maps obtained in integrated analysis schemes. We also validate some of the rock property estimates individually in an attempt to validate the workflow and the final mapping carried out using the ANN framework. This is particularly important because the ANN based approach does not provide a clear understanding of the underlying mapping being identified. Moreover, it is important to validate the FZI characterizations, as they were defined using purely academic observations of the property response expected in fracture zones. We use well 'A' as the control well to test the property maps as a means of assessing the validity of the results discussed above. Figure 6-19 shows two ANN derived log properties, while Figure 6-20 shows the two FZI maps (as defined in section 6.7) with well 'A' co-located for reference. The reservoir characterization schemes shared here can be further improved with the availability of actual injection/production data for the reservoir in question, which can then be tied to the discontinuity maps to characterize the types of faults and the compartmentalization observed within the reservoir.
Figure 6-17: ANN derived discontinuity maps at (a) 800 m and (b) 1220 m. Confidence levels (darker
regions indicate higher confidence) of the trained ANN networks at (a) 800 m and (b) 1220 m
Figure 6-18: Four sample production well tracks (a) Well A at 1220 m, (b) Well B at 1240 m, (c) Well C at
1420 m and (d) Well D at 760 m with discontinuity maps at specified depths. Depths for display are
selected based on known high flow throughput observations (from perforated intervals)
Figure 6-19: Log derived (a) instantaneous frequency & (b) porosity (depth ≈ 1226 m)
Figure 6-20: Seismic derived (a) FZI_1 and (b) FZI_2 attribute maps with well A co-located (depth ≈ 1226 m)
Chapter 7: Microseismic Derived Property Models
7.1 Introduction
Chapter 5 discussed the inversion of hypocentral locations and arrival times to obtain velocity models using the SimulPS software (Haslinger 1998). As discussed earlier, obtaining a usable model for analysis is an iterative process. This is particularly the case here due to the limited ray-path coverage resulting from the small number of recording stations available, which affects both the location and the velocity inversion algorithms. In this chapter we discuss the analysis based on the velocity models obtained using the SimulPS program. While the original velocity models are distributed over a very coarse grid, we use both co-kriging and geostatistical simulation studies incorporating impedance data (derived using colored inversion from conventional seismic data as described in Chapter 6). The aim is to downscale from the coarser to a finer velocity grid model (by a factor of about 10) which conforms not just to the actual velocities obtained from microseismic data but also to the relative acoustic impedance data, which independently describes the acoustic properties of the reservoir as interpreted from the 3D seismic data (Chapter 6). Estimates using ordinary kriging are used to verify the validity of, and potential improvements in, the final estimated velocity fields. In this chapter, we share the methodology as well as a comparative analysis of these methods.
The final velocity models are used as a tool to interpret reservoir rock properties, including elastic properties as well as stress and anisotropic weakness estimates. We use the property estimates as a means of gaining a better understanding of the reservoir, including preliminary fracture zone characterization. The broad implications of the results are shared and analyzed within a comparative framework, with the 3D seismic results used as reference. Figure 7-1 shows the basic outline of the workflow followed in our analysis using compressional and shear wave velocity estimates. A detailed discussion of each of the workflow elements follows.
Figure 7-1: MEQ based property prediction workflow
7.2 Cokriged velocity models
As mentioned earlier, co-kriging is used to obtain better velocity models with improved resolution while conforming to the baseline structural features of the reservoir as observed in the conventional seismic data analysis. This step becomes all the more relevant given the quality issues affecting the microseismic data, for reasons discussed in earlier chapters. Figure 7-2 shows inversion errors estimated from the SimulPS resolution matrix obtained as an output. We clearly observe very high uncertainties at the referenced depths. Figure 7-3 shows the original compressional and shear wave velocity models at a selected depth level. Due to the sparse gridding of the inputs, the resolution is very poor, particularly at locations away from the grid centers. The need to improve the velocity estimates before initiating the characterization workflow is clear.
Figure 7-2: Inversion uncertainty mapped at reference depths (0.5 km, 1.0 km & 1.5 km). Also shown are
the relative well locations (green inserts represent injection wells and black inserts represent production
wells)
Figure 7-3: (a) V_P and (b) V_S estimates at a randomly selected depth level (1 km) for reference. The data is extracted from the baseline velocity maps (section 5.7) using latitude/longitude information and converted into in-line/cross-line format
The model data has been constrained and shown over the area of interest, which corresponds to both the latitudinal and longitudinal extent of the conventional seismic survey. This is done as a precursor to cokriging, as the two datasets (primary and secondary variables) need to be defined within the same grid-space for the algorithms to function. We use the SGeMS geostatistical package (Remy et al. 2009) to run the data through a cokriging algorithm. The data range is converted to in-line and cross-line scaling beforehand for ease of data handling and interpretation. Here the output grid spacing corresponds to the spacing of the conventional 3D seismic survey and is scaled such that it conforms to the downscaling ratio used (~10).
Figure 7-4: (a) V_P and (b) V_S estimates at a selected depth level (1 km) for reference. The results are obtained by cokriging the velocity map with the impedance map, using impedance as the secondary variable
The final velocity model for the reference depth based on the co-kriging approach is shown in Figure 7-4. It is to be noted that the use of impedance maps as the secondary variable leads to better constrained final velocity models. Figure 7-5 shows a simple analysis of the cross-correlation between the primary and secondary data, extracted individually over two specific depths for reference (0.5 km and 1.0 km). In order to verify conformance with the major anomalous features seen in the velocity maps, the final models are compared with sonic-derived velocity models filtered over an interval, so as to account for the relatively lower vertical resolution of the microseismic-derived velocity models (Figure 9-1) despite the use of geostatistical estimation and the subsequent resolution improvements.
Figure 7-5: Velocity (V_P) vs. impedance cross-plots at (a) 0.5 km and (b) 1.0 km validating its potential use for co-kriging or co-simulation
The basic idea behind the co-kriging estimation technique is to estimate the primary property in a manner that takes its dependence on the secondary property into account during the estimation process. The impedance map is of higher resolution, as it is derived from conventional seismic data. This allows improved estimation of the primary variable (phase velocity) from the baseline sparsely gridded data. In other words, the velocity estimate V_P obtained depends not only on the distance to nearby sample locations within the V_P property map and its variogram, but also on the distance to the nearby sample locations on the impedance map, the variogram governing the impedance property and, finally, the cross-variogram between V_P and impedance. While the current discussion covers the use of co-kriging, we will later demonstrate the use of Sequential Gaussian Co-Simulation as an improved geostatistical property prediction approach, which will eventually be used for the analysis and interpretations. Let us assume that the primary variable is specified as v and the secondary variable as z. Then the estimate is a linear combination given by:
v_0 = Σ_{i=1}^{n} a_i v_i + Σ_{j=1}^{m} b_j z_j    (7.1)

where v_0 is the velocity estimate at the evaluation location, v_i defines the velocity values (primary data) at n nearby locations and z_j defines the secondary impedance data at m nearby locations. Both a_i and b_j define the cokriging weights. The co-kriging weights must satisfy the commonly defined unbiasedness conditions given as:

Σ_{i=1}^{n} a_i = 1,   Σ_{j=1}^{m} b_j = 0    (7.2)
The solution for the cokriging system involves the following set of equations:
Σ_{i=1}^{n} a_i Cov{v_i, v_j} + Σ_{k=1}^{m} b_k Cov{z_k, v_j} = Cov{v_0, v_j}    (7.3)

Σ_{i=1}^{n} a_i Cov{v_i, z_j} + Σ_{k=1}^{m} b_k Cov{z_k, z_j} = Cov{v_0, z_j}    (7.4)

where j varies from 1 to n for eq. 7.3 and from 1 to m for eq. 7.4. We consider symmetric cross-covariance and therefore use semi-variograms, converting them to covariance values for the cokriging matrix using the following relation:

C(h) = C(0) − γ(h)    (7.5)
The auto- and cross-variograms used must be positive definite for a solution to exist. Moreover, the primary and secondary variables need not exist at all locations, and the individual variogram models should not be too similar. Figure 7-6 shows both the ordinary kriging and the cokriging results for reference.
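For concreteness, a toy 1D solve of the cokriging system (eqs. 7.1-7.5) is sketched below (Python/NumPy; the Gaussian variogram parameters, data locations and values are placeholders, not the models fitted in this study):

    import numpy as np

    def gauss_cov(h, sill, rng_):
        # Covariance from a Gaussian variogram: C(h) = C(0) - gamma(h)  (eq. 7.5)
        return sill * np.exp(-3.0 * (h / rng_) ** 2)

    def cokrige_point(x0, xv, v, xz, z, cvv, czz, cvz):
        # Assemble and solve the ordinary cokriging system (eqs. 7.2-7.4)
        n, m = len(v), len(z)
        d = lambda a, b: np.abs(a[:, None] - b[None, :])
        A = np.zeros((n + m + 2, n + m + 2))
        A[:n, :n] = cvv(d(xv, xv)); A[:n, n:n + m] = cvz(d(xv, xz))
        A[n:n + m, :n] = cvz(d(xz, xv)); A[n:n + m, n:n + m] = czz(d(xz, xz))
        A[:n, n + m] = A[n + m, :n] = 1.0                      # sum of a_i equals 1
        A[n:n + m, n + m + 1] = A[n + m + 1, n:n + m] = 1.0    # sum of b_j equals 0
        rhs = np.zeros(n + m + 2)
        rhs[:n] = cvv(np.abs(xv - x0)); rhs[n:n + m] = cvz(np.abs(xz - x0))
        rhs[n + m] = 1.0
        w = np.linalg.solve(A + 1e-9 * np.eye(n + m + 2), rhs)
        return w[:n] @ v + w[n:n + m] @ z                      # the estimate of eq. 7.1

    xv = np.array([0.0, 3.0, 7.0]); v = np.array([2.1, 2.4, 2.2])  # sparse velocity (km/s)
    xz = np.linspace(0.0, 8.0, 5); z = 2.2 + 0.1 * np.sin(xz)      # denser impedance proxy
    v_est = cokrige_point(4.0, xv, v, xz, z,
                          lambda h: gauss_cov(h, 1.0, 4.0),
                          lambda h: gauss_cov(h, 1.0, 4.0),
                          lambda h: gauss_cov(h, 0.7, 4.0))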
Figure 7-6: (a) V_P model using ordinary kriging and (b) V_P model using cokriging
Figure 7-7: Variogram models, including those for (a) V_P estimation (primary variable), (b) V_S estimation (primary variable) and (c) the co-variogram (V_P vs. impedance)
Clear differences in the final estimated velocity volumes are discernible, though the overall structure seems to remain the same. Variograms for the individual variables are evaluated by first calculating the variograms for lags ranging from 100 m to ~3 km, which provides the maximum lag separation based on the data grid. Figure 7-7 shows the variograms evaluated for cokriging. A Gaussian model is used for both V_P and V_S, with a maximum range of 16.4 and a sill contribution of 0.97 for V_P, and 28.9 and 1.0 respectively for V_S. A Gaussian model is again used for the co-variogram, with a range of 41.5 and a sill contribution of 1.05.
7.3 Sequential Gaussian Co-Simulation
Due to the higher uncertainty associated with the co-kriging approach, which produces a single estimate and behaves as a low-pass filter (causing smaller values to be overestimated and higher ones to be underestimated), a more robust technique is geostatistical simulation. COSGSIM is a useful geostatistical interpolation algorithm that has been used in the past to estimate reservoir properties based on known correlations between partially dependent variables. It is ideally suited to modeling spatial variability and reducing local uncertainty (enhancing resolution). COSGSIM assumes the unknown variable to be Gaussian in nature, and the velocities are therefore transformed before use in the simulation. The transformation of a variable V(x) with cdf F_V into a standard normal variable V_N(x) with cdf F_NV (Goovaerts 1997) is obtained as:

V_N(x) = F_NV^{-1}[F_V(V(x))]    (7.6)
It should be noted that only the transformed variable is assumed to follow a multivariate Gaussian distribution. The COSGSIM algorithm first involves the transformation (if necessary) of the two independent yet correlated variables V and Z into Gaussian variables V_N and Z_N as per eq. 7.6. This is done using the normal score transformation, with the inverse transform applied to the simulation realizations (Figure 7-8 & Figure 7-9); the steps are specified in the subsequent discussion. To simulate V_N, we first define a path visiting each node of the grid. At each node, the local conditional cumulative distribution function is Gaussian, with its mean estimated by the simple cokriging estimate and its variance by the simple cokriging variance. The conditioning data at that node consist of the original primary and secondary data as well as the previously simulated values. Next, a single value is drawn from this cdf and added to the dataset. The process is repeated over all available nodes within the simulation grid. Finally, the simulated values are back-transformed to the original distribution using the back normal score transformation. Since the expected values of the primary and secondary variables are not known, ordinary cokriging is used in place of simple cokriging for the nodal calculations within the COSGSIM algorithm.
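A sketch of the normal score transform and its back-transform (eq. 7.6) follows (Python with SciPy; the velocity sample is a random placeholder):

    import numpy as np
    from scipy.stats import norm

    def normal_score(values):
        # Rank the data, compute sample cumulative frequencies, and map each datum
        # to the matching quantile of the standard normal cdf (eq. 7.6)
        n = len(values)
        ranks = np.empty(n)
        ranks[np.argsort(values)] = np.arange(1, n + 1)
        return norm.ppf((ranks - 0.5) / n)

    def back_transform(scores, original):
        # Back normal score transform: standard normal quantiles mapped back to
        # the quantiles of the original data distribution
        return np.quantile(original, norm.cdf(scores))

    vp = 2.0 + 0.5 * np.random.rand(1000)   # placeholder V_P sample (km/s)
    vp_n = normal_score(vp)                 # approximately N(0, 1)
    vp_back = back_transform(vp_n, vp)      # close to the original values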
Figure 7-8: pdf & cdf of normal score transformed V_P

Figure 7-9: pdf & cdf of normal score transformed V_S
Figure 7-10: Workflow used for stochastic phase velocity modeling using impedance as secondary data
The stochastic modeling workflow used in our study is briefly outlined in Figure 7-10. Normal score transformation involves ranking the original data in ascending order, followed by the computation of the sample cumulative frequency of the datum of V(x) with rank k. Finally, the datum with rank k is matched with the corresponding quantile of the standard normal cdf. For our problem, we run the COSGSIM algorithm using phase velocity as the primary hard data (to be estimated) and seismic-derived impedance as the secondary hard data. Figure 7-8 and Figure 7-9 show the original distributions of the primary (V_P) and secondary (impedance) variables for reference, as well as the transformed primary variables (V_P & V_S) before the simulation runs. The variograms and co-variogram used for the simulation runs are exactly the same as those used with the cokriging estimation approach explained earlier. After the simulation runs are complete, the back-transformed velocities are analyzed to select the final velocity models for our use.
Figure 7-11 & Figure 7-12 show sample velocity maps (V_P and V_S respectively) at multiple reference depths (ranging from 0.5 km to 3.0 km). The figures also show the evaluated simulation errors, computed as the standard deviation over 100 optimal realizations selected from the global realization dataset. These were selected based on a weighted MSE computed for each realization against the original sparse velocity map, with the weights computed from the inversion resolution maps. This ensures that the selection criterion accounts for the error of the velocity values at the discrete sparse grid locations where the velocities are known (through the inversion results), with adequate weight given to the associated inversion errors. Figure 7-13 (a, b) shows the final (normalized) selection measure used to select the optimal realization for further analysis. While selection based on error analysis allows the final maps to conform to the original velocity maps as closely as possible (including errors), the original maps obtained through inversion are very sparse, and this leads to a relatively high degree of uncertainty in the final (high resolution) velocity estimates.
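The realization-selection step can be sketched as follows (Python/NumPy; the inverse-error weighting shown is one simple choice consistent with the description above, and all inputs are placeholders):

    import numpy as np

    def select_realization(realizations, v_sparse, sparse_idx, inv_error):
        # Weighted MSE of each realization against the sparse inversion velocities,
        # with weights downweighting nodes that have large inversion error
        w = 1.0 / (inv_error + 1e-9)
        w = w / w.sum()
        scores = np.array([np.sum(w * (r.ravel()[sparse_idx] - v_sparse) ** 2)
                           for r in realizations])
        return int(np.argmin(scores)), scores    # best realization index and all scores

    rng = np.random.default_rng(2)
    reals = [rng.random((50, 50)) for _ in range(100)]   # placeholder realizations
    idx = rng.choice(2500, size=20, replace=False)       # sparse grid node indices
    v_known = rng.random(20)                             # inverted velocities there
    err = 0.1 + rng.random(20)                           # inversion error estimates
    best, scores = select_realization(reals, v_known, idx, err)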
Instead of COSGSIM, SGSIM can be used in situations where independent density/impedance data (derived from available well logs or 3D seismic data) is not available. In such situations, information on the expected dependence of velocity or other properties on structural features within the reservoir is not available, and the use of the COSGSIM algorithm is not possible. While we have not considered the variability of the predictor arising from the estimation of the variogram and co-variogram structure, it is important to understand the effect of additional variation in the estimation parameters on the final prediction results. A more extensive look at the COSGSIM algorithm is beyond the scope of this text, but it can be referenced from many freely available sources (Verly 1993; Wackernagel 2003; Manchuk et al. 2008; etc.).
Figure 7-11: Final V_P realization selected for analysis at reference depths (0.5 km, 1.0 km, 1.5 km, 2.0 km, 2.5 km & 3.0 km). Also shown are the estimated simulation errors based on the standard deviation observed at each node over all realizations
Figure 7-12: Final V_S realization selected for analysis at reference depths (0.5 km, 1.0 km, 1.5 km, 2.0 km, 2.5 km & 3.0 km). Also shown are the estimated simulation errors based on the standard deviation observed at each node over all realizations
Figure 7-13: Normalized error plots showing variations across the 100 (a) V_P and (b) V_S realizations selected for analysis. The realization showing minimum variation compared with the original (sparse) velocity model is finally selected for analysis
7.4 Property estimation
The velocity models obtained are used as the basis for rock property estimates derived from well-understood geophysical and geomechanical models. The compressional and shear wave velocities can be related to elastic rock properties, including the bulk modulus, shear modulus and Poisson's ratio, and these properties can in turn be used to identify zones of interest using available frameworks that relate geophysical and geomechanical properties to reservoir attributes such as fractures (Tokosoz et al. 1981). Compressional and shear velocities can be used to derive Lame's parameters (Feng et al. 1981) as follows:
μ = ρ V_S²    (7.7)

λ = ρ (V_P² − 2V_S²)    (7.8)
The Lame's parameters can in turn be used to estimate the elastic properties of the
rock using the following relationships (Mavko et al. 2003):
K = λ + (2/3)μ    (7.9)

E = μ (3λ + 2μ) / (λ + μ)    (7.10)

σ = λ / [2(λ + μ)]    (7.11)
where ρ is the bulk density (g/cc), K is the bulk modulus (Pa), E is the Young's modulus (Pa) and σ is the Poisson's ratio. Figure 7-14 (a, b & c) shows the estimated V_P/V_S ratio and Lame's parameters for selected reference depth intervals, while Figure 7-15 (a, b & c) shows the estimated elastic properties at the same depths for reference.
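A sketch of these elastic-property computations (eqs. 7.7-7.11), including the extensional and bulk velocity forms used later as stress proxies (eqs. 7.19-7.20), is given below (Python/NumPy; SI units and the sample values are placeholders):

    import numpy as np

    def elastic_properties(vp, vs, rho):
        mu = rho * vs ** 2                              # eq. 7.7
        lam = rho * (vp ** 2 - 2.0 * vs ** 2)           # eq. 7.8
        k = lam + 2.0 * mu / 3.0                        # bulk modulus, eq. 7.9
        e = mu * (3.0 * lam + 2.0 * mu) / (lam + mu)    # Young's modulus, eq. 7.10
        nu = lam / (2.0 * (lam + mu))                   # Poisson's ratio, eq. 7.11
        ve = np.sqrt(e / rho)                           # extensional proxy, eq. 7.19
        vk = np.sqrt(k / rho)                           # hydrostatic proxy, eq. 7.20
        return {"mu": mu, "lambda": lam, "K": k, "E": e, "nu": nu, "VE": ve, "VK": vk}

    vp = np.full((50, 50), 3000.0)     # placeholder maps (m/s)
    vs = np.full((50, 50), 1600.0)
    rho = np.full((50, 50), 2300.0)    # kg/m^3
    props = elastic_properties(vp, vs, rho)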
Figure 7-14: (a) V_P/V_S ratio, (b) Lame's parameter μ and (c) Lame's parameter λ at three reference depths (0.5 km, 1.0 km & 1.5 km). Also shown are the relative well locations (green inserts represent injection wells and black inserts represent production wells)
Figure 7-15: (a) Bulk Modulus and (b) Young's Modulus and (c) Poisson’s Ratio maps at three reference
depths (0.5 km, 1.0 km & 1.5 km). Also shown are the relative well locations (green inserts represent
injection wells and black inserts represent production wells)
Certain observations can be made (with accompanying uncertainty) based on a well-understood rock physics framework. The bulk modulus measures the resistance of the reservoir rock to volumetric compression, while the Young's modulus measures its stiffness (and therefore defines the stress-strain relationship). The Poisson's ratio can be considered a measure of the geometric shape change under an extensional stress regime. Low compressional and shear velocities can be an attenuation effect due to the presence of fractures. However, most artifacts observed through these property estimates can be explained by more than one possible source, and a more holistic interpretation approach is therefore necessary.
Next we look at the elastic properties and the underlying relationships that seismic velocities have with rock properties such as porosity, fluid saturation, fracture content, density and texture. Many relationships between intrinsic rock properties and measured velocities are available in the literature. As a general norm, both velocity and density are expected to increase with depth due to compaction. Let us consider how seismic velocities can be related to rock properties and analyze these properties.
First we look into porosity prediction using velocity data. We can use a rough porosity estimator based on Wyllie's time average relation (Wyllie et al. 1956), given as:

φ_1 = [1/V_P − 1/V_m] / [1/V_f − 1/V_m]    (7.12)

where V_P is the measured P wave velocity, V_m is the matrix velocity and V_f is the pore fluid phase velocity. This model, however, can be misleading, as it presumes the matrix and fluid to exist in layers normal to the ray path and the wavelength to be very small compared to the layer thicknesses. The relation is generally applicable to consolidated sandstones within a narrow porosity range (25% to 30%).
Another useful porosity-velocity transformation, applicable in particular to unconsolidated sandstones, was developed by Raymer et al. (1980) and is mathematically defined as:

V_P = (1 − φ)² V_m + φ V_f    (7.13)

φ_2 = [(2V_m − V_f) − √((V_f − 2V_m)² − 4V_m(V_m − V_P))] / (2V_m)    (7.14)
We can also use the density-velocity transformation of Gardner et al. (1974) to obtain density estimates and use them to get a usable velocity-porosity relationship, as shown:

ρ = a V_P^{1/4}    (7.15)

ρ = φ ρ_f + (1 − φ) ρ_m    (7.16)

φ = [ρ_m − ρ] / [ρ_m − ρ_f]    (7.17)

φ_3 = [ρ_m − a V_P^{1/4}] / [ρ_m − ρ_f]    (7.18)

where ρ_m and ρ_f are the matrix and fluid densities (g/cc) respectively, and a is Gardner's empirical constant. It should be noted that extensive literature does exist providing effective medium models that take internal rock structure into account. For example, Kuster et al. (1974) provide models for low porosity rocks, Berryman (1980) for low to medium porosity rocks, Hudson (1990) for fractured rocks and Dvorkin et al. (1994) for high-porosity granular rocks. However, we look at the traditional transformations alone as a means of validating the velocity models against the models generated using the log-seismic mapping discussed in the previous chapter. Figure 7-16 (a, b & c) shows the three different porosity estimates obtained at an arbitrarily selected reference depth, and Figure 7-16 (d) shows the seismic derived porosity map at that depth. The seismic derived porosity map is obtained by applying a long-window smoothening filter over a depth interval of 50 meters above and below the depth of interest, so that it corresponds with the vertical resolution of the microseismic derived maps used in the estimation workflow.
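The three transforms can be sketched as follows (Python/NumPy; the matrix/fluid velocities, densities and the Gardner constant are generic placeholder values, not calibrated for this field):

    import numpy as np

    def phi_wyllie(vp, vm=5500.0, vf=1500.0):
        # Wyllie time-average porosity (eq. 7.12); vm, vf are placeholder velocities (m/s)
        return (1.0 / vp - 1.0 / vm) / (1.0 / vf - 1.0 / vm)

    def phi_raymer(vp, vm=5500.0, vf=1500.0):
        # Raymer transform V = (1 - phi)^2 Vm + phi Vf (eq. 7.13);
        # physically meaningful root of the quadratic (eq. 7.14)
        a, b, c = vm, vf - 2.0 * vm, vm - vp
        return (-b - np.sqrt(b ** 2 - 4.0 * a * c)) / (2.0 * a)

    def phi_gardner(vp, rho_m=2.65, rho_f=1.0, a=0.31):
        # Gardner density rho = a Vp^(1/4) combined with the density mass balance
        # (eqs. 7.15-7.18); a = 0.31 for Vp in m/s and rho in g/cc
        return (rho_m - a * vp ** 0.25) / (rho_m - rho_f)

    vp = np.linspace(2500.0, 4500.0, 5)    # placeholder velocity range (m/s)
    estimates = phi_wyllie(vp), phi_raymer(vp), phi_gardner(vp)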
Though the method shared here is a quick and relatively crude technique for velocity validation, a more elaborate approach is to use statistical analysis of data from the actual reservoir (compressional velocity and porosity estimates, etc.) within specific litho-bodies/depth intervals of interest and to obtain the best possible relationships for the reservoir in question. This is, however, considered unnecessary here, as the actual data derived from inversion is sparse and, as observed with the multiple relationships used, the overall porosity trends remain similar; the observed trend is considered sufficient to validate the workflow, including both the inversion process and the final geostatistical estimation. The porosity map used for analysis (as required) is the one obtained using conventional seismic data analysis (ANN modeling). At the same time, geostatistical estimation of properties such as porosity using seismic impedance and statistical co-simulation is also possible (similar to the approach used for the phase velocities).
Figure 7-16: (a) φ_1, (b) φ_2 and (c) φ_3 as per the discussion; (d) shows the 3D seismic derived porosity map (log-seismic mapping). Major features common to the maps are easily observed, indicative of valid and usable inversion results
We also know that the process of lithification leads to chemical alteration within the pore space through mineral deposition, which must have a strong impact on the moduli. Moreover, increased cementation leads to higher bulk densities. The combined effects of cementation and porosity reduction (due to increased overburden) lead to an observable increase in velocities with depth. Compressional wave velocity is also strongly dependent on the effective stress in the subsurface. Increased confining pressure (increased effective stress with burial depth) should lead to an increase in velocities until terminal velocity is reached. At low effective stress levels, cracks tend to be open and show large strains (for small increases in stress), which translates into lower velocities and smaller bulk modulus values. The condition is reversed in regions with high stress levels. In over-pressured regions, we should again see anomalous (low) velocities. We can calculate stress estimates directly from the V_P and V_S estimates obtained earlier, and these stress estimates can be a useful tool in fracture mapping (Tokosoz et al. 1981). The extensional (V_E) and hydrostatic (V_K) stress proxies are defined as:

V_E = V_S √[(3V_P² − 4V_S²) / (V_P² − V_S²)]    (7.19)

V_K = √[V_P² − (4/3)V_S²]    (7.20)
Figure 7-17 shows maps of both stress fields for three reference depths (0.5 km, 1.0 km & 1.5 km). It should be noted that some regions show spatially high normal stress values but low hydrostatic stress values, while in other regions the opposite is true. This provides a good example of the need for more complex relationships between the various geomechanical attributes, and of the requirement for "non-linear" modeling techniques such as ANN or fuzzy inference systems.
Figure 7-17: (a) Relative extensional (V_E) and (b) hydrostatic (V_K) stress maps at selected depth intervals. Also shown are the relative well locations (green inserts represent injection wells and black inserts represent production wells)
Theoretical and empirical studies indicate that compressional wave velocities decrease with increasing fluid saturation. An increase in the fraction of gas in the pore space should lead to a reduced bulk modulus and a reduction in both compressional and shear wave velocities, similar to the effect seen in highly porous systems or in low pressure anomalous zones. We also observe anomalous impedance values due to reduced densities, which compound the effects of the low velocities mapped at the zone of interest.
We can also calculate the normal and tangential weaknesses (Hsu et al. 1993), provided some estimate of fracture density is available beforehand and certain assumptions regarding the fracture model are made. This assumes a specific anisotropy model guiding the fracture system within the reservoir under study. The weakness estimates were first introduced based on the linear slip model proposed by Schoenberg (1983) and Schoenberg et al. (1988, 1989), which is based on the Backus averaging technique (Backus 1962). This model treats fractures either as infinitely thin, highly compliant layers or as planes of weakness with linear-slip boundary conditions; the two representations are equivalent in the long-wavelength limit of the seismic signal. Another possible approach is to use Hudson's model (Hudson 1980, 1981, 1990 and 1994), an effective medium theory that assumes an elastic solid with an internal distribution of thin, penny-shaped cracks. Hudson uses two parameters, e (crack density) and α (aspect ratio), to describe the structure of the fracture system. Figure 7-18 provides a graphical representation of the volumetric distribution of the cracks/fractures in both of the models mentioned in this section.
Figure 7-18: Graphical representation of fractures/ cracks using (a) Hudson's model and (b) linear-slip
model (infinite planes of weakness)
Under the assumptions of HTI symmetry, and taking Hudson's model due to its simplicity, the fracture weaknesses are defined as:

g = V_S² / V_P²    (7.21)

Δ_N = 4e / [3g(1 − g)]    (7.22)

Δ_T = 16e / [3(3 − 2g)]    (7.23)
In order to further conform to the seismic data, normalized FZI parameters as
defined in section 6.7 are used to obtain normalized crack density (e) which in turn is
used for the evaluation of the defined weakness estimates. Figure 7-19 shows the two weakness estimates at the sample reference depths (0.5 km, 1.0 km & 1.5 km), along with the well track locations.
Figure 7-19: (a) Tangential (Δ_T) and (b) normal (Δ_N) weakness maps obtained for sample reference depths of 0.5 km, 1.0 km & 1.5 km, outlining structural features not identifiable with velocity or Poisson's ratio. We can clearly observe anomalous weakness estimates in many zones of interest. Also shown are the relative well locations (green inserts represent injection wells and black inserts represent production wells)
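A sketch of the weakness evaluation (eqs. 7.21-7.23) using the FZI-derived crack density follows (Python/NumPy; all input maps are placeholders):

    import numpy as np

    def fracture_weaknesses(vp, vs, e):
        # Dry penny-shaped crack weaknesses under the assumed symmetry
        g = (vs / vp) ** 2                              # eq. 7.21
        delta_n = 4.0 * e / (3.0 * g * (1.0 - g))       # normal weakness, eq. 7.22
        delta_t = 16.0 * e / (3.0 * (3.0 - 2.0 * g))    # tangential weakness, eq. 7.23
        return delta_n, delta_t

    vp = np.full((50, 50), 3000.0)       # placeholder velocity maps (m/s)
    vs = np.full((50, 50), 1600.0)
    e = np.random.rand(50, 50)           # normalized crack density from the FZI maps
    dn, dt = fracture_weaknesses(vp, vs, e)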
Though this relation assumes no inclusion material (dry cracks), the weakness estimates defined above are calculated to check whether they provide a better characterization of the zones of interest when compared with the local stress estimates calculated earlier and, more importantly, how the two relate to one another. Moreover, 'g' as a property should strictly be evaluated for the background matrix material and not for the fractured medium as a whole. However, evaluating 'g' for the actual medium still allows the variability of the FZI maps obtained earlier to be mapped into the weakness estimates. Therefore, we still use the weakness estimates as a tool for comparison with the hydrostatic and extensional stress estimates, with the understanding that their independent use is limited by the assumption of an isotropic background medium and that the property values are therefore primarily qualitative in nature.
Apart from the stress and weakness estimates shown, minimum horizontal stress estimates can be evaluated using overburden and pore pressure estimates. The minimum principal stress (Hubbert et al. 1957) can be computed as follows:

σ_3 = [υ/(1 − υ)](σ_1 − αP_p) + αP_p + σ_ext    (7.24)
where υ is the Poisson's ratio, σ_1 is the overburden stress, α is the Biot's constant (taken as 1 for our calculations), P_p is the reservoir pore pressure and, finally, σ_ext is the extensional stress computed earlier. Both the overburden stress and the pore pressure can be estimated using well understood relations. Overburden stress is a function of depth and of the (variable) density in the overburden. We make use of the estimated seismic-derived density model at each nodal location and integrate over the entire depth range using the following relation:

σ_1 = ∫_0^z ρ g dz    (7.25)
where ρ is the density, g is the acceleration due to gravity and the depth ranges from 0 to z. At the same time, the pore pressure can be computed if the effective stress estimate can be found using the following relation (Doyen et al. 2003), which links compressional velocity with pore pressure, clay content and porosity for sedimentary systems (a_0 to a_3 and b are calibration constants):

σ_eff = [(V_P − a_0 + a_1Φ + a_2V_sh) / a_3]^{1/b}    (7.26)
where V_sh is the shale content obtained using well log analysis and conventional seismic data (clay content is evaluated from the available well log analysis, followed by the ANN based rock property prediction workflow). The values of the constants are unique to each reservoir setting and need to be estimated using well calibration. We have used standardized values (sourced from Doyen et al. 2003), as our purpose is to obtain indicative values (the properties are normalized before the final stress calculations). Estimates of the normalized overburden stress are shown in Figure 7-20 for the reference depths. Figure 7-21 shows normalized estimates of the pore pressure, while Figure 7-22 shows normalized estimates of the minimum principal stress. These estimates can be used to identify and characterize the overall stress regime in the subsurface and can highlight potential areas of interest which might be missed with the simple rock mechanics based modeling workflow shared earlier.
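A sketch of the overburden, pore pressure and minimum-stress chain is shown below (Python/NumPy; the effective-stress fraction, density column and Poisson's ratio are placeholders, and the σ_ext term follows the reconstructed form of eq. 7.24 above):

    import numpy as np

    def overburden(rho, dz, g=9.81):
        # Depth integration of the density column (eq. 7.25), in Pa
        return np.cumsum(rho * g * dz)

    def min_horizontal_stress(nu, sigma_v, pp, sigma_ext, alpha=1.0):
        # Minimum principal stress (eq. 7.24 as reconstructed above)
        return nu / (1.0 - nu) * (sigma_v - alpha * pp) + alpha * pp + sigma_ext

    dz = 10.0                                    # 10 m depth steps down to 1.5 km
    rho = np.full(150, 2300.0)                   # placeholder density column (kg/m^3)
    sigma_v = overburden(rho, dz)
    sigma_eff = 0.6 * sigma_v                    # placeholder effective stress profile
    pore_pressure = sigma_v - sigma_eff          # with Biot constant alpha = 1
    s_hmin = min_horizontal_stress(0.25, sigma_v, pore_pressure, sigma_ext=0.0)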
Figure 7-20: Estimated normalized overburden evaluated for three depths of reference (0.5 km, 1.0 km and
1.5 km) including well locations (green inserts represent injection wells and black inserts represent
production wells)
Figure 7-21: Estimated normalized pore pressure evaluated for three depths of reference (0.5 km, 1.0 km
and 1.5 km) including well locations (green inserts represent injection wells and black inserts represent
production wells)
Figure 7-22: Estimated normalized Minimum Principal Stress evaluated for three depths of reference (0.5
km, 1.0 km and 1.5 km) including well locations (green inserts represent injection wells and black inserts
represent production wells)
While it is well understood that the extensional stress has a bearing on the fracture opening (aperture), Rutqvist et al. (2003) showed how it can be used to predict fracture apertures in geothermal settings. We define the normalized fracture aperture expandability (F_E) as follows:

F_E = b_r + b_max e^{−ασ_ext}    (7.27)
Where b_r is the residual aperture, b_max is the maximum mechanical aperture and α is a constant (all three parameters are obtained from laboratory and field measurements using empirical relations involving a rock classification index). Since lab experiments have not been conducted for the Brawley field (to provide estimates of the residual and maximum apertures), we use the standard values for the apertures and α based on the Yucca Mountain study (Rutqvist et al. 2003). Since the values are eventually normalized for qualitative interpretation, quantitative judgments on the actual fracture aperture distribution cannot be made for our study area based on this property. Figure 7-23 shows the mapped aperture estimates.
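A short sketch of the normalized aperture expandability calculation, assuming the exponential aperture-stress form of Rutqvist et al. (2003) with placeholder parameter values:

import numpy as np

# Placeholder parameters after Rutqvist et al. (2003); site-specific values
# would come from lab/field measurements via a rock classification index
b_r, b_max, alpha = 50e-6, 200e-6, 0.4      # apertures (m), stress coefficient

sigma_ext_n = np.random.rand(1000)          # normalized extensional stress (stand-in)

# Eq. 7.27: aperture response to extensional stress, normalized for mapping
b = b_r + b_max * np.exp(alpha * sigma_ext_n)
F_E = (b - b.min()) / (b.max() - b.min() + 1e-12)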
Figure 7-23: Estimated normalized fracture aperture evaluated for three depths of reference (0.5 km, 1.0 km
and 1.5 km) including well locations (green inserts represent injection wells and black inserts represent
production wells)
The V_P/V_S ratio or Poisson's ratio can both be used as good fluid indicators (Cardona et al. 2001). Though examples of their traditional use for hydrocarbon detection are many (Robertson et al. 1993, Hamada 2004, etc.), we have looked at their possible use in a geothermal environment. Using a modified form of Wyllie's time-average equation (Wyllie et al. 1956) as the basis for simple travel time calculation, we get the following equations for travel times:

\Delta T_P = \phi\,\Delta T_{P,fl} + (1-\phi)\,\Delta T_{P,ma}    (7.28)

\Delta T_S = \phi\,\Delta T_{S,fl} + (1-\phi)\,\Delta T_{S,ma}    (7.29)
With porosity estimates available from seismic data, we use the fluid and matrix travel times based on the experimentally derived limits (Johnston et al. 1993) shared in Table 7-1. Figure 7-24 shows a V_P/V_S cross-plot for small sections of the reservoir at the same horizon, with interpreted travel time relations based on the data from the table for specific rock types. With the understanding that zones with gas should plot lower on both scales, with a more dramatic effect on the compressional velocity, we are able to observe such anomalous zones clearly on the plot, with zone C and zone D showing indications of the possible presence of a vapor phase.
Table 7-1: Typical shear and compressional wave travel times (μs/m)

Rock Type    ΔT_p     ΔT_s
Limestone    142.5    270
Dolomite     130.5    238
Sandstone    159      258
Water        567      1050
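The water lines shown in Figure 7-24 can be reproduced from eqs. 7.28 and 7.29 and the values in Table 7-1; a minimal sketch follows:

import numpy as np

# Travel times in microsec/m from Table 7-1 (Johnston et al. 1993)
matrix = {"sandstone": (159.0, 258.0), "limestone": (142.5, 270.0), "dolomite": (130.5, 238.0)}
dt_p_fl, dt_s_fl = 567.0, 1050.0    # water (fluid) travel times

phi = np.linspace(0.0, 0.4, 41)     # porosity range
for rock, (dt_p_ma, dt_s_ma) in matrix.items():
    # Eqs. 7.28 & 7.29: time-average mixing of fluid and matrix slowness
    dt_p = phi * dt_p_fl + (1.0 - phi) * dt_p_ma
    dt_s = phi * dt_s_fl + (1.0 - phi) * dt_s_ma
    vp, vs = 1e6 / dt_p, 1e6 / dt_s   # velocities in m/s
    print(rock, "Vp/Vs at phi = 0.2:", round((vp / vs)[20], 3))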
Figure 7-24: V_P/V_S cross-plot as a possible tool for fluid detection (dots). The lines indicate the water lines for sandstone (black), limestone (red) and dolomite (blue). Zones are selected based on V_P anomalies
7.5 Results & discussion
We have shown how the sparsely gridded velocity models obtained from inversion can be used to generate improved velocity models for use in our reservoir characterization workflow. We discussed how velocities depend on elastic properties and, therefore, how velocities can be used to obtain estimates of those properties within a generalized rock physics framework. The elastic rock properties as well as the stress mapping and other associated observations can also be used to interpret possible zones of fracturing and other zones of interest. Direct observations of anomalous zones with respect to velocities can also be used in our interpretation.
If the aim is to look for highly weathered and unconsolidated regions with high porosity and a possible vapor phase (steam), velocity anomalies are the ideal candidate. With regard to fracture zone characterization, low V_P/V_S anomalies and low V_P anomalies can be considered fracture induced. Moreover, low extensional stress anomalies indicate open fracture zones (Berge et al. 2001 and Martakis et al. 2006). A time lapse velocity anomaly map can therefore be used as a means of understanding both the temporal and spatial development of the fracture network. As a corollary, we can deduce moderate to high velocity anomaly zones to be indicative of consolidated regions with no secondary fracturing or with closed/cemented fractures. Evaluating the stress maps, we look at where the known well tracks intersect depths of interest and how they relate to the maps in question. Such maps have been provided throughout this chapter for reference. We clearly see many of the current wells associated with the anomalous zones identified in some of the horizons shown. The anomalous zones near the injection wells should indicate possible fluid induced fracturing, which could be due to pressure and cooling shrinkage. A study of the time lapse nature of the said anomalies can indicate both the efficacy of the injection schemes and the reservoir connectivity. Similarly, we can observe the time lapse changes around the production wells, which may indicate possible fracture swarms associated with faults and the natural fracture system and how they may change with time. A more detailed analysis of how the properties vary near wells of interest will be taken up in Chapter 9. The analysis with wells and operational data is important as it allows us to better understand the reservoir behavior under production/injection induced perturbations.
In summary, it is possible to invert the arrival data collected from microseismic
sensors (used as a tool for MEQ detection) into velocity and other associated elastic rock
properties. It is also possible to use many rock property models developed over the years
to obtain useful stress estimates within the reservoir. We also see how cokriging or
geostatistical simulation based estimators can be used to overcome the underlying
limitations of sparseness and quality of data obtained using tomographic inversion by
holding true to the underlying structural features of the reservoir known from the
conventional seismic survey. Moreover, elastic property anomalies as well as stress and weakness maps and porosity estimates have all been derived from the basic velocity mapping, and we have seen how these properties can individually contribute to a better understanding of the reservoir.
Chapter 8: Hybrid FZI Attributes
8.1 Introduction
The property estimates obtained from microseismic data analysis as well as those obtained from 3D surface seismic data using the ANN – well log property prediction workflow can be combined based on expected and observable fracture zone signatures such as high porosity, low density and low impedance values. Moreover, elastic rock properties as well as stress map observations can also be used to interpret possible zones of fracturing. As discussed in the previous chapter, while low V_P/V_S anomalies and low V_P anomalies can be considered fracture induced (Berge et al. 2001), low extensional stress anomalies indicate open fracture zones (Rutqvist et al. 2003). Moreover, tangential weakness estimates can be used to predict fracture density where fracture presence is known (Downton et al. 2010). Each of these individual properties by itself may not be indicative of fracturing with a high degree of certainty. However, when they are combined into the newly devised FZI attributes, they are postulated to provide reasonable fracturing indications within the study area. An ANN framework is used because the correlations between the input attributes are highly nonlinear, and neural networks are able to map such relationships in a robust manner. The new attribute model, which combines the two different sets of properties, is defined as follows:

FZI = F_{ANN}\left(\phi_n,\, AI_n,\, V_{Pn},\, V_{Sn},\, \rho_n,\, \Delta_{Tn},\, V_{En}\right)    (8.1)
Where ϕ_n represents the normalized porosity, AI_n is the normalized acoustic impedance, V_Pn is the normalized compressional velocity, V_Sn is the normalized shear velocity, ρ_n is the normalized density, ∆_Tn is the normalized tangential weakness estimate and V_En is the normalized extensional stress. In order to carry out this mapping, the velocities and stress data used are from the locations nearest to the well track. The selection framework uses the 8 data points nearest to the mid-point of the perforated section for selection of training data.
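A sketch of this training data selection, with hypothetical node coordinates and a perforation mid-point standing in for the field geometry:

import numpy as np

nodes = np.random.rand(500, 3) * [2000, 2000, 1500]   # hypothetical (x, y, z) property nodes (m)
props = np.random.rand(500, 7)                        # normalized inputs (phi, AI, Vp, Vs, rho, dT, VE)
mid_perf = np.array([1000.0, 900.0, 1100.0])          # mid-point of the perforated section (m)

# Pick the 8 nodes nearest to the perforation mid-point for training
d = np.linalg.norm(nodes - mid_perf, axis=1)
idx = np.argsort(d)[:8]
training_inputs = props[idx]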
Figure 8-1: Sample FMI log for a producer providing well control and fracture control data for modeling
purposes. Blue sinusoids are interpreted as conductive fractures and associated depths are used for training
(Source: Internally obtained from Ormat Inc. Original log and initial interpretations by Schlumberger Ltd.)
Additional testing and validation is carried out based on the available a-priori knowledge of fractured intervals within available wells from the study area. Image logs (FMI) and field data available from the operator are used to quantify fractured intervals to be used for the ANN model designs. Figure 8-1 shows a sample FMI log section from one of the control wells for reference. Partially and fully conductive fractured intervals are interpreted based on the FMI and production data, and the fracture logs are subsequently used to identify zones of interest. All interpreted zones are extracted for potential use in model design and verification. Figure 8-2 shows the extracted zones for the same control well, which are used for ANN training as discussed in subsequent sections of this chapter. Acoustic impedance projections (in-line and cross-line at the control well location) are additionally overlaid in order to provide a frame of reference with respect to the structural features at the known zones of interest within the reservoir.
Figure 8-2: Two control wells (used for fracture zone identification for ANN model design) along with
identified fractured and conductive intervals (blue inserts) using FMI logs and well data. Also shown are
impedance map projections showing structural discontinuities and dipping beds close to fractured intervals
The modeling for FZI is carried out using two different ANN modeling schemes.
The first method involves the use of a classification algorithm (LVQ) and the second
method involves a simple multi-layer RBF algorithm. The two FZI attribute volumes are
combined in order to obtain the final fracture zone indicator map for potential use in
reservoir characterization workflows. Figure 8-3 shows sample training data along with the training performance of both networks before application of the trained models to the entire study volume. It should be noted that the classification algorithm produces a bivariate classification map while the supervised RBF model produces a probabilistic output map, which is normalized for use. With an adequately large number of iterations, we can clearly see much lower mean square errors when the RBF algorithm is used instead of the LVQ method.
Figure 8-3: (a) Sample training data, (b) Input properties, (c) LVQ training and (d) RBF training results
8.2 Theoretical Background
There are many neural network designs available for possible use in order to solve the
problem of both classification and prediction of fracture presence based on the
framework provided in the introduction of this chapter. Moreover, many hybrid
techniques such as Neuro-Fuzzy methods may also allow for improved mapping based on
varying correlations (spatial variability) between different input attributes as observed
within the property volumes. The use of more complex modeling schemes could yield more representative fracture property estimates. However, with the aim of
demonstrating the general applicability of the defined characterization workflow, we use
two simple supervised ANN implementations in sequence.
The first supervised network used (LVQ) provides an encoding scheme to map a set of vectors such that the average distortion resulting from the mapping is minimized. For a set of vectors {v} drawn from a distribution f(v), the mapping to a code w = c(v) has an average distortion defined as:

D = \int d\left(v, c(v)\right)\, f(v)\, dv    (8.2)
Where d(v,w) is a distortion measure chosen based on the specific application.
The vector quantization method as described is a clustering scheme where the aim is to
find a set of representative vectors such that each vector from the original set can be
quantized into one of the representative vectors. In our case, the two representative classes are defined as the presence or absence of fractures. Figure 8-4 shows an example of a clustering scheme observed with an LVQ implementation. While a number of variations of the LVQ algorithm are available and in use, a typical LVQ algorithm involves the following steps. The first step is to choose the number of desired clusters, M (= 2 here). Next, the prototype values for each cluster are initialized (various methods can be used, including input vector quartiles or random selection from the input data). The remaining portion of the algorithm is iterative in nature and continues until the stopping criterion is reached. Each iteration involves the following steps:
a. Randomly pick an input x,
b. Determine the "winning" node k by finding the prototype vector that satisfies the condition

\|x - w_k\| \le \|x - w_i\| \quad \forall\, i    (8.3)

c. Update the "winning" prototype weights by using the standard competitive learning rule defined as follows:

w_k(t+1) = w_k(t) + \eta\left(x - w_k(t)\right)    (8.4)
Since the classes are predefined with a set of labeled data, the aim with a typical
LVQ implementation is to determine a set of prototypes that best represent each class.
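The following is a minimal LVQ1-style training loop consistent with the steps above; the binary labels and attribute vectors are synthetic stand-ins for the extracted training data:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 7))                    # training vectors (normalized attributes)
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)   # stand-in fracture/no-fracture labels

M = 2                                       # number of classes/prototypes
W = np.array([X[y == c].mean(axis=0) for c in range(M)])   # initialize prototypes

eta = 0.1
for t in range(2000):
    i = rng.integers(len(X))
    x = X[i]
    k = np.argmin(np.linalg.norm(W - x, axis=1))   # "winning" node (eq. 8.3)
    sign = 1.0 if k == y[i] else -1.0              # LVQ1: attract if correct, repel otherwise
    W[k] += sign * eta * (x - W[k])                # competitive update (eq. 8.4)
    eta *= 0.999                                   # decay the learning rate

pred = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
print("training accuracy:", (pred == y).mean())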
Figure 8-4: A typical LVQ representation with 3 predefined classes for mapping showing final prototype
vector locations based on training as well as originally selected input data vectors
The second supervised network used is an RBF network. In an RBF network, the activation functions used are radial basis functions, where the value of the function depends only on the distance from the centroid of the dataset. They are best suited for prediction and approximation problems and are used in this study to provide probabilistic fracture characterization based on the defined inputs. It should be noted that the hidden layers have RBF's as activation functions, while the output nodes have a linear activation function for an optimized design. While similar to the standard back-propagation algorithm, the RBF design allows for much faster training, and such networks are less susceptible to non-stationary inputs due to their inherent behavior. The typical RBFNN training algorithm adjusts the parameters of the network to reproduce a set of input-output patterns. The adjustable parameters include the weights of the last linear layer (w_i), the centers of each neuron in the hidden layers (c_i) and the parameters (α_i) used in the RBF's (with Gaussian functions). The typical Gaussian RBF activation function is defined as:

h_i(x) = \exp\left(-\frac{\|x - c_i\|^2}{\alpha_i^2}\right)    (8.5)
This activation function is local: changing the parameters of one neuron has only a small effect on inputs that are far away from the center of that neuron (c_i). The centers are fixed before training based on clustering of the inputs, and the parameter α_i is typically set to the maximum distance between the centers. The node weights are iteratively adjusted using steepest descent of the error function defined as:

E = \frac{1}{2}\sum_j \left[y_j - \sum_i w_i\, h_i(x_j)\right]^2    (8.6)
This results in the following steepest descent condition:

\Delta w_i = -\eta\, \frac{\partial E}{\partial w_i}    (8.7)

Where η is the standard learning rate parameter, which can be user defined before learning is initiated. Substituting eq. 8.5 & eq. 8.6 in eq. 8.7, we get:

\Delta w_i = \eta \sum_j \left[y_j - \sum_k w_k\, h_k(x_j)\right] \exp\left(-\frac{\|x_j - c_i\|^2}{\alpha_i^2}\right)    (8.8)
Figure 8-5: A typical supervised RBFNN topology using back-propagation algorithm where error
computed at each iteration is propagated back in order to update network properties (w, c & α). The final
output is a linear combination and provides the desired model (Source: Wikipedia)
Figure 8-5 shows a typical supervised network design topology used in our study.
The total number of input nodes used depends on the properties (fracture related) which
are finally selected for implementation. Training data is extracted based on a-priori information on known fractured sections of the reservoir, or based on combinations of property trends within the input datasets which conform to the trends expected in the presence of fractures.
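A compact sketch of the RBF training just described, with Gaussian hidden units, centers fixed from a random subset of the inputs (standing in for the clustering step) and output weights adjusted by steepest descent on synthetic data:

import numpy as np

rng = np.random.default_rng(1)
X = rng.random((300, 7))                        # normalized input attributes
y = (X[:, 0] * X[:, 6] > 0.25).astype(float)    # stand-in fracture probability targets

# Fix the centers before training (random subset as a clustering surrogate)
# and set the width parameter alpha from the maximum inter-center distance
C = X[rng.choice(len(X), 20, replace=False)]
alpha = max(np.linalg.norm(a - b) for a in C for b in C)

def hidden(X):
    # Eq. 8.5: Gaussian RBF activations for all inputs vs. all centers
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / alpha**2)

H = hidden(X)
w = np.zeros(len(C))
eta = 0.01
for epoch in range(500):
    err = y - H @ w                  # residual of the error function (eq. 8.6)
    w += eta * H.T @ err / len(X)    # steepest descent weight update (eqs. 8.7-8.8)

print("final MSE:", float(((y - H @ w) ** 2).mean()))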
8.3 Mapped properties
Figure 8-6 shows the mapped FZI_3 and FZI_4 attributes at three reference depths (0.5 km, 1.0 km & 1.5 km) using LVQ and RBF network modeling respectively.
Figure 8-6: (a) FZI_3 and (b) FZI_4 property maps at reference depths (0.5 km, 1.0 km & 1.5 km) including pruning based on effective normalized uncertainty cutoffs. FZI_3 provides an indication of fracture presence while FZI_4 provides the probability of fracture presence
The purpose of using the LVQ network is to carry out a simplified classification of the reservoir based on property values and their spatial variations. The classification is very simple, with two selected classes (true and false), and the output maps are bivariate, defining specific zones of interest (showing potential fractured sections within the study volume). The RBF network allows for a more probabilistic output, providing the probability that an identified section contains fractures. Both of these attributes, when considered in tandem, allow for improved characterization of fractured intervals within the study volume. Figure 8-7 shows the results of combining the two FZI attributes described before, including normalized uncertainty based pruning. The final interpretations, including the integrated analysis using multiple properties as described in the next chapter, make use of the final "hybrid" FZI attribute shown in this figure. This final "hybrid" property map is referred to as FZI_5 for any future reference. The cutoffs based on FZI_3 values are selected using the appropriate quantile, which is chosen based on preliminary analysis of the data maps and normalized uncertainty maps (shared in section 7.3).
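A sketch of how the two attribute volumes might be combined into the hybrid indicator, with hypothetical arrays and cutoff values standing in for those chosen from the data and uncertainty maps:

import numpy as np

rng = np.random.default_rng(2)
FZI3 = (rng.random(10000) > 0.5).astype(float)   # bivariate LVQ classification output
FZI4 = rng.random(10000)                         # probabilistic RBF output
U = rng.random(10000)                            # normalized uncertainty map

# Hybrid attribute: keep the RBF probability only where the LVQ class is "true",
# then prune nodes whose normalized uncertainty exceeds a chosen cutoff
FZI5 = FZI3 * FZI4
FZI5[U > 0.7] = np.nan             # hypothetical uncertainty cutoff
cut = np.nanquantile(FZI5, 0.7)    # quantile-based display cutoff
zones = FZI5 >= cut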
Figure 8-7: FZI_5 property maps at reference depths including pruning based on effective normalized uncertainty cutoffs
Finally, it is possible to obtain an improved fracture aperture model based on the fracture aperture expandability attribute (section 7.4) by making use of an ANN model:

FZI_A = F_{ANN}\left(F_{En}\right)    (8.9)

Fracture permeability can be estimated empirically based on Darcy flow using fracture density (fractures per unit length) and fracture aperture estimates. However, since fracture density is unknown in absolute terms, we use the mapped FZI probability to estimate fracture density by preselecting "minimum" and "maximum" fracture density values based on analysis of available image logs and generating scaled fracture density values for the entire study volume. Since we only have normalized fracture aperture estimates (FZI_A), the fracture density map is also normalized.
Figure 8-8: Normalized fracture permeability (k_Fn) property maps at reference depths of 0.5 km, 1.0 km and 1.5 km
The following equation is used for fracture permeability estimation:

k_{Fn} = \frac{n_{Fn}\, FZI_A^{\,3}}{12}    (8.10)

Where n_Fn is the normalized fracture density estimate calculated using the predefined densities mentioned earlier. Figure 8-8 shows the mapped normalized fracture permeability estimates at three reference depths, which can be used for qualitative analysis including identification of flow channels and low permeability barriers along conductive features.
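A short sketch of this permeability estimation, assuming the parallel-plate (cubic law) form of eq. 8.10 and placeholder image-log density limits:

import numpy as np

rng = np.random.default_rng(3)
FZI4 = rng.random(10000)     # fracture probability (proxy for fracture density)
FZI_A = rng.random(10000)    # normalized fracture aperture estimate

# Scale FZI4 between "minimum" and "maximum" densities from image logs,
# then renormalize (placeholder limits, fractures per metre)
n_min, n_max = 0.5, 5.0
n_F = n_min + FZI4 * (n_max - n_min)
n_Fn = (n_F - n_F.min()) / (n_F.max() - n_F.min() + 1e-12)

# Eq. 8.10: normalized fracture permeability (cubic law form)
k_Fn = n_Fn * FZI_A**3 / 12.0
k_Fn /= k_Fn.max()           # normalize for qualitative mapping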
8.4 Conclusion
While properties identified through analysis of seismic attributes including log derived
attribute maps as well as microseismic derived attributes can be used effectively for
reservoir characterization including fracture zone identification, we make use of neural
networks for improved fracture mapping as they are ideally suited for highly non-linear
relationships. They have the ability to “learn” from observed data as an arbitrary function
approximator. Proper choice of the model, the cost functions as well as the learning
algorithm can lead to extremely robust network models for implementation. Their ability
to detect extremely complex relationships between dependent and independent variables
is of high value in property prediction problems. However, caution is necessary when
incorporating ANN based prediction workflows in most applications. ANN applications
(particularly training schemes) have a “black box” nature to them and it is difficult to
quantify the existing relationships between variables easily. Moreover, in many instances, direct relationships between input properties may not be readily discernible, yet the ANN training is still able to identify and use such mappings, resulting in a higher degree of uncertainty in the predictions. Typical ANN implementations can require high computation times and are prone to over-fitting. Therefore, as a precursor to improved training, proper
representative training data with sound theoretical background to model development is
necessary for desirable results.
We have demonstrated the potential use of two different ANN algorithms (LVQ
& RBF) for classification and prediction of zones of interest as potential fractured
intervals. While there are sound reasons for the choice of algorithms made, other neural
network topologies, types as well as hybrid algorithms can be potentially used to obtain
improved network designs for better fracture zone classification and reservoir
characterization. It is vital to limit any future analysis to zones with adequately low uncertainties in order to have confidence in the predictions made using the derived attributes. In Chapter 9, we will utilize the derived FZI property maps to characterize the
subsurface and interpret zones of interest based on spatial variability and correlations
between various derived attributes.
Chapter 9: Integrated Analysis
9.1 Introduction
Chapter 6 and Chapter 7 discussed, apart from various other seismic and microseismic
derived properties, the different techniques that may be used for discontinuity and elastic
property mapping. The property estimates obtained from microseismic data analysis as
well as those obtained from 3D seismic data using ANN – well log property prediction
workflow and discontinuity maps can be combined based on the expected fractured zone
“observations” such as high porosity, high frequency attenuation, low density and low
impedance values. Moreover, elastic rock properties as well as stress and weakness
mapping observations can also be used to interpret possible zones of fracturing and other
zones of interest. Low V_P/V_S anomalies and low V_P anomalies can be considered possibly fracture induced. Moreover, low extensional stress anomalies should indicate open fracture zones. Each of these individual properties by itself may not be indicative of fracturing with a high degree of certainty; however, they can be combined into what we refer to as fracture zone probability maps. There are many ways of doing such mapping. We can do linear mapping, which involves well defined relationships between these attributes and fracture probability but requires either theoretical or experimental models. Such relations, while useful in getting an indication of zones of interest, may not be truly indicative of actual fracture zones due to the underlying non-linearity and loosely defined relations between these attributes and possible fractures. In order to better resolve this issue, we use available knowledge about known
zones of fracturing associated with well observations which involve image logs (FMI),
production data as well as core samples which are indicative of high degree of fracturing.
In Chapter 8, we discussed some of these integration techniques for fracture mapping
which can be used with seismic derived discontinuity maps within a broader framework
to better understand the overall reservoir discontinuities, etc. In particular, we have
looked at newly designed “FZI” attributes for fracture zone detection including indicator
maps derived using neural network modeling.
Figure 9-1: Normalized velocities from MEQ and 3D seismic data compared at reference in-line, cross-line
and depth (1 km). Dotted inserts indicate zones showing good match between the two separate data sets
Before discussing some integration schemes and associated interpretations, we look at a basic comparison of the seismic and microseismic derived velocity maps, to try and validate observations of the same property estimated from the two different data sources with differing resolutions. The seismic derived velocity estimates have been crudely up-scaled using a long window averaging dip steered filter. Both velocity maps have been normalized for comparison. Figure 9-1 shows two different sectional views at a test depth slice of 1 km. The in-line and cross-line views represent averaged 3D seismic derived velocity maps, while the depth insert, as a pseudo horizon, represents the inverted and simulated microseismic derived velocity estimate. We can clearly see zones showing a good match between the two interpretations, but we also observe sections with a relatively poor match. There is a substantial degree of uncertainty associated with the averaging and fine-scaling of the MEQ derived velocity as well as the coarsening of the seismic derived velocity, which could be one of the factors responsible for the observed mismatch in some sections. It is possible to incorporate improved up-scaling algorithms to obtain a better match between the two for higher confidence in subsequent interpretations. Another important factor is the inherent nature of the velocity map, which is derived from seismic logs converted to compressional velocity models using analytical modeling, followed by 3D mapping to generate the velocity cubes.
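As an illustration of the kind of coarsening involved, the following sketch uses a plain boxcar average in place of the long window dip steered filter, with random cubes standing in for the two velocity volumes:

import numpy as np

rng = np.random.default_rng(4)
v_seis = rng.random((120, 120, 200))    # fine-scale seismic-derived velocity cube

def boxcar_upscale(v, win=10):
    # Average non-overlapping windows along each axis (crude upscaling)
    nx, ny, nz = (s // win for s in v.shape)
    v = v[:nx*win, :ny*win, :nz*win]
    return v.reshape(nx, win, ny, win, nz, win).mean(axis=(1, 3, 5))

v_coarse = boxcar_upscale(v_seis)

def norm(a):
    # Normalize both volumes before comparison, as done in the text
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

v_meq = rng.random(v_coarse.shape)      # stand-in for the inverted/simulated MEQ velocity
mismatch = np.abs(norm(v_coarse) - norm(v_meq))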
9.2 3D FZI mapping and interpretation
As shown in section 8.3, the newly defined hybrid FZI attributes can be mapped and interpreted along the wellbores of existing well tracks to identify potential zones of interest and corroborate the obtained properties. The first step is to validate the FZI models using test data. Of the two wells available with fracture logs (identified from FMI logs and production/injection data), the first is used to identify locations for the ANN model design while the second well is used to validate the designed models by comparing model output with the actual presence or absence of fractures. Figure 9-2 and Figure 9-3 show the FZI_4 property map projections at specific in-line/cross-line locations (corresponding with wellbore locations) along with the well tracks and known fractured intervals (based on partially and fully conductive fracture logs) for reference. As can be observed, close to known fractured intervals, we observe high FZI values in thin bedding planes or along bedding boundaries. At the same time, for the second well, we observed similar characteristics (particularly in the in-line projection) indicating identified fractured intervals which are independently observed in the fracture logs. We also observe other potential zones of interest, based on high mapped FZI values along the wellbores at shallower depths, which could be developed relatively easily to increase hot water production and can be looked into later.
Figure 9-2: FZI_4 map projected at (a) in-line and (b) cross-line associated with the training well (fracture log data from FMI logs used in ANN training) location. Highlighted sections indicate locations of high FZI values which match with known intervals showing fractures (blue inserts over well tracks)
Figure 9-3: FZI_4 map projected at (a) in-line and (b) cross-line associated with the testing well (fracture log data from FMI logs used for model validation) location. Highlighted sections indicate locations of high FZI values which match with known intervals showing fractures (blue inserts over well tracks)
Figure 9-4: FZI_4 map projected in 3D along with well locations to map out zones of interest along wells of interest. Two horizons of interest located at ~0.6 km and ~1.1 km (with variable depth ranging) can be identified by mapping individual wells of interest
Since the FZI_4 attribute can be mapped in 3D, it is possible to understand the connectivity and zones of interest by mapping the attribute along existing well tracks and identifying zones of interest based on such mapping. Figure 9-4 shows a 3D map of the FZI_4 attribute with well track inserts, along with 4 individual wells in Figure 9-4 (b, c, d & e), to better characterize the horizons with high FZI_4 values (based on the 7th quantile cutoff as discussed in section 8.3). We can visually identify two specific horizons of interest, one at a shallower depth of ~600 m and a deeper horizon at a depth of ~1100 m, which corresponds with the high productivity of well 1 and well 4, demonstrated to be fracture dominated flow. However, for well 3 (producer), we observe the absence of fracturing along the deeper horizon of interest, and this is also reflected in the actual field data (with the well showing relatively low production compared to other producers).
The FZI_4 maps are validated by comparing them with microseismic derived property estimates: we compare the values at the training well (well 1 from Figure 9-4) at three different depth intervals (0.5 km, 0.6 km & 1.2 km) in order to observe variations in the properties close to the wellbore at the specified depths (Figure 9-5). We observe that for all three depth intervals, high extensional stress values close to the wellbore (indicative of fracture opening) tally very well with high FZI_4 values. Furthermore, for the two shallower horizons tested, we observe that high tangential weakness values align laterally along the wellbore with a spread similar to the mapped extensional stress. However, for the deepest depth interval, a relatively low tangential weakness value with high stress and FZI_4 values might indicate bigger fractures (including possible faulting) but with lower estimated fracture density. Further validation of this observation can be carried out by mapping seismic derived discontinuity with the above properties in order to tie the mapped FZI_4 to either highly fractured intervals associated with injection or production (induced) or fractured sections associated with local faulting and associated stress.
Figure 9-5: Extensional stress (V_E), normalized tangential weakness (∆_T) and FZI_4 mapped along well 1 (used for training) at depth intervals of (a) 0.5 km, (b) 0.6 km and (c) 1.2 km to validate mapped properties and indicate fracture presence along the wellbore at different depths
9.3 Integrated analysis
In section 6.7, we discussed the possibility of using different seismic (FZI_1) and integrated seismic-log derived (FZI_2) attributes for fracture zone identification. In section 8.3, we discussed the possibility of using ANN modeling for designing hybrid FZI attributes for fracture zone characterization (FZI_3 and FZI_4). We have also demonstrated a broad ANN framework involving the use of discontinuity maps from different filter implementations and other attributes available from the literature, and their use as training nodes for fracture zone identification and other reservoir property modeling work. We can integrate these disparate properties in order to develop a framework for identification of zones of interest involving fault induced fracturing and other observable interactions between the said properties. Figure 9-6 shows four different depths of reference and the generated maps using data from the ANN fracture zone prediction workflow described in section 8.3. As expected, the description is much better at the 1 km depth horizon due to a better constrained map. This is because of the higher degree of uncertainty associated with individual training nodes at greater depths (and also at very shallow depths), owing to the lack of well log data for adequately constrained property maps. For this reason, the higher fracture probability zones (high FZI values) may or may not tally with the identified discontinuities, and they do not map out as well within the deeper horizons (1.5 km and 2.0 km) when compared with the shallower depths.
Figure 9-6: Integrated maps showing FZI_4 and discontinuities (using Laplacian filter) at depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue inserts show the producer locations and the green inserts show the injection well locations
The results can be improved with more data, in the form of deeper wells (logs), to constrain the match. Based on the observed spread seen in the FZI_4 maps and their association with the discontinuity maps, it is clear that in many cases we do observe higher FZI_4 values close to major interpreted discontinuities. In some cases, areas showing a large spatial spread of high FZI_4 values do not have any major associated discontinuity; this can be interpreted as indicative of possible zones of interest due to the associated values of the attributes mapped in the background, or such areas could be associated with smaller faults (below seismic resolution). It could also indicate highly unconsolidated sections of the reservoir with possibly high fracture permeability values. In other cases, we do observe high FZI_4 values close to the flanks of some discontinuities, which could be indicative of fault associated fracturing. While interpretations based on integrated mapping are a valuable tool, care should be taken to prune locations with very high uncertainties (section 7.4) or to supplement interpretations with adequate caveats.
Next, we look at the possibility of integrating information from the MEQ data analysis (inversion results) with the seismic derived discontinuity maps. Such an analysis provides us with the potential to better understand regional elastic stress and weakness characteristics in conjunction with the major reservoir discontinuities, including reservoir bed boundaries and major communicating and non-communicating compartments within the study area. It can also provide a broader understanding of anomalous zones of interest and their characterization for identification of possible zones for further analysis. This is critical since the property maps from the MEQ data inversion workflow are expected to have a very low resolution compared to seismic derived properties, due to the sparse nature of the inversion gridding discussed previously. Figure 9-7 shows extensional stress (V_E) mapped as a co-attribute with discontinuity at reference depths of 0.5 km, 1.0 km, 1.5 km and 2.0 km. Figure 9-8 shows hydrostatic stress (V_K) with discontinuity boundaries mapped at the same depth horizons as V_E. Figure 9-9 shows tangential weakness (∆_T) estimates with discontinuity at the same depths, while Figure 9-10 shows normal weakness (∆_N) mapped with discontinuity at the same depths of reference. A more detailed look at these maps will follow. However, from the integrated maps, it is quite clear that the use of these estimates alone, particularly at shallower depth intervals, is fraught with limitations due to the lower imaging resolution (lack of MEQ activity) at shallow depths. Moreover, the basic assumptions made during the estimation of the said properties lead to high uncertainties as well. However, we can still observe similar features stand out when compared with the variable density maps (of the plotted properties) and therefore, they can be used to validate observations at a later stage when combined with other known reservoir properties such as known fault and fracture zones or discontinuity gradients.
Figure 9-7: Integrated maps showing V_E and discontinuities (using Laplacian filter) at depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue inserts show the producer locations and the green inserts show the injection well locations
Figure 9-8: Integrated maps showing V_K and discontinuities (using Laplacian filter) at depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue inserts show the producer locations and the green inserts show the injection well locations
Figure 9-9: Integrated maps showing ∆_T and discontinuities (using Laplacian filter) at depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue inserts show the producer locations and the green inserts show the injection well locations
Figure 9-10: Integrated maps showing ∆_N and discontinuities (using Laplacian filter) at depth horizons of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km. Blue inserts show the producer locations and the green inserts show the injection well locations
Figure 9-11: Integrated maps showing the FZI_4 attribute with discontinuities (using Laplacian filter) at varying depth horizons. The arrows indicate potential connectivity within the reservoir based on discontinuities and FZI spread (green indicates potential connectivity while grey indicates non-communicating boundaries)
From the individual maps, we observe that most wells align with major identified discontinuity boundaries (contour overlays), including faults. However, when we look at the variable density maps in each of the plots, we find it hard to correlate the variability in the mapped properties with the discontinuities. In order to better understand the behavior of these properties, we look at enlarged sections from the plots for more detailed interpretation. From Figure 9-11 (d & f), we observe that some high FZI features are associated with major discontinuities. This can be interpreted as secondary (fault induced) fracturing, considering the highly active regional geology that encompasses the study area. However, we also observe high FZI values in other locations which are spread across an interpreted discontinuity or edge boundary, or which seem to be unrelated to faults, as shown in Figure 9-11 (g & h). Other attributes, such as sand/shale (Vsh) or porosity mapping, may be used alongside the FZI's in our interpretation to improve our understanding of reservoir connectivity and to validate the observations on connectivity made using FZI variability. From Figure 9-7 and Figure 9-9, we observe moderate to high estimated extensional stress values and high tangential weakness values, indicative of the presence of open fractures and high fracture density in overlapping zones.
Effective stress can be used in conjunction with other properties to validate the observations made based on the said properties. Figure 9-12 shows extensional stress mapped with effective stress vectors and discontinuities. We observe that the variability in effective stress aligns in many cases with major discontinuities or bed boundaries. We also observe sudden changes in the variability of the mapped extensional stress (V_E) along these identified "boundaries" (white dotted trends). At the same time, there are situations where we observe similar trends but with little or no variation in the extensional stress values (red insert). This could be indicative of an absence of fractures despite the presence of discontinuities, which can result from inactive faults and consolidated sediments with chemically and/or thermally cemented fractures (hot water circulation).
Since low phase velocities (attenuation effects), high extensional stress and
tangential weakness estimates and high FZI values are indicative of a high probability of
dense and open fracture networks (providing fracture dominated flow pathways), we tie
these properties together to map “zones of interest” which can be more representative and
indicative than using the individual properties separately (Figure 9-13). This workflow
involves looking for spatial zones within the study volume having a relatively high value
(Note: relatively low values for phase velocities) for all of the mapped properties.
However, due to the very large number of properties involved, a fuzzy system is used to
define the limits for any one property based on the values of the other properties. These
property maps can be used to identify zones of interest at lateral offsets from existing
wellbores which could be developed in the future with directional drilling. They provide
a better indicator of potential zones of interest when compared with FZI property maps
alone as they take into account the behavior of other relevant properties which may not
be mapped with the neural networks or which may not be weighted heavily within the
network training process due to data artifacts.
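A sketch of such a fuzzy combination, using simple linear membership functions; the limits and the choice of min() as the fuzzy AND are placeholders for the system actually tuned per property:

import numpy as np

rng = np.random.default_rng(5)
n = 10000
VE, dT, FZI4 = rng.random(n), rng.random(n), rng.random(n)   # normalized properties
VP, VS = rng.random(n), rng.random(n)                        # normalized phase velocities

def mu_high(x, lo=0.4, hi=0.8):
    # Membership: 0 below lo, 1 above hi, linear ramp in between
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def mu_low(x, lo=0.2, hi=0.6):
    return 1.0 - mu_high(x, lo, hi)

# "Zone of interest" degree: high VE, dT and FZI4, low VP and VS;
# min() acts as the fuzzy AND across the memberships
zoi = np.minimum.reduce([mu_high(VE), mu_high(dT), mu_high(FZI4), mu_low(VP), mu_low(VS)])
zones = zoi > 0.5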
Figure 9-12: Integrated display of discontinuity and V_E along with effective stress gradients superimposed at reference depths of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km with identified feature boundaries (white inserts) and mapped properties
Figure 9-13: “Zones of interest” mapped at reference depths of (a) 0.5 km, (b) 1.0 km, (c) 1.5 km and (d) 2.0 km using a combination of V_E, V_P, V_S, ∆_T and FZI_4 property values
With good well control available within the central section of the study area (Figure 2-1 & Figure 2-2), we select 4 sample wells and correlate their well-track locations with the generated integrated maps at depths of interest, which broadly correspond with the perforated intervals showing high injection/production flows. The aim is to try and understand whether the observed high production throughputs can be interpreted based on the estimated properties as well as the analysis and conclusions from the preceding discussion. While selecting the wells, care is taken to make sure that the collated uncertainty estimates close to the wells are not too high, as that could render the interpretations suspect. Figure 9-14 shows the sample well locations with edge, discontinuity gradient and extensional stress maps superimposed at the depths of interest. We observe that not only do these wells align with the stress gradient boundaries, they are also located close to identified edges with sharp discontinuity gradients close to the wells (as observed earlier). We also find the wells to be present at locations with relatively high extensional stress values, indicating possible open fractures. Next, we look at the FZI mapping close to the said zones of interest (wells) to validate the observations made here.
Figure 9-14: V_E, discontinuity gradient & edge maps plotted at sample well locations. Purple arrows indicate major discontinuity boundaries with rapid gradient change indicating the presence of faults or bed boundaries. In cases (a), (b) & (c), we observe high V_E values at or close to the wellbore, indicating the contribution of connected open fractures to the observed well throughputs. Case (d) indicates minimal contribution from fractures, with flow dominated by associated faults
Figure 9-15: FZI_4, discontinuity gradient & edge maps plotted at sample well locations. Dotted inserts indicate possible connecting flow pathways close to sample wells which show high flow throughputs. It should be noted that in case (b), the flow regime appears to be fracture dominated with no prominent faulting close to the wellbore
Finally, we can also integrate FZI with edge and discontinuity gradient maps to
test for possible reservoir connectivity and fracture zone characterization close to the
existing well locations. Figure 9-15 shows the four sample wells with the indicative
properties mapped at depths of interest. We observe that extended zones with high
fracture probability (based on FZI density) are present close to the well locations and are
indicative of possible zones influencing the producer wells (observed production from
perforated zones) and injectors (showing high throughputs). Moreover, we see much
higher mapped FZI values close to injectors (which is expected due to high injection
volumes compared to production throughputs resulting in more prominent perturbations
and potential in-situ induced fracturing).
9.4 Summary
We have shared different methods to develop and combine useful reservoir property estimates, and we have demonstrated how these properties can be used for integrated fracture zone characterization within the reservoir. The properties used in the characterization include geomechanical properties estimated from inverted phase velocities (microseismic data) as well as 3D seismic and log derived attributes. We have built on the integrated characterization framework and provided specific workflows involving derived properties for use with specific geophysical data types in our analysis. We have shown possible ways of understanding reservoir connectivity, the dominant flow regimes for particular wells of interest, the identification of potential zones for future development based on the behavior of the attributes, zones with high fracture density and their impact on the reservoir behavior and finally, the identification of possible flow barriers and the relation between the regional stress regime and the identified discontinuities.
We observe that even though the data quality does suffer due to issues discussed in
detail previously (sections 5.5, 6.4, 7.4 & 8.3), we can still use the derived properties to
implement the proposed integrated characterization framework. We have validated this
by making use of selected producer and injector wells of interest (with known high flow
throughput zones). The proposed framework can also be expanded as a tool to diagnose
potential issues with drilled wells where flows are restricted and other independent
information fails to provide usable diagnostic information.
Chapter 10: Summary and Conclusions
In this dissertation, we introduced a novel integration framework involving two
geophysical datasets with varying resolutions (3D surface seismic and microseismic)
along with well logs to characterize fractures in an unconventional reservoir setting. It
provides a novel methodology to conduct time lapse fractured zone characterization,
provided the field has passive seismic monitoring. We proposed the use of microseismic
data derived geomechanical properties as a source of additional indicative information
regarding fractured sections within the reservoir. We also demonstrated how the severe
limitations posed by poor quality of passive data can be overcome to a certain extent by
using intelligent data processing schemes. We introduced a new set of hybrid fracture
zone detection attributes (FZI’s) which can be used for qualitative interpretations. These
attributes are generated using ANN based rock property modeling, which provides a framework for modeling other properties (beyond the fracture indicators shared in our work) provided the right information is available. While a detailed summary was provided with
each chapter, we present a brief summary of the work here.
We provided a brief overview of the reservoir which forms our area of study in section 2.3. The actual characterization workflow involving all of the disparate datasets is
shared in section 1.3. While microseismic monitoring data was available for analysis, it
did pose some serious challenges which were identified in Chapter 3. In order to tide over
the issue of sub-optimal phase detection quality, we introduced a novel hybrid neuro-
evolutionary autopicking algorithm in Chapter 4. Apart from general improvement in
210
phase arrival detection observed with other contemporary autopickers, the new
autopicker allows for easy expandability and provides a robust framework for phase
detection for any type of passive seismic dataset (including hydrofrac data). In Chapter 5
we shared results from inversion of phase arrival data for MEQ event hypocenters and
phase velocities. We made use of open source inversion algorithms to complete this task
but at the same time, identified limitations posed on the algorithms due to data
limitations. These include the receiver patch (station locations) as well as the actual event
spread which led to higher uncertainty in inversion results. We introduced a new
approach towards optimized hybrid (multiple) passive seismic monitoring array design
and shared modeling results on hypothetical hydrofrac scenarios in order to validate the
algorithm. This approach can be used as a design approach for future passive seismic
array deployments. We conducted an exhaustive seismic attribute study which was
outlined in Chapter 6. We also introduced some basic FZI attributes (FZI_1 and FZI_2)
which rely on seismic & log derived information for fracture zone identification. Due to
the varied limitations of the velocity models (particularly the issues of low resolution of
models and resolution mismatch between microseismic and seismic data); we introduced
geostatistical approaches towards fine-scaling of MEQ derived phase velocity maps
(Chapter 7). In the same chapter, we introduced many well-known geomechanical
attributes and generated these property estimates by using the high resolution phase
velocity maps and density estimates derived from seismic data analysis. These include
elastic rock properties (elastic moduli), stress and weakness estimates (based on
preselected effective medium theory and anisotropy models), normalized pore pressure as
well as normalized fracture aperture estimates. With all of the available microseismic,
seismic and log derived attributes available, we introduce an integration workflow and
methodology based on neural network property modeling in Chapter 8. We used two
learning algorithms (LVQ and RBF) and compared the results from the two training runs
to highlight the advantages and limitations of the ANN based modeling approach. In
Chapter 9, we introduced various integration methodologies to better understand the
reservoir, particularly in the zones of interest based on known productive intervals. We
compared observed FZI and other attribute behavior with the actual productive intervals
and fractured horizons (from logs). Finally, we demonstrated the ability of the hybrid FZI
and other attributes to highlight communicating pathways, reservoir compartments and
expected flow contributions at well locations.
Chapter 11: Future Work
While the tests with the hybrid fracture identifier attributes are preliminary, they can be
expanded with the inclusion of other, more complex and descriptive attributes which can
allow quantitative interpretations including fracture modeling studies. These include
attributes describing fracture orientation, aperture and density as quantitative estimates
which can subsequently be used as inputs to DFN modeling workflows.
While uncertainty quantification from both the inversion runs and the geostatistical phase velocity prediction has been demonstrated to be useful in qualifying the interpretations made using the generated property estimates, it is important to consider neural network performance as well. An improved understanding of the modeling errors from the ANN workflows should be pursued.
Optimizing network design for application on datasets can be looked into by
making use of Extended Kalman filters for neural network learning (to solve for the
optimized weights, functional parameters & network topology). This will help arrive at
optimized network models for further use.
While the integrated characterization workflow introduced in this study relates to
a geothermal reservoir, it could be similarly applied to other unconventional settings and
tested for efficacy and applicability (including, but not limited to shale reservoirs).
While we have highlighted the fact that this workflow can be used for time lapse
fracture characterization, it has not been tested due to the lack of adequately good quality microseismic data. If higher resolution MEQ data is made available (including
adequate event count), temporally segmented MEQ data could be tested to validate the
hypothesis.
Prestack 3D seismic data can be used to characterize fractures independently by
using AVAZ analysis. In the future, fracture properties from seismic can be
comparatively tested with properties from our workflow.
Finally, if quantitative fracture property estimates are available, they can be used
for reservoir simulation and history matching tests by using the available production/
injection data from the field (for validating the modeled fracture property estimates).
References
Abott, D., Neale, C., Lakings, J., Wilson, L., Close, J. C., & Richardson, E. 2007.
Hydraulic Fracture Diagnostics In The Williams Fork Formation, Piceance Basin,
Colorado Using Surface Microseismic Monitoring Technology. Rocky Mountain Oil
& Gas Technology Symposium. Denver, CO: SPE. http://dx.doi.org/10.2118/108142-
MS.
Ajayi, B., Walker, K., Sink, J., Wutherich, K., & Downie, R. 2011. Using Microseismic
Monitoring as a Real Time Completions Diagnostic Tool in Unconventional
Reservoirs: Field Case Studies. SPE Eastern Regional Meeting. Columbus, OH: SPE.
http://dx.doi.org/10.2118/148271-MS.
Akaike, H. 1973. Information Theory and an Extension of the Maximum Likelihood
Principle. Second International Symposium on Information Theory, 267 – 281.
Allen, R. V. 1978. Automatic earthquake recognition and timing from single traces. Bull.
Seism. Soc. Am., 68 (5): 1521 – 1532.
Aminzadeh, F. 2005. A new concept for seismic anomaly detection. Offshore Technology
Conference. Houston, TX: SPE. http://dx.doi.org/10.4043/17049-MS.
Aminzadeh, F., & Chatterjee, S. 1984. Applications of cluster analysis in exploration
seismology. Geoexploration, 23: 147 – 159.
Aminzadeh, F., & de Groot, P. 2006. Neural Networks and Other Soft Computing Techniques with applications in the oil industry. Netherlands: EAGE. ISBN: 9073781507.
Aminzadeh, F., & Tafti, T. A. 2011. Application of high-resolution passive seismic
tomographic inversion and estimating reservoir properties. AGU Fall Meeting. San Francisco, CA.
Aminzadeh, F., Katz, S., & Aki, K. 1994. Adaptive Neural Nets for Generation of
Artificial Earthquake Precursors. IEEE Trans. Geosci. Rem. Sens., 32 (6): 1139 –
1143. http://dx.doi.org/10.1109/36.338361.
Aminzadeh, F., Maity, D., Tafti, T. A., & Brouwer, F. 2011. Artificial neural network
based autopicker for micro earthquake data. SEG Annual International Meeting. San
Antonio, TX, 1623 – 1626. http://dx.doi.org/10.1190/1.3627514.
Backus, G. E. 1962. Long-wave elastic anisotropy produced by horizontal layering. J.
Geophys. Res., 67 (11): 4427 – 4440. http://dx.doi.org/10.1029/JZ067i011p04427.
Baer, M., & Kradolfer, U. 1987. An automatic phase picker for local and teleseismic
events. Bull. Seism. Soc. Am., 77 (4): 1437 – 1445.
Bakker, P., Vanvliet, L. J., & Verbeek, P. W. 1999. Edge Preserving Orientation
Adaptive Filtering. Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, 1: 535 – 540. http://dx.doi.org/10.1109/CVPR.1999.78698.
Barnes, A. 2006. Too many seismic attributes? CSEG Recorder.
Beer, F. P., Johnston, E. R., Dewolf, J., & Mazurek, D. 2009. Mechanics of materials.
McGraw Hill. ISBN: 0077430794.
Berge, P., Hutchings, L., & Wagoner, J. 2001. Rock Physics Interpretation of P Wave, Q
and Velocity Structure, Geology, Fluids and Fractures at the South East Portion of the
Geysers Geothermal Reservoir. GRC transactions. San Diego, CA: GRC, 1 – 21.
Berryman, J. G. 1980. Long-wavelength propagation in composite elastic media. J.
Acoust. Soc. Amer., 68 (6): 1809 – 1819. http://dx.doi.org/10.1121/1.385171.
Boyle, K., Jarpe, S., Hutchings, L., Saltiel, S., Peterson, J., & Majer, E. 2011. Preliminary
Investigation of an Aseismic Doughnut Hole Region in the Northwest Geysers,
California. Thirty-Sixth Workshop on Geothermal Reservoir Engineering. Stanford,
CA.
Britze, P., & Nielson, E. B. 2004. North Sea Chalk Porosity Resolved By Integration of
Seismic Reflectivity And Well Log Data. SEG Annual International Meeting. Denver,
CO, 1742 – 1745. http://dx.doi.org/10.1190/1.1851159.
Brown, A. R. 1996. Interpreter's corner - Seismic attributes and their classification. The
Leading Edge, 15: 1090.
Cardona, R., Batzle, M., & Davis, T. L. 2001. Shear wave velocity dependence on fluid
saturation. SEG Annual International Meeting. San Antonio, TX, 1712 – 1715.
http://dx.doi.org/10.1190/1.1816451.
Caswell, F. 1982. Success in Statistics. London: John Murray. ISBN: 0719572029.
Cheng, A., Huang, L., & Rutledge, J. 2009. Time‐lapse VSP data processing for
monitoring CO2 injection. SEG Annual International Meeting. Houston, TX, 406 –
410. http://dx.doi.org/10.1190/1.3255721.
Chopra, S., & Marfurt, K. J. 2006. Seismic attribute mapping of structure and
stratigraphy. Distinguished Instructor Short Course Series. EAGE. ISBN:
1560801352.
Chopra, S., & Marfurt, K. J. 2007. Seismic attributes for prospect identification and
reservoir characterization. SEG Geophysical Developments Series No. 11. SEG. ISBN:
1560801412.
Chopra, S., & Marfurt, K. J. 2011. Blended Data Renders Visual Value. Geophysical
Corner.
Chopra, S., Pruden, D., & Alexeev, V. 2004. Multi-attribute seismic analysis - tackling
non-linearity. First Break, 22 (12): 43 – 47.
http://dx.doi.org/10.3997/1365-2397.2004021.
Coppens, F. 1985. First arrival picking on common-offset trace collections for automatic
estimation of static corrections. Geophys. Prospec., 33 (8): 1212 – 1231.
http://dx.doi.org/10.1111/j.1365-2478.1985.tb01360.x.
Craig, D. P., & Burkhart, R. 2010. Using Maps of Microseismic Events To Define
Reservoir Discontinuities. SPE Annual Technical Conference and Exhibition.
Florence, Italy: SPE. http://dx.doi.org/10.2118/135290-MS.
Curtis, A., Michelini, A., Leslie, D., & Lomax, A. 2004. A deterministic algorithm for
experimental design applied to tomographic and microseismic monitoring surveys.
Geophys. J. Int., 157 (2): 595 – 606.
http://dx.doi.org/10.1111/j.1365-246X.2004.02114.x.
Donoho, D. L. 1995. De-noising by soft-thresholding. IEEE T. Inform. Theory, 41 (3):
613 – 627. http://dx.doi.org/10.1109/18.382009.
Dorrington, K. P. 2004. Genetic‐algorithm/neural‐network approach to seismic attribute
selection for well‐log prediction. Geophysics, 69 (1): 212 – 221.
http://dx.doi.org/10.1190/1.1649389.
217
Downton, J., & Roure, B. 2010. Azimuthal simultaneous elastic inversion for fracture
detection. SEG Annual International Meeting, Denver, CO, 263 – 267.
http://dx.doi.org/10.1190/1.3513389.
Doyen, P. M., Malinverno, A., Prioul, R., Hooyman, P., Noeth, S., den Boer, L., Psaila,
D., Sayers, C., Smit, T., van Eden, C., & Wervelman, R. 2003. Seismic pore pressure
prediction with uncertainty using a probabilistic mechanical earth model. SEG Annual
International Meeting, Dallas, TX, 1366 – 1369. http://dx.doi.org/10.1190/1.1817542.
Du, J., Warpinski, N. R., Davis, E. J., Griffin, L. G., & Malone, S. 2008. Joint Inversion
of Downhole Tiltmeter and Microseismic Data and its Application to Hydraulic
Fracture Mapping in Tight Gas Sand Formation. 42
nd
U.S. Rock Mechanics
Symposium. San Fransisco, CA: ARMA.
Dvorkin, J., Nur, A., & Yin, H. 1994. Effective properties of cemented granular
materials. Mech. Mater., 18 (4): 351 – 366.
http://dx.doi.org/10.1016/0167-6636(94)90044-2.
Eaton, D. W., & Forouhideh, F. 2011. Solid angles and the impact of receiver-array
geometry on microseismic moment-tensor inversion. Geophysics, 76 (6): WC77 –
WC85. http://dx.doi.org/10.1190/geo2011-0077.1.
Eberhart-Phillips, D. 1986. Three-dimensional velocity structure in Northern California
Coast Ranges from inversion of local earthquake arrival times. Bull. Seism. Soc. Am.
76 (4): 1025 – 1052.
Eberhart-Phillips, D. 1990. Three-dimensional P and S velocity structure in the Coalinga
region, California. J. Geophys. Res. - Sol. Ea., 95 (B10): 15343 – 15363.
http://dx.doi.org/10.1029/JB095iB10p15343.
Evans, J. R., Eberhart-Phillips, D., & Thurber, C. H. 1994. User‟s Manual for
SIMULPS12 for Imaging vp and vp/vs: A derivative of the “Thurber” tomographic
inversion SIMUL3 for Local Earthquakes and Explosions. U.S. Geological Survey.
Faust, L. Y. 1953. A Velocity Function Including Lithologic Variation. Geophysics, 18
(2): 271 – 288. http://dx.doi.org/10.1190/1.1437869.
Feng, K., & Shi, Z. C. 1981. Mathematical Theory of Elastic Structures. New York:
Springer. ISBN: 3540513264.
218
Gardner, G. F., Gardner, L. W., & Gregory, A. R. 1974. Formation velocity and density -
- the diagnostic basics for stratigraphic traps. Geophysics, 39 (6): 770 – 780.
http://dx.doi.org/10.1190/1.1440465.
Gomas, J. S., Ribeiro, M. T., Fouchard, P., Twombley, B. N., Negahban, S., & AL-
Baker, S. 2002. Geological Modeling of a Tight Carbonate Reservoir for Improved
Reservoir Management of a Miscible WAG Injection Project, Abu Dhabi, U.A.E. Abu
Dhabi International Petroleum Exhibition and Conference. Abu Dhabi, UAE: SPE.
http://dx.doi.org/10.2118/78529-MS.
Goovaerts, P. 1997. Geostatistics for Natural Resources Evaluation. Oxford University
Press. ISBN: 0195115384.
Gupta, N., Sarkar, S., & Marfurt, K. J. 2011. Seismic Characterization of the Woodford
Shale In the Anadarko Basin. SEG Annual International Meeting. San Antonio, TX,
1083 – 1087. http://dx.doi.org/10.1190/1.3627392.
Hamada, G. M. 2004. Reservoir Fluids Identification Using Vp/Vs Ratio. Oil & Gas
Science and Technology - Rev. IFP, 59 (6): 649 – 654.
http://dx.doi.org/10.2516/ogst:2004046.
Haslinger, F. 1998. Velocity Structure, Seismicity and Seismotectonics of Northwestern
Greece Between the Gulf of Arta and Zakynthos. In F. Haslinger. Zurich: Ph. D.
Thesis.
House, L., Flores, R., & Withers, R. 1996. Microearthquakes Induced By a Hydraulic
Injection In Sedimentary Rock, East Texas. SEG Annual International Meeting.
Denver, CO, 110 – 113. http://dx.doi.org/10.1190/1.1826283.
Hsu, C. J., & Schoenberg, M. 1993. Elastic wave through a simulated fractured medium.
Geophysics, 58 (7): 964 – 977. http://dx.doi.org/10.1190/1.1443487.
Hubbert, M. K., & Willis, D. G. 1957. Mechanics Of Hydraulic Fracturing. Petroleum
Transcations, AIME. Los Angeles, CA, 153 – 168.
Hudson, J. A. 1980. Overall properties of a cracked solid. Math. Proc. Camb. Phil. Soc.,
88 (2): 371-384. http://dx.doi.org/10.1017/S0305004100057674.
Hudson, J. A. 1981. Wave speeds and attenuation of elastic waves in material containing
cracks. Geophys. J. Roy. Astr. Soc., 64 (1): 133 – 150.
http://dx.doi.org/10.1111/j.1365-246X.1981.tb02662.x.
219
Hudson, J. A. 1990. Overall elastic properties of isotropic materials with arbitrary
distribution of circular cracks. Geophys. J. Int., 102 (2): 465 – 469.
http://dx.doi.org/10.1111/j.1365-246X.1990.tb04478.x.
Hudson, J. A. 1994. Overall properties of anisotropic materials containing cracks.
Geophys. J. Int., 116 (2): 279 – 282.
http://dx.doi.org/10.1111/j.1365-246X.1994.tb01798.x.
Jahne, B. 1993. Digital Signal Processing, Concepts, Algorithms and Scientific
Applications. Berlin: Springer. ISBN: 3540592989.
Johnston, J. E., & Christenson, N. I. 1993. Compressional to Shear Velocity Ratios in
Sedimentary Rocks. Intl. J. Rock. Mech. Min. Sci. & Geomech. Abstr., 30 (7): 751 –
754.
Kalkomey, C. T. 1997. Potential risks when using seismic attributes as predictors of
reservoir properties. The Leading Edge. 16 (3): 247 – 251.
Karrenbach, M. 2000. Combining Eikonel Solutions with Full Wave Form Modeling.
SEG Annual International Meeting. Calgary, Canada, 2301 – 2304.
http://dx.doi.org/10.1190/1.1815918.
Klein, W. F. 2003. The HYPOINVERSE2000 earthquake location program. International
Geophysics, 81 (B): 1619-1620. http://dx.doi.org/10.1016/S0074-6142(03)80287-5.
Koesoemadinata, A., Banik, N., Agarwal, V., Singh, S., & Durrani, J. 2008. A global
acoustic impedance inversion for porosity and lithology prediction in northern Gulf of
Mexico. SEG Annual International Meeting. Las Vegas, NV, 1885 – 1889.
http://dx.doi.org/10.1190/1.3059268.
Kuster, G. T., & Toksoz, M. N. 1974. Velocity and attenuation of seismic waves in two-
phase media: Part I. Theoretical formulations. Geophysics, 39 (5): 587 – 606.
http://dx.doi.org/10.1190/1.1440450.
Lamb, A. P., Wijk, K. V., Liberty, L., Revil, A., Richards, K., & Batzle, M. 2010.
Preliminary Results For a Near Surface 3-D Seismic Survey of a Geothermal System
In Colorado. SEG Annual International Meeting. Denver, CO, 1965 – 1969.
http://dx.doi.org/10.1190/1.3513228.
220
Lancaster, S., & Whitcombe, D. 2000. Fast-track 'coloured' inversion. SEG Annual
International Meeting. Calgary, Canada, 1572 – 1575.
http://dx.doi.org/10.1190/1.1815711.
Lee, W. H., Bennett, R. E., & Meagher, K. L. 1972. A Method of Estimating Magnitude
of local earthquakes from signal duration. U.S. Geological Survey.
Lewallen, K. T., Zhou, R., Chen, G., Wu, X., & Todd, P. 2008. Seismic Characterization
of Fractured Tight Gas Reservoirs, Piceance Basin, Colorado. SEG Annual
International Meeting. Las Vegas, NV, 463 – 467.
http://dx.doi.org/10.1190/1.3054846.
Luo, Y., al-Dossary, S., & Marhoon, M. 2002. Edge-preserving smoothing and
applications. The Leading Edge, 21 (2): 136 – 158.
http://dx.doi.org/10.1190/1.1452603.
Manchuk, J. G., & Deutsch, C. V. 2008. Sequential Gaussian simulation of geological
variables with non-Gaussian correlation. In: J. M. Ortiz, & X. Emery (Eds.),
Proceedings of the Eighth International Geostatistics Congress. Santiago, Chile, 99 –
108.
Martakis, N., Kapotas, S., & Tselentis, G. 2006. Integrated passive seismic acquisition
and methodology: Case Studies. Geophys. Prospect., 54 (6): 829 – 847.
http://dx.doi.org/10.1111/j.1365-2478.2006.00584.x.
Mavko, G., Mukherji, R., & Dvorkin, J. 2003. The Rock Physics Handbook: Tools for
Seismic Analysis of Porous Media. Cambridge University Press.
Maxwell, S. C., & Urbancic, T. I. 2005. The Potential Role of Passive Seismic
Monitoring for Real-Time 4D Reservoir Characterization. SPE Reservoir Evaluation
& Engineering, 8 (1): 70 – 76. http://dx.doi.org/10.2118/89071-PA.
Maxwell, S., Cho, D., & Norton, S. 2011. Integration of Surface Seismic and
Microseismic Part2: Understanding Hydraulic Fracture Variability through
Geomechanical Integration. CSEG Recorder, 36 (2): 27 – 30.
McCormack, M. D. 1990. Seismic trace editing and first picking using neural networks.
SEG Annual International Meeting. Dallas, TX, 321 – 324.
http://dx.doi.org/10.1190/1.1890185.
221
Ouenes, A., Robinson, G., & Zellou, A. M. 2004. Impact of Pre-stack and Post-stack
Seismic on Integrated Naturally Fractured Reservoir Characterization. SPE Asia
Pacific Conference on Integrated Modelling for Asset Management. Kuala Lumpur,
Malaysia: SPE. http://dx.doi.org/10.2118/87007-MS.
Ouenes, A., Robinson, G., Zellou, A., Balogh, D., & Araktingi, U. 2005. Seismically
Driven Fractured Reservoir Characterization. 18th World Petroleum Congress.
Johannesburg, South Africa: World Petroleum Congress.
http://dx.doi.org/10.2118/92031-MS.
Pavlis, G., & Booker, J. 1980. The mixed discrete - continuous inverse problem:
Application to the simultaneous determination of earthquake hypocenters and velocity
structure. J. Geophys. Res. – Sol. Ea., 85 (B9): 4801 – 4810.
http://dx.doi.org/10.1029/JB085iB09p04801.
Pettitt, W., Reyes-Montes, J., Hemmings, B., Hughes, E., & Young, R. P. 2009. Using
continuous microseismic records for hydrofracture diagnostics and mechanics. SEG
Annual International Meeting. Houston, TX, 1542 – 1546.
http://dx.doi.org/10.1190/1.3255143.
Ramkhelawan, R., Hornby, B., Sugianto, H., & Younger, J. 2008. Analysis and
interpretation of the largest US onshore 3D VSP. SEG Annual International Meeting.
Las Vegas, NV, 3375 – 3379. http://dx.doi.org/10.1190/1.3064045.
Raymer, D., Ji, Y., Behrens, R., & Rickett, J. 2004. Genetic algorithm design of
microseismic injection monitoring networks in the Tengiz field. SEG Annual
International Meeting. Denver, CO, 548 – 551. http://dx.doi.org/10.1190/1.1843327.
Raymer, L. L., Hunt, E. R., & Gardner, J. S. 1980. An improved sonic transit time-to-
porosity transform. SPWLA 21 Ann. Logging Symp. Proceedings, 1 – 12.
Remy, N., Boucher, A., & Wu, J. 2009. Applied Geostatistics with SGeMS – A User’s
Guide. Cambridge University Press. ISBN: 9781107403246.
Rial, J. A., Elkibbi, M., & Yang, M. 2005. Shear-wave splitting as a tool for the
characterization of geothermal fractured reservoirs: lessons learned. Geothermics, 34
(3): 365 – 385. http://dx.doi.org/10.1016/j.geothermics.2005.03.001.
Robertson, J. D., & Pritchett, W. C. 1985. Direct Hydrocarbon Detection Using
Comparative P-wave and S-wave Seismic Sections. Geophysics, 50 (3): 383 – 393.
http://dx.doi.org/10.1190/1.1441918.
222
Rudman, A. J., Whaley, J. F., Blakely, R. F., & Biggs, M. E. 1976. Transformation of
resistivity to pseudo velocity logs. AAPG Bulletin, 60 (5): 879 – 882.
Russell, B., Hampson, D., Schuelke, J., & Quirein, J. 1997. Multiattribute seismic
analysis. The Leading Edge, 16 (10): 1439 – 1444.
http://dx.doi.org/10.1190/1.1437486.
Rutqvist, J., & Tsang, C. 2003. Analysis of thermal–hydrologic–mechanical behavior
near an emplacement drift at Yucca Mountain. J. Contam. Hydrol., 62 – 63: 637 –
652. http://dx.doi.org/10.1016/S0169-7722(02)00184-5.
Sabatier, P. C. 1977. On geophysical inverse problems and constraints. Geophys. J. Int.,
48 (3): 415 – 441. http://dx.doi.org/10.1111/j.1365-246X.1977.tb03681.x.
Sabbione, J. I., & Velis, D. 2010. Automatic first breaks picking: New strategies and
algorithms. Geophysics, 75 (4): 67 – 76. http://dx.doi.org/10.1190/1.3463703.
Saragiotis, C. D., Hadjileontiadis, L. J., & Panas, S. M. 2002. PAI-S/K: A robust
automatic seismic P phase arrival identification scheme. IEEE Trans. Geosci. Rem.
Sens., 40 (6): 1395 – 1404. http://dx.doi.org/10.1109/TGRS.2002.800438.
Schoenberg, M. 1983. Reflection of elastic waves from periodically stratified media with
interfacial slip. Geophys. Prospect., 31 (2): 265 – 292.
http://dx.doi.org/10.1111/j.1365-2478.1983.tb01054.x.
Schoenberg, M., & Douma, J. 1988. Elastic wave propagation in media with parallel
fractures and aligned cracks. Geophys. Prospect., 36 (6): 571 – 590.
http://dx.doi.org/10.1111/j.1365-2478.1988.tb02181.x.
Schoenberg, M., & Muir, F. 1989. A calculus for finely layered anisotropic media.
Geophysics, 54 (5): 581 – 589. http://dx.doi.org/10.1190/1.1442685.
Sleeman, R., & van Eck, T. 1999. Robust automatic P-phase picking: an on-line
implementation in the analysis of broadband seismogram recordings, Phys. Earth
Planet. Inter., 113: 265 – 275. http://dx.doi.org/10.1016/S0031-9201(99)00007-2.
Sorrells, G. M., & Mulcahy, C. C. 1986. Advances in the Microseismic Method of
Hydraulic Fracture Azimuth Estimation. SPE Unconventional Gas Technology
Symposium. Louisville, KY: SPE. http://dx.doi.org/10.2118/15216-MS.
223
St-Onge, A. 2011. Akaike information criterion applied to detecting first arrival times on
microseismic data. SEG Annual International Meeting. San Antonio, TX, 1658 –
1662. http://dx.doi.org/10.1190/1.3627522.
Talebi, S., Young, R. P., Vandamme, L., & McGaughey, W. J. 1991. Microseismic
Mapping of a Hydraulic Fracture. 32
nd
U.S. Symposium on Rock Mechanics (USRMS).
Norman, OK: ARMA.
Taner, M. T. 2001. Seismic attributes. CSEG Recorder, 26 (7): 48 – 56.
Taner, M. T., Schuelke, J. S., Ronen, D., & Baysal, E. 1994. Seismic Attributes
Revisited. SEG Annual International Meeting. Los Angeles, CA, 1104 – 1106.
http://dx.doi.org/10.1190/1.1822709.
Thurber, C. H. 1983. Earthquake Locations and Three-Dimensional Crustal Structure in
the Coyote Lake Area, Central California. J. Geophys. Res. – Sol. Ea., 88 (B10): 8226
– 8236. http://dx.doi.org/10.1029/JB088iB10p08226.
Thurber, C. H. 1993. Local earthquake tomography: velocities and vp/vs theory. In
Seismic Tomography: Theory and Practice, ed. H. M. Hirahara, 563 – 583. Chapman
and Hall.
Thurber, C. H., & Um, J. 1987. A fast algorithm for two-point seismic ray tracing. Bull.
Seism. Soc. Am., 77 (3), 972 – 986.
Toksoz, M. N., & Johnston, D. H. 1981. Seismic Wave Attenuation. SEG. ISBN:
0931830168.
Trinchero, E., & Vernengo, L. 2013. Geomechanical and geometric seismic attributes in
an interpretation workflow for characterization of unconventional reservoirs. The
Leading Edge, 32 (4): 386 – 392. http://dx.doi.org/10.1190/tle32040386.1.
Veezhinathan, J., Wagner, D. E., & Ehlers, J. 1991. First break picking using a neural
network. In Expert Systems in Exploration, ed. F. Aminzadeh & M. Simaan, 179 –
202. SEG.
Verly, G. W. 1993. Sequential Gaussian Cosimulation: A Simulation Method Integrating
Several Types of Information. In Geostatistics Troia '92, ed. A. Soares, Vol. 5, 543 –
554. http://dx.doi.org/10.1007/978-94-011-1739-5-42.
224
Vinegar, H. J., Wills, P. B., DeMartini, D. C., Shlyapobersky, J., Deeg, W. J., Adair, R.
G., Woerpel, J. C., Fix, J. E., & Sorrells, G. G. 1992. Active and Passive Seismic
Imaging of a Hydraulic Fracture in Diatomite. J. Petrol. Technol., 44 (1): 28 – 34 &
88 – 90. http://dx.doi.org/10.2118/22756-PA.
Wacernagel, H. 2003. Multivariate Geostatistics: An Introduction with Applications.
Springer Berlin Heidelberg. ISBN: 3540441425.
Willis, M. E., Burns, D. R., Lu, R., Toksoz, M. N., & House, N. J. 2007. Fracture quality
from integrating time-lapse VSP and microseismic data. The Leading Edge, 26 (9):
1198 – 1202. http://dx.doi.org/10.1190/1.2780791.
Willis, M. E., Burns, D. R., Willis, K. M., House, N. J., & Shemeta, J. 2009. Hydraulic
fracture quality from time lapse VSP and microseismic data. SEG Annual
International Meeting. Houston, TX, 1565 – 1569.
http://dx.doi.org/10.1190/1.3059211.
Wilmshurst, T. H. 1990. Signal Recovery from Noise in Electronic Instrumentation (2
ed.). Taylor & Francis. ISBN: 0750300582.
Wyllie, M. J., Gregory, A. R., & Gardner, L. W. 1956. Elastic wave velocities in
heterogeneous and porous media. Geophysics, 21 (1): 41 – 70.
http://dx.doi.org/10.1190/1.1438217.
Zhang, H., & Thurber, C. H. 2003. Double-difference tomography: the method and its
application to the Hayward fault, California. Bull. Seism. Soc. Am., 93 (5): 1875 –
1889.
Zhang, J., El-Awawdeh, R., Scevchek, Z., KhourI, N., Sultan, A., Harris, C. E., & Reilly,
J. M. 2010. Seismic Constrained Reservoir Property Prediction - Example From a
Middle East Carbonate Field Offshore Abu Dhabi, UAE. SEG Annual International
Meeting. Denver, CO, 1312 – 1316. http://dx.doi.org/10.1190/1.3513084.
Zhao, Y., & Takano, K. 1999. An artificial neural network approach for broadband
seismic phase picking. Bull. Seism. Soc. Am., 89 (3): 670 – 680.
225
Appendix A
An artificial neural network (ANN) is a computational system inspired by the structure, processing techniques, and learning ability of the biological brain. ANNs are well suited to learning problems involving extremely noisy and complex sensor data, such as inputs from geophones. They should be considered for use when:
1. The target function is defined over input instances that can be represented as a vector of predefined features.
2. The solution requires a large degree of parallelism.
3. Fault tolerance and generalization ability are deemed important.
4. The target function needs to be real valued.
5. The training samples are representative, even if contaminated with noise and errors.
6. Human understanding of the target model is of low priority.
Depending on the network topology, ANNs can be classified as either feed-forward networks or feedback networks. In feedback networks, the errors computed at each learning iteration are back-propagated to update the network parameters. Examples of feed-forward networks include the single-layer perceptron, the multilayer perceptron, and RBF networks; examples of feedback networks include competitive networks, self-organizing maps, Hopfield networks, and ART models. Different ANN models suit different applications, including pattern recognition and classification, clustering, function approximation, forecasting, function optimization, and process control.
For the purpose of phase arrival detection, a feedback network trained with a typical back-propagation algorithm was used in our study. In this section, we discuss the back-propagation algorithm in some detail, including its basic algorithmic elements and some of the pitfalls of ANN learning algorithms. A back-propagation algorithm first propagates inputs forward: all outputs are computed by applying activation functions (thresholding) to the inner product of the corresponding weight and input vectors, and all outputs in any layer n are connected to all the inputs in layer n+1. The algorithm then propagates the error backwards, apportioning to each node in the network the amount of the error that node is responsible for. We first define the basic elements of the algorithm.
Let $X_j$ be the input vector for node $j$ ($x_{ji}$ = $i$th input to the $j$th node). Let $W_j$ be the weight vector for node $j$ ($w_{ji}$ = weight for nodal input $x_{ji}$). Let $Z_j$ be the weighted sum of inputs for node $j$ ($Z_j = W_j \cdot X_j$). Let $O_j$ be the output of node $j$ ($O_j = f(Z_j)$). Let $t_j$ be the target for node $j$. We will denote by $\mathrm{Downstream}(j)$ the set of nodes whose immediate inputs include the output of node $j$, and by $\mathrm{Outputs}$ the set of output nodes in the final layer. Since we update after each training example, we can simplify the notation by assuming that the training set consists of exactly one example, so the error can simply be denoted by $E$. Thus, the error differentials w.r.t. the weights are obtained as:

$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial Z_j}\,\frac{\partial Z_j}{\partial w_{ji}} = \frac{\partial E}{\partial Z_j}\, x_{ji} \tag{A.1}$$
Furthermore, $\partial E/\partial Z_j$ remains the same regardless of which input weight of node $j$ is being updated, so this quantity is denoted $\delta_j$. When $j \in \mathrm{Outputs}$,

$$\delta_j = \frac{\partial E}{\partial Z_j} = \frac{\partial}{\partial Z_j}\left[\frac{1}{2}\sum_{k \in \mathrm{Outputs}} (t_k - O_k)^2\right] \tag{A.2}$$
Since the outputs of all nodes $k \neq j$ are independent of $w_{ji}$, we can drop the summation and consider only the contribution of $j$ to $E$:

$$\delta_j = \frac{\partial}{\partial Z_j}\left[\frac{1}{2}(t_j - O_j)^2\right] \tag{A.3}$$

$$= -(t_j - O_j)\,\frac{\partial O_j}{\partial Z_j} \tag{A.4}$$

$$= -(t_j - O_j)\,\frac{\partial f(Z_j)}{\partial Z_j} \tag{A.5}$$

$$= -(t_j - O_j)\, f'(Z_j) \tag{A.6}$$

$$\frac{\partial E}{\partial w_{ji}} = \delta_j\, x_{ji} = -(t_j - O_j)\, f'(Z_j)\, x_{ji} \tag{A.7}$$
Next, we consider the case where node $j$ is a hidden node. In this case, we observe that for each node $k$ downstream of $j$, $Z_k$ is a function of $Z_j$; also, the contribution made to the error by all nodes $l \neq j$ in the same layer as $j$ is independent of $w_{ji}$. We now want to calculate $\partial E/\partial w_{ji}$ for each input weight $w_{ji}$ of each hidden node $j$. We note that $w_{ji}$ influences only $Z_j$, which influences $O_j$, which influences $Z_k$ for all $k \in \mathrm{Downstream}(j)$, each of which influences $E$. So we get:

$$\frac{\partial E}{\partial w_{ji}} = \sum_{k \in \mathrm{Downstream}(j)} \delta_k\,\frac{\partial Z_k}{\partial O_j}\,\frac{\partial O_j}{\partial Z_j}\, x_{ji} \tag{A.8}$$
Again, we note that all the terms in the above product except $x_{ji}$ are the same regardless of which input weight of node $j$ we are trying to update. As before, we denote this common quantity by $\delta_j$. Substituting the known relations for the partial derivatives in equation A.8 ($\partial Z_k/\partial O_j = w_{kj}$ and $\partial O_j/\partial Z_j = f'(Z_j)$), we get:

$$\delta_j = \sum_{k \in \mathrm{Downstream}(j)} \delta_k\, w_{kj}\,\frac{\partial O_j}{\partial Z_j} \tag{A.9}$$

$$\delta_j = f'(Z_j) \sum_{k \in \mathrm{Downstream}(j)} \delta_k\, w_{kj} \tag{A.10}$$

This provides the error differentials used to update the network after each training iteration. Each network weight is then updated using the following gradient-descent relation, with learning rate $\eta$:

$$w_{ji} \leftarrow w_{ji} - \eta\, \delta_j\, x_{ji} \tag{A.11}$$
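To make the derivation concrete, the following is a minimal NumPy sketch of a single back-propagation update implementing equations A.1 through A.11 for one hidden layer. The sigmoid activation, layer sizes, learning rate, and data here are illustrative assumptions only, not the configuration used in this study.

```python
import numpy as np

def f(z):
    # Sigmoid activation O = f(Z); assumed here for illustration
    return 1.0 / (1.0 + np.exp(-z))

def f_prime(z):
    # Derivative of the sigmoid: f'(Z) = O (1 - O)
    o = f(z)
    return o * (1.0 - o)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights w_ji (4 nodes, 3 inputs)
W2 = rng.normal(size=(1, 4))   # output-layer weights (1 node, 4 hidden inputs)
eta = 0.5                      # learning rate (eta) in equation A.11

x = rng.normal(size=3)         # one training example (update per example)
t = np.array([1.0])            # target t_j

# Forward pass: Z_j = W_j . X_j and O_j = f(Z_j)
z1 = W1 @ x;  o1 = f(z1)
z2 = W2 @ o1; o2 = f(z2)

# Output node:  delta_j = -(t_j - O_j) f'(Z_j)            (A.6)
d2 = -(t - o2) * f_prime(z2)
# Hidden nodes: delta_j = f'(Z_j) sum_k delta_k w_kj      (A.10)
d1 = f_prime(z1) * (W2.T @ d2)

# Weight update: w_ji <- w_ji - eta * delta_j * x_ji      (A.7, A.11)
W2 -= eta * np.outer(d2, o1)
W1 -= eta * np.outer(d1, x)
```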
When applying back-propagation-based ANN algorithms, certain limitations should be kept in mind in order to properly understand and use the results of ANN modeling. The results tend to converge towards local minima; to mitigate this issue, hybrid algorithms (such as neuro-evolutionary algorithms) or simulated-annealing-based strategies can be considered. Convergence through back-propagation also tends to be slow, particularly with a large number of inputs and complex mapping schemes. Finally, normalizing the inputs for back-propagation-based ANN learning schemes generally improves the results.
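As a brief illustration of the input normalization suggested above, a common choice is z-score scaling of each input feature; the values below are hypothetical.

```python
import numpy as np

# Hypothetical raw inputs: rows are training examples, columns are features
X = np.array([[2.0, 300.0],
              [4.0, 100.0],
              [6.0, 200.0]])

# Z-score normalization: zero mean and unit variance per feature
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
```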
Abstract
This study is aimed at an improved understanding of unconventional reservoirs, which include tight reservoirs (such as shale oil and gas plays), geothermal developments, etc. We provide a framework for improved fracture zone identification and mapping of the subsurface for a geothermal system by integrating data from different sources. The proposed ideas and methods were tested primarily on data obtained from the North Brawley geothermal field and the Geysers geothermal field, apart from synthetic datasets which were used to test new algorithms before application to the real datasets. The study has resulted in novel or improved algorithms for use at specific stages of data acquisition and analysis, including an improved phase detection technique for passive seismic (and teleseismic) data as well as optimization of passive seismic surveys for the best possible processing results. The proposed workflow makes use of novel integration methods as a means of making the best use of the available geophysical data for fracture characterization. The methodology incorporates soft computing tools such as hybrid neural networks (neuro-evolutionary algorithms) as well as geostatistical simulation techniques to improve the property estimates and the overall characterization efficacy.

The basic elements of the proposed characterization workflow involve using seismic and microseismic data to characterize structural and geomechanical features within the subsurface. We use passive seismic data to model geomechanical properties, which are combined with other properties evaluated from seismic data and well logs to derive both qualitative and quantitative fracture zone identifiers. The study has resulted in a broad framework highlighting a new technique for utilizing geophysical data (seismic and microseismic) for unconventional reservoir characterization. It provides an opportunity to optimally develop the resources in question by incorporating data from different sources and using their temporal and spatial variability as a means to better understand reservoir behavior. As part of this study, we have developed the following elements, which are discussed in the subsequent chapters:
1. An integrated characterization framework for unconventional settings, with adaptable workflows for all stages of data processing, interpretation and analysis.
2. A novel autopicking workflow for noisy passive seismic data, used for improved accuracy in event picking as well as for improved velocity model building.
3. An improved passive seismic survey design optimization framework for better data collection and improved property estimation.
4. Extensive post-stack seismic attribute studies incorporating robust schemes applicable in complex reservoir settings.
5. Uncertainty quantification and analysis, to quantify property estimates over and above the qualitative interpretations made and to validate observations independently with quantified uncertainties to prevent erroneous interpretations.
6. Property mapping from microseismic data, including stress and anisotropic weakness estimates, for integrated reservoir characterization and analysis.
7. Integration of results (seismic, microseismic and well logs) from analysis of individual data sets for integrated interpretation using a predefined integration framework and soft computing tools.