INTRAOCULAR CAMERA FOR RETINAL PROSTHESES:
REFRACTIVE AND DIFFRACTIVE LENS SYSTEMS
by
Michelle Christine Hauer
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)
May 2009
Copyright 2009 Michelle Christine Hauer
Acknowledgements
This work is the culmination of a long journey that I could never have
completed without the patience, support, and guidance of many excellent people.
First, I would like to thank my adviser, Dr. Armand R. Tanguay, Jr., who
gave me the opportunity, the time, and the guidance to grow into the type of scholar
and teacher I wanted to become. It has been both a blessing and a wonder to be able
to work with him on such a meaningful research project. I also wish to thank my
colleagues within OMDL for their support and friendship as well as their many
contributions to this work. In particular, thanks go to Dr. Patrick Nasiatka, Noelle R.
B. Stiles, Dr. Lormen Lue, Ben McIntosh, Kwan Suwanmonkha, Pamela Lee,
Dr. Ashish Ahuja, Dr. Joshua Wyner, and Satsuki Takahashi, all of whom I count
among the brightest and kindest people I know. In addition to their direct
contributions to this research and to the IOC project as a whole, they made all the
hard work, late nights, conference road trips, and marathon EE 529 presentation
sessions fun and memorable. I want to give an extra big thank you to Patrick, who
deserves a second mention for being available to help with anything at any time, for
making research look easy, for being great company in the lab, for knowing just
about everything about anything, and for being willing to share it all.
I would also like to thank Professors James Weiland, Mark Humayun, and
Keith Jenkins for serving on my committee. Their insightful comments and
questions are reflected throughout this work. I must also express my deepest
gratitude to my first adviser, and committee member, Dr. Alan E. Willner, who
introduced me to every aspect of scientific research, the most important of which
was the value of perseverance. The many lessons I learned and friendships I gained
during the time I spent in OC Lab will serve me for the rest of my life.
I also want to acknowledge my colleagues and supervisors at Raytheon for
their endless patience and flexibility while I worked a constantly shifting part-time
schedule throughout graduate school. I cannot express enough gratitude to them for
allowing me to stick around and develop a practical, working knowledge of my field.
I especially want to thank Edward Lyons, Richard Belansky, Cecil Vergel de Dios,
Craig Poindexter, David Stufflebean, John De Hollander, and Robert Oshiro. I am
exceedingly fortunate to have such wonderful mentors and friends. Of everything I
learned from them, the most important was the pleasure of working with good
people.
Lastly, I would like to express my deepest appreciation to my family and
friends for their endless encouragement and patience throughout the duration of this
work. I am especially grateful to my writing buddy and personal cheerleader Ann
Gustafson, who let me call her every day at five o’clock to report my progress when I
was working through the first few chapters. Two of my biggest heroes and
supporters were my in-laws, Fred and Rita Hauer. I marvel at how lucky I was to
gain two best friends and parents when I married their son. I want to thank them for
listening when I needed to talk, for giving me valuable advice during the hard times,
and for always being on my side. Finally, I want to express my heartfelt gratitude to
my amazing and talented husband Brian, who unfailingly believed in me and cheered
me on through every moment of this process.
Table of Contents
Acknowledgements ii
List of Tables vi
List of Figures ix
Abstract xxiii
Chapter 1: Introduction 1
1.1 Statement of the problem 1
1.2 Intraocular retinal prosthesis (IRP) system description 3
1.3 Prior art for capturing and relaying images to retinal prostheses 6
1.4 Motivation for an intraocular camera (IOC) 9
1.5 Context of this thesis within the overall intraocular camera (IOC)
project 12
1.6 Organization of the thesis 13
Chapter 2: Intraocular camera design issues and first prototypes 19
2.1 Introduction 19
2.2 Implications of visual psychophysics studies of prosthetic vision 21
2.3 Camera placement and configuration options 26
2.4 Surgical and physiological constraints 31
2.5 First generation prototype: Aspherical lens mounted in a webcam 34
2.6 Analytical and ray tracing analyses of the first generation
prototype 37
2.6.1 Depth of field experiments and calculations 37
2.6.2 Comparison of experimental and simulated images of eye
chart letters 40
2.7 Second generation prototype: Aspherical lens and commercial
CCD packaged for first surgical implantation in a canine eye 44
2.8 Surgical implantation of mechanical models in porcine eyes 45
2.9 Continued development of placement, size, and mass constraints 47
2.10 Summary 51
Chapter 3: Video capture and image processing tools for prosthetic vision 57
3.1 Introduction 57
3.2 Image processing functions for prosthetic vision simulation 58
3.3 Integration of video capture tools with prosthetic vision
simulation software 62
3.4 Summary 65
Chapter 4: Analysis of the lens design space for an intraocular camera 67
4.1 Introduction 67
4.2 Incorporation of an accurate schematic eye model 71
4.3 Focal length, resolution, and field of view 76
4.4 Lens materials and the need for an optical window 83
4.5 F-number, image brightness, and aberrations 91
4.5.1 Modulation transfer function (MTF) and RMS spot
diameter 104
4.5.2 Imaging performance of IOC lens systems as a function of
f-number 111
4.6 Summary 116
Chapter 5: Refractive lens systems 122
5.1 Introduction 122
5.2 Single-element, aspherical refractive lens design at f/1 over a 40°
field of view 123
5.2.1 Lens optimization and imaging performance 129
5.2.2 Geometrical ray aberrations 135
5.2.3 Ray aberration plots and field curves 146
5.2.4 Fabrication and test of polymer aspherical lens 153
5.3 Comparison of f/1 systems optimized over 20° and 40° fields of
view 161
5.4 Optimal focus distance to provide best depth of field for retinal
prosthesis subjects 169
5.5 Sensitivity to variations in surgical placement 178
5.6 Tolerances to manufacturing and alignment errors and corneal
variations 192
5.7 Incorporation of multiple refractive lens elements 201
5.8 Consideration of the spectral transmittance of the eye and the
response of CMOS imaging sensors 210
5.9 Summary 216
Chapter 6: Incorporation of diffractive lenses 224
6.1 Introduction 224
6.2 Kinoform lens theory and modeling 227
6.3 Hybrid refractive/diffractive lens design 239
6.4 Purely diffractive lens design 251
6.4.1 Wide field diffractive lens configuration and
monochromatic performance 253
6.4.2 Incorporation of a higher-order diffractive lens with
material dispersion compensation for polychromatic
operation 264
6.5 Summary 281
Chapter 7: Summary and future research directions 288
7.1 Summary 288
7.2 Future research directions 292
Bibliography 299
List of Tables
Table 2-1 Present goals for the size, placement, and mass parameters of
the intraocular camera. 48
Table 4-1 Comparison of the optical surface parameters of four major
schematic eye models (From [14]). 72
Table 4-2 Resolution required at the intraocular camera image plane to
match the electrode pitch of the epiretinal microstimulator
array at the retina for five relevant stimulator array
configurations. 79
Table 4-3 Typical values for the refractive index, Abbe V number, and
density of common optical glasses and polymers in
comparison with the high-index, high density PBH71 glass
used in the first three IOC prototype camera generations.
The values for Zeonex®, the currently preferred lens material
for the IOC, are also shown [27-29]. 87
Table 4-4 Properties of selected optical grade polymers (From [28]).
Zeonex® grades E48R and 690R are cyclic olefin polymers. 89
Table 4-5 Typical brightness values for common indoor and outdoor
settings (After [37-39]). 97
Table 5-1 Surface parameters for the IOC system shown in Figure 5-1
when focused at a 20 cm object distance. “Thickness”
indicates the axial distance from the vertex of the current
surface to the vertex of the next surface. “Material” refers to
the optical medium following the surface. The quantities R,
K, A, and B are defined in the aspherical surface sag formula
shown in Equation (5-1). 126
Table 5-2 First order optical parameters for the IOC system shown in
Figure 5-1. 127
Table 5-3 MTF values of the radial and tangential ray fans in Figure
5-2. 131
Table 5-4 Surface parameters for the IOC system shown in Figure 5-16. 163
Table 5-5 Comparison of the first order optical properties of the IOC
optical system optimized over a 40° FOV (Figure 5-1) with
the IOC optical system optimized over a 20° FOV (Figure
5-16). 164
Table 5-6 Typical manufacturing tolerances for molded polymer optics
(from G-S Plastic Optics, Rochester, NY, and [16]). The
second column, labeled “Value,” contains the corresponding
values for the custom polymer lens shown in Figure 5-10. 193
Table 5-7 Typical manufacturing tolerances for the fused silica optical
window (from Valley Design Corp., Santa Cruz, CA [17]). 194
Table 5-8 Surface parameters for the two-lens IOC system shown in
Figure 5-26, when focused at a 20 cm object distance.
“Thickness” indicates the axial distance from the vertex of
the current surface to the vertex of the next surface.
“Material” refers to the optical medium following the
surface. The quantities R, K, A, and B are defined in the
aspherical surface sag formula shown in Equation (5-1). 204
Table 5-9 First order optical parameters for the two-lens IOC system
shown in Figure 5-26. 205
Table 5-10 Set of wavelengths and weights used for the extended
wavelength spectral analyses performed in Code V®
software. The weights shown are based on estimations of the
combination of the spectral transmittance of the cornea and
aqueous humor, and a typical CMOS imaging sensor with a
color filter array. 211
Table 5-11 Surface parameters for the IOC system shown in Figure 5-16
when focused at a 20 cm object distance. “Thickness”
indicates the axial distance from the vertex of the current
surface to the vertex of the next surface. “Material” refers to
the optical medium following the surface. The quantities R,
K, A, and B are defined in the aspherical surface sag formula
shown in Equation (5-1). 215
Table 6-1 Surface parameters for the IOC system shown in Figure 6-5
when focused at a 20 cm object distance. “Thickness”
indicates the axial distance from the vertex of the current
surface to the vertex of the next surface. “Material” refers to
the optical medium following the surface. The quantities R,
K, A, and B are defined in the aspherical surface sag formula
shown in Equation (5-1). 245
Table 6-2 First order optical parameters for the IOC system shown in
Figure 6-5. 246
Table 6-3 Comparison of the MTF values of the radial and tangential
ray fans at 25 lp/mm for the purely refractive (Figure 5-33)
and hybrid refractive/diffractive (Figure 6-6) IOC designs. 248
Table 6-4 Required f/# of the diffractive lens to achieve a given system
f/# and unvignetted semi field of view, following from
Equation (6-26). 257
Table 6-5 Diffraction efficiencies within a 16 μm spot diameter for the
cases plotted in Figure 6-21. 279
List of Figures
Figure 1-1 Cross section of the human eye showing a schematic
enlargement of the cellular layers of the retina. In AMD and
RP, the photoreceptors at the back of the retina degenerate
and become unusable while the inner retinal neurons tend to
remain functional (From http://www.webvision.med.utah.edu/). 2
Figure 1-2 Retinal prosthesis system diagram with an external camera
for image acquisition and processing. The placement of
either a subretinal or an epiretinal electrode array is illustrated
in the inset (From [4]). 4
Figure 1-3 Fundus photo showing a 4 × 4 epiretinal electrode array
surgically implanted on the retina of a 74-year-old male with
retinitis pigmentosa who had complete blindness in this eye
for more than 50 years (From [9]). 6
Figure 1-4 Schematic illustration of an envisioned next-generation
retinal prosthesis system with an external camera mounted on
a pair of eyeglasses for image acquisition, and a pair of
antenna coils for bidirectional transmission of data and power
[with the necessary circuitry integrated on a System on a
Chip (SoC)]. In the configuration shown, one coil is
mounted externally on a pair of eyeglasses, and the other is
mounted inside the ocular cavity (it may also be placed under
the scalp, near the eye) (From http://bmes-erc.usc.edu/research/retinal-prosthesis-testbed.htm). 9
Figure 1-5 Schematic illustration of a retinal prosthesis system with an
intraocular camera placed in the location of the crystalline
lens (From [31]). 11
Figure 2-1 Simulation of human perception of a kitchen scene (shown in
full resolution in Figure 2-2) with a 4 × 4 array of square
pixels: (a) color, (b) grayscale, and (c) grayscale with gridlines at
a 50% duty cycle (From [3]). 23
Figure 2-2 Photograph of a kitchen scene (top row), in color, and block
pixellated down to three different array sizes (Row 2). The
application of 33% Gaussian post-blur is shown for each case
in Row 3 (From [3, 18, 19, 23-25]). 24
Figure 2-3 Photo of a bus (upper left) pixellated using low fill-factor
sampling (upper right) resulting in severe aliasing problems
that are not resolved by post-blurring the image (lower left).
The lower right image was significantly pre-blurred (40%
Gaussian) prior to the low fill-factor sampling to reduce
aliasing artifacts, and then post-blurred (From [3, 18, 19, 23-
25]). 26
Figure 2-4 Illustration of the human eye (From http://iei.ico.edu/patients/fo_eyeanatomy.html, Illinois Eye Institute). 27
Figure 2-5 Illustration of potential placements for the haptic support
elements for an intraocular device located in the position of
the natural crystalline lens. The diagram (After [13]) shows
the VisionCare Implantable Miniature Telescope (IMT™ by
Dr. Isaac Lipshitz) implanted in the lens capsule and
supported by the iris. An alternative placement for the haptic
support elements would be in the ciliary sulcus, just in front
of the lens capsule. 34
Figure 2-6 Commercial web camera (upper left) with its lens system
removed and modified to accept the prototype aspherical lens
(upper right) for qualitative video image evaluation. The 3.1-
mm focal length aspherical lens demonstrated excellent depth
of field and sufficient imaging performance for the
intraocular camera application. The bottom image is a frame
of a video taken in an office setting (From [18]). 36
Figure 2-7 Code V® ray traced model of the modified webcam focused
at 2.5 cm from the first surface of the lens showing incoming
ray fans at 0° (red rays) and 5° (blue rays). 40
Figure 2-8 (a), (b), and (c) video frames of Snellen eye charts with the
final frame showing a line of 1 mm tall letters; (d) Monte-
Carlo ray trace image of a 1 mm tall “E” at 2.5 cm [picture
inverted for direct comparison with video frame in (c)] (From
[3, 23-25]). 42
Figure 2-9 Ray traced images of (a) a 2 mm tall “E” at 2.5 cm, and (b) a
37 cm tall “E” at 16 ft (size chosen to maintain the image
height on the sensor). 43
Figure 2-10 Second generation prototype of the intraocular camera
designed for surgical implantation in a canine eye (From [18,
19, 23-25]). 44
Figure 2-11 (a) First mechanical model of an intraocular camera
suspended by two PMMA haptics. The device is 6.9 mm
long by 4.5 mm in diameter, with a mass of 250 mg in air.
(b) Photo of the model being implanted in the capsular bag in
a porcine eye. (c) Second model being implanted in sulcus in
a porcine eye. This device is shorter at 5.5 mm long by
4.5 mm in diameter, and has a mass of 270 mg in air. (d)
After successful implantation, the device remained firmly in
place while the eye was repeatedly poked, manipulated, and
shaken. 46
Figure 2-12 Size comparisons for the (a) 3rd generation IOC and (b) the
current 4th generation IOC (From [24, 25]). 50
Figure 3-1 Screenshot of the custom graphical user interface
implemented in MATLAB to process still images for
simulating pixellated prosthetic vision. 58
Figure 3-2 Illustration of the pixellation function with the ability to
simulate the non-unity fill factor of most CMOS and CCD
pixels. The values in the photosensitive area of each super
pixel are averaged to determine the super pixel color and
brightness. The software also allows for random super-pixel
dropouts in order to simulate electrodes that do not elicit
visual percepts (photo courtesy of
http://www.mikelevin.com). 59
Figure 3-3 Examples of the various customized processing functions that
can be applied for prosthetic vision simulations and
psychophysical experiments. The top four photos
demonstrate the different options for the gridding function.
The bottom two photos show an image that has gone through
an entire series of processing functions. Note that the car is
much more recognizable when appropriate pre- and post-blur
are applied despite the grid and dropouts (note that the
contrast has been enhanced in the final image to show these
effects more clearly in the printed image). 62
Figure 3-4 Screenshot of the LabVIEW program for queuing several
experimental videos to process (for overnight processing),
each with individually selected parameters. 64
Figure 3-5 Frames from a video that are pixellated to 20 × 30 super
pixels (top row) in comparison with the same video frames
when pre-blurred by 30% before pixellation to 20 × 30 super
pixels, and then post-blurred by 40% (bottom row). The
images on the left show a FedEx truck driving by, and the
images on the right show a sign that reads, “DO NOT
BLOCK DRIVEWAY.” 66
Figure 4-1 Diagram and surface parameters of the Liou and Brennan
schematic eye model (After [8]). The optical power of the
schematic eye is 60.35 diopters (effective focal length =
16.57 mm), and the axial length is 23.95 mm. 73
Figure 4-2 Implementation of the Liou and Brennan schematic eye
model in Code V® showing two fans of rays from an infinite
object distance at 0° and 5° coming to a focus at the retina.
The surface numbers correspond to the table in Figure 4-1
(note that this diagram is inverted top to bottom as compared
with the corresponding diagram in Figure 4-1). 74
Figure 4-3 Comparison of the sine-wave polychromatic tangential
modulation transfer function (MTF) for a 4-mm pupil. 75
Figure 4-4 (a) Spherical aberration in diopters (D) as a function of ray
height across the pupil for several schematic eye models,
plotted along with mean experimental data. The Liou and
Brennan model more accurately matches the lower spherical
aberrations observed in the measurements. (b) Contributions
of the cornea and crystalline lens to the total spherical
aberration of the eye. (From [8]). 76
Figure 4-5 Nodal points of an optical system. The quantity y denotes
the height at the image plane for object rays entering the
system at angle θ. In the average human eye, the distance
from NP2 to the retina is about 17 mm. In the intraocular
camera, the distance from NP2 to the image sensor is
expected to be between 2.0 and 2.3 mm. 78
Figure 4-6 Dependency of the intraocular camera pixel pitch on the size
of the retinal microstimulator array and the camera’s rear
nodal distance (which is approximately equal to its focal
length). Data points are from Cases 2, 3, and 4 in Table 4-2. 80
Figure 4-7 (a) Custom designed IOC lens system with a high-index glass
aspherical lens located in air, placed behind a fused silica
window. The system operates at f/0.75 and is optimized over
a ±20° field of view. The IOC is focused at a 20-cm object
distance. The lens is 2.73 mm thick and weighs 95 mg
(calculated), which exceeds the IOC mass goals. The image
distance is only 0.44 mm, measured from the rear vertex of
the lens. (b) Polychromatic spot diagram and RMS spot
diameters are shown over a 20° semi-field of view. 86
Figure 4-8 Five IOC lens designs, each optimized to operate at a specific
f-number (f/0.7, f/1.0, f/1.4, f/2, and f/2.8). Each complete
lens system includes the cornea, aqueous humor, and an
optimized polymer aspherical lens housed in air behind a
250 μm thick silica glass window. The aperture stop is
located on the rear surface of the window. The front of the
window is located 2 mm behind the cornea and the distance
from the front of the window to the image plane is
constrained to ≤ 3.5 mm. Rays are shown at five semi-field
angles: 0°, 5°, 10°, 15°, and 20°. Note that the f/0.7 system is
only optimized over a ±10° field of view, though rays are
traced out to +20° for comparison. Also, the upper marginal
rays at 10°, 15° and 20° do not fill the aperture in the f/0.7
system because the lens diameter is constrained to 2.8 mm,
causing severe vignetting for these rays. (Continued on next
page.) 94
Figure 4-9 Comparison of the illuminance at the sensor plane as a
function of f-number and semi-field angle for diffraction
limited lenses (a and b) and the five IOC lens designs (c and
d) shown in Figure 4-8. Aberrations present in the IOC
lenses cause blurring and a more severe falloff in brightness
at large field angles. The source is a perfectly diffuse
uniform screen placed one meter from the camera, covering a
40° (±20°) field of view. Four lighting conditions are shown.
The results are an average of the intensity levels at red,
green, and blue wavelengths. 98
Figure 4-10 Illuminance across a central slice of the image plane for the
“step tablet” target in low light (10 lux; 3.18 cd/m²) as a
function of f-number and field angle for the five IOC lens
designs. The noise is due to the collection of a finite number
of rays in each bin of a 200 × 200 grid at the image plane.
(Continued on next page.) 101
Figure 4-11 Illustration of typical sinusoidal MTF curves (curves shown
are for an f/3.5 triplet lens). 108
Figure 4-12 Illustration of tangential (or meridional) rays and sagittal (or
radial) rays emanating from an off-axis object point on the y-
axis. 110
Figure 4-13 Polychromatic MTF curves and spot diagrams of five IOC
systems optimized to operate at f-numbers ranging from f/0.7
to f/2.8 (focused on an object at 20 cm). Colored lines in the
MTF plots represent different semi-field angles (red: 0°,
green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines
for the tangential rays, and dashed lines for radial (or sagittal)
rays. The colors in the spot diagrams correspond to red,
green, and blue wavelengths. (Continued on next page.) 112
Figure 4-14 Radial and tangential MTF at 25 lp/mm as a function of
f-number and field angle (data points are from the MTF
curves shown in Figure 4-13). 115
Figure 5-1 Custom designed IOC lens system with a Zeonex E48R
polymer aspherical lens located in air, behind a fused silica
window. A 2-mm diameter aperture stop is located on the
rear surface of the window. The system operates at f/0.96
with a focal length of 2.1 mm and is optimized over a ±20°
field of view. The IOC is focused at a 20-cm object distance.
The lens is 2.22 mm thick and weighs 9 mg (by calculation)
with a 2.5 mm edge diameter. The image distance is
0.94 mm, measured from the rear vertex of the lens. 124
Figure 5-2 MTF and polychromatic spot diagram with RMS spot
diameters for the lens system shown in Figure 5-1, focused at
a 20 cm object distance. The colored lines in the MTF plot
represent different semi-field angles (red: 0°, green: 5°, blue:
10°, orange: 15°, pink: 20°) with solid lines for the tangential
rays, and dashed lines for the radial, or sagittal, rays. The
colors in the spot diagrams correspond to red, green, and blue
wavelengths. 130
Figure 5-3 Example of MTF curves generated during the lens
optimization process. The curves on the left (a and c) show
the performance after optimizing the system to minimize
transverse ray aberrations. The curves on the right (b and d)
show the performance after additional optimizations were
made to improve the MTF (the object distance is infinity).
(Figure continued on next page.) 132
Figure 5-4 Coordinate system showing a chief ray and a general (skew)
ray from a common object point at height h intersecting the
image plane at different points. The chief ray intersects the
image plane at height h′. 136
Figure 5-5 Concept of a ray intercept curve (showing third-order
spherical aberration for a lens focused at the paraxial image
plane). 146
Figure 5-6 Transverse ray intercept curves showing the shapes
associated with certain individual and combined aberrations
(After [11]). 147
Figure 5-7 Ray intercept curves for the IOC lens design shown in Figure
5-1. The horizontal axis is the ray height at the entrance
pupil, and the vertical axis is the ray intersection at the image
plane. 149
Figure 5-8 (a) Astigmatic field curves (“S” is for sagittal and “T” is for
tangential) and (b) distortion plot as a function of semi-field
angle from 0° to 20° at 559 nm for the lens design shown in
Figure 5-1. The horizontal axis range for the astigmatic field
plot is ±0.08 mm, and is ±5% for the distortion plot. 151
Figure 5-9 Field curvature concept. The dotted line at the location of the
image plane is drawn to roughly indicate the tangential
astigmatic field curve shown in Figure 5-8(a). The location
of the image plane is a compromise among the best focal
spots (at which the RMS wavefront errors are a minimum)
for the tangential and sagittal ray fans across the field of view
and wavelength spectrum. 152
Figure 5-10 (a) CAD drawing used for fabrication of three custom
polymer IOC lenses for use in the next generation IOC
prototype (from G-S Plastic Optics, Rochester, NY). (b)
Photograph of one of the fabricated lenses, showing the
residual 5-mm diameter flange used in fabrication and
maintained for lens testing purposes. 154
Figure 5-11 Profilometer test results for the front and rear aspherical
surfaces of one of three fabricated polymer lenses, showing
that the surface figure (power) error and surface irregularity
are much less than the specified 1λ and 0.5λ, respectively (at
633 nm). These data curves are a measure of the deviation
from the specified aspherical profiles of the two surfaces.
The vertical axis is scaled in units of waves (λ) at 633 nm,
such that 1.00λ = 0.633 μm. The horizontal axis is scaled in
units of mm, covering the full 2.5 mm clear aperture of the
lens. 155
Figure 5-12 Grayscale and color images captured with the prototype lens
shown in Figure 5-10, when placed in air behind a 2-mm
aperture stop. The images formed by the lens were relayed
by a 10×, 0.25 NA microscope objective to a commercial
sensor that was connected to a frame grabber card in a
computer. 157
Figure 5-13 Grayscale video image frame of text on a book cover
captured by the lens-sensor combination. The sensor
exhibited lower than specified contrast. 158
Figure 5-14 Ray diagram of the custom aspherical lens in air, imaging
through the coverglass on the OmniVision Technologies
OV6920 CMOS sensor array. Note that this is different from
the as-designed lens configuration, in which the lens is
located behind the biological cornea, aqueous humor, and a
fused silica window, and without a coverglass on the image
sensor array. 159
Figure 5-15 Comparison of an experimentally obtained image of a USAF
resolution chart using the custom IOC lens and OV6920
sensor (left), with the image predicted by a Monte-Carlo ray-
traced simulation (right) using the in-air lens model shown in
Figure 5-14. 160
Figure 5-16 Custom designed f/1 aspherical lens optimized over a 20°
(±10°) FOV (rays are traced in the figure out to +20° for
comparison with the previous design, shown in Figure 5-1,
which was optimized over a 40° FOV). 162
Figure 5-17 MTF and polychromatic spot diagram with RMS spot
diameters for the lens system shown in Figure 5-16 focused
at a 20 cm object distance. The colored lines in the MTF plot
represent different semi-field angles (red: 0°, green: 5°, blue:
10°, orange: 15°, pink: 20°) with solid lines for the tangential
rays, and dashed lines for the radial, or sagittal, rays. The
colors in the spot diagrams correspond to red, green, and blue
wavelengths. 166
Figure 5-18 Simulated images of a USAF resolution chart that spans a
40° horizontal and vertical field of view for the two IOC lens
design cases. 167
Figure 5-19 Evaluation of the depth of field as a function of the distance
at which the camera is focused for the f/0.96 IOC system
optimized over ±20°. The plots show the RMS spot diameter
as a function of object distance across a ±20° field for six
focus distances: (a) 5 cm, (b) 10 cm, (c) 15 cm, (d) 20 cm, (e)
50 cm, and (f) 1 m. The hyperfocal distance for a 30 μm
allowable blur spot diameter is 15 cm (Case (c)). (Continued
on next page.) 172
Figure 5-20 Evaluation of the depth of field as a function of the distance
at which the camera is focused for the narrow FOV IOC
system optimized over ±10°. The plots show the RMS spot
diameter as a function of object distance across a ±10° field
for four focus distances: (a) 5 cm, (b) 10 cm, (c) 15 cm, and
(d) 20 cm. (Continued on next page.) 177
Figure 5-21 Sensitivity of the IOC imaging performance as a function of
the distance between the posterior cornea and the front of the
camera (for the f/0.96 ±20° IOC optical system). (a) and (b)
RMS spot diameter; (c) and (d) tangential MTF at 25 lp/mm;
(e) and (f) radial MTF at 25 lp/mm. Semi-field angles from
0° to 20° along the vertical (y) axis are shown (note that the
radial and tangential MTF performance essentially reverse if
the field angles are oriented along the horizontal (x) axis
instead, as the system is rotationally symmetric). (Continued
on next page.) 182
Figure 5-22 Sensitivity of the IOC imaging performance to shifts in
placement along the vertical (y) and horizontal (x) axes. The
conditions are the same as stated in the caption for Figure
5-21. (Continued on next page.) 186
Figure 5-23 Sensitivity of the IOC imaging performance to vertical tilts
(within the yz plane, about the x axis) and horizontal tilts
(within the xz plane, about the y axis). The conditions are the
same as stated in the caption for Figure 5-21. (Continued on
next page.) 190
Figure 5-24 Diagram illustrating wedge tolerance in terms of TIR (total
indicated reading). 195
Figure 5-25 Surface figure (or power) and irregularity as measured by
fringes either with a test plate or an interferometric test (After
[21, 23]). 196
Figure 5-26 Custom designed IOC lens system with two Zeonex E48R
polymer aspherical lenses located in air, behind a fused silica
window. A 1.73 mm diameter aperture stop is located at the
rear surface of the first lens. The system operates at f/1 with
a focal length of 2.1 mm and is optimized over a ±20° field
of view. The IOC is focused at a 20 cm object distance.
Both lenses are 0.75 mm thick and have masses of 3 mg and
2 mg, respectively (calculated). The image distance is
1.13 mm, measured from the rear vertex of the second lens to
the image plane. The fused silica window is 2.8 mm in
diameter and the two lenses are 2.5 mm in diameter (see text
for explanation). 203
Figure 5-27 MTF and polychromatic spot diagram with RMS spot
diameters for the two-lens IOC system shown in Figure 5-26,
focused at a 20 cm object distance. The colored lines in the
MTF plot represent different semi-field angles (red: 0°,
green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines
for the tangential rays, and dashed lines for the radial, or
sagittal, rays. The colors in the spot diagrams correspond to
red, green, and blue wavelengths. 206
Figure 5-28 Ray intercept curves for the two-lens IOC design shown in
Figure 5-26. The horizontal axis is the ray height in the
entrance pupil, and the vertical axis is the ray intersection at
the image plane. 207
Figure 5-29 (a) Astigmatic field curves (“S” is for sagittal and “T” is for
tangential) and (b) distortion plot as a function of semi-field
angle from 0° to 20° at 559 nm for the lens design shown in
Figure 5-26. The range for the horizontal axis of the
astigmatic field plot is ±0.08 mm, and is ±5% for the
distortion plot. 208
Figure 5-30 Comparison of imaging a USAF resolution chart that spans
40° horizontal and vertical fields of view for the two-lens
design shown in Figure 5-26, with the single-lens design
shown in Figure 5-1. 209
Figure 5-31 Comparison of the MTF and polychromatic spot diagrams for
the lens system shown in Figure 5-1 using (a) the photopic
spectrum, and (b) the extended wavelength spectrum. The
colored lines in the MTF plot represent different semi-field
angles (red: 0°, green: 5°, blue: 10°, orange: 15°, pink: 20°)
with solid lines for the tangential rays, and dashed lines for
the radial, or sagittal, rays. The colors in the spot diagrams
correspond to red, green, and blue wavelengths. 213
Figure 5-32 Custom designed f/1 aspherical IOC lens system optimized
over a 40° (±20°) FOV using an extended wavelength
spectrum that has been weighted to approximate the
combination of the spectral transmittance of the cornea and a
typical color CMOS imaging sensor (for comparison with the
previous design shown in Figure 5-1, which was optimized
using photopic spectral weights). A 2-mm aperture stop is
located on the rear surface of the window (not shown). 214
Figure 5-33 MTF and polychromatic spot diagram with RMS spot
diameters for the lens system shown in Figure 5-32, focused
at a 20 cm object distance. The colored lines in the MTF plot
represent different semi-field angles (red: 0°, green: 5°, blue:
10°, orange: 15°, pink: 20°) with solid lines for the tangential
rays, and dashed lines for the radial, or sagittal, rays. The
colors in the spot diagrams correspond to different
wavelengths. 216
Figure 6-1 Reduction of a refractive lens to a kinoform by removing
multiples of 2π phase at the design wavelength, λ0. 227
Figure 6-2 Determination of the zone boundaries for a kinoform lens
with focal length f0 at the design wavelength, λ0. 231
Figure 6-3 Comparison of a conventional, single-order diffractive lens
with a higher-order version of the same lens that is
constructed with zones that are p = 5 waves deep. 233
Figure 6-4 Concept of color correction in a hybrid refractive/diffractive
lens. 240
Figure 6-5 (a) Hybrid IOC lens system with a diffractive lens placed at
the location of the 2-mm diameter aperture stop on the rear
surface of the fused silica window. The system operates at
f/1 with a focal length of 2.2 mm and is optimized over a
±20° field of view. (b) Diffractive lens relief profile from the
center to the edge of the lens. 244
Figure 6-6 MTF and polychromatic spot diagram with RMS spot
diameters for the lens system shown in Figure 6-5 when
focused at a 20 cm object distance. The colored lines in the
MTF plot represent different semi-field angles (red: 0°,
green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines
for the tangential rays, and dashed lines for the radial, or
sagittal, rays. The colors in the spot diagrams represent
different wavelengths. 247
Figure 6-7 Polychromatic MTF at 25 lp/mm as a function of defocus
distance for the tangential and sagittal (radial) rays at each
field angle. The nominal focus position at 0.0 corresponds to
an image distance of 0.6359 mm (used for the MTF curves
and spot diagrams plotted in Figure 6-6). 249
Figure 6-8 Comparison of the ray intercept curves for the purely
refractive (Figure 5-32) and hybrid (Figure 6-5) IOC lens
designs. The horizontal axis is the ray height at the entrance
pupil, and the vertical axis is the ray intersection at the image
plane (the range is ±0.025 mm). The line colors correspond
to different wavelengths. 251
Figure 6-9 Wide field diffractive lens configuration with an aspherical
corrector plate placed at the aperture stop, which is located in
the front focal plane of a spherical diffractive lens (after
[20]). For the IOC application, the diffractive lens may be
constructed as a higher-order kinoform for polychromatic
imaging. 255
Figure 6-10 Ray diagrams illustrating the issue of vignetting in an f/1.4
wide field diffractive lens system with a 1.4 mm focal length.
(a) To achieve no vignetting, the diffractive lens must operate
at f/0.7. (b) If the diffractive lens diameter is limited so that it
operates at f/1, then significant vignetting occurs for rays
incident between 10° and 20°. 258
Figure 6-11 (a) Wide field diffractive IOC lens diagram showing rays
traced through the system at 0°, 5°, 10°, 15°, and 20°. The
system operates at f/1.4 with a focal length of 1.4 mm, and is
optimized over a ±20° field of view. (b) and (c) Physical
relief profiles from the center to the edge of the lens for the
aspherical corrector plate (1 mm diameter clear aperture) and
the higher-order diffractive lens (1.4 mm diameter clear
aperture). 261
Figure 6-12 Monochromatic MTF and spot diagram showing the RMS
spot diameters for the lens system shown in Figure 6-11
when focused at a 20 cm object distance. The colored lines
in the MTF plot represent different semi-field angles (red: 0°,
green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines
for the tangential rays, and dashed lines for the radial, or
sagittal, rays. The color in the spot diagram represents the
design wavelength of 550 nm. 263
Figure 6-13 Chromatic dispersion of fused silica across the visible
spectrum at 26 °C (after [47]). 266
Figure 6-14 Diffraction efficiency as a function of wavelength and
diffraction order, m, for (a) a 13th-order kinoform lens
(p = 13) and (b) a 32nd-order kinoform lens, p = 32, in fused
silica with λ0 = 550 nm. 266
Figure 6-15 Analytically computed on-axis polychromatic MTF curves
for a higher-order diffractive singlet lens with (a) p = 13 and
(b) p = 32. 268
Figure 6-16 Ray diagram of the wide field diffractive lens configuration
used for the scalar diffraction theory analyses. 270
Figure 6-17 Surface relief profiles of the two optical elements in the
system shown in Figure 6-16: (a) the aspherical corrector
plate, (b) the higher-order diffractive lens when constructed
with p = 13 waves, (c) the higher-order diffractive lens when
constructed with p = 32 waves. In each case, only half the
lens diameter is shown. 271
Figure 6-18 On-axis diffraction efficiency within a 16 μm spot diameter
as a function of wavelength. 273
Figure 6-19 Surface relief profile of a portion of the compound surface
required to achromatize the p = 32 diffractive lens in the
system shown in Figure 6-16. 275
Figure 6-20 Comparison of the on-axis point spread functions for the
p = 13 and p = 32 wide field diffractive lens configurations,
with and without the addition of a material dispersion
compensating surface. Plots for the individual wavelengths
are overlaid. The set of wavelengths plotted corresponds to
the maximum and minimum diffraction efficiencies within
the visible band (17 wavelengths for the p = 13 case and 40
wavelengths for the p = 32 case). 276
Figure 6-21 Comparison of the polychromatic point spread functions as a
function of field angle for the p = 32 diffractive lens
configuration, for cases with and without the addition of a
material dispersion compensating surface. 278
Abstract
The focus of this thesis is on the design and analysis of refractive, diffractive,
and hybrid refractive/diffractive lens systems for a miniaturized camera that can be
surgically implanted in the crystalline lens sac and is designed to work in
conjunction with current and future generation retinal prostheses. The development
of such an intraocular camera (IOC) would eliminate the need for an external head-
mounted or eyeglass-mounted camera. Placing the camera inside the eye would
allow subjects to use their natural eye movements for foveation (attention) instead of
more cumbersome head tracking, would notably aid in personal navigation and
mobility, and would also be significantly more psychologically appealing from the
standpoint of personal appearances. The capability for accommodation with no
moving parts or feedback control is incorporated by employing camera designs that
exhibit nearly infinite depth of field. Such an ultracompact optical imaging system
requires a unique combination of refractive and diffractive optical elements and
relaxed system constraints derived from human psychophysics. This configuration
necessitates an extremely compact, short focal-length lens system with an f-number
close to unity. Initially, these constraints appear highly aggressive from an optical
design perspective. However, after careful analysis of the unique imaging
requirements of a camera intended to work in conjunction with the relatively low
pixellation levels of a retinal microstimulator array, it becomes clear that such a
design is not only feasible, but could possibly be implemented with a single lens
system.
Chapter 1
INTRODUCTION
1.1 Statement of the problem
Blindness due to degenerative retinal disease afflicts millions of people
worldwide and, as yet, remains incurable. Many people who suffer
from this type of blindness lose their vision because the photosensitive rods and
cones located at the back of the retina degenerate and are no longer able to convert
light into electrical impulses (Figure 1-1).
Retinitis Pigmentosa (RP) and Age-Related Macular Degeneration (AMD)
are two common forms of this type of visual impairment. RP represents a family of
hereditary retinal diseases that affect one in 4,000 live births [1]. People with RP
tend to lose their peripheral vision first, leaving them with an extremely limited and
progressively nonexistent visual field. Age-related macular degeneration primarily
affects the area of the retina responsible for sharp, central vision and is the main
cause of vision loss among adults over age 65 in developed countries [2]. According
to the National Eye Institute, over 1.6 million Americans over the age of 50 suffer
from late-stage AMD [3].
Age-related vision loss is a severe international health issue that is rapidly
growing worse as the population ages. The promise of a cure lies mainly with gene
and drug therapies, but these prospects lie in the distant future. Fortunately, recent
advances in electronic implants that bypass damaged photoreceptor cells and
electrically stimulate the remaining healthy retinal neurons show promise for
restoring functional vision to the blind [4]. Current versions of these implants are
driven by an external camera system that is mounted on the subject’s head. This
type of system requires the subject to slowly scan their head in order to shift their
direction of gaze and foveate on objects in their environment. The focus of this
thesis is on the development of an optical imaging system for a miniaturized video
camera that can be placed inside the eye, that is uniquely suited to operate in the
intraocular space, and that is designed to work in conjunction with the
microstimulator array within the retinal prosthesis, as well as the subject’s natural
eye movements.
Figure 1-1 Cross section of the human eye showing a schematic enlargement of the
cellular layers of the retina. In AMD and RP, the photoreceptors at the back of the
retina degenerate and become unusable while the inner retinal neurons tend to
remain functional (From http://www.webvision.med.utah.edu/).
1.2 Intraocular retinal prosthesis (IRP) system description
The intraocular retinal prosthesis takes the form of a multielectrode
microstimulator array that is proximity coupled with the retinal surface. Several
research teams throughout the world are investigating different implementations of
retinal prostheses. A diagram illustrating the general components of a retinal
prosthesis system is shown in Figure 1-2. Images from an external video camera are
transmitted to a portable visual processing unit (e.g., a battery powered unit worn on
a belt pack) that converts the images into a set of electrical stimulation signals that
may be transmitted wirelessly by either RF or optical means to a microstimulator
array driver circuit and thence to the retinal microstimulator array. A comprehensive
review of these retinal prosthesis systems including results of human subject trials
and the current state of the art in retinal stimulation research can be found in
References [4, 5].
Retinal prostheses fit into two main categories: (1) subretinal, and (2)
epiretinal implants, as shown in the inset of Figure 1-2. Subretinal microstimulator
arrays are implanted in the outer retina, at the natural position of the photoreceptors.
This is accomplished by temporarily (locally) detaching the retina in order to slip the
array into position, a delicate procedure that is potentially disruptive to the retinal
tissue. A key motivation for a subretinal implant is that the stimulation current is
applied early in the retinal pathway to preserve the information processing of the
inner retina before being transmitted to the brain. Recent studies suggest, though,
that the neuronal layers of the inner retina undergo a significant amount of
remodeling after photoreceptor death, making it unclear as to whether or not a
subretinal implant will actually receive this benefit [6].
Figure 1-2 Retinal prosthesis system diagram with an external camera for image
acquisition and processing. The placement of either a subretinal or epiretinal
electrode array is illustrated in the inset (From [4]).
Alternatively, the electrode array can be implanted epiretinally, on the inner
surface of the retina. This placement bypasses the inner nerve layers and directly
stimulates the retinal ganglion cells. The intraocular retinal prosthesis (IRP) co-
invented by Dr. Mark S. Humayun of the Doheny Eye Institute at the USC Keck
School of Medicine [4, 7] uses an epiretinal microstimulator array. This device is
the centerpiece of an ongoing research effort at USC in the Retinal Prosthesis
Testbed of the NSF Biomimetic MicroElectronic Systems Engineering Research
Center (BMES ERC).
Second Sight® Medical Products, Inc. (Sylmar, CA), a corporate partner of
the BMES ERC at USC, has developed two generations of an epiretinal prosthesis
system that are currently in FDA approved clinical studies. The first generation
visual prosthesis comprises a 4 × 4 epiretinal electrode array, and is currently
implanted in six human subjects who previously exhibited minimal or no light
perception in the implanted eye. These devices were implanted between 2002 and
2004 and have thus far demonstrated five years of chronic operability. Figure 1-3
shows a fundus photo of a 4 × 4 electrode array surgically implanted on the retina of
a 74-year-old male with retinitis pigmentosa who was completely blind in this eye
for more than 50 years [8]. Electrical stimulation of individual electrodes with
biphasic current pulses elicits visual percepts in these subjects that are typically
described as yellow-white round spots. Though the array has only sixteen
electrodes, the subjects in this study have demonstrated the ability to detect motion,
perceive shapes, and identify different objects from within a set with the aid of
scanning head movements [8, 9]. Clinical studies with these subjects are ongoing
and continue to demonstrate the effectiveness of using an epiretinal microstimulator
array to reliably generate percepts in the visual field that spatially correspond with
the stimulation sites on the retina [10]. In January, 2007, Second Sight® Medical
Products received FDA approval to begin clinical trials of a second generation
implant with a 60-electrode (6 × 10) microstimulator array [11, 12]. This device is
being implanted in human subjects at several clinical trial sites throughout the United
States, Mexico, and Europe, and will allow researchers to determine the benefits of
delivering higher resolution images to the blind.
Figure 1-3 Fundus photo showing a 4 × 4 epiretinal electrode array surgically
implanted on the retina of a 74-year-old male with retinitis pigmentosa who had
complete blindness in this eye for more than 50 years (From [9]).
1.3 Prior art for capturing and relaying images to retinal prostheses
A number of alternative approaches for capturing and relaying images to an
intraocular visual prosthesis have been proposed and developed. Some of the earliest
(and, in some cases, still used) systems relied on readily available and easy-to-
integrate external commercial cameras and PDA-like devices for image processing
and battery power. These systems provided the best flexibility in terms of image
resolution, device availability, and cost. The need to miniaturize these bulky
components quickly led to the use of smaller, eyeglass-mounted sensor systems that
typically contain the necessary optics, the image sensor, battery, and the telemetry
link for unidirectional transmission to the electrical stimulation array [13, 14].
One of the more advanced external image sensors contains a set of 128 × 128
CMOS-compatible photodiodes in a hexagonal layout (or 400 × 300 pixels in a
rectangular array) with associated readout electronics [15]. Each pixel in the array
has seven decades of illuminance dynamic range due to an on-pixel logarithmic
circuit, and is designed for random accessibility that assists in efficient computation
of receptive field functions. An off-chip (but eyeglass-integrated) telemetry
transmission unit communicates with an intraocular microstimulator via a receiver
unit mounted in the crystalline lens sac region.
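The seven-decade dynamic range quoted above follows from the logarithmic
pixel response. As a rough illustration (a simplified model of my own, not the
actual circuit described in [15]; the illuminance bounds and voltage rails are
arbitrary assumptions), the sketch below maps illuminance from 10⁻³ to 10⁴ lux
onto a fixed output voltage swing:

```python
import math

def log_pixel_response(illuminance_lux, i_min=1e-3, i_max=1e4,
                       v_min=0.2, v_max=1.8):
    """Simplified logarithmic pixel model: seven decades of
    illuminance (1e-3 to 1e4 lux, an assumed range) map linearly
    in log space onto an assumed ~1.6 V output swing."""
    clipped = min(max(illuminance_lux, i_min), i_max)
    fraction = (math.log10(clipped) - math.log10(i_min)) / \
               (math.log10(i_max) - math.log10(i_min))
    return v_min + fraction * (v_max - v_min)

# Each decade of illuminance produces the same output voltage step,
# so dim indoor scenes and bright sunlight both stay in range.
for lux in (1e-3, 1e-1, 1e1, 1e3, 1e4):
    print(f"{lux:>8.0e} lux -> {log_pixel_response(lux):.3f} V")
```

Because equal ratios of illuminance produce equal voltage steps, the pixel never
saturates within the design range, which is what makes such a sensor attractive
for the widely varying light levels a prosthesis wearer encounters.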
Although these systems have excellent performance as image sensors, and, in
many cases, have features optimized for use in retinal implants, all require
bulky control electronics, large multi-element lens systems, and external
battery packs.
An alternative approach for possible sight restoration in patients who have an
optically functioning crystalline lens or aphakic intraocular lens, with sufficiently
clear optical paths to form an in-focus image on the back of the eye, is subretinal
implantation of a photosensitive retinal stimulator array. By strategically placing a
microelectrode-tipped microphotodiode array (MPDA) between the neural retina and
the retinal pigment epithelium layer, light intensities of a visual scene are converted
into electrical signals that may trigger responses in the second- and third-order
neurons in the retina. Early devices using this approach contain approximately 5,000
to 7,000 single photodiodes on a 3 mm diameter silicon substrate [16-18]. Human
implantation studies of a 5,000-pixel subretinal MPDA showed improvement in
visual perception of brightness, contrast, color, and shape in subjects with advanced
RP, but much of the improvement was in portions of the visual field not
corresponding with the array placement on the retina. It is still unclear whether a
light-powered subretinal MPDA can generate sufficient stimulus currents to elicit
visual percepts.
The implantable miniature telescope (IMT™) [19-21] is an attempt to treat
macular degeneration by using a miniature telescopic lens 4.6 mm long and 3.0 mm
in diameter [22-24]. This lens magnifies the central visual field so that it covers a
larger area on the retina. As such, this device is intended for use by subjects who
still have functional peripheral vision. The telescope is passive and does not include
any electronic or powered elements. The device’s small size allows it to be
surgically inserted in the space previously occupied by the crystalline lens. Since the
IMT™ is bulkier than many standard cataract lens implants, a more robust surgical
technique is required.
The main advantage of the IMT™ is that it alleviates the need for head
mounted magnifiers that can cause image stability and directionality problems.
Furthermore, this device allows for the vestibular ocular reflex (VOR) to occur
naturally so that a stable retinal image is maintained during head rotation. If VOR is
not able to occur, substantial image motion in both eyes will limit object sensitivity
and recognition ability [25, 26].
Our approach to an intraocular imaging system integrates a small, yet
powerful optical lens assembly with a custom CMOS active pixel sensor array into a
hybridized package capable of being surgically inserted into the cavity previously
occupied by the crystalline lens. This novel optical system enables the eventual
possibility of a completely self-contained retinal prosthesis within the ocular cavity
and avoids the problems commonly associated with head-mounted cameras and
displays, as discussed in the following section.
1.4 Motivation for an intraocular camera (IOC)
As mentioned above, current retinal prostheses utilize an external camera for
image acquisition. An envisioned next-generation system under development by
researchers in the BMES ERC is depicted in Figure 1-4. The video camera is
mounted on the bridge of a pair of eyeglasses. The signal from the camera is
transmitted to a battery-powered image processor worn on the subject’s belt that
converts the video signal into an electrical stimulation pattern. A pair of antenna
coils, one mounted on the eyeglasses and the other implanted either in the ocular
cavity (as shown in Figure 1-4), or under the scalp, is envisioned to be used for
bidirectional transmission of both data and power to a telemetry chip in the eye that
delivers the stimulus signals to the microstimulator array [27-29].
Figure 1-4 Schematic illustration of an envisioned next-generation retinal prosthesis
system with an external camera mounted on a pair of eyeglasses for image
acquisition, and a pair of antenna coils for bidirectional transmission of data and
power [with the necessary circuitry integrated on a System on a Chip (SoC)]. In the
configuration shown, one coil is mounted externally on a pair of eyeglasses, and the
other is mounted inside the ocular cavity (it may also be placed under the scalp, near
the eye) (From http://bmes-erc.usc.edu/research/retinal-prosthesis-testbed.htm).
The use of an external video camera requires the subject to move his or her
head in order to visually scan the environment. This is a slow and unnatural way to
shift one’s gaze and can cause a conflict between the visual system and the body’s
internal sense of balance. Most importantly, it does not permit the subject to use the
natural combination of head and eye movements that is native to the way the
oculomotor system works to rapidly fixate on objects and accurately guide motor
actions [30]. Further, with a head mounted camera, the image will bounce and
oscillate as the subject walks, rides in a car, or nods their head, in spite of the natural
ability to keep one’s gaze fixed and stable in the presence of head and body motions.
To avoid slow head scanning, the subject could conceivably wear an eye-tracking
system that extracts the appropriate regions of an image from a wide field external
camera in correspondence with the subject’s eye movements, but this approach adds
extra hardware, weight, and power requirements to the system. To address these
issues, we propose to locate the camera inside the eye cavity to allow for natural
foveation with reduced head movements [31-33].
The concept of a retinal prosthesis utilizing an intraocular camera in place of
the external camera is shown schematically in Figure 1-5. The camera is located in
place of the crystalline lens, for reasons explained in the next chapter. This
configuration necessitates an extremely compact, short focal-length lens system with
an f-number close to unity. Initially, these constraints appear highly aggressive from
an optical design perspective. However, after careful analysis of the unique imaging
requirements of a camera intended to work in conjunction with the relatively low
pixellation levels of a retinal microstimulator array, it becomes clear that such a
design is not only feasible, but could possibly be implemented with a single lens
system [31-33]. Two prototype cameras have been designed and tested that provide
a proof of principle for the intraocular camera concept. These first prototypes were
constructed using a carefully selected commercial lens and sensor.
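As a rough feasibility check of these constraints (an illustrative calculation only,
using approximate parameters drawn from the designs described later in this
thesis: a 2.1 mm focal length at f/1 over a ±20° field of view, paired with a
6 × 10 electrode array), the allowable spot size per electrode turns out to be
very coarse:

```python
import math

# Assumed parameters, approximating the custom IOC designs in this thesis.
focal_length_mm = 2.1
f_number = 1.0
semi_field_deg = 20.0
electrodes_across = 10  # long axis of a 6 x 10 microstimulator array

# Entrance pupil diameter and full-field image size from first-order optics.
aperture_mm = focal_length_mm / f_number
image_height_mm = focal_length_mm * math.tan(math.radians(semi_field_deg))
image_diameter_mm = 2 * image_height_mm

# With only ~10 resolvable spots needed across the image, the required
# spot size is far coarser than a conventional camera's pixel pitch.
required_spot_um = (image_diameter_mm * 1000) / electrodes_across

print(f"aperture diameter:   {aperture_mm:.2f} mm")
print(f"image diameter:      {image_diameter_mm:.2f} mm")
print(f"allowable spot size: ~{required_spot_um:.0f} um per electrode")
```

Each resolvable spot may be on the order of 150 µm, tens of times coarser than
the few-micron pixels of a conventional camera, which is why the aggressive f/1,
short focal-length specification remains tractable for prosthetic vision.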
Figure 1-5 Schematic illustration of a retinal prosthesis system with an intraocular
camera placed in the location of the crystalline lens (From [31]).
The main subject of this thesis is the further refinement, analysis, and
customization of the intraocular lens system using refractive, or refractive and
diffractive elements to achieve an ultra-compact, surgically implantable imaging
system to work in conjunction with current and future generations of the retinal
prosthesis microstimulator array.
1.5 Context of this thesis within the overall intraocular camera (IOC) project
The intraocular camera research effort is led by Dr. Armand R. Tanguay, Jr.,
in the Optical Materials and Devices Laboratory (OMDL) at USC. Our team
contributes to the Retinal Prosthesis Testbed within the NSF Biomimetic
Microelectronic Systems Engineering Research Center (BMES ERC) at USC. We
also frequently collaborate with ophthalmological surgeons and biomedical
engineers within the Doheny Eye Institute (DEI) to provide a comprehensive and
interdisciplinary approach to the fundamental analysis, design, and development of
all aspects of an intraocular camera. The placement of a miniaturized camera
assembly in the biological environment of the eye poses several unique challenges
and presents many opportunities for novel research directions. We have learned, for
example, that a camera that is placed inside the eye, and is designed to provide
limited visual information to a coarsely pixellated retinal microstimulator array to
restore functional vision to the blind, in many ways does not follow traditional
camera design principles.
These efforts require frequent interaction with the rest of the team, as our
individual progress and results impact the design and understanding of other
components of the camera system. Our research associate, Dr. Patrick J. Nasiatka, is
working on the image sensor array, supporting electronics, telemetry, camera
housing design, and haptic mounting efforts. We work with the surgical team to
experiment with various surgical implantation procedures that constrain the camera’s
allowable dimensions, mass, and materials. In collaboration with other graduate
research assistants in our group, Dr. Nasiatka assembled and tested the first three
intraocular camera prototypes using off-the-shelf components and a novel hermetic
sealing technique for proof-of-principle investigations. A fourth generation camera
containing a custom lens assembly, described in Chapter 5 of this thesis, is currently
in development. Noelle R. B. Stiles, an undergraduate research assistant in our
group, is responsible for visual psychophysics research on pixellated vision, and the
development of image processing techniques to optimally restore functional vision to
those with retinal prosthetic implants. She also conducts experiments with the
intraocular camera prototypes to better understand the effects of depth of field,
motion, blurring, and color on prosthetic vision. The results of her research have
several implications for the optical design constraints of the camera, and we have
worked in collaboration to further develop these experimental tools.
My primary role on the team is to evaluate the fundamental and technological
possibilities and limitations for the optical system of this unique biophotonic imaging
device. This includes the development of novel lens designs using refractive, as well
as both refractive and diffractive, components to be used in conjunction with the
human corneal lens to meet a set of unusual imaging requirements in a very small
form factor.
1.6 Organization of the thesis
This chapter described the current state of the art in retinal prostheses for
restoring functional vision to the blind. The motivation for replacing the external
head-mounted camera in current prostheses with an intraocular camera to allow for
natural foveation was also presented. The next chapter covers the prior art in our
research group that is relevant to the development of the optical system design
constraints for the intraocular camera. This begins with a description of results from
visual psychophysical studies of prosthetic vision that imply relaxed imaging
requirements for the IOC. The key design issues for an IOC are then discussed,
including camera placement within the eye, power dissipation, size, mass, and
surgical implantation. The latter half of Chapter 2 contains results from the first few
IOC prototypes, which demonstrated the feasibility of an intraocular camera for
retinal prostheses and laid the path to our current fourth generation design. The third
chapter contains a description of a set of customized video capture and image
processing tools developed in MATLAB and LabVIEW to support visual
psychophysics experiments related to prosthetic vision. In Chapter 4, we explore the
optical design space for the intraocular camera. The constraints and tradeoffs
concerning focal length, f-number (aperture), resolution, field of view, lens materials,
lens configuration, and image brightness are evaluated in light of the unique
application of prosthetic vision. From this analysis, a general set of optical system
goals and specifications is developed that is used in Chapters 5 and 6 to design
and evaluate specific lens designs. Chapter 5 focuses on refractive lens systems and
includes the details of a custom polymer lens design that has been fabricated and is
currently being integrated into the fourth generation IOC prototype. The
incorporation of diffractive lens elements into the intraocular camera to further
reduce the IOC mass and to provide additional degrees of freedom for correcting
image aberrations is covered in Chapter 6. The final chapter contains a summary of
the thesis and a discussion of future research directions.
Chapter 1 References
[1] E. Margalit and S. R. Sadda, “Retinal and optic nerve diseases,” Artificial
Organs, vol. 27, no. 11, pp. 963-974, November 2003.
[2] C. A. Curcio, N. E. Medeiros, and C. L. Millican, “Photoreceptor loss in age-
related macular degeneration,” Investigative Ophthalmology and Visual
Science, vol. 37, no. 7, pp. 1236-1249, June 1996.
[3] “Vision problems in the U.S.—Prevalence of adult vision impairment and age-
related eye diseases in America,” National Eye Institute, 2002.
[4] J. D. Weiland, W. Liu, and M. S. Humayun, “Retinal prosthesis,” Annual
Reviews of Biomedical Engineering, vol. 7, pp. 361-401, March 2005.
[5] J. I. Loewenstein, S. R. Montezuma, and J. F. Rizzo III, “Outer retinal
degeneration: An electronic retinal prosthesis as a treatment strategy,” Archives
of Ophthalmology, vol. 122, no. 4, pp. 587-596, April 2004.
[6] B. W. Jones, C. B. Watt, and R. E. Marc, “Retinal remodeling,” Clinical and
Experimental Optometry, vol. 88, no. 5, pp. 282-291, September 2005.
[7] M. S. Humayun, E. de Juan, Jr., and R. J. Greenberg, “Visual prosthesis and
method of using same,” US Patent 5935155, August 10, 1999.
[8] J. D. Weiland, D. Yanai, M. Mahadevappa, R. Williamson, B. V. Mech, G. Y.
Fujii, J. Little, R. J. Greenberg, E. de Juan Jr., and M. S. Humayun, “Visual
task performance in blind humans with retinal prosthetic implants,” in
Proceedings of the Annual International Conference of the IEEE Engineering
in Medicine and Biology Society (EMBC), vol. 2, 2004, pp. 4172-4173.
[9] M. S. Humayun, J. D. Weiland, G. Y. Fujii, R. Greenberg, R. Williamson, J.
Little, B. Mech, V. Cimmarusti, B. G. Van, G. Dagnelie, and E. de Juan Jr.,
“Visual perception in a blind subject with a chronic microelectronic retinal
prosthesis,” Vision Research, vol. 43, no. 24, pp. 2573-2581, November 2003.
[10] M. J. McMahon, A. Caspi, J. D. Dorn, K. H. McClure, M. S. Humayun, and R.
J. Greenberg, “Quantitative assessment of spatial vision in Second Sight retinal
prosthesis subjects,” Frontiers in Optics, The Annual Meeting of the Optical
Society of America, San Jose, CA, 2007, paper FThP1.
[11] Second Sight Medical Products, “Second Sight completes U.S. phase I
enrollment and commences European clinical trial for the Argus II retinal
implant,” Press Release, February 2008. Available online: http://www.2-
sight.com/press-release2-15-final.html.
16
[12] Second Sight Medical Products, “Ending the journey through darkness:
Innovative technology offers new hope for treating blindness due to retinitis
pigmentosa,” Press Release, January 2007. Available online: http://www.2-
sight.com/Argus_II_IDE_pr.htm.
[13] R. Eckmiller, M. Becker, and R. Hunermann, “Dialog concepts for learning
retina encoders,” in Proceedings of the IEEE International Conference on
Neural Networks, vol. 4, 1997, pp. 2315-2320.
[14] M. Schwarz, B. J. Hosticka, R. Hauschild, W. Mokwa, M. Scholles, and H. K.
Trieu, “Hardware architecture of a neural net based retina implant for patients
suffering from retinitis pigmentosa,” in Proceedings of the IEEE International
Conference on Neural Networks, vol. 2, 1996, pp. 653-658.
[15] M. Schwarz, R. Hauschild, B. J. Hosticka, J. Huppertz, T. Kneip, S. Kolnsberg,
L. Ewe, and H. K. Trieu, “Single-chip CMOS image sensors for a retina
implant system,” IEEE Transactions on Circuits and Systems II: Analog and
Digital Signal Processing, vol. 46, no. 7, pp. 370-377, July 1999.
[16] A. Y. Chow and N. S. Peachey, “The subretinal microphotodiode array retinal
prosthesis,” Ophthalmic Research, vol. 30, no. 3, pp. 195-196, May 1998.
[17] A. Y. Chow, V. Y. Chow, K. H. Packo, J. S. Pollack, G. A. Peyman, and R.
Schuchard, “The artificial silicon retina microchip for the treatment of vision
loss from retinitis pigmentosa,” Archives of Ophthalmology, vol. 122, no. 4, pp.
460-469, April 2004.
[18] E. Zrenner, A. Stett, S. Weiss, R. B. Aramant, E. Guenther, K. Kohler, K.-D.
Miliczek, M. J. Seiler, and H. Haemmerle, “Can subretinal microphotodiodes
successfully replace degenerated photoreceptors?,” Vision Research, vol. 39,
no. 15, pp. 2555-2567, July 1999.
[19] S. S. Lane, B. D. Kuppermann, I. H. Fine, M. B. Hamill, J. F. Gordon, R. S.
Chuck, R. S. Hoffman, M. Packer, and D. D. Koch, “A prospective multicenter
clinical trial to evaluate the safety and effectiveness of the implantable
miniature telescope,” American Journal of Ophthalmology, vol. 137, no. 6, pp.
993-1001, June 2004.
[20] I. Lipshitz, A. Loewenstein, M. Reingerwitz, and M. Lazar, “An intraocular
telescopic lens for macular degeneration,” Ophthalmic Surgery and Lasers,
vol. 28, pp. 513-517, June 1997.
[21] E. Peli, I. Lipshitz, and G. Dotan, “Implantable miniaturized telescope (IMT)
for low vision,” in Vision Rehabilitation: Assessment, Intervention and
Outcomes, C. Stuen, A. Arditi, A. Horowitz, M. A. Lang, B. Rosenthal, and K.
Seidman, Eds., Lisse: Swets & Zeitlinger, 2000, pp. 200-203.
17
[22] J. L. Demer, F. I. Porter, J. Goldberg, H. A. Jenkins, and K. Schmidt,
“Adaptation to telescopic spectacles: Vestibulo-ocular reflex plasticity,”
Investigative Ophthalmology and Visual Science, vol. 30, pp. 159-170, January
1989.
[23] G. M. Gauthier and D. A. Robinson, “Adaptation of the human vestibular
ocular reflex to magnifying lenses,” Brain Research, vol. 92, pp. 331-335,
April 1975.
[24] L. A. Spitzberg, R. T. Jose, and C. L. Kuether, “Behind the lens telescope: A
new concept in bioptics,” Optometry and Vision Science, vol. 66, pp. 616-620,
September 1989.
[25] E. Peli, E. Fine, and A. Labianca, “The detection of moving features on a
display: The interaction of direction of motion, orientation, and display rate,”
in Technical Digest of Papers, Society for Information Display, SID-98, 1998,
pp. 1033-1036.
[26] E. Peli, “The optical functional advantages of an intraocular low-vision
telescope,” Optometry and Vision Science, vol. 79, no. 4, pp. 225-233, April
2002.
[27] W. Liu, K. Vichienchom, M. Clements, S. C. DeMarco, C. Hughes, E.
McGucken, M. S. Humayun, E. de Juan, Jr., J. D. Weiland, and R. Greenberg,
“A neuro-stimulus chip with telemetry unit for retinal prosthetic device,” IEEE
Journal of Solid-State Circuits, vol. 35, no. 10, pp. 1487-1497, October 2000.
[28] W. Liu and M. S. Humayun, “Retinal prosthesis,” in Digest of Technical
Papers, IEEE International Solid-State Circuits Conference (ISSCC), 2004, pp.
218-219.
[29] M. Sivaprakasam, L. Wentai, M. S. Humayun, and J. D. Weiland, “A variable
range bi-phasic current stimulus driver circuitry for an implantable retinal
prosthetic device,” IEEE Journal of Solid-State Circuits, vol. 40, no. 3, pp.
763-771, March 2005.
[30] M. Land, N. Mennie, and J. Rusted, “The roles of vision and eye movements in
the control of activities of daily living,” Perception, vol. 28, no. 11, pp. 1311-
1328, November 1999.
[31] P. Nasiatka, A. Ahuja, N. R. B. Stiles, M. C. Hauer, R. N. Agrawal, R. Freda,
D. Guven, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr., “Intraocular
camera for retinal prostheses,” Annual Meeting of the Association for Research
in Vision and Ophthalmology (ARVO), Ft. Lauderdale, FL, 2005, poster B480.
18
[32] P. Nasiatka, M. C. Hauer, N. R. B. Stiles, J.-C. Lue, S. Takahashi, R. N.
Agrawal, R. Freda, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr.,
“Intraocular camera for retinal prostheses,” Annual Meeting of the Association
for Research in Vision and Ophthalmology (ARVO), Ft. Lauderdale, FL, 2006,
poster B554.
[33] M. C. Hauer, P. Nasiatka, N. R. B. Stiles, J.-C. Lue, R. N. Agrawal, J. D.
Weiland, M. S. Humayun, and A. R. Tanguay, Jr., “Intraocular camera for
retinal prostheses: Optical Design,” Frontiers in Optics, The Annual Meeting
of the Optical Society of America, San Jose, CA, 2007, paper FThT1.
19
Chapter 2
INTRAOCULAR CAMERA DESIGN ISSUES
AND FIRST PROTOTYPES
2.1 Introduction
The fundamental goal of our team’s research effort is to design, fabricate, and
test an intraocular camera (IOC) for use in conjunction with an epiretinal
microstimulator array. As discussed in the introductory chapter, replacing the
extraocular camera with an intraocular camera will allow for normal foveation,
enabling the subject to use a natural combination of head and eye movements to
rapidly and accurately fixate on objects in their environment.
While the motivation for developing an intraocular camera for retinal
prostheses is clear, the feasibility of doing so is not as obvious. A brief examination
of the sensitive biological environment and unique application of prosthetic vision
indicates that traditional optical system design rules for digital cameras do not
directly apply to the development of an intraocular camera for retinal prostheses.
Instead, the required system configuration and imaging properties are uniquely
influenced by:
(1) the refractive power and aberrations of the biological cornea,
(2) aggressive surgical and physiological constraints that limit the possible size,
shape, mass, location, and power consumption of the camera,
20
(3) results of visual psychophysics tests on minimum allowable pixellation levels
and pre-/post-pixellation image processing,
(4) the specific implementation (size and pitch) of the microstimulator array, and
(5) feedback from subjects in clinical trials on what they actually see when
implanted with current generation visual prostheses.
This last point is especially poignant in that it clarifies the uncomfortable fact that we
cannot really know or determine what will work best for these subjects until the first
intraocular camera and next-generation high-resolution electrode arrays are
implanted in humans. Nevertheless, results from current clinical trials with human-
implanted retinal prostheses make it abundantly clear that this is a promising
research task well worth undertaking. In the meantime, there are several
implementations that we can envision for an intraocular camera, and a
comprehensive analysis and exploration of this unique design space is needed so that
we are prepared to make the best design decisions possible at this point. We can
then refine the design when we have the opportunity to ask a blind person what they
actually see when implanted with the first intraocular camera.
This chapter covers the initial studies, experiments, and prototype efforts
carried out by our team in close collaboration with ophthalmological surgeons to
address the feasibility of an intraocular camera and to develop our present design
parameters and research questions. These initial studies have led to several
important observations that in turn enable us to form the following three hypotheses
that guide our current research efforts:
21
Scientific hypothesis – Human psychophysics can tolerate low pixellation levels.
Surgical/biomedical hypothesis – An intraocular camera can be surgically
implanted in the location of the natural crystalline lens, and can be maintained
there chronically.
Technological hypothesis – A very short focal length, low f-number, compact
intraocular camera can be designed and packaged to provide sufficient resolution.
2.2 Implications of visual psychophysics studies of prosthetic vision
An important, and perhaps unique, feature of the intraocular camera research
effort is the comprehensive interdisciplinary approach we have taken. This enables
the consideration of novel camera designs that may be unconventional for standard
imaging systems, but are nonetheless ideally suited to the application of prosthetic
vision. Our intraocular camera design strategy therefore incorporates
psychophysical testing results with consideration of the properties of the imaging
lens, the image sensor array, the design layout and pad distribution of the electrode
array, and the nature of electric field and current spreading in retinal tissue. With
this systems level approach, several effects that could be perceived as limitations
from a more limited research perspective may actually prove to be beneficial when
examined more closely. This section includes results from psychophysical studies
that were carried out in our group by Noelle R. B. Stiles and Dr. Armand R.
Tanguay, Jr. to reveal optimal pixellation and image pre- and post-processing
requirements that have important implications for the optical system design of the
22
intraocular camera, such as relaxed imaging requirements and the importance of
optical pre-blur.
As the first human implants were performed with 4 × 4 microstimulator
arrays (for the Second Sight® Argus I epiretinal device), and the current generation
comprises a 6 × 10 array [1, 2], it is clear that we are operating in the very low
pixellation limit. The images in Figure 2-1 give a sense of how a common visual
scene (shown later) might appear when downsampled to only 16 pixels (4 × 4). The
left image is in color, which could at least enable the viewer to potentially
distinguish some features in a scene (e.g., if outdoors, blue might be sky, green could
be grass, and gray might be a sidewalk). However, color coding is not presently
feasible with the prosthesis, because the stimulation electrodes are large relative to
individual ganglion cells and thereby emit current pulses that stimulate multiple
retinal cells that code for various color differences simultaneously. The second
image, shown in grayscale, is perhaps more realistic [Figure 2-1(b)]. Furthermore,
the electrodes have large gaps between them, which may result in regions of minimal
retinal stimulation, as simulated by the black grid in Figure 2-1(c). While the
subjects have not (so far) reported seeing crisp, gridded, grayscale images like the
one depicted here, this is still indicative of the severely low resolution images
provided to them, which are clearly below the threshold needed to recognize objects
and navigate through various environments. As noted in Chapter 1, it was in fact
quite surprising to discover that these subjects could learn to recognize certain
objects over time given how hopeless this would seem to a sighted individual
looking at the images in Figure 2-1.
23
(a) (b) (c)
Figure 2-1 Simulation of human perception of a kitchen scene (shown in full
resolution in Figure 2-2) with a 4 × 4 array of square pixels (a) color, (b) grayscale,
(c) grayscale with gridlines at a 50% duty cycle (From [3]).
The question remains, what is the minimum allowable number of pixels
required to provide functional vision? In their initial psychophysical investigations,
Noelle Stiles and Dr. Tanguay used pixellated test images to measure the recognition
abilities of sighted human subjects as a function of the degree of pixellation of both
foreign and familiar images, as well as a function of the degree of post-pixellation
blur. They found that array sizes of approximately 25 × 25 (625 total pixels) appear
to be sufficient for object recognition, navigation, collision avoidance, and
locomotion [3]. These results are in substantial agreement with earlier results by
Humayun [4] and Cha [5-7] using different psychophysical testing protocols. To
their initial surprise, they further found that blurring the pixellated images
significantly after pixellation, particularly in the case of gridded images (to simulate
gaps between electrodes within the array), radically improves perceptual recognition
tasks, most likely by removing false edges and allowing natural object edges to
become more prominent. These results are illustrated in Figure 2-2. The topmost
picture is a high resolution image of a kitchen scene (this is the original image used
24
for the images shown previously in the 16-pixel simulations in Figure 2-1). The
second row in the figure shows the same scene when downsampled to 16 × 16,
25 × 25, and 32 × 32 square pixels, respectively. At 32 × 32 (1024 pixels), the basic
Original (1.06 Megapixels, JPEG compressed, color)
16 × 16 25 × 25 32 × 32
16 × 16, 33% blur 25 × 25, 33% blur 32 × 32, 33% blur
Figure 2-2 Photograph of a kitchen scene (top row), in color, and block pixellated
down to three different array sizes (Row 2). The application of 33% Gaussian post-
blur is shown for each case in Row 3 (From [3, 18, 19, 23-25]).
25
image features would be easily recognizable to a person familiar with the
environment (center island, walkways, pantry doors, table with flower arrangement).
However, the images in the third row show the greatly improved perception possible
when the pixellated images are significantly post-blurred. Here, the details in the
16 × 16 image are remarkably recognizable in comparison with both the 16 × 16 and
32 × 32 non-blurred images. This is an excellent result, as it implies that current and
field spreading during retinal stimulation may have an unintended positive
perceptual impact on post-operative visual acuity.
These results have implications for the post-image-acquisition blurring that
may occur at the retina due to current spreading at the electrode-tissue interface. It is
also of interest to understand the effects of imperfect sampling of the original images
captured by the camera. Figure 2-3 shows the effects of using low-fill-factor
sampling sensor arrays in forming pixellated images in the very low pixellation limit.
In this case, a photo of a bus (upper left image) is first pixellated at low fill factor
(upper right image), then blurred (lower left image), showing strong aliasing
artifacts. The incorporation of a 40% pre-pixellation blur, followed by pixellation
(sampling), and finally followed by 40% post-pixellation blur yields a much
improved image (lower right image). This result has a direct implication for the
design constraints used for the camera in that a fairly significant amount of optical
blur is not only tolerable, but also useful for reducing aliasing artifacts when the
image is heavily downsampled in the image processing stage.
26
Original (color) 25 × 38
25 × 38, 40% post-blurred 40% pre-blurred, 25 × 38, 40% post-blurred
Figure 2-3 Photo of a bus (upper left) pixellated using low fill-factor sampling
(upper right) resulting in severe aliasing problems that are not resolved by post-
blurring the image (lower left). The lower right image was significantly pre-blurred
(40% Gaussian) prior to the low fill-factor sampling to reduce aliasing artifacts, and
then post-blurred (From [3, 18, 19, 23-25]).
These are early results of an ongoing research effort in the area of visual
psychophysics as it relates to the intraocular camera and prosthetic vision.
Additional image processing and video capture tools were recently developed to
incorporate new functions to analyze the effects of motion and color in low-
resolution prosthetic vision. These will be described in the next chapter.
2.3 Camera placement and configuration options
The problem of placing a camera inside the eye to incorporate natural
foveation into the retinal prosthesis system raises several questions about
27
conceivable placement locations and camera configurations. First, it is not sufficient
to simply consider the volume of space available for camera hardware within the 2.5-
cm spherical cavity of the eye. For reference, the basic anatomy of the human eye is
depicted in Figure 2-4. The entire camera system must be small enough to be
inserted through a standard and safe surgical incision, ideally similar to those made
during routine cataract replacement procedures. The camera must also be shaped
such that it will not contact and cause damage to any part of the corneal endothelium
or other sensitive eye tissues during or after implantation.
Several placement configurations for an intraocular camera were considered
and analyzed, ultimately resulting in the choice to place the entire camera system in
the location of the natural crystalline lens. Before arriving at this decision, the most
favored option had the camera lens located in place of the crystalline lens with the
Figure 2-4 Illustration of the human eye (From http://iei.ico.edu/patients/
fo_eyeanatomy.html, Illinois Eye Institute).
28
sensor placed at the retina, collocated with the microstimulator array. This
configuration would allow the optical system to have a focal length similar to that of
the human eye, about 17 mm from the back of the lens to the retina. The f-number
(the ratio of the effective focal length to the lens aperture diameter) would then be in
the range of f/3 to f/8, depending on the biological pupil size and chosen lens
aperture. From a lens design perspective, this focal length and f-number would make
it fairly straightforward to design a lens system that generates sharp, high-resolution
images over a relatively wide field of view. After all, these dimensions are less
aggressive than those found in many small, commercially available digital cameras.
However, even with the high sensitivity of modern CMOS and CCD sensors, it is
unlikely that there would be sufficient illumination at the sensor plane for imaging in
low light conditions at these f-numbers. More importantly though, it was determined
that this configuration would likely produce retinal damage due to heat dissipation
by the sensor and its associated electronics when placed so close to the fragile retinal
wall. Furthermore, the curvature of the retina argues for both a curved and flexible
image sensor array/microstimulator combination to avoid retinal folds and possible
retinal detachment.
To avoid these problems, the idea of suspending the sensor in the mid-
vitreous was considered. The vitreous humor is an excellent heat sink, and this
configuration would allow a shorter, but still reasonable, focal length and lower
f-number. It was determined, though, that there is no good way to safely mount and
stabilize the imaging chip in the middle of the eye. The retinal walls are extremely
fragile and extend nearly to the iris, prohibiting the option of attaching haptic
29
supports to the wall of the mid-eyeball. This configuration would also suffer from
image stabilization problems associated with the sensor chip moving relative to the
lens as the eyeball moves and rotates. This would, in fact, be an issue with any
configuration that places the lens and sensor on physically separate mechanical
mounts rather than packaged as a single, stable optical system.
Further analyses showed that it would actually be optimal from a
physiological perspective to package the camera as a small, single device and
implant it in place of the crystalline lens, assuming this is feasible. There are several
reasons why placing the camera in the location of the natural crystalline lens makes
sense: (1) heat is dissipated into both aqueous and the vitreous humors, and the
camera is far from the fragile retina, (2) there are well-defined surgical techniques
for removing the crystalline lens and replacing it with an intraocular lens, and (3) the
crystalline lens sac is ideal for supporting the device with haptic arms to hold it in
place with minimal post-operative movement or trauma.
For the optics, this means that, while the natural crystalline lens has 17 mm to
focus incoming light onto the retina, the intraocular camera would need to perform
this task in under 3 mm, and perhaps even less. The f-number will be close to unity,
making this a very fast lens system. From an optical design perspective, this task is
now a significant challenge because of the extremely short focal length required in
combination with a short overall system length. For an optical designer, a short
(effective) focal length is challenging, but achievable as long as there is sufficient
room to incorporate multiple lenses. This system does not have that room, nor can it
likely accommodate the mass of multiple lenses. In fact, a single-lens system would
30
be the most desirable because of its low weight and simplicity. So, while this
placement makes the most sense surgically and biologically, we were initially
skeptical about providing sufficient image quality. However, given that the imaging
requirements of a prosthetic vision system are significantly relaxed, it was deemed
worthwhile to further evaluate the feasibility of placing the entire camera system in
the location of the crystalline lens. Results from our initial prototypes, described in
the following sections, demonstrate the feasibility of this concept and lead to our
technological hypothesis (stated earlier): that a very short focal length, compact
intraocular camera can be designed and packaged to provide sufficient resolution for
a retinal prosthesis.
Additional imaging issues for the intraocular camera are the retention of
accommodation and the eventual provisioning of both central and peripheral vision.
The human crystalline lens accommodates for near and far distances by changing its
shape. On average, a normally sighted person uses this accommodation to see
images clearly from a near point of approximately 25 cm to a far point of infinity [8].
The intraocular camera must also be designed to meet its performance goals over
such a range of object distances. Videos taken with the first intraocular camera
prototype revealed that the small focal length of the intraocular camera in
conjunction with the relaxed imaging requirements of the visual prosthesis
application enable an extremely long depth of field, exceeding that of normally
sighted eyes. Another imaging requirement that makes this camera significantly
different than standard digital cameras, is that only the portion of the sensor
corresponding to the central visual field (approximately ±5 degrees) requires high
31
resolution imaging, while the sensor area corresponding to the periphery only
requires sufficient performance for basic shape and motion detection, much like the
human visual system. Since a person is considered legally blind when their visual
field of view drops below 20 degrees (±10 degrees), future-generation electrode
arrays will span at least this 20-degree field of view, and may also incorporate
additional electrodes in the periphery of the retina that could be stimulated when
motion is detected in the peripheral visual field.
2.4 Surgical and physiological constraints
The intraocular camera must meet aggressively low weight requirements.
For placement within the crystalline lens sac, the camera must be sufficiently
lightweight to be held stable by a set of haptics, similar to those used for intraocular
lenses. The adult crystalline lens is reported to weigh approximately 250 mg,
providing a potential upper limit for the entire system weight [9]. The choice of lens
and packaging materials will play a significant role in meeting this goal. These
materials must also be biocompatible. Final constraints on size, shape, weight, and
haptics design are being defined through analysis, surgical team experience, and with
the aid of surgical implantation experiments using mechanical models of intraocular
camera prototypes.
The introduction into the eye cavity of a camera sensor and any
corresponding image processing circuitry will require power and dissipate heat.
Most important in this regard is avoiding heat-induced damage to the fragile retinal
tissue. One study in canine eyes showed that a dissipation of 50 mW at the retina
32
induced immediate visible whitening of the retinal tissue, with permanent damage
seen for powers of 100 mW or more [10, 11]. This is a key factor that argues against
the placement of a camera sensor at the retina in conjunction with the
microstimulator array, even though this placement would allow for a much longer
focal length in the design of the camera’s lens system. The same study also showed
that a 500 mW heater placed mid-vitreous in the eye for two hours reached a steady
state within 60 minutes and caused no damage to the biological tissues (a
temperature rise of approximately 5° C was observed in the vitreous and
approximately 2° C near the retina). The vitreous humor acts as an excellent heat
sink, making heat a less significant, but still important, concern for the power
requirements of the intraocular camera when placed in the crystalline lens sac, well
away from the fragile retinal surface. The experimental results described above were
reproduced in simulation using a detailed 3D numerical model of the eye that was
developed by a team of researchers at North Carolina State University, led by
Dr. Gianluca Lazzi, a fellow member of the BMES ERC (headquartered at USC)
[12]. This paper shows simulation results for the temperature rise induced by the
electronics, electrode array, and telemetry system of different implementations of the
extraocular camera version of the retinal prosthesis system. Our team is working
with Dr. Lazzi’s group to model the temperature rise of the ocular media due to the
power dissipation of various intraocular camera designs (different housing materials,
geometries, sensors, and electronics) when placed in the crystalline lens sac.
Preliminary results indicate that an extreme case of an IOC dissipating 75 mW at the
rear surface of the package (with no thermal management included in the housing
33
design) generates a thermal rise of less than 0.5° C at the retinal surface. This is
roughly three times the power dissipation estimated for the fourth generation IOC
prototype that is currently in development.
Before the lens design characteristics of the intraocular camera can be
specified, it must be decided precisely where and how the camera will be implanted
in the location previously occupied by the crystalline lens. These choices will limit
the allowable weight and dimensions of the camera, which in turn affect the range of
apertures and focal lengths possible. There are two primary haptic placement
options being considered for surgically implanting the intraocular camera in the
location of the natural lens: in the capsular bag (“in the bag”) or in the ciliary sulcus
(“in sulcus”). As an example from the prior art, the Implantable Miniature Telescope
(IMT™ by Dr. Isaac Lipshitz) from VisionCare is implanted in the capsular bag as
shown in Figure 2-5 [13]. In this case, the haptics (the supporting arms that secure
the device in place) are located at the equatorial plane of the natural crystalline lens.
This plane, often termed the “lens haptic plane,” is located approximately 4 mm
behind the posterior cornea [14, 15]. Alternatively, the haptics can be placed in the
ciliary sulcus, about 0.5 mm in front of the lens haptic plane and just behind the iris,
as indicated in the figure.
The specific design of the camera package, supporting haptics, and surgical
placement is being determined through analyses, vibration table experiments, and
surgical experiments in animal eyes. Research on the packaging and image sensor is
being led by Dr. Armand R. Tanguay, Jr. and our research associate, Dr. Patrick J.
34
In sulcus
IMT™
In the lens capsule
In sulcus
IMT™
In the lens capsule
Figure 2-5 Illustration of potential placements for the haptic support elements for an
intraocular device located in the position of the natural crystalline lens. The diagram
(After [13]) shows the VisionCare Implantable Miniature Telescope (IMT™ by
Dr. Isaac Lipshitz) implanted in the lens capsule and supported by the iris. An
alternative placement for the haptic support elements would be in the ciliary sulcus,
just in front of the lens capsule.
Nasiatka, both experts in optical materials and devices. Initial surgical studies were
performed by Dr. June Kim, a visiting ophthalmological surgeon working with Dr.
Mark S. Humayun at the USC Doheny Retina Institute. Results of his experiments
are described in Section 2.8. These results and those of ongoing experiments are
being used to further understand and quantify the constraints on size, shape, mass,
housing materials, haptic mounts, and precise placement of the intraocular camera.
2.5 First generation prototype: Aspherical lens mounted in a webcam
To date, three intraocular camera prototypes have been developed and a
surgically-implantable fourth-generation device is in development. The first
generation prototype was designed for qualitative video image analysis and as a
proof-of-principle that a single aspherical lens would prove sufficient for the
intraocular camera imaging requirements. The second was designed and packaged
35
for a first demonstration of surgical implantation in a canine eye. The third
generation focused on decreasing IOC size and weight, while also significantly
reducing the number of input/output wires and power consumption of the imaging
sensor. These first three prototypes used a single, commercially available glass
aspherical lens originally designed for laser-diode coupling. The fourth generation
will incorporate the first customized lens system, lightweight materials, an ultra-low
power imaging chip and control electronics, and a significantly reduced form factor
from previous generations.
Initial analyses of surgical constraints and imaging requirements to match
epiretinal microstimulator arrays resulted in the selection of the Lightpath®
Technologies Model #370330 aspherical lens for use in a first prototype to
demonstrate and test the properties and feasibility of the intraocular camera concept.
This commercially available lens is designed for laser diode coupling into optical
fibers in the infrared. However, a ray-tracing analysis indicated that it would provide
sufficient white-light imaging performance to evaluate the general imaging
properties of a short-focal length intraocular camera. The lens has a 5-mm clear
aperture, 3.1-mm effective focal length in air (f/0.6), 2.57-mm center thickness, and
is constructed of high-index O’Hara PBH71 glass with n = 1.93 at 559 nm (enabling
an aqueous humor-to-lens index difference of 1.93 − 1.33 = 0.60, similar to typical
air-glass interfaces that usually have index differences of 0.5 to 0.6). With its glass
mounting flange, the total lens weight is approximately 336 milligrams
(density = 6.05 g/cm³). Photographs of the Lightpath® Technologies aspherical lens
placed in a modified webcam for qualitative analysis of video images are shown in
Figure 2-6, along with a video frame taken in an office setting. The multi-lens
system that originally came with the webcam was removed and replaced with the
single-element plano-convex Lightpath® Technologies lens. The threading in the
mounting tube allowed the lens to be translated relative to the color CCD sensor for
focusing.
Figure 2-6 Commercial web camera (upper left) with its lens system removed and
modified to accept the prototype aspherical lens (upper right) for qualitative video
image evaluation. The 3.1-mm focal length aspherical lens demonstrated excellent
depth of field and sufficient imaging performance for the intraocular camera
application. The bottom image is a frame of a video taken in an office setting (From
[18]).
Several videos were taken in different settings to qualitatively demonstrate
the imaging properties of a single element, short focal length aspherical lens. The
videos showed that the imaging performance was more than sufficient to perform
indoor and outdoor tasks such as reading, locating a pair of keys on a desk, and
driving, even when the lens was operated in air and not within the aqueous humor,
and without the additional refracting power of the corneal lens. The observed results
exceed the performance that could be achieved with the grayscale and coarse
pixellation levels of currently envisioned retinal prostheses, indicating that a single-
element aspherical lens design customized to work in conjunction with the prosthesis
is quite feasible, and that further miniaturization beyond the size of this first
prototype intraocular camera seems possible.
2.6 Analytical and ray tracing analyses of the first generation prototype
2.6.1 Depth of field experiments and calculations
A key result of the experimental videos taken with the modified-webcam
prototype is the nearly infinite depth of field exhibited by the short focal length lens.
Video experiments, analytical modeling, and numerical simulation of the camera’s
depth of field were performed by Noelle Stiles, Dr. Nasiatka, and Dr. Tanguay.
They determined that when the lens is focused at an object distance very close to the
camera (about 2.5 cm from the lens), objects remain in focus (with acceptable blur
for this application) from a distance of approximately 0.5 cm to infinity, a near depth
of field limit that exceeds that of a normal human eye (which is approximately 25 cm
to infinity). They found this result to be surprising because it is commonly
understood in photography that a low f-number lens has a relatively limited depth of
field. A closer look revealed that this very large depth of field is the result of the
short focal length and small aperture of the lens—a property that has become better
known with the advent of small digital cameras in portable devices. This result is
significant to the lens design portion of the IOC research because it helps determine
the best object distance to focus at during analyses and optimizations. The laws of
geometrical optics can be used to derive the equations for the depth of field [16]. If
the camera is focused at an object distance $s_o$, then the nearest object distance
that remains in focus, as defined by the minimum allowable blur at the sensor
plane, is

$s_{o,\mathrm{near}} = \dfrac{s_o\,(s_{o,\mathrm{hyperfocal}} - f)}{s_{o,\mathrm{hyperfocal}} + s_o - 2f} = \dfrac{s_o f^2}{f^2 + (f/\#)\,d\,(s_o - f)}$ ,   (2-1)
in which $f$ is the focal length, $(f/\#)$ is the f-number (equal to $f/D$), $d$ is
the minimum allowable diameter of the circle of confusion (blur spot diameter),
$D$ is the diameter of the entrance pupil, and the following quantity is defined as
the hyperfocal distance:
$s_{o,\mathrm{hyperfocal}} = \dfrac{f^2}{(f/\#)\,d} + f = \dfrac{fD}{d} + f$ .   (2-2)
The farthest object distance that remains in focus is
$s_{o,\mathrm{far}} = \dfrac{s_o\,(s_{o,\mathrm{hyperfocal}} - f)}{s_{o,\mathrm{hyperfocal}} - s_o}\,, \qquad s_{o,\mathrm{far}} = \infty \ \text{if the denominator} \le 0$ .   (2-3)
The front depth of field is therefore $s_o - s_{o,\mathrm{near}}$, and the rear
depth of field is $s_{o,\mathrm{far}} - s_o$. The total depth of field is

$\text{Depth of field} = s_{o,\mathrm{far}} - s_{o,\mathrm{near}}$ .   (2-4)
All object distances are measured from the first principal point of the optical system.
When the lens is focused at the hyperfocal distance, the depth of field extends from
half the hyperfocal distance to infinity. For a given f-number, the hyperfocal
distance scales as $f^2$. For a short focal-length lens, the hyperfocal distance is very
close to the lens and varies inversely with the allowable blur circle diameter. The
intraocular camera is expected to have a focal length of approximately 2 mm, an f/#
in the range of 0.8 to 1.2, and a relatively large allowable blur circle diameter
(potentially 20 to 100 microns depending on the target retinal microstimulator array).
This means that the hyperfocal distance will be on the order of 3.5 to 25 cm. This
translates to a depth of field that ranges from half these hyperfocal distances (1.5 to
12.5 cm) to infinity.
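As a quick sanity check on these estimates, Equations (2-1) through (2-3) can be evaluated directly. The following Python sketch is illustrative only (it is not part of the dissertation's toolchain); the function names and the sample parameters are assumptions chosen to match the ranges quoted above.

```python
# Illustrative thin-lens depth-of-field calculator based on Eqs. (2-1)-(2-3).
# Function names and the sample IOC-like parameters are assumptions.
import math

def hyperfocal(f, f_num, d):
    """Hyperfocal distance, Eq. (2-2): f^2 / ((f/#) d) + f."""
    return f * f / (f_num * d) + f

def near_limit(s_o, f, f_num, d):
    """Nearest in-focus object distance, Eq. (2-1)."""
    s_h = hyperfocal(f, f_num, d)
    return s_o * (s_h - f) / (s_h + s_o - 2.0 * f)

def far_limit(s_o, f, f_num, d):
    """Farthest in-focus object distance, Eq. (2-3); infinite if s_o >= s_h."""
    s_h = hyperfocal(f, f_num, d)
    denom = s_h - s_o
    return math.inf if denom <= 0 else s_o * (s_h - f) / denom

# IOC-like parameters (all lengths in mm): f = 2 mm, f/1.2, 100 um blur circle.
f, N, d = 2.0, 1.2, 0.100
s_h = hyperfocal(f, N, d)              # ~35 mm, i.e. ~3.5 cm
print(f"hyperfocal distance: {s_h:.1f} mm")
# Focused at the hyperfocal distance: in focus from ~s_h/2 to infinity.
print(near_limit(s_h, f, N, d), far_limit(s_h, f, N, d))
```

With f = 2 mm, the extremes of the quoted ranges [(f/#) = 1.2 with d = 100 µm, and (f/#) = 0.8 with d = 20 µm] give hyperfocal distances of roughly 35 mm and 252 mm, consistent with the 3.5 to 25 cm estimate above.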
The significance of this result for the development of an intraocular camera
cannot be overstated. Placement of the camera in the crystalline lens sac not
only mitigates the problem of heat dissipation, but the corresponding need for a short
focal length results in a nearly infinite depth of field. This negates the need for a
focusing mechanism, and satisfies the need for accommodation. Since close objects
remain in focus, the subject will be able to clearly examine small details or letters
that are difficult to identify at normal viewing distances (25 cm) by simply bringing
the object within a few centimeters of their eye.
2.6.2 Comparison of experimental and simulated images of eye chart letters
The modified-webcam prototype lens system was modeled using commercial
ray tracing software (Code V®) to provide quantitative analyses to accompany and
compare with the qualitative conclusions derived from experimental videos. A ray
traced model of the Lightpath® Technologies aspherical lens mounted in the
webcam and focused at 2.5 cm is shown in Figure 2-7, showing a fan of rays drawn
at 0° and 5° field angles. The shape parameters for the aspherical lens used in the
model were taken from the vendor data sheet for the Lightpath® Technologies
Model #370330 aspherical lens. The mounting tube limits the field of view to
approximately 38°. The Lightpath® Technologies lens is designed for convex-plano
operation, but in order to fit the lens in the webcam’s mounting tube, it had to be
placed backwards with the plano side facing object space. In this reversed
orientation, the clear aperture is limited to ~2 mm because rays outside this aperture
are close to total internal reflection at the rear aspherical surface (reaching TIR at
~3 mm diameter for on-axis rays). The mounting flange also partially stops down
the lens. The f-number is approximately f/1.5 for this case. An encircled energy
Figure 2-7 Code V® ray traced model of the modified webcam focused at 2.5 cm
from the first surface of the lens, showing incoming ray fans at 0° (red rays) and 5°
(blue rays). The labeled elements of the figure are the Lightpath aspherical lens, the
threaded tube used for focusing, and the sensor focused for objects at 2.5 cm.
analysis shows that 20% of the energy in an on-axis blur spot at the image plane is
contained within a 30 µm diameter and 76% of the energy is within a 118 µm
diameter. These values closely match the experimental estimates made by Noelle
Stiles and Dr. Tanguay by measuring the widths of blurred edges in various test
videos. The combination of the experimental videos and the ray traced models
provides a helpful correlation between the image quality observed and the
corresponding quantitative lens performance.
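Encircled-energy figures like those quoted above can be estimated from any spot diagram, i.e., a list of ray intersection coordinates at the image plane. The Python sketch below is an illustration under stated assumptions: the Gaussian test spot stands in for real ray-trace data (it is not the Lightpath lens result), and the function name is a hypothetical choice.

```python
# Illustrative encircled-energy estimate from Monte-Carlo spot coordinates.
# The Gaussian "blur spot" below is a stand-in for real ray-trace data.
import numpy as np

def encircled_energy_diameter(x, y, fraction):
    """Diameter of the centroid-centered circle containing `fraction` of rays."""
    r = np.hypot(x - x.mean(), y - y.mean())
    return 2.0 * np.quantile(r, fraction)

rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 20.0, (2, 100_000))   # toy blur spot, sigma = 20 um
d20 = encircled_energy_diameter(x, y, 0.20)
d76 = encircled_energy_diameter(x, y, 0.76)
print(f"20% of energy within {d20:.0f} um, 76% within {d76:.0f} um")
```

For equal-weight rays the encircled-energy fraction is simply the fraction of ray count; a weighted quantile would be needed if rays carried unequal energies.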
It was also of interest to develop an analytical tool that could be used to
generate and evaluate bitmap images for direct qualitative comparison with the video
images, and for evaluating how letters and line pairs from standard eye and bar
charts will look when viewed with the system. This was accomplished with a macro
written in Code V® that uses a built-in function in the software called the
“Illumination” function [17]. The macro reads in an image file that can be scaled,
rotated, and placed at any location in object space. The software then performs a
Monte-Carlo ray trace from the object to the image plane. Rays are launched from
randomly selected points on the object toward the first lens surface at randomly
selected angles. Depending on the details in the object, anywhere from 1 to 20
million rays need to be traced to achieve sufficient resolution at the image plane (it
takes about one minute to trace 5 million rays through this two-surface system on a
2.5 GHz PC).
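The Monte-Carlo procedure just described can be illustrated with a toy model. The sketch below is a deliberate simplification (an ideal thin lens and a one-dimensional object in Python, rather than Code V's trace of the real two-surface asphere); all names and parameters are illustrative assumptions, but the launch-trace-and-histogram structure is the same.

```python
# Toy Monte-Carlo imaging model: random object points, random aperture
# crossings, geometric propagation through an ideal thin lens, and a
# histogram at the sensor plane. Defocus blur appears naturally when the
# sensor is not at the conjugate image distance.
import numpy as np

def mc_image_1d(obj, obj_h, s_o, f, aperture_d, v_sensor, n_rays, bins, rng):
    """Trace n_rays from a 1-D radiance profile `obj` spanning height obj_h,
    located a distance s_o from an ideal thin lens of focal length f."""
    p = obj / obj.sum()
    idx = rng.choice(obj.size, size=n_rays, p=p)       # random object points
    y_o = (idx / (obj.size - 1) - 0.5) * obj_h         # launch heights
    v_o = s_o * f / (s_o - f)                          # conjugate image distance
    y_i = -y_o * v_o / s_o                             # ideal image heights
    a = rng.uniform(-aperture_d / 2, aperture_d / 2, n_rays)  # aperture points
    y_s = a + (y_i - a) * (v_sensor / v_o)             # ray heights at sensor
    hist, edges = np.histogram(y_s, bins=bins)
    return hist, edges

rng = np.random.default_rng(1)
bar = np.zeros(101); bar[40:61] = 1.0                  # a 1-D "bar" object
# Lens loosely like the prototype (f = 3.1 mm), object 25 mm away, f/1.5.
hist, edges = mc_image_1d(bar, 10.0, 25.0, 3.1, 3.1 / 1.5,
                          25.0 * 3.1 / (25.0 - 3.1),
                          200_000, np.linspace(-1.0, 1.0, 41), rng)
```

As in the Code V macro, image resolution is set by the ray count: the histogram noise falls as the square root of the number of rays traced per bin.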
Another video was recorded using the prototype camera in a low-light indoor
environment. A Snellen eye chart was brought toward the camera from across the
room (at approximately 16 feet), exchanged for an eye chart with smaller lines at a
distance of a few feet, and then brought all the way up to 2.5 cm in front of the lens [Figure 2-8(a),
(b), and (c)]. The camera was focused at 2.5 cm before the video was taken. The
very last frame of the video shows the image of a line of 1 mm tall letters on the
Snellen eye chart. For comparison, a 12-pt. font has (primarily capital) letters that
Figure 2-8 (a), (b), and (c) Video frames of Snellen eye charts, with the final frame
showing a line of 1 mm tall letters at 2.5 cm; (d) Monte-Carlo ray trace image of a
1 mm tall “E” at 2.5 cm [picture inverted for direct comparison with the video
frame in (c)] (From [3, 23-25]).
are about 4 mm tall. These 1 mm tall letters are blurred but still readable, and letters
that are two to four times as tall are easily read when placed 2.5 cm from the lens. A
model of the webcam was created in software, and focused at 2.5 cm, as shown in
Figure 2-7. The Monte Carlo ray tracing routine was used to simulate the imaging of
Snellen letters at the same object distances used in the video. The object is a high-
resolution (1200 × 1200) black letter “E” on a white background, drawn to exactly
match the shape and proportions used on a Snellen eye chart (5 bars high and 5 bars
wide). Figure 2-8(d) shows the 1 mm tall letter “E” at 2.5 cm from the camera, after
being inverted for direct comparison with the “E” in the video frame (the far left
letter) in Figure 2-8(c). The resulting image in the ray tracing software closely
matches the letter shown in the video. In Figure 2-9(a) and (b), the ray tracing
software is used to show the image of a 2 mm tall “E” at 2.5 cm, which is clearly
readable, and a 37 cm tall “E” at 16 feet (chosen to maintain the same image size on
Figure 2-9 Ray traced images of (a) a 2 mm tall “E” at 2.5 cm, and (b) a 37 cm tall
“E” at 16 ft (size chosen to maintain the same image height on the sensor).
the sensor), illustrating the essentially infinite depth of field of the camera. As noted
before, all of these experimental and simulation results are for the case of (1) a
commercial, non-optimized aspherical lens, (2) operated in air instead of within the
eye using the biological cornea, and (3) with the lens reversed from the orientation
that minimizes spherical aberrations. As a consequence, these results are especially
encouraging.
2.7 Second generation prototype: Aspherical lens and commercial CCD
packaged for first surgical implantation in a canine eye
A second generation prototype was constructed using the Lightpath®
Technologies aspherical lens mounted on a Panasonic CCD image sensor array
Figure 2-10 Second generation prototype of the intraocular camera designed for
surgical implantation in a canine eye (From [18, 19, 23-25]).
designed for endoscopic applications and sealed in a novel three-element hermetic
package [18, 19]. This prototype was used for acute testing during a first surgical
implantation in a canine eye, as shown in Figure 2-10. The sensor is attached to a
ribbon cable that connects to a circuit board to power the sensor and convert the
video frames into an NTSC signal for display on a television monitor. The ribbon
cable was folded and secured to the package so that it could transition through the
sclera of the eye during surgery while the camera was placed within the ocular
cavity. The camera was successfully inserted into the eye by an ophthalmological
surgeon. Video images as seen through the corneal surface were viewed on a TV
monitor. The folding of the ribbon cable caused the camera to press up against the
corneal surface, degrading the anticipated image quality, but the surgery was deemed
a successful first implantation of an operational intraocular camera.
2.8 Surgical implantation of mechanical models in porcine eyes
Mechanical models of the intraocular camera were surgically implanted in
porcine eyes in two procedures performed by Dr. June Kim, a visiting surgeon at the USC
Doheny Eye Institute. The biology of a porcine, or pig, eye is similar to that of a
human in many respects and serves as a good model for this type of implantation
study. The first model was a plastic cylinder 6.9 mm long by 4.5 mm wide, with a
mass of 250 mg in air. It was fabricated at the USC Doheny Retina Institute’s Eye
Concepts Laboratory. Figure 2-11(a) shows the model suspended in air between two
metal slabs, using PMMA haptics for support. Several factors influenced the initial
specifications of this mechanical model. These factors included: the mass and size
of the natural crystalline lens (approximately 250 mg and up to 5 mm thick [9, 20]);
analysis of the space needed for the optics, sensor, and housing; recommendations
from the surgical team; and prior experience from the second generation prototype
implant in a canine eye. For the first surgical test, the model was implanted in the
capsular bag (crystalline lens sac). The surgeon first made a 150° scleral incision,
then cut open the anterior lens capsule and extracted the crystalline lens following
phacoemulsification. The posterior capsule membrane was also cut open to
accommodate the device length. The device was then inserted and supported by its
Figure 2-11 (a) First mechanical model of an intraocular camera suspended by two
PMMA haptics. The device is 6.9 mm long by 4.5 mm in diameter, with a mass of
250 mg in air. (b) Photo of the model being implanted in the capsular bag in a
porcine eye. (c) Second model being implanted in the ciliary sulcus in a porcine eye. This
device is shorter at 5.5 mm long by 4.5 mm in diameter, and has a mass of 270 mg in
air. (d) After successful implantation, the device remained firmly in place while the
eye was repeatedly poked, manipulated, and shaken.
haptics, which push against the edges of the capsular bag. The incision was then
closed and sutured. A video frame taken during the implantation is shown in Figure
2-11(b). The surgeon reported that the device contacted and damaged the cornea
during insertion and recommended that the device length be reduced. A shorter
model was fabricated and used for a second surgical test. In this test, the surgeon
placed the haptics supporting the test device in the ciliary sulcus. This second model
was 5.5 mm long and had a mass of 270 mg. The diameter and haptics were the
same as before. The model was inserted successfully with no damage to the cornea.
The insertion is shown in Figure 2-11(c). After the eye was sutured, the surgeon
aggressively poked, prodded, and shook the eye to qualitatively evaluate the stability
of the implanted camera [Figure 2-11(d)]. It remained remarkably stable and
centered throughout the manipulations, though destabilization at the level of image
pixel sizes was not monitored. Additional acute and chronic surgical tests with
significantly smaller, lower mass mechanical models (to match our fourth generation
IOC design concept) are being performed to continue this evaluation.
2.9 Continued development of placement, size, and mass constraints
The experience gained from these surgical tests allows us to further refine our
specifications of the camera size, placement, and mass. These parameters in turn
influence the optical lens design possibilities. Post-operative analyses and
discussions with the surgical team led to the table of current parameter goals for the
intraocular camera shown in Table 2-1.
It should be noted that these parameters are continuing to evolve as more
tests are completed and various design concepts are evaluated. The current length
constraint of < 4.5 mm is to avoid damaging the cornea during and after
implantation. The surgeons would prefer the camera to be as short as possible. This,
however, implies a severely short focal length and, consequently, a reduced image
IOC Parameter          Current Goal
Total device length    < 4.5 mm (as short as possible)
Optical system length  < 3.5 mm
Device diameter        TBD based on selected housing geometry; < 10 mm in
                       perimeter (3.2 mm circular diameter, or 2.5 mm on a
                       side for a square profile)
Implantation method    In the capsular bag; haptic supports
Placement              Front surface ≥ 2 mm behind posterior cornea
Total device mass      < 75 mg
Lens mass              < 20 mg

Table 2-1 Present goals for the size, placement, and mass parameters of the
intraocular camera.
height on the sensor, making the optical design significantly more challenging. This
will be discussed further in the following chapters, and leads to the need for novel
optical design solutions. Studies of anterior-chamber phakic intraocular lenses (IOLs
placed in front of the iris for patients who still have their natural lens), and results
from surgical trials with the IMT™ indicate that it is desirable to have a clearance of
at least 2 mm behind the posterior cornea to avoid damage to the corneal
endothelium during and after implantation [21, 22]. If the haptics are mounted
slightly anterior on the device for balance, and the haptic plane is located
approximately 4 mm from the posterior cornea, then a camera length of 4.5 mm
implies that the front face of the package will reside about 2 mm from the posterior
cornea. To allow for a longer device, we could conceive of a design in which the
camera housing is even more asymmetrically mounted with respect to the haptic
plane. The center of mass will in any case most likely be forward of the centerline of
the camera, such that the haptics are attached closer to the front of the device with
the portion of the camera behind the haptics longer than the portion in front. This
would allow for a device that is longer than 4.5 mm if the surgical procedure could
be worked out such that there is no contact with the cornea during implantation.
This concept is still under consideration for future surgical testing, though it appears
that we can meet the 4.5 mm length constraint at this time.
The initial goal for the mass of the entire camera was < 250 mg in air, to
roughly match the mass of the natural crystalline lens. As mentioned previously, initial
surgical studies showed qualitatively that 250 mg and 270 mg camera models
remained firmly in place after implantation. However, recent developments and
quantitative experiments using a vibration table, special fixturing, and a high speed
camera to evaluate a novel haptic mounting scheme have led to a significant decrease
in the expected (and desired) mass of the IOC from 250 mg to approximately 75 mg.
It is also useful to note that when the camera is immersed in the liquid cavity of the
eye, it will effectively weigh less than it does in air due to buoyancy effects. For
example, the IMT™ is reported to weigh 96 mg in air and only 46 mg in the aqueous
humor [21]. For the optical lens portion of the system, the current design goal is
aggressively set at < 20 mg in air (not including the image sensor, electronics, or
housing). This leaves sufficient margin for the mass of the rest of the package. The
incorporation of lightweight lens materials is therefore critical to the optical design
and will be covered in detail in Chapter 4.
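The buoyancy correction noted above follows from Archimedes' principle: a submerged body of density ρ_body has apparent mass m_air·(1 − ρ_fluid/ρ_body). A small illustrative Python check (the function name is a hypothetical choice, and the aqueous humor density is taken as a nominal ~1.0 g/cm³):

```python
# Apparent (buoyancy-corrected) mass of a submerged body, illustrative only.
def apparent_mass(m_air_mg, rho_body, rho_fluid=1.0):
    """Apparent mass (mg) of a body of density rho_body (g/cm^3) in a fluid."""
    return m_air_mg * (1.0 - rho_fluid / rho_body)

# Lightpath PBH71 lens: 336 mg at density 6.05 g/cm^3 -> ~280 mg in fluid.
print(round(apparent_mass(336, 6.05)))
# The quoted IMT figures (96 mg in air, 46 mg in aqueous humor) imply an
# average device density near 96/(96 - 46) ~ 1.92 g/cm^3.
print(round(apparent_mass(96, 1.92)))
```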
Results from experiments with the third generation camera [23], as depicted
in Figure 2-12(a), and initial designs for a fourth generation camera indicated that the
size and mass can and should be reduced well beyond the initial estimates used to
develop the first prototypes [24, 25]. The camera was initially expected to be about
the size of a Tylenol® tablet (nine millimeters in diameter and up to five or six
millimeters thick). Recent advances in the package design, and an increased
understanding of the challenges of surgically implanting an active device in the eye,
led to the new goal of developing a camera that is smaller than a BB gun pellet and
significantly lower in mass than 250 mg. In fact, we often note that the fourth
generation IOC is about a third the size of a Tic-Tac®. A cylindrical mass-
equivalent mechanical model of the fourth generation intraocular camera (without
haptic supports) is shown in comparison with a Tylenol® tablet in Figure 2-12(b).
This particular model is 3.2 mm in diameter and 4.5 mm long.
Figure 2-12 Size comparisons for the (a) 3rd generation IOC and (b) the current
4th generation IOC, shown next to a Tylenol® tablet (From [24, 25]).
The significantly reduced size of the fourth generation package is primarily
driven by the desire to limit the surgical incision to under 5 linear millimeters. It is
also important for the surgeons to be able to see around the implanted camera post-
surgically in order to inspect the condition of the microstimulator array at the retina.
An analysis of the geometry of a camera that is 10 mm in perimeter and 4.5 mm in
length, implanted in place of the crystalline lens, indicated that these dimensions will
allow a surgeon to inspect the retina with an ophthalmoscope by dilating the pupil to
see around the device.
2.10 Summary
This chapter covered the prior art from our research group that led to the
development of our current understanding of the design issues for developing an
intraocular camera for retinal prostheses. Experiments in visual psychophysics
showed that, when combined with the appropriate amount of image pre- and post-
pixellation blur, as few as 600 to 1000 pixels are needed in a prosthetic vision system
to provide sufficient resolution for basic navigation and object recognition. This
implies relaxed imaging constraints for the camera that may allow for a very
compact, single-lens design. In collaboration with our team of ophthalmological
surgeons, we determined that the optimal placement for the IOC is in the location of
the natural crystalline lens. While other placement concepts would provide more
ease and flexibility in the camera engineering, this location allows for good thermal
dissipation, provides a stable place to mount the camera with haptic supports, and
enables a surgical procedure that is similar to a cataract surgery or implantation of an
intraocular lens.
Experience gained in surgical and benchtop experiments with early IOC
prototypes and mass-equivalent models indicated that the entire device should be less
than 4.5 mm in length by about 3 mm in diameter, and have a total mass less than
75 mg. These stringent size and mass constraints imply an optical system that will
likely have room for only a single lens element with an extremely short focal length
of about 2 mm. Our first two IOC prototypes used a commercial aspherical lens with
a 3.1 mm focal length, fabricated from a high-index glass to allow for additional
light bending at the aqueous-humor-to-glass interface. In spite of initial skepticism
about being able to provide sufficient imaging with a single element, short focal
length lens, these prototypes demonstrated that, not only can a small aspherical lens
provide sufficient imaging for a retinal prosthesis, but the compact optical system
provides a nearly infinite depth of field as well. However, this high-index lens is far
too heavy at 336 mg for the IOC system, leading to the need to evaluate other
options that use lighter weight materials.
The following chapter describes a laboratory tool for capturing video images
from IOC prototypes and processing them with customized software routines to
support more advanced visual psychophysics experiments being conducted by
members of our research team. These tools help to further our understanding of
prosthetic vision so that we may optimize the IOC and other components of a retinal
prosthesis system to provide the most functional form of vision possible within the
technological limitations. Chapter 4 then revisits the IOC design goals and issues
developed in this chapter that have direct implications for the optical lens system.
The optical design space for an IOC implanted in the crystalline lens sac is fully
explored to develop a corresponding set of optical design goals and specifications
that are then used in Chapters 5 and 6 to evaluate specific design implementations.
Chapter 2 References
[1] Second Sight Medical Products, “Ending the journey through darkness:
Innovative technology offers new hope for treating blindness due to retinitis
pigmentosa,” Press Release, January 2007. Available online: http://www.2-
sight.com/Argus_II_IDE_pr.htm.
[2] Second Sight Medical Products, “Second Sight completes U.S. phase I
enrollment and commences European clinical trial for the Argus II retinal
implant,” Press Release, February 2008. Available online: http://www.2-
sight.com/press-release2-15-final.html.
[3] N. R. B. Stiles, M. C. Hauer, P. Lee, P. Nasiatka, J.-C. Lue, J. D. Weiland,
M. S. Humayun, and A. R. Tanguay, Jr., “Intraocular camera for retinal
prostheses: Design constraints based on visual psychophysics,” Frontiers in
Optics, The Annual Meeting of the Optical Society of America, San Jose, CA,
2007, poster JWC46.
[4] M. S. Humayun, “Intraocular retinal prosthesis,” Transactions of the
American Ophthalmological Society, vol. 99, pp. 271-300, November 2001.
[5] K. Cha, K. Horch, and R. A. Normann, “Simulation of a phosphene-based
visual field: visual acuity in a pixelized vision system,” Annals of Biomedical
Engineering, vol. 20, no. 4, pp. 439-449, July 1992.
[6] K. Cha, K. Horch, and R. A. Normann, “Reading speed with a pixelized
vision system,” Journal of the Optical Society of America A, vol. 9, no. 5, pp.
673-677, May 1992.
[7] K. Cha, K. Horch, and R. A. Normann, “Mobility performance with a
pixelized vision system,” Vision Research, vol. 32, no. 7, pp. 1367-1372, July
1992.
[8] E. Hecht, Optics, 4th ed., Boston: Addison Wesley, 2001, Chapter 5.
[9] S. Siik, “Lens autofluorescence: in aging and cataractous human lenses,
clinical applicability.” Ph.D. Dissertation, University of Oulu, 1999.
[10] M. S. Humayun, J. D. Weiland, B. Justus, C. Merrit, J. Whalen, D.
Piyathaisere, S. J. Chen, E. Margalit, G. Fujii, R. J. Greenberg, E. de Juan,
Jr., D. Scribner, and W. Liu, “Towards a completely implantable, light-
sensitive intraocular retinal prosthesis,” in Proceedings of the Annual
International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC), vol. 4, 2001, pp. 3422-3425.
[11] D. V. Piyathaisere, E. Margalit, S. J. Chen, J. S. Shyu, S. A. D’Anna, J. D.
Weiland, R. R. Grebe, L. Grebe, G. Fujii, S. Y. Kim, R. J. Greenberg, E. de
Juan, Jr., and M. S. Humayun, “Heat effects on the retina,” Ophthalmic
Surgery, Lasers and Imaging, vol. 34, no. 2, pp. 114-120, March 2003.
[12] K. Gosalia, J. Weiland, M. Humayun, and G. Lazzi, “Thermal elevation in
the human eye and head due to the operation of a retinal prosthesis,” IEEE
Transactions on Biomedical Engineering, vol. 51, no. 8, pp. 1469-1477,
August 2004.
[13] E. Peli, I. Lipshitz, and G. Dotan, “Implantable miniaturized telescope (IMT)
for low vision,” in Vision Rehabilitation: Assessment, Intervention and
Outcomes, C. Stuen, A. Arditi, A. Horowitz, M. A. Lang, B. Rosenthal, and
K. Seidman, Eds., Lisse: Swets & Zeitlinger, 2000, pp. 200-203.
[14] S. Norrby, “Using the lens haptic plane concept and thick-lens ray tracing to
calculate intraocular lens power,” Journal of Cataract and Refractive
Surgery, vol. 30, no. 5, pp. 1000-1005, May 2004.
[15] S. Norrby, E. Lydahl, G. Koranyi, and M. Taube, “Clinical application of the
lens haptic plane concept with transformed axial lengths,” Journal of
Cataract and Refractive Surgery, vol. 31, no. 7, pp. 1338-1344, July 2005.
[16] A. R. Greenleaf, Photographic Optics, New York: The MacMillan Company,
1950, pp. 25-27.
[17] “LUM - Illumination analysis,” in Code V® 9.70 Reference Manual, Optical
Research Associates, Pasadena, CA, 2006, Chapter 23.
[18] P. Nasiatka, A. Ahuja, N. R. B. Stiles, M. C. Hauer, R. N. Agrawal, R. Freda,
D. Guven, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr.,
“Intraocular camera for retinal prostheses,” Annual Meeting of the
Association for Research in Vision and Ophthalmology (ARVO), Ft.
Lauderdale, FL, 2005, poster B480.
[19] P. Nasiatka, A. Ahuja, N. R. B. Stiles, M. C. Hauer, R. N. Agrawal, R. Freda,
D. Guven, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr.,
“Intraocular camera for retinal prostheses,” Frontiers in Optics, The Annual
Meeting of the Optical Society of America, Tucson, AZ, 2005, paper FThI4.
[20] M. J. Stafford, “The histology and biology of the lens,” Optometry Today, pp.
23-30, January 2001.
[21] S. S. Lane, B. D. Kuppermann, I. H. Fine, M. B. Hamill, J. F. Gordon, R. S.
Chuck, R. S. Hoffman, M. Packer, and D. D. Koch, “A prospective
multicenter clinical trial to evaluate the safety and effectiveness of the
implantable miniature telescope,” American Journal of Ophthalmology, vol.
137, no. 6, pp. 993-1001, June 2004.
[22] M. Pop, Y. Payette, and M. Mansour, “Ultrasound biomicroscopy of the
Artisan phakic intraocular lens in hyperopic eyes,” Journal of Cataract and
Refractive Surgery, vol. 28, no. 10, pp. 1799-1803, October 2002.
[23] P. Nasiatka, M. C. Hauer, N. R. B. Stiles, J.-C. Lue, S. Takahashi, R. N.
Agrawal, R. Freda, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr.,
“Intraocular camera for retinal prostheses,” Annual Meeting of the
Association for Research in Vision and Ophthalmology (ARVO), Ft.
Lauderdale, FL, 2006, poster B554.
[24] P. Nasiatka, M. C. Hauer, N. R. B. Stiles, J.-C. Lue, S. Takahashi, R. N.
Agrawal, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr., “An
intraocular camera for retinal prostheses,” Frontiers in Biomedical Devices
Conference (BioMed), Irvine, CA, 2007, paper 38109.
[25] P. Nasiatka, M. C. Hauer, N. R. B. Stiles, N. Suwanmonkha, M. Leighton, J.-
C. Lue, M. S. Humayun, and A. R. Tanguay, Jr., “An intraocular camera for
provision of natural foveation in retinal prostheses,” Annual Fall Meeting of
the Biomedical Engineering Society (BMES), Los Angeles, CA, 2007, poster
P6.96.
Chapter 3
VIDEO CAPTURE AND IMAGE PROCESSING TOOLS FOR
PROSTHETIC VISION EXPERIMENTS
3.1 Introduction
Initial psychophysical studies performed in our group on pixellated prosthetic
vision were carried out using image processing functions in Adobe Photoshop®,
which sufficed for basic pixellation and blurring of still images. To support more
advanced psychophysical studies using both still and motion video images, several
MATLAB scripts were developed to flexibly implement custom image processing
functions unique to psychophysical experiments relating to epiretinal prosthetic
vision. It was also of interest to apply these unique functions to video images
captured with intraocular camera prototypes. To this end, a LabVIEW program was
developed to interface with various video capture cards in order to acquire images
and videos from IOC prototypes. A second LabVIEW program was developed to
easily post-process a queue of video streams by calling the aforementioned
MATLAB functions, with a unique set of user-specified parameters for each. These
custom programs allow our team to evaluate the effects of pixellation, color, motion,
pre- and post-pixellation blur, misalignments between the locations of physical
stimulator electrodes and elicited percepts, dead electrodes, and spaces between
pixels due to insulating gaps between the microstimulator electrodes. Several
parameters associated with the simulation of each of these effects can be varied, and
it is simple to incorporate new functions into the software.
3.2 Image processing functions for prosthetic vision simulation
With the assistance of Pamela Lee, a participant in the NSF Research
Experience for Undergraduates (REU) program at USC, a graphical user interface
was developed in MATLAB that allows the user to load a still image, specify a set
of desired processing parameters, successively apply different image processing
functions, and view the processed image on screen. A screenshot of the program is
shown in Figure 3-1.
Figure 3-1 Screenshot of the custom graphical user interface implemented in
MATLAB to process still images for simulating pixellated prosthetic vision.
The three primary functions of the MATLAB program are pixellation,
gridding, and Gaussian blur. For each of these main functions, various parameters
and options can be specified. For example, there is a fill-factor option and a random
dropout feature within the pixellation function. Also, the grid function includes the
ability to apply random, local shifts in position to each super pixel. Each of these
functions is described in detail below.
The pixellation function is accomplished by dividing the image into a
specified number of “super pixels.” The number of super pixels is intended to
correspond to the number of electrodes in a microstimulator array. The pixel values
within each super pixel are replaced with the average value of the pixels within a
specified subregion of the super-pixel area of the original image (the actual number
of pixels in the image is not reduced, except to crop off any excess pixels at the right
and bottom edges of the overall image that do not evenly divide into the desired
number of super pixels). A fill factor between 0 and 1 may be specified,
corresponding to the ratio of the area of the specified subregion to that of the super
Figure 3-2 Illustration of the pixellation function with the ability to simulate the
non-unity fill factor of most CMOS and CCD pixels. The values in the
photosensitive area of each super pixel are averaged to determine the super pixel
color and brightness. The software also allows for random super-pixel dropouts in
order to simulate electrodes that do not elicit visual percepts (photo courtesy of
http://www.mikelevin.com).
pixel. Fill factors less than unity reproduce the effects of aliasing that can occur in
CCD and CMOS sensors due to the non-unity fill factor of the photosensitive area in
each pixel. Thus, the brightness and color sensed by the small photosensitive area
determines the value applied to the entire pixel. The random dropout feature, when
enabled, simulates the chance that one or more electrodes may not elicit a visual
percept for one of a number of reasons. These functions are illustrated in Figure 3-2.
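The averaging, fill-factor, and dropout behavior described above can be sketched in a few lines of Python with NumPy (a sketch only, not the group's MATLAB implementation; the function name and arguments are illustrative):

```python
import numpy as np

def pixellate(img, n_rows, n_cols, fill_factor=1.0, dropout_prob=0.0, rng=None):
    """Average a centered photosensitive subregion of each super pixel and
    paint that value across the whole super pixel; optionally black out
    random super pixels to simulate electrodes that elicit no percept.

    img: 2-D grayscale array (a color image would be handled per channel).
    fill_factor: photosensitive-subregion area / super-pixel area (0 to 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    sh, sw = img.shape[0] // n_rows, img.shape[1] // n_cols  # super-pixel size
    out = np.empty((sh * n_rows, sw * n_cols), dtype=float)  # crops excess edge pixels
    scale = np.sqrt(fill_factor)           # linear scale: area goes as scale**2
    rh = max(1, int(round(sh * scale)))
    rw = max(1, int(round(sw * scale)))
    for i in range(n_rows):
        for j in range(n_cols):
            if rng.random() < dropout_prob:              # dead electrode
                out[i*sh:(i+1)*sh, j*sw:(j+1)*sw] = 0.0
                continue
            r0 = i*sh + (sh - rh) // 2                   # center the subregion
            c0 = j*sw + (sw - rw) // 2
            out[i*sh:(i+1)*sh, j*sw:(j+1)*sw] = img[r0:r0+rh, c0:c0+rw].mean()
    return out
```

Pre- or post-blur would then be applied with an ordinary two-dimensional convolution before or after this step.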
The Gaussian blur function can be used to pre-blur the image before
pixellation to lessen or eliminate aliasing artifacts, and to post-blur the image
after pixellation (in some cases after a grid function has been applied). The user specifies
the horizontal and vertical dimensions, in pixels, of the Gaussian convolution kernel,
as well as a blur radius (standard deviation, σ). The two-dimensional convolution
kernel, as a function of the pixel numbers along each dimension (n₁, n₂) and the
blur radius σ, is

    h(n₁, n₂) = exp[ −(n₁² + n₂²) / (2σ²) ] .        (3-1)
The values of the pixel numbers, (n₁, n₂), are (0, 0) at the center of the convolution
kernel. The kernel is typically square, with n₁,max = n₂,max, but a rectangular kernel is
allowed and would result in asymmetric blurring.
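The kernel above can be evaluated directly on a pixel grid; the unit-sum normalization here is an addition (the equation as written is unnormalized) so that blurring preserves mean image brightness:

```python
import numpy as np

def gaussian_kernel(n1_max, n2_max, sigma):
    """2-D Gaussian convolution kernel evaluated on a (2*n1_max + 1) by
    (2*n2_max + 1) pixel grid with (0, 0) at its center, then normalized
    to unit sum so that blurring preserves mean image brightness."""
    n1 = np.arange(-n1_max, n1_max + 1)[:, None]
    n2 = np.arange(-n2_max, n2_max + 1)[None, :]
    h = np.exp(-(n1**2 + n2**2) / (2.0 * sigma**2))
    return h / h.sum()
```

With n1_max ≠ n2_max the kernel is rectangular and the blur becomes asymmetric, as noted above.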
The grid function simulates the insulated spaces between the edges of
electrodes at the retina. The user specifies a duty cycle for the grid, such that a duty
cycle of 50% corresponds to a gap between electrodes that is equal to the electrode
width, 0% generates no grid, and 100% results in a completely black image. After
an image is gridded, the local positions of the super pixels can be randomized within
the limits of the original super pixel area before gridding. This causes a random
jitter in the super pixel locations that represents the possibility of a subject’s visual
system generating percepts at slightly different locations than the actual stimulation
sites on the retina. The random location of each gridded super pixel within its
original super-pixel area can be generated using a uniform or Gaussian probability
distribution with a specified standard deviation. The first case generates more
extreme results because there is an equal probability for the gridded super pixel to be
located anywhere within its confined square. In the second case, the Gaussian
probability density function is centered on the super pixel, and a typical standard
deviation is 0.2 (20% of the super-pixel width), so that the gridded super pixels
cluster about the centers of their corresponding original super pixels.
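A sketch of the gridding and jitter logic, assuming the input has already been pixellated so that each super pixel is uniform (Python; apply_grid and its options are illustrative names, not the actual MATLAB code):

```python
import numpy as np

def apply_grid(pix_img, n_rows, n_cols, duty_cycle, jitter=None, sigma=0.2, rng=None):
    """Black out the insulating gaps between super pixels and, optionally,
    jitter each gridded super pixel within its original super-pixel square.

    duty_cycle: fraction of each super-pixel width given over to the gap
        (0.0 -> no grid, 0.5 -> gap equal to electrode width, 1.0 -> black).
    jitter: None, 'uniform', or 'gaussian'; sigma is the Gaussian standard
        deviation as a fraction of the super-pixel width.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = pix_img.shape
    sh, sw = h // n_rows, w // n_cols
    eh = int(round(sh * (1.0 - duty_cycle)))   # gridded super-pixel height
    ew = int(round(sw * (1.0 - duty_cycle)))   # and width, in pixels
    out = np.zeros_like(pix_img)
    for i in range(n_rows):
        for j in range(n_cols):
            dr, dc = (sh - eh) // 2, (sw - ew) // 2      # nominal: centered
            if jitter == 'uniform':
                dr = rng.integers(0, sh - eh + 1)
                dc = rng.integers(0, sw - ew + 1)
            elif jitter == 'gaussian':
                dr = int(np.clip(dr + rng.normal(0.0, sigma * sh), 0, sh - eh))
                dc = int(np.clip(dc + rng.normal(0.0, sigma * sw), 0, sw - ew))
            val = pix_img[i*sh + sh//2, j*sw + sw//2]    # uniform super-pixel value
            out[i*sh + dr:i*sh + dr + eh, j*sw + dc:j*sw + dc + ew] = val
    return out
```

The uniform option lets a gridded super pixel land anywhere in its square, which is why it produces the more extreme jitter described above.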
Images demonstrating the different possibilities of the gridding function are
shown in the upper four photos in Figure 3-3. The bottom two photos show the
original image before and after having the entire series of functions applied using
values that Noelle Stiles and Dr. Tanguay experimentally determined to be nearly
optimal for object recognition at this level of pixellation.
[Figure panels, top: Original; No randomization; Uniform randomization; Gaussian
randomization (σ = 0.1). Bottom: Original; 33% pre-blur, pixellated to 19 × 10,
10% random dropouts, Gaussian randomization (σ = 0.1), 50% duty cycle grid,
40% post-blur.]
Figure 3-3 Examples of the various customized processing functions that can be
applied for prosthetic vision simulations and psychophysical experiments. The top
four photos demonstrate the different options for the gridding function. The bottom
two photos show an image that has gone through an entire series of processing
functions. Note that the car is much more recognizable when appropriate pre- and
post-blur are applied despite the grid and dropouts (note that the contrast has been
enhanced in the final image to show these effects more clearly in the printed image).
3.3 Integration of video capture tools with prosthetic vision simulation
software
A flexible laboratory setup for capturing videos was developed to support
intraocular camera testing and visual psychophysics experiments with motion videos.
A dedicated computer was set up with two video capture cards from National
Instruments Corporation, one color (Model PCI 1405), and the other grayscale
(Model PCI 1410). Both of these cards are compatible with analog image sensor
arrays; a digital capture card may be added in the near future to interface with digital
camera sensors. To interface with custom, analog, intraocular camera sensors that do
not adhere to standard video protocols like NTSC and PAL, the grayscale capture
card has a programmable clock rate and a high dynamic range (10 bits, or 1,024 gray
levels). The enhanced dynamic range will support future experiments on dynamic
range compression and lateral adaptation algorithms that may be highly
advantageous for use with an intraocular camera.
As for the case of still image capture, a program was written in LabVIEW
that interfaces with the capture cards and allows the user to specify recording
parameters and to capture videos using either of the two capture cards. An additional
LabVIEW program allows the user to load a saved video and process it frame-by-
frame using any of the functions and parameters described previously for the
MATLAB still-image processing program. In this program, the data for each video
frame is passed from LabVIEW to MATLAB for processing, and then passed back
to LabVIEW to be saved in a processed video file. In this way, LabVIEW provides
the communication with the video capture hardware and an easy-to-program user
interface, while MATLAB is responsible for the mathematical processing of the
image frames. This combination utilizes the best features of both programs.
The amount of blurring needed for prosthetic vision simulations requires
unusually large convolution kernels (kernel sizes of 40 × 40 pixels are not
uncommon). Since convolution is a time-consuming computation, it can often take
up to 30 seconds to process each frame of a video (on a general-purpose computer),
depending on the selected parameters. For this reason, the LabVIEW program that
Figure 3-4 Screenshot of the LabVIEW program for queuing several experimental
videos to process (for overnight processing), each with individually selected
parameters.
post-processes experimental videos (via MATLAB) was modified to allow the user
to input a queue of videos to process, with individual parameters specified for each
video in the queue. This new queuing feature enables the user to set up a series of
videos to process that can be left to run overnight. A screenshot of this LabVIEW
program is shown in Figure 3-4.
3.4 Summary
These tools allow for rapid testing and characterization of current and future
intraocular camera prototypes. The image capture cards can be interfaced with
analog camera sensors currently under evaluation, and the video processing tools
enable further experiments in visual psychophysics to analyze the effects of motion
and color in prosthetic vision. The answers to these questions, in turn, inform
design decisions for the development of the intraocular camera. Two frames from a
video that was recorded and processed by Pamela Lee and Noelle Stiles to
demonstrate the capabilities of this new laboratory tool, as well as the significant
results of recent psychophysics experiments on motion video images, are shown in
Figure 3-5. The blur parameters applied to the video are nearly optimal for 20 × 30
pixellation and demonstrate a remarkable improvement in object recognition relative
to a straight pixellated image with no blur applied.
Figure 3-5 Frames from a video that are pixellated to 20 × 30 super pixels (top row)
in comparison with the same video frames when pre-blurred by 30% before
pixellation to 20 × 30 super pixels, and then post-blurred by 40% (bottom row). The
images on the left show a FedEx truck driving by, and the images on the right show a
sign that reads, “DO NOT BLOCK DRIVEWAY.”
Chapter 4
ANALYSIS OF THE LENS DESIGN SPACE FOR
AN INTRAOCULAR CAMERA
4.1 Introduction
Designing a camera for a retinal prosthesis for the blind poses a unique and
interesting imaging question – what is good enough? Unlike a conventional video
camera, the images captured by this camera will be pixellated, processed, and
converted into a set of neuronal stimuli to excite visual percepts in the blind that
resemble the scenes in front of them. A diffraction-limited, high-resolution image
over a wide field of view (FOV) is therefore not warranted for this application.
Rather, it is the size and pitch of the electrodes on the epiretinal microstimulator
array that will ultimately limit the resolution and visual field of the system.
Supplying high contrast at low light levels is also important. It
therefore seems reasonable to trade resolution for a decrease in f-number to achieve
greater illumination at the sensor plane. An intraocular camera must also be
designed to work in conjunction with the refractive power and inherent aberrations
of the corneal lens. These unique features enable us to consider a miniaturized
imaging system with a lower f-number and shorter optical system length than found
in even the smallest available digital cameras for portable devices like mobile phones
and PDAs (which typically have 5 to 10 mm optical system lengths) [1-6].
For any theoretical optical system design, there are numerous ways to
accomplish a given design goal. Fortunately, the practical parameter space is
reduced by constraints on performance characteristics, materials, intended device
environment, package size and mass, manufacturing and test methods, cost, and any
number of other application and end-user driven constraints. Certain limitations are
fundamental, while others are technological. After applying these boundary
conditions, the designer is left with a reduced range of parameters to explore that
lead to several possible design implementations, each with its tradeoffs.
Accordingly, the intraocular camera concept presents several known constraints,
possible design choices, and corresponding tradeoffs. These include:
- Field of view
  - Match to microstimulator array size and pitch
  - Legal blindness limit (20 degree minimum field of view)
  - Extended field of view to accommodate image processing algorithms
  - Central versus peripheral visual field resolution requirements
  - Effective focal length tradeoff with blur spot size (aberrations)
- f-number (effective focal length divided by entrance pupil diameter)
  - Lens diameter; incorporation of aperture stop
  - Optical throughput
  - Imaging performance (aberrations)
- Glass as compared with polymer lens materials
  - Mass
  - Size
  - Refractive index
  - Biocompatibility
  - Antireflection coatings
  - Manufacturing and prototyping methods
  - Cost
- Refractive as compared with hybrid refractive/diffractive lenses
  - Simplicity tradeoffs against complexity in manufacturing, test, and alignment
  - Incorporation of additional lens surfaces
  - Correction of aberrations (degrees of freedom)
  - Reduction in mass
Several assumptions can be made about the optical system that derive from
the experience gained during the prototyping efforts and research to date described in
Chapter 2, and summarized again here. Regarding device diameter, surgical
experiments indicate that it is desirable for the scleral incision made to insert
the device into the eye to be approximately 5 mm wide, or less (to facilitate a safe and
repeatable operation). This means the maximum allowable device diameter is 3.18
mm for a circular package, or 2.5 mm on a side for a square package. This assumes
that the haptic elements can be folded, that the device is inserted longitudinally, and
that the slit in the sclera is fairly rigid and does not stretch when opened. These are
all valid assumptions according to our team of ophthalmological surgeons. For this
reason, lens designs with apertures less than 2.5 mm are a best match to this
constraint, though apertures up to 3 mm will be considered, as surgical implantation
experiments are ongoing. Likewise, our current understanding of the surgical and
physiological constraints limits the length of the package to 4.5 mm. Consequently,
the optical system length is limited to approximately 3.5 mm (to leave margin for the
sensor and packaging). This directly affects the space available for lenses and
implies a very short focal length. This presents a challenge, as optical designers
generally prefer both a longer focal length and sufficient room to incorporate
additional lens surfaces to increase the degrees of freedom for reducing aberrations
across the field of view and wavelength spectrum. Furthermore, experiments using
mass-equivalent models for the IOC have indicated that a total device mass of less
than 75 mg is desirable (close to the mass of commercial intraocular lenses) to
ensure long-term stability. The goal for the mass of the lens portion of the system is
less than 20 mg, which is more than an order of magnitude lighter than the
commercial glass lens used in our first three prototypes. This requirement suggests
the incorporation of lightweight polymer lenses into the IOC, as well as the potential
advantages of employing diffractive elements to increase the optical design degrees
of freedom without adding significant mass to the system.
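The quoted package cross-sections follow from requiring half of the folded package perimeter to fit within the rigid scleral slit; a quick numerical check:

```python
import math

INCISION_MM = 5.0
# A slit of width s accommodates a cross-section of perimeter up to 2s:
#   circular package: pi * d / 2 <= s  ->  d <= 2s / pi
#   square package:   2 * side <= s    ->  side <= s / 2
max_circular_diameter = 2 * INCISION_MM / math.pi   # ~3.18 mm diameter
max_square_side = INCISION_MM / 2                    # 2.5 mm on a side
print(round(max_circular_diameter, 2), max_square_side)   # prints: 3.18 2.5
```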
Given these assumptions about the optical system, this chapter explores the
tradeoffs and implications of selecting different focal lengths, apertures, and lens
materials for the IOC. Also, since the IOC must work in conjunction with the optical
surfaces of the eye, the implementation of an accurate schematic eye model in a
computer-aided lens design program is covered. Further refinements and analyses
associated with specific design implementations using refractive and diffractive lens
elements are then covered in the following two chapters.
4.2 Incorporation of an accurate schematic eye model
During the initial stages of this project, OSLO® ray tracing software was
used to model the imaging properties of spherical and aspherical lenses inside a first
order model of the human eye, called the Gullstrand schematic eye model [7]. In this
model, the optical surfaces of the eye are approximated with spherical curvatures.
The first-order paraxial properties of this model are accurate, but the aberrations do
not match clinical measurements of real human eyes. Furthermore, the chromatic
dispersion properties of the eye materials are not included. As a first step to
developing custom lens designs for the IOC, a more accurate schematic eye model is
needed. This is critical as the corneal surfaces are actually aspherical, and the
chromatic properties of the cornea and aqueous humor will sum with those of any
lens system that is modeled inside the eye.
Several schematic eye models were surveyed from the literature, resulting in
the selection of the Liou and Brennan schematic eye model published in 1997 [8].
This model is one of three commonly used schematic eye models, in addition
to the Gullstrand first-order eye model [7-11]. The properties of these four models
were nicely summarized by Chen, et al., who faced the same decision of which
model to use for simulations of a photorefractive eye testing instrument [12]. They
used both the Navarro and the Liou and Brennan eye models for their simulations,
and compared the results with photorefractive tests on human subjects. Interestingly,
they found that the Liou and Brennan eye model, which has a lower spherical
aberration than the Navarro eye model, more accurately predicted the results for
Caucasian subjects, whereas the Navarro eye model more accurately matched data
from Asian subjects. These results are understandable due to large variations in the
spherical aberrations of eyes for people of different ages, genders, and ethnicities
[13].
Table 4-1 Comparison of the optical surface parameters of four major schematic
eye models (From [14]).
The Liou and Brennan eye model was chosen because it was developed to
represent the ocular anatomy of the eye as closely as possible by using large sets of
clinical data. The model includes the asphericities of the anterior and posterior
cornea, as well as of the crystalline lens. The crystalline lens is modeled using a
gradient index profile. The chromatic dispersion of both the surfaces and of the
ocular media are included. A Gaussian apodization of the pupil simulates the Stiles-
Crawford effect at the retina (the incident-angle dependence of the retina’s light
sensitivity). The pupil is decentered by 0.5 mm nasally, as is the case for real human
eyes. A schematic illustration of the Liou eye model and a table of its surface
properties are shown in Figure 4-1. In the table, the “Thickness” parameter specifies
the distance along the optical axis from the specified surface to the next following
surface.
Surface | Anatomy           | Conic constant | Radius (mm) | Thickness (mm) | Optical medium | Refractive index (at 555 nm)
1       | Anterior cornea   | −0.18          | 7.77        | 0.50           | Cornea         | 1.3760
2       | Posterior cornea  | −0.60          | 6.40        | 3.16           | Aqueous        | 1.3360
3       | Pupil (ant. lens) | −0.94          | 12.40       | 1.59           | Aqueous        | Gradient
4       | Lens division     | NA             | Infinite    | 2.43           | Lens           | Gradient
5       | Posterior lens    | 0.96           | −8.10       | 16.27          | Vitreous       | 1.3360
6       | Retina            | NA             | NA          | NA             |                |
Figure 4-1 Diagram and surface parameters of the Liou and Brennan schematic eye
model (After [8]). The optical power of the schematic eye is 60.35 diopters
(effective focal length = 16.57 mm), and the axial length is 23.95 mm.
The Liou and Brennan schematic eye was modeled and validated against the
published data using CodeV® ray tracing software. The ray tracing diagram of the
eye is shown in Figure 4-2. The fovea centralis, where the eye has the highest
resolution vision, is located 5° temporal from the optical axis of the eye. The
imaging properties of the schematic eye were therefore evaluated at 5° for
comparison with the published research.
Figure 4-2 Implementation of the Liou and Brennan schematic eye model in Code
V® showing two fans of rays from an infinite object distance at 0° and 5° coming to
a focus at the retina. The surface numbers correspond to the table in Figure 4-1 (note
that this diagram is inverted top to bottom as compared with the corresponding
diagram in Figure 4-1).
A comparison of the modulation transfer function (MTF) of our Code V®
implementation of the Liou and Brennan schematic eye with the published Liou and
Brennan eye data (from their optical ray tracing model, also implemented using Code
V®), as well as with averages of clinical measurements of young-adult eyes is shown
in Figure 4-3. The current Code V® implementation matches well with this
published data, indicating that the schematic eye model has been implemented
correctly.
[Plot: modulation (0 to 1) versus spatial frequency (0 to 70 cycles per degree),
comparing the Liou schematic eye, the Code V model of the Liou eye, measurements
by Artal, et al., and measurements by Navarro, et al.]
Figure 4-3 Comparison of the sine-wave polychromatic tangential modulation
transfer function (MTF) for a 4-mm pupil.
A key property of the Liou and Brennan schematic eye model is that it
matches clinical results of the ocular spherical aberration better than previous
models, which had higher spherical aberration, as shown in Figure 4-4(a). The
contributions of the cornea and lens to the total transverse spherical aberration are
shown in Figure 4-4(b). The use of accurate properties for the corneal surface in
designing the optical system is important because the crystalline lens will be
replaced with the intraocular camera.
(a) (b)
Figure 4-4 (a) Spherical aberration in diopters (D) as a function of ray height
across the pupil for several schematic eye models, plotted along with mean
experimental data. The Liou and Brennan model more accurately matches the
lower spherical aberrations observed in the measurements. (b) Contributions of the
cornea and crystalline lens to the total spherical aberration of the eye. (From [8]).
4.3 Focal length, resolution, and field of view
The intraocular camera has the unique imaging property that the minimum
required resolution is driven by the relatively low pixellation levels of the retinal
prosthesis array rather than by the size of digital sensor pixels or the foveal
resolution of the human eye. The field of view of the optical system is set in part by
human factors (the desirable angular field of view for visual acuity, navigation, and
mobility), in part by the size of the microstimulator array, and in part by the size of
the image sensor array. A wider field of view can be incorporated only at a cost in
either system complexity (shapes of the optical elements, number of optical
elements, size of the sensor) or aberration correction (blur spot size). As discussed
in Chapter 2, visual psychophysics experiments indicate that as few as 25 × 25 pixels
(625 total pixels) are sufficient for many object recognition, navigation, collision
avoidance, and locomotion tasks [15-19]. Accordingly, a current goal for future
generations of the epiretinal microstimulator array is to provide between 625
(25 × 25) and 1024 electrodes (32 × 32) over a 20° field of view (±10°), centered on
the macular region of the retina. This field of view is chosen to satisfy the visual
field criterion of the definition of legal blindness in the United States, which states
that the subject’s visual field must be at least 20° [20, 21]. By comparison, the first-
and second-generation epiretinal microstimulator arrays that are currently
undergoing human clinical studies have 16 (4 × 4) and 60 (10 × 6) electrodes,
respectively.
To answer the question of “what is good enough” with respect to the
camera’s imaging performance, we need to examine the resolution required at the
sensor plane to at least match the electrode pitch of the prosthetic array on the retina.
The required resolution can be computed by relating the angular span of the
electrode pitch at the retina to the corresponding angular span of the pixel pitch at the
camera’s sensor plane. Since image height scales directly with focal length, the pixel
pitch is dependent on the focal length of each particular optical system design. To
accommodate a package that is less than 4.5 mm in length, it is reasonable to assume
that the optical system will need to be no more than 3.5 mm long (measured from the
most anterior surface to the focal plane) to leave sufficient room for the image sensor
array and housing. Experience has shown that with a 3.5 mm optical system length,
for an intraocular camera placed in the aqueous humor, the focal length will be
around 2 mm for any particular lens design. This is one parameter for which there is
little design freedom.
As illustrated in Figure 4-5, two nodal points can be determined for any
optical system, one on the object side (the front nodal point, or NP1) and one on the
image side (the rear nodal point, or NP2). Any ray incident on the optical system
whose projected ray path passes through the front nodal point at a given angle will
appear to emerge from the rear nodal point at the same angle.
Figure 4-5 Nodal points of an optical system. The quantity y denotes the height at
the image plane for object rays entering the system at angle θ. In the average human
eye, the distance from NP2 to the retina is about 17 mm. In the intraocular camera,
the distance from NP2 to the image sensor is expected to be between 2.0 and
2.3 mm.
For simple optical systems (one or two lenses), the rear nodal distance (NP2
to the image plane), is nearly equal to the system’s effective focal length. For the
average human eye, the distance from the rear nodal point to the retina is about
17 mm. Since future generations of the microstimulator array are expected to cover
a 20° visual field, the largest linear dimension of the array will be about 6 mm
(2 × 17 mm × tan10°). This 6 mm length normally measured along the height or
width of the array, although the specific arrangement of the electrodes for future
high-density prostheses is still under development. For this reason, the angular span
per electrode and corresponding pixel pitch at the intraocular camera (IOC) sensor
plane are computed for several existing and potential microstimulator array
configurations in Table 4-2. In each case, a rectilinear grid of electrodes is assumed,
though other arrangements are also being considered. The pixel pitch is computed
for a range of rear nodal distances of the camera lens. For the IOC, the distance from
the rear nodal point to the sensor plane is nearly equal to its focal length, and is
expected to be in the range of 2.0 to 2.3 mm, given the size constraints of the
problem. For this reason, the results for the 2 mm nodal distance are printed in bold.
Case 1: 32 × 32 microstimulator array, 4.2 mm × 4.2 mm (6 mm diagonal)
    Electrode pitch at retina: 0.13 mm (0.44°)
    IOC NP2-to-sensor distance (mm):         1.00   1.50   2.00   2.50    3.00
    Corresponding pitch at IOC sensor (μm):  7.72  11.58  15.44  19.30   23.16

Case 2: 32 × 32 microstimulator array, 6 mm × 6 mm (8.5 mm diagonal)
    Electrode pitch at retina: 0.19 mm (0.63°)
    IOC NP2-to-sensor distance (mm):         1.00   1.50   2.00   2.50    3.00
    Corresponding pitch at IOC sensor (μm): 11.03  16.54  22.06  27.57   33.09

Case 3: 25 × 25 microstimulator array, 6 mm × 6 mm (8.5 mm diagonal)
    Electrode pitch at retina: 0.24 mm (0.81°)
    IOC NP2-to-sensor distance (mm):         1.00   1.50   2.00   2.50    3.00
    Corresponding pitch at IOC sensor (μm): 14.12  21.18  28.24  35.29   42.35

Case 4: 10 × 6 microstimulator array (6 mm diagonal)
    Electrode pitch at retina: 0.60 mm (2.02°)
    IOC NP2-to-sensor distance (mm):         1.00   1.50   2.00   2.50    3.00
    Corresponding pitch at IOC sensor (μm): 35.29  52.94  70.59  88.24  105.88

Case 5: 4 × 4 microstimulator array covering 11.3 degrees on a side (implanted case)
    Electrode pitch at retina: 0.85 mm (2.86°), array size 3.40 mm
    IOC NP2-to-sensor distance (mm):         1.00   1.50    2.00    2.50    3.00
    Corresponding pitch at IOC sensor (μm): 50.00  75.00  100.00  125.00  150.00
Table 4-2 Resolution required at the intraocular camera image plane to match the
electrode pitch of the epiretinal microstimulator array at the retina for five relevant
stimulator array configurations.
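The entries in Table 4-2 can be reproduced from the 17 mm rear nodal distance of the eye quoted earlier; since tan(arctan x) = x, the mapping is linear in both the electrode pitch and the camera's rear nodal distance (Python sketch; sensor_pitch_um is an illustrative name):

```python
import math

RETINAL_NODAL_MM = 17.0  # rear nodal point (NP2) to retina, average human eye

def sensor_pitch_um(array_size_mm, n_electrodes, ioc_nodal_mm):
    """Pixel pitch at the IOC sensor plane that matches the angular pitch
    of the microstimulator electrodes at the retina."""
    retinal_pitch_mm = array_size_mm / n_electrodes
    angle_rad = math.atan(retinal_pitch_mm / RETINAL_NODAL_MM)  # angular pitch
    return 1000.0 * ioc_nodal_mm * math.tan(angle_rad)          # pitch in microns

# Case 2: 32 electrodes across 6 mm, 2 mm rear nodal distance -> ~22.06 um
print(round(sensor_pitch_um(6.0, 32, 2.0), 2))
```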
A plot corresponding to the data in Table 4-2, showing the dependency of the
pixel pitch at the sensor plane on the camera’s rear nodal distance for three
microstimulator array configurations, is shown in Figure 4-6. These results relate to
the required modulation transfer function (MTF) and allowable blur spot diameters
of the lens system.
[Plot: corresponding pixel pitch at the camera sensor plane (0 to 120 μm) versus
distance from the rear nodal point to the IOC sensor plane (~focal length, 0.5 to
3.5 mm), with curves for 10 × 6, 25 × 25 (6 × 6 mm), and 32 × 32 (6 × 6 mm)
electrode arrays; the expected range of IOC rear nodal distances is marked.]
Figure 4-6 Dependency of the intraocular camera pixel pitch on the size of the
retinal microstimulator array and the camera’s rear nodal distance (which is
approximately equal to its focal length). Data points are from Cases 2, 3, and 4 in
Table 4-2.
In a conventional digital camera, the resolution of the lens system, which is
related to the modulation transfer function (MTF), is typically designed to exceed or
match the Nyquist spatial sampling frequency of the digital sensor’s pixel array.
This means that the lens is designed to resolve at least one line-pair (a single pair of
black and white bars) for each two pixels on the sensor (e.g., the lens would need to
resolve spatial frequencies up to at least 83 lp/mm for a monochrome sensor with
6 μm pixels). For color sensors with a mosaic color filter array superimposed on the
pixels, the spatial sampling rate of the sensor is set by the color filter array pattern in
addition to that of the pixel pitch [22]. Often, the cutoff frequency of the lens (i.e.,
the first spatial frequency at which the contrast, or MTF, goes to zero) exceeds this
resolution and passes higher spatial frequencies than the sensor can resolve. This
undersampling at the sensor plane leads to aliasing artifacts in the resulting images.
Consequently, most digital cameras either use smaller pixels, at the cost of increased
photon noise, or employ an optical blur filter at the sensor plane in addition to
electronic analog and/or digital low pass filtering to eliminate aliasing [22-25].
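The Nyquist figure quoted above follows from one line of arithmetic, which also converts the blur spot diameters discussed later in this chapter into spatial resolutions (illustrative helper name):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Nyquist spatial sampling frequency: one line pair per two pixels."""
    return 1000.0 / (2.0 * pixel_pitch_um)

print(round(nyquist_lp_per_mm(6.0), 1))   # 83.3 lp/mm for 6 um pixels
print(nyquist_lp_per_mm(20.0))            # 25.0 lp/mm for a 20 um pitch
```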
The situation with the intraocular camera is atypical because of the very low
resolution of the microstimulator array. The IOC lens may, in fact, be designed to
pass a lower spatial frequency than the Nyquist rate of the imaging sensor, meaning
the image will actually be oversampled at the sensor. This could pose several
advantages for the intraocular camera. Since, in this case, the camera’s focal spots
will blur across several sensor pixels, aliasing artifacts due to undersampling can be
reduced, if not completely avoided. In this scenario, the outputs of several pixels can
be binned together to provide the color or grayscale value of the “super pixel”
corresponding to a given microstimulator electrode. The summing of the outputs of
multiple sensor pixels (or the ability to use larger pixels) results in a better signal-to-
noise ratio and mitigates the need for power-consuming high gain levels in the sensor
electronics.
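As a rough sketch of this binning scheme (the array sizes and NumPy-based readout here are illustrative assumptions, not the actual IOC sensor interface), adjacent sensor pixels can be summed into one super pixel per electrode:

```python
import numpy as np

def bin_pixels(frame, factor):
    """Sum factor x factor blocks of sensor pixels into 'super pixels',
    one per microstimulator electrode."""
    h, w = frame.shape
    trimmed = frame[:h - h % factor, :w - w % factor]  # drop any ragged edge
    return trimmed.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

# 100 x 100 sensor pixels binned 4 x 4 -> a 25 x 25 super-pixel image
frame = np.random.poisson(lam=50, size=(100, 100)).astype(float)
super_pixels = bin_pixels(frame, 4)
print(super_pixels.shape)
```

Summing n pixels multiplies the signal by n while uncorrelated noise grows only as the square root of n, which is the signal-to-noise benefit noted above.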
Consider in particular the case of a 25 × 25 microstimulator array. For a
25 × 25 array covering a 6 mm × 6 mm area on the retina, the electrode pitch is 0.81°
in angle space (Table 4-2). The pixel pitch at the camera’s sensor plane
corresponding to 0.81° is 28 μm for a camera with a 2 mm rear nodal distance. For
the higher-density case of a 32 × 32 array, also covering a 6 mm × 6 mm area on the
retina, the pixel pitch at the IOC sensor plane decreases to 22 μm. If the 6-mm
dimension is instead along the diagonal, meaning that 32 electrodes are now
squeezed into a 4.2 mm height and width, the corresponding pixel pitch goes down
to 15 μm. All of these values are approximately 2 to 14 times the typical pixel pitch of
CMOS sensors (pixel sizes typically range from 2 to 8 μm), indicating that the IOC
can and should be designed to take advantage of pixel binning in the sensor (or to
incorporate a sensor with larger pixels).
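These pixel-pitch values can be reproduced with a short calculation. Note that the 17 mm posterior nodal distance of the eye used here is an assumed schematic-eye value (it is not stated in this section), so the sketch is illustrative:

```python
import math

EYE_NODAL_MM = 17.0   # assumed posterior nodal distance of the eye

def ioc_pixel_pitch_um(retina_extent_mm, n_electrodes, camera_nodal_mm=2.0):
    """Convert the retinal electrode pitch to angle space through the eye's
    nodal point, then project it back through the camera's rear nodal
    distance to obtain the required pixel pitch at the IOC sensor plane."""
    electrode_pitch_mm = retina_extent_mm / n_electrodes
    pitch_rad = math.atan(electrode_pitch_mm / EYE_NODAL_MM)
    pitch_um = 1000.0 * camera_nodal_mm * math.tan(pitch_rad)
    return pitch_um, math.degrees(pitch_rad)

for extent_mm, n in [(6.0, 25), (6.0, 32), (4.2, 32)]:
    pitch_um, deg = ioc_pixel_pitch_um(extent_mm, n)
    print(f"{n} electrodes over {extent_mm} mm: {deg:.2f} deg -> {pitch_um:.0f} um")
```

The three cases reproduce the 28, 22, and 15 μm pitches quoted above.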
The pixel pitches computed above for 6 mm × 6 mm arrays with 625 to 1024
electrodes indicate that the lens system can tolerate blur spot diameters in the range
of 20 to 28 μm, which correspond to spatial resolutions of 25 to 18 line-pairs/mm
(lp/mm) at the sensor plane. As will be discussed later, the MTF provides a more
comprehensive metric of image performance than RMS spot size alone, and we have
found that an IOC lens system with an MTF value ≥ 0.5 at 25 lp/mm is sufficient to
meet the requirements of the envisioned 1000-electrode prosthesis, and in general
corresponds to an RMS spot diameter of ≤ 30 µm. These values must be maintained
over at least a 20° field of view (±10°), and will be used as metrics for judging the
performance of various lens systems in this and later chapters.
Lens designs that maintain this level of imaging performance, or close to it,
over a wider field of view than is actually needed by the microstimulator array (for
example, over 40°, twice the required FOV by the retinal prosthesis) are also of
considerable interest. If the images captured by the IOC are sent first to a VPU
(visual processing unit), a wider field of view would facilitate the implementation of
processing algorithms that benefit from information in the peripheral field. An
example of this would be a system that augments the prosthesis to alert subjects (by
audible or other means) to important obstacles or motion that occur outside the
prosthetic’s field of view. In this case, a few large electrodes could be placed in the
periphery of the retina to help subjects perceive gross movements in their
surroundings. It may also be desirable to maintain the central image quality, or close
to it, over the peripheral field for use by the VPU algorithms. For these reasons, the
imaging performance of the intraocular camera over both the central 20° field of
view and a wider 40° field of view will be evaluated for various lens design
concepts.
4.4 Lens materials and the need for an optical window
Small diameter aspherical lenses can be fabricated using either glass or
polymer materials. Most optical fabrication houses specialize in one or the other, but
not both. As described in Chapter 2, the first IOC prototypes incorporated a short
focal length aspherical lens (from Lightpath Technologies, Inc.) fabricated from
O’Hara PBH71 glass. This high index glass (n = 1.92) enabled sharp light bending
at the interface between the aqueous humor (n = 1.34) and the lens. However, high
index glasses are very dense (6.05 g/cm³ for PBH71 glass) and result in heavy lenses
that are impractical for the IOC. For example, the high index lens used in the
prototype camera had a measured mass of approximately 336 mg. To be fair, this
mass included a 5-mm diameter glass flange that is outside the clear aperture. If this
excess glass were to be removed such that the lens diameter is reduced to ~3 mm, it
would still weigh close to 100 mg. We also designed a custom lens using this high
index glass with a 2.5 mm edge diameter and calculated the mass to be 95 mg (as
shown in Figure 4-7 and discussed below). It should be noted that the measured
masses of various glass and polymer lenses typically agree with their calculated
masses to within 2%. Since our most recent experiments with mass-
equivalent camera models indicated that it is desirable for the entire package to
weigh less than 75 mg, high index glasses do not appear to be a viable option for the
IOC.
High index glasses also exhibit significantly greater chromatic dispersion
than standard index glasses and polymer materials (n = 1.4 to 1.6). The chromatic
dispersion of a refractive material is often characterized by its Abbe V-number,
which is equal to (n_d − 1) / (n_F − n_C), in which n_d, n_F, and n_C are the indices of
refraction at three characteristic wavelengths: λ_d = 587.6 nm (helium yellow),
λ_F = 486.1 nm (hydrogen blue), and λ_C = 656.3 nm (hydrogen red). Materials with
Abbe V-numbers less than 39 are considered highly dispersive.
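The V-number definition above can be checked with a small calculation. The F- and C-line indices used here for a PBH71-like glass are illustrative assumptions chosen to reproduce the quoted V_d of about 21.3, not catalog data:

```python
def abbe_vd(n_d, n_F, n_C):
    """Abbe V-number from the refractive indices at the d (587.6 nm),
    F (486.1 nm), and C (656.3 nm) spectral lines."""
    return (n_d - 1.0) / (n_F - n_C)

# Hypothetical F- and C-line indices for a PBH71-like high index glass
v = abbe_vd(1.92, 1.947, 1.904)
print(round(v, 1))   # -> 21.4, well below the V_d = 39 high-dispersion threshold
```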
A thin glass window can be added at the front of a hermetically-sealed
package so that the lens is housed in air or partial vacuum (n = 1.0), and will not be
in contact with the biological media of the eye. The glass window will likely be
fabricated from polished fused silica, which is biocompatible for long-term ocular
implantation per ISO 10993 [26]. This enables the use of lower index glass and
polymer materials for the lens (n = 1.4 to 1.6), in order to minimize the effects of
chromatic aberration.
The continued use of high index PBH71 glass (n = 1.92) was still considered
for this windowed configuration. It was thought that the extreme bending afforded
by the 0.92 index difference between the high index glass and air might allow for a
much thinner, lower mass, lens design. However, a study of optimized IOC lens
designs using a high index glass in the windowed configuration produced the
counterintuitive result of lenses that are still thick (2.5 to 2.7 mm) and thus too
massive (~95 mg) for the IOC application. Nevertheless, it was found that the use of
a high-index glass lens in this windowed configuration produced imaging results at
f/0.75 that are only achievable at f-numbers ≥ 1 with lower index lens materials
(n ≈ 1.5), as shown in Figure 4-7 (imaging performance with lower index lenses is
shown later).
Given the unacceptably large mass of high index glass lenses, the current
preference is to use a polymer lens housed in the windowed package described
above. There are several advantages to using an optical polymer rather than a glass
material (with similar refractive index) for the IOC lens. In comparison with glass,
optical polymers are extremely lightweight (at least 2 to 3 times less dense), less
costly, more versatile (producing an aspherical polymer lens is inherently no more
difficult than producing a spherical one), and inexpensive to prototype (< $1,000 per
polymer lens prototype compared with > $5,000 for a glass lens prototype).
(a) (b)
Figure 4-7 (a) Custom designed IOC lens system with a high-index glass aspherical
lens located in air, placed behind a fused silica window. The system operates at
f/0.75 and is optimized over a ±20° field of view. The IOC is focused at a 20-cm
object distance. The lens is 2.73 mm thick and weighs 95 mg (calculated), which
exceeds the IOC mass goals. The image distance is only 0.44 mm, measured from
the rear vertex of the lens. (b) Polychromatic spot diagram and RMS spot diameters
are shown over a 20° semi-field of view.
Customized IOC lens designs incorporating optical polymers result in lenses with
masses that are significantly less than 30 mg (including the weight of the fused silica
window), ten times less massive than the glass lens used in the first IOC prototypes.
Several coating options are also available for polymer optics, and high quality optical
surfaces can be achieved. For small diameter aspherical lenses, the only significant
advantage of glass over polymers is a wider selection of refractive index and
chromatic dispersion combinations. The reason small glass lenses are more costly to
prototype is that aspherical glass surfaces cannot be machined and must be
molded instead, whereas polymer lens prototypes can be diamond turned. It
typically costs between $5,000 and $20,000 to develop a mold for a custom lens
design. A comparison of the refractive indices, Abbe V-numbers, and densities of
common optical glasses and plastics, including the specific properties of O’Hara
PBH71 glass and an optical polymer called Zeonex®, is shown in Table 4-3.
Glass                    Index         Abbe, V_d*                 Density            Comments
O’Hara PBH71             1.92          21.3                       6.05 g/cm³         First prototype IOC lens material (high index glass)
Common optical glasses   1.5 to 1.9    30 to 70                   2.5 to 4.5 g/cm³   Crown, flint…
Common optical polymers  1.5 to 1.75   32 to 48 typical; some     1.0 to 1.5 g/cm³   Cyclic olefin polymers (Zeonex®), acrylic, polycarbonate,
                                       choices between 50 and 60                     polystyrene, high index (1.6 to 1.7) monomers…
Zeonex®                  1.53          55.8                       1.01 g/cm³         Currently preferred IOC lens material

* V_d < 39 is considered highly dispersive
Table 4-3 Typical values for the refractive index, Abbe V number, and density of
common optical glasses and polymers in comparison with the high-index, high
density PBH71 glass used in the first three IOC prototype camera generations. The
values for Zeonex®, the currently preferred lens material for the IOC, are also shown
[27-29].
Several high quality optical polymers are readily available. After evaluating
several options, the currently preferred material for the IOC lens is a cyclic olefin
polymer called Zeonex® (from Zeon Corporation) that has excellent optical, thermal,
and mechanical properties. These properties include: extremely low density
(1.01 g/cm³), low chromatic dispersion (V_d = 55.8), low moisture absorption, low
thermal expansion, excellent visible light transmittance, low impurities, and
excellent chemical resistance [27-29]. Detailed properties of Zeonex® and
comparisons with other optical polymers can be found in the literature, and from the
Zeon Chemicals corporate website. A table of important optical properties for
several optical grade polymers is reproduced below from a paper by W. S. Beich
[28].
Zeonex® grade E48R is commonly used for small lens applications such as
DVD pickup lenses and mobile phone camera lenses. This material was also used in
a demonstration of a miniaturized objective lens for an in vivo fiber optic confocal
microscope [30]. Another variation of this polymer (Zeonex® 690R) is approved for
use in medical-grade applications (USP Class VI approved) such as syringes, vials,
and biomedical optical lenses. The optical properties of these two grades of
Zeonex® are essentially identical (see “cyclic olefin polymers” column in Table 4-4)
and lenses can be fabricated from either one, though the E48R grade is more
commonly used in polymer lens fabrication houses. Both grades are being
considered for use in the IOC optical system.
Table 4-4 Properties of selected optical grade polymers (From [28]). Zeonex® grades E48R and 690R
are cyclic olefin polymers.
Several companies specialize in fabricating and testing custom polymer
optics. Both aspherical and diffractive surface profiles (as well as hybrid surfaces
that are both aspherical and diffractive) are manufacturable on small diameter lenses
using injection molding or single-point diamond turning [27, 28, 31, 32]. Diamond
turning is used to rapidly fabricate prototype lenses in polymer materials. With
diamond turning, a polymer blank can be machined directly, avoiding the costly step
of developing a custom mold for each unique lens design to be evaluated. In this
way, a number of potential designs can be rapidly fabricated and tested in small
quantities, often for less than the cost of developing a single mold (whereas a mold is
required to fabricate small aspherical glass lenses, as glass cannot be machined). An
uncoated diamond-turned polymer lens typically costs less than $1000 in low
quantities (typically two to five units), with delivery times of four to six weeks.
There may also be some non-recurring engineering costs associated with
programming the diamond turning machine and for any custom tooling required to
hold and machine the lens from a blank. A reflective or refractive null element may
also be machined to facilitate interferometric testing of the prototype lens [33, 34].
Once a polymer lens design is finalized, it then becomes more economical to
fabricate a mold that can be used to produce larger quantities of the lens. This mold
creation process typically costs between $8,000 and $20,000, depending on the
complexity of the design and required tolerances. Once a mold is made, the cost
typically drops to a few dollars per lens.
4.5 F-number, image brightness, and aberrations
It is important for the intraocular camera to work well in both dark and bright
ambient lighting conditions without requiring a large amount of power-consuming
electrical gain. This will ensure that the subject receives useful images in common
everyday environments ranging from sunny days, to normally illuminated rooms, to
moonlit nights (without the aid of auxiliary lighting like the lights that are often
mounted on conventional video cameras). Providing sufficient dynamic range is
largely an issue of the image sensor array technology and support electronics, often
in conjunction with image processing. However, the aperture, or, more accurately,
the f-number of the optics will determine the light gathering power of the camera
system. The f-number is the ratio of the effective focal length to the entrance pupil
diameter of the lens system. For a fixed focal length lens, using a larger aperture
(i.e., a lower f-number) will provide a brighter image at the sensor plane. Thus, the
combination of the sensitivity of the image sensor array and the f-number of the
optics will determine whether or not the overall system is capable of a given
performance in low light environments. To provide sufficient image brightness in
dim environments, the IOC may require an aggressively low f-number that is close
to, or even less than f/1, in combination with a high-sensitivity, high-dynamic range
image sensor array. However, off-axis aberrations are very difficult to control for
imaging systems operating near f/1, and the resulting blur reduces the local
brightness in the image. This will be especially true for the IOC since its
extremely short physical length limits the number of optical surfaces that can be
employed to correct aberrations at low f-numbers. For comparison, typical cell
phone cameras employ at least two lens elements and have an f-number of 2.8 to
provide decent, near-megapixel imaging over a 50° to 60° field of view [1-6], but at
the cost of significantly degraded performance in low light conditions.
An analysis of the illuminance at the IOC sensor plane for various lighting
conditions and realistic IOC lens designs at different f-numbers (modeled in the
biological environment of an eye) will facilitate the selection of an appropriate image
sensor technology, and will influence the determination of the optimal pixel size and
required gain levels. It will also help us to understand if a particular intraocular
camera design will provide sufficient brightness for retinal prosthesis subjects to
function in common indoor environments with low light levels.
For a camera system, the ideal (diffraction-limited) irradiance distribution in
Watts/(m²⋅nm) at the sensor plane can be expressed as [35]:

    E_ideal(x, y; λ) = π T(λ) R(x, y; λ) L(x, y; λ) / {1 + 4 [(1 − m) (f/#)]²}     (4-1)
in which T(λ) is the spectral transmittance of the cornea-aqueous-humor-camera
system, R(x, y; λ) is the relative illumination factor (for an ideal thin lens, this is
equal to unity for on-axis image points, and falls off with the fourth power of the
cosine of the incidence angle at the sensor plane), m is the magnification of the
optical system (m < 0 and |m| << 1 for the IOC), f/# is the system f-number (effective
focal length/entrance pupil diameter), and L(x, y; λ) is the spectral radiance of the
light source at a given location in object space in units of Watts/(m²⋅sr⋅nm). These
are radiometric units. Photometric units of cd/(m²⋅nm) for L(x, y; λ), and lux/nm for
E(x, y; λ), may also be used.
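Equation (4-1) can be exercised directly. This sketch assumes the exact on-axis form E = π T R L / {1 + 4 [(1 − m) (f/#)]²}, with T = R = 1 and m ≈ −0.002 as used later in this chapter; the numbers are illustrative estimates, not Code V® results:

```python
import math

def ideal_illuminance(scene_lux, f_number, m=-0.002, T=1.0, R=1.0):
    """On-axis faceplate illuminance (lux) from Equation (4-1) for a
    perfectly diffuse screen illuminated at scene_lux."""
    L = scene_lux / math.pi   # Lambertian screen luminance, L = E/pi cd/m^2
    return math.pi * T * R * L / (1.0 + 4.0 * ((1.0 - m) * f_number) ** 2)

# Dim (10 lux) scene: illuminance at the sensor plane versus f-number
for N in (0.7, 1.0, 1.4, 2.0, 2.8):
    print(f"f/{N}: {ideal_illuminance(10.0, N):.2f} lux")
```

For a 10 lux scene this gives roughly 2 lux at the sensor plane at f/1, falling to about 0.3 lux at f/2.8.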
The expression above does not include the effects of geometrical aberrations
or diffraction. These effects can be included by taking the convolution of the ideal
image and the point spread function of the lens system [36]:
    E_actual(x, y; λ) = PSF(x, y; λ) ∗ E_ideal(x, y; λ)     (4-2)
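A minimal numerical sketch of the convolution in Equation (4-2), assuming a locally shift-invariant (isoplanatic) PSF; the image and PSF arrays here are placeholders rather than Code V® outputs:

```python
import numpy as np

def psf_blur(ideal_image, psf):
    """Convolve an ideal image with a point spread function (Eq. 4-2),
    normalizing the PSF so that total flux is conserved."""
    psf = psf / psf.sum()
    kh, kw = psf.shape
    padded = np.pad(ideal_image, ((kh // 2, kh // 2), (kw // 2, kw // 2)),
                    mode="edge")
    out = np.zeros(ideal_image.shape, dtype=float)
    for i in range(ideal_image.shape[0]):
        for j in range(ideal_image.shape[1]):
            # Discrete convolution: flipped kernel over each local window
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * psf[::-1, ::-1])
    return out

# A uniform ideal image is unchanged by a normalized blur
uniform = np.full((8, 8), 2.0)
blurred = psf_blur(uniform, np.ones((3, 3)))
print(blurred[0, 0])   # -> 2.0
```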
Using Code V®, the ideal image and point spread functions are easily
computed for IOC lens systems, modeled inside the schematic eye described
previously. The software provides an illumination function that computes the
illuminance across the image plane given a specified optical radiance profile in
object space. This function is used to provide an accurate prediction of the
illumination levels at the camera’s sensor plane for various lighting conditions and
camera f-numbers for different IOC lens designs. The root mean square (RMS) blur
spot diameters and modulation transfer function (MTF) are used to evaluate the
degree of aberrations present for each case.
For this analysis, five IOC lens systems (shown in Figure 4-8) with f-numbers
of 0.7, 1.0, 1.4, 2.0, and 2.8, respectively, were designed and optimized to minimize
the RMS spot diameter and produce high MTF values for spatial frequencies up to
25 line-pairs/mm across a 40° field of view (±20°). The sole exception is that of the
f/0.7 case, for which the field of view was limited to 20° (±10°) during optimization,
f/0.7 f/1.0
f/1.4
Figure 4-8 Five IOC lens designs, each optimized to operate at a specific f-number
(f/0.7, f/1.0, f/1.4, f/2, and f/2.8). Each complete lens system includes the cornea,
aqueous humor, and an optimized polymer aspherical lens housed in air behind a
250 μm thick silica glass window. The aperture stop is located on the rear surface of
the window. The front of the window is located 2 mm behind the cornea and the
distance from the front of the window to the image plane is constrained to ≤ 3.5 mm.
Rays are shown at five semi-field angles: 0°, 5°, 10°, 15°, and 20°. Note that the f/0.7
system is only optimized over a ±10° field of view, though rays are traced out to 20°
for comparison. Also, the upper marginal rays at 10°, 15° and 20° do not fill the
aperture in the f/0.7 system because the lens diameter is constrained to 2.8 mm, causing
severe vignetting for these rays. (Continued on next page.)
f/2.0 f/2.8
Figure 4-8, Continued.
because a wider field of view was not feasible with the assumed constraints at this
f-number. The system becomes non-isoplanatic outside ±10° at f/0.7, meaning that a
small change in the position of a point in the object plane causes a large, nonlinear
change in the point spread function (PSF) at the image plane. This reduces the
accuracy of PSF-based image computations, which assume negligible variations in
the PSF over small patches in the image plane. The designs were implemented using
a configuration (presented in detail in the next chapter) comprising a Zeonex® E48R
polymer refractive lens (n = 1.53, V_d = 55.8) with aspherical surfaces located in air
behind a fused silica optical window. For each case, the optical system length was
constrained to ≤ 3.5 mm and the f-number was controlled by the diameter of a
circular aperture stop located in front of the lens, on the rear surface of the optical
window. The distances from the stop to the lens and from the lens to the image, the
lens thickness, and the aspherical surface curvatures were allowed to vary during
optimization, but were constrained to manufacturable values. This single-lens
configuration was used because it is the least complex, and most near-term, lens
design for the IOC.
Two Lambertian radiance profiles were used to evaluate the image brightness
across the sensor plane for each IOC lens design, as well as for a set of ideal,
diffraction-limited lenses. The first is a square, uniform source. The other is a series
of nine equally spaced vertical bars ranging in brightness from white (max
brightness; grayscale value of 255) to black (zero brightness; grayscale value of 0),
hereinafter termed the “step tablet” source. Both sources are modeled as perfectly
diffuse screens placed one meter from the eye-camera system, covering a 40° vertical
and horizontal field of view (±20°). The corresponding image size is approximately
1.6 × 1.6 mm at the sensor plane (the rear nodal distance and focal length in all cases
are approximately 2.2 mm). Four lighting conditions were assumed: 10 lux (dim
corridor or streetlamps), 100 lux (dim indoor setting), 1,000 lux (office setting), and
100,000 lux (sunny day). A perfectly diffuse screen illuminated by a source of L lux
will produce (L/π) cd/m² [in which 1 cd/m² = 1 lumen/(m²⋅sr)], so the peak
luminance values used for the Lambertian sources corresponding to the above
lighting conditions ranged from 3.18 to 31,800 cd/m². Photometric units were used
because the sensitivities of image sensor arrays are often tested and specified using
lux-based units. For reference, a table of typical brightness levels for various indoor
and outdoor settings is shown below in Table 4-5.
Indoor                                          Outdoor
Warehouses                   20 - 75 lux        Full sunlight          10,000 - 1,000,000 lux
Emergency stairs             30 - 75 lux        Overcast day           100 - 10,000 lux
Corridors and stairs         75 - 200 lux       Twilight               1 - 10 lux
Shops                        75 - 300 lux       Full moon              0.1 - 1 lux
Offices and reception areas  300 - 500 lux      Overcast night         0.01 - 0.1 lux
Banks and offices            200 - 1000 lux     Star light, clear      0.001 - 0.01 lux
Assembly lines               300 - 1000 lux     Star light, overcast   0.0001 - 0.001 lux
Bright LCD monitor           300 cd/m²
Table 4-5 Typical brightness values for common indoor and outdoor settings (After
[37-39]).
The uniform source profile (a diffuse screen placed one meter from the lens)
was used to observe the effects of brightness falloff from the center to the edge of the
image due to the effects of vignetting and aberrations, as shown in Figure 4-9. The
top two plots in this figure (a and b) show the variance of illuminance across the
sensor plane (also known as “faceplate illuminance”) after transmission through a set
of ideal, diffraction-limited lenses at each f-number and source luminance. The
bottom two plots show the results for the five IOC lens systems. Losses due to
material absorption and Fresnel reflections at the optical surfaces are neglected
(T(λ) = 1). However, if the lens and window surfaces are assumed to be
anti-reflection (AR) coated to provide less than 1% reflectance over 0° to 20° across
the visible spectrum, then the total transmittance through the six optical surfaces in
the cornea-IOC system is estimated to be ≥ 0.90. On the left of Figure 4-9, a log
scale is used to show the results over four decades of source luminance (shown every
decade from 10 to 1000 lux for dim to bright indoor conditions, and at 100,000 lux to
represent outdoors on a sunny day). On the right, a linear scale is used to show the
details at low brightness (10 lux; 3.18 cd/m²). Error bars are used to indicate the
brightness at the center of the image (peak lux at 0°), at ±10°, and at the image edge
(±20°). For the diffraction-limited set of lenses, the variation in brightness at the
image plane is due only to angle-of-incidence falloff. These simulation results were
acquired by casting 10 million rays at each wavelength and binning them to
[Figure 4-9: four panels (a, b, c, d) plotting illuminance at the image plane (lux)
versus f-number, with values marked at semi-field angles of 0°, 10°, and 20°, for
scene illuminances of 10; 100; 1,000; and 100,000 lux. Panels (a) and (c) use a log
scale; panels (b) and (d) show the 10 lux case on a linear scale.]
Figure 4-9 Comparison of the illuminance at the sensor plane as a function of
f-number and semi-field angle for diffraction limited lenses (a and b) and the five
IOC lens designs (c and d) shown in Figure 4-8. Aberrations present in the IOC
lenses cause blurring and a more severe falloff in brightness at large field angles.
The source is a perfectly diffuse uniform screen placed one meter from the camera,
covering a 40° (±20°) field of view. Four lighting conditions are shown. The results
are an average of the intensity levels at red, green, and blue wavelengths.
determine the brightness across a 200 × 200 grid at the sensor plane. As expected,
the illuminance levels shown in Figure 4-9 (a) and (b), for the diffraction-limited
lenses, are close to the values predicted by Equation (4-1) (with m ≈ 0.002 and
T(λ) = 1). The results for the IOC lens designs (c and d) show a more severe falloff
in brightness with field angle due to off-axis aberrations, but generally show the
same brightness levels as the diffraction limited cases on-axis. This falloff in
brightness across the field is especially dramatic for the f/0.7 IOC lens system, for
which the design had to be limited to a ±10° field of view because of the extreme
difficulty of correcting off-axis aberrations at such a low f-number. However, the
variation in image brightness with field angle at f/1 is close to the ideal diffraction-
limited case, and provides eight times the brightness of the f/2.8 lenses typically
found in small digital cameras. Provided that the aberrations (evaluated later in this
chapter) are acceptable for the retinal prosthesis application, it may be feasible to
incorporate an f/1 lens into the IOC to ensure operability in very low-light
conditions. An additional consideration, however, will be the potential for saturation
in sunny or bright conditions with an f/1 lens. For this reason, high dynamic range is
a critical parameter for the sensor. This particular issue is being tested and evaluated
in a parallel research effort.
The “step tablet” source was imaged over both a ±20° field of view and a
±10° field of view in order to evaluate the ability to distinguish between neighboring
brightness values at the sensor plane as a function of f-number and field angle in low
light conditions (10 lux; 3.18 cd/m² source luminance for the white bars). Graphs of
the resulting illuminance values across a horizontal slice at the center of the image
plane are shown in Figure 4-10, and exhibit the expected staircase profile. A bar
across the top of each plot indicates where the divisions occur in the nine equally
spaced bars from black to white, and the full simulated image is shown as an inset.
These results were obtained by casting 25 million rays toward the optical system at
each wavelength, a percentage of which are captured by a 200 × 200 grid at the
image plane (which measures 2 × 2 mm, with the step tablet image covering
1.6 × 1.6 mm for the ±20° field of view case). The noise in the plots results from the
collection of a finite number of rays at the image plane, and is therefore worse at
higher f-numbers, which capture fewer rays. In an attempt to reduce this noise, the
number of rays was doubled for the f/2.8 case, but the reduction in noise was not
significant compared with the increased computation time required, and the average
value of the illuminance at each grayscale level remained the same. Thus, the plots
shown are all for the case of 25 million rays cast at each of the three wavelengths
(and averaged).
As expected, the different brightness levels are more discernible when
imaged within the central ±10° for each lens system, rather than when spread across
the full ±20°. What is important for evaluating which of the optical systems, in
conjunction with appropriate sensors, are suitable for the IOC application, is the
absolute value of the illuminance at the sensor plane for a given f-number, as well as
[Figure 4-10: paired panels for the f/0.7, f/1.0, f/1.4, and f/2.0 lenses, plotting
illuminance at the image plane (lux) versus horizontal position (mm) across the
center of the image plane, with the step tablet imaged over ±20° and over ±10°
fields of view.]
Figure 4-10 Illuminance across a central slice of the image plane for the “step
tablet” target in low light (10 lux; 3.18 cd/m²) as a function of f-number and field
angle for the five IOC lens designs. The noise is due to the collection of a finite
number of rays in each bin of a 200 × 200 grid at the image plane. (Continued on
next page.)
the difference in illuminance between neighboring values. For example, the
illuminance at the sensor plane for the f/2.8 system ranges from approximately
0.03 lux for the bar just next to black, to 0.25 lux for the white bar, whereas the
corresponding brightness values for the f/0.7 system are an order of magnitude
higher.
Some initial conclusions can be drawn by considering these results in
conjunction with published data on the sensitivity of CMOS sensors for use in small,
low-power applications with pixel sizes in the range of 5 to 20 μm. This is
somewhat difficult because different vendors test and specify the sensitivity of
sensors in different ways (often without any details on how the tests were
performed), and the results of sensitivity tests are highly dependent on the testing
conditions (e.g., light source spectrum, gain setting, voltage input). In general, the
sensitivity of a CMOS sensor depends on the pixel area, optical fill factor (ratio of
the photosensitive area to the entire area of a pixel), integration capacitance, spectral
responsivity of the photodiode, and spectral power distribution of the light source.
Figure 4-10, Continued. [Paired panels for the f/2.8 lens over ±20° and ±10°
fields of view.]
The sensitivity is often specified in radiometric units of V/(μJ/cm²) or
photometric units of V/(lux⋅sec) [or bits/(lux⋅sec)], while some vendors specify the
signal-to-noise ratio (SNR) at a particular faceplate illuminance in lux [40]. From
ISO standard 12232 [41], it is generally accepted that an SNR of 10 dB (at a given
frame rate) is required to produce an acceptable image, while an SNR of 40 dB
produces an excellent image. The conditions under which these SNR values are
achieved, however, can vary across vendors and are often difficult to discern.
Blanksby et al. report on a color CMOS photogate active pixel sensor with
16 μm square pixels that achieves approximately a 20 dB SNR (10 mV signal to
1 mV noise) for red pixels at a faceplate illuminance of 0.2 lux with a 30 msec
exposure time (with slightly lower SNR for green and blue pixels) [42]. The
OmniVision 6151 black and white 1/7” CMOS sensor (1.97 × 1.61 mm image area)
with 5.6 μm square pixels specifies a 2.20 V/(lux⋅sec) sensitivity [43]. This implies
approximately a 13 mV signal for 0.2 lux illuminance over 30 msec, which can be
estimated to be approximately 22 dB above the noise floor (assuming a noise level of
1 mV). An acceptable image may therefore result at this illumination level.
Our own experience testing various sensors indicates that CMOS imagers do not
generally produce satisfying images at 30 msec exposure times with faceplate
illuminance levels below 1 lux, though active pixel technologies for reducing noise
and increasing sensitivity are continually advancing and we have evaluated a few
sensors (still in the research and development phase) that have excellent low light
performance.
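The OV6151 estimate above reduces to a short calculation (a sketch; the 1 mV noise floor is the assumption stated in the text, not a datasheet value):

```python
import math

def sensor_signal_and_snr(sensitivity, illuminance_lux, exposure_s, noise_v):
    """Signal voltage from a photometric sensitivity in V/(lux*sec),
    and the resulting SNR in dB against an assumed noise floor."""
    signal_v = sensitivity * illuminance_lux * exposure_s
    snr_db = 20.0 * math.log10(signal_v / noise_v)
    return signal_v, snr_db

# 2.20 V/(lux*sec) sensitivity, 0.2 lux faceplate illuminance,
# 30 msec exposure, assumed 1 mV noise floor
signal, snr = sensor_signal_and_snr(2.20, 0.2, 0.030, 1e-3)
print(round(signal * 1e3, 1))  # ~13.2 mV
print(round(snr, 1))           # ~22.4 dB
```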
Given the data shown in Figure 4-9 and Figure 4-10, it appears that, from an
optical throughput standpoint, a target aperture of f/1 for the optical system may
prove optimal, provided that the aberrations are sufficiently controlled at this
f-number. For the IOC designs with f-numbers higher than f/1, the entire image of the
step tablet is below 1 lux, indicating that the sensor would be unlikely to produce an
acceptable image in which the variations in brightness are discernable. At f/1, the
darkest bar next to black is at 0.2 lux, while the brightest bar is at 1.8 lux, with a
0.2 lux difference between levels. This indicates that the f/1 system will likely
produce sufficiently bright images in low light conditions (10 lux).
To complete the evaluation of IOC optical systems as a function of f-number,
an analysis of the imaging performance for each system is needed. Before
presenting this analysis, a clearer understanding of polychromatic spot diagrams and
the modulation transfer function will be helpful, as they are the two key performance
metrics of an imaging system.
4.5.1 Modulation transfer function (MTF) and RMS spot diameter
A polychromatic spot diagram can be used to evaluate the RMS spot
diameters of an imaging system. This is accomplished by tracing a set of rays that
enter the system along a rectangular grid displayed across the entrance pupil, for
each wavelength and field angle, and then plotting the intersections of the rays with
the image plane [44]. Effects of diffraction are not included in a spot diagram. The
squares of the distances between the intersections with the image plane of the chief
ray and each of the other rays are computed and averaged. This average is weighted
according to specified wavelength weights derived from their expected frequency of
occurrence (e.g., a photopic spectrum is represented by weighting the rays at 559 nm
twice as much as the rays near the edge of the spectrum). The RMS spot diameter is
defined as twice the square root of this weighted average. To ensure accuracy, the
spot diagram is recomputed as the number of rays traced at each wavelength is
increased until the RMS spot diameter no longer varies (typically 40 to 60 rays per
wavelength is sufficient).
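The RMS spot diameter procedure described above can be sketched directly (the ray intersection coordinates here are placeholders standing in for output from a ray-trace program such as Code V®; real data would also carry per-wavelength weights, e.g., 1:2:1 for a photopic weighting):

```python
import math

def rms_spot_diameter(chief_xy, ray_xys, weights=None):
    """RMS spot diameter: twice the square root of the weighted mean of
    the squared distances between each ray's image-plane intersection
    and the chief ray's intersection."""
    if weights is None:
        weights = [1.0] * len(ray_xys)
    cx, cy = chief_xy
    mean_sq = sum(w * ((x - cx) ** 2 + (y - cy) ** 2)
                  for w, (x, y) in zip(weights, ray_xys)) / sum(weights)
    return 2.0 * math.sqrt(mean_sq)

# Placeholder image-plane intersections in mm (three rays only;
# in practice 40 to 60 rays per wavelength are traced)
rays = [(0.001, 0.000), (-0.002, 0.001), (0.000, -0.001)]
diameter_um = 1000.0 * rms_spot_diameter((0.0, 0.0), rays)
print(round(diameter_um, 1))  # spot diameter in micrometers
```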
While there are many diagnostics used by optical system designers to
evaluate specific types of aberrations throughout the design process, a graph of the
modulation transfer function (MTF) is often used to summarize the overall imaging
performance of the system. The MTF provides a comprehensive view of the
imaging performance because it includes the effects of all system aberrations as well
as diffraction. It is for this reason that a set of specific values of the MTF at various
spatial frequencies is typically used as one of the main performance criteria of
image-forming optical systems.
For an imaging system, the ideal scenario is that each infinitesimal point on
the object is imaged to a corresponding infinitesimal point on the image plane.
Diffraction, due to the wave nature of light and the finite aperture of the lens,
ensures that there will always be a finite spot diameter for the image point, even for a
system that is perfectly corrected with respect to geometrical ray aberrations (i.e., a
diffraction-limited lens). The image of a point object represents the impulse
response of the system and is called the point spread function [45]. In general, the
point spread function varies with wavelength and field angle. The image can be
described, therefore, as a convolution of the object with the point spread function as
shown in Equation (4-2) (after projection to account for the system magnification).
The frequency domain counterpart of the point spread function is the optical transfer
function (OTF), which is the Fourier transform of the point spread function. The
MTF is the modulus, or amplitude, of the optical transfer function. In a ray tracing
program, the diffraction-based MTF can be computed by first tracing a set of rays
from a given object point to the exit pupil of the system. The amplitude and phase
distribution of the optical field at the exit pupil can be computed by keeping track of
the optical path lengths and angles of incidence of this set of rays. Diffraction
effects are included because both the amplitudes and phases of the rays are
accounted for in this procedure. As discussed in Goodman’s text on Fourier Optics
[45], there is a Fourier transform relationship between the complex field at the exit
pupil and the OTF of the optical system. Thus, this complex field at the exit pupil
can be transformed and integrated over wavelength to determine the polychromatic
MTF as a function of spatial frequency. In Code V®, the spatial frequencies are
measured in units of line-pairs/mm at the image surface. These can be converted to
units of cycles/degree by computing the angle subtended by a given number of line
pairs (cycles) at the image plane from the on-axis point at the exit pupil. The effects
of relative illumination, including vignetting and falloff due to oblique angles of
incidence at the image plane, are included, while variations in transmission and the
angular sensitivity of coatings are not included.
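The pupil-to-MTF chain described above (complex exit-pupil field → point spread function → OTF → MTF) can be sketched numerically for a single wavelength. This is a simplified monochromatic illustration with an idealized, unaberrated circular pupil, not the Code V® procedure itself; an aberrated design would enter through a phase term on the pupil field:

```python
import numpy as np

def mtf_from_pupil(pupil):
    """Monochromatic diffraction MTF from a sampled complex pupil field:
    PSF = |FT(pupil)|^2, OTF = FT(PSF), MTF = |OTF| normalized to unity
    at zero spatial frequency."""
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    mtf = np.abs(np.fft.fft2(psf))
    return mtf / mtf[0, 0]

# Unaberrated circular pupil on a zero-padded grid; aberrations would
# be modeled as pupil * exp(1j * phase_error)
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = ((x ** 2 + y ** 2) <= (n // 8) ** 2).astype(complex)
mtf = mtf_from_pupil(pupil)
print(mtf[0, 0])   # 1.0 at zero frequency, falling to 0 at the cutoff
```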
In simpler terms, the MTF indicates how well an optical system is able to
image and resolve the various spatial frequencies of objects. For example, consider a
periodic object comprising a sinusoidally varying intensity pattern. When this object
is imaged through an optical system, the effects of diffraction, aberrations, and
manufacturing and alignment errors will degrade the image. The brightest areas of
the image will be less bright, and the darkest areas less dark, than in the original
object. In other words, the MTF is a measure of the modulation in the image,
relative to the modulation in the object, at various spatial frequencies. This can be
expressed as [46],

Modulation = (I_max − I_min) / (I_max + I_min), and MTF = (modulation in image) / (modulation in object),   (4-3)

in which I_max and I_min represent the maximum and minimum intensity levels of the
modulation in the object and image. The maximum MTF is unity, and occurs at zero
spatial frequency. An MTF of zero implies a complete loss of contrast, such that a
black-to-white sinusoidal modulation at a given spatial frequency in the object would
appear completely gray at the image plane. The cutoff frequency is defined as the
first point at which the MTF goes to zero, and is equal to 1/(λ ⋅ f-number), or D/(λ f), for a diffraction-limited lens.
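Equation (4-3) and the diffraction-limited cutoff reduce to a few lines; as a hedged example (the 559 nm wavelength matches the photopic-peak weighting used elsewhere in this chapter, and the f-numbers are the IOC design values):

```python
def modulation(i_max, i_min):
    """Modulation (contrast) of a sinusoidal pattern, per Equation (4-3)."""
    return (i_max - i_min) / (i_max + i_min)

def diffraction_cutoff_lp_mm(wavelength_nm, f_number):
    """Incoherent diffraction cutoff, 1/(lambda * f-number), in lp/mm."""
    return 1.0 / (wavelength_nm * 1e-6 * f_number)

print(modulation(1.0, 0.0))                        # 1.0: full contrast
print(round(diffraction_cutoff_lp_mm(559, 1.0)))   # ~1789 lp/mm at f/1
print(round(diffraction_cutoff_lp_mm(559, 2.8)))   # ~639 lp/mm at f/2.8
```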
A set of typical MTF curves is shown in Figure 4-11 for an f/3.5 triplet lens
designed for use in a conventional digital camera (modified from a triplet lens design
in the Code V® lens database). The figure shows the diffraction-limited MTF for
on-axis light and the MTF for light entering the system at ±20°. A series of
horizontal bar patterns varying from low to high fundamental spatial frequency, along with intensity modulation graphs showing the corresponding modulation patterns in both objects and images, is depicted below the axis to illustrate the concept.

[Figure 4-11: MTF vs. spatial frequency (line-pairs/mm) for the diffraction limit, the on-axis (0°) field, and the tangential and radial MTF at the ±20° field, with object and image intensity bar patterns shown below.]

Figure 4-11 Illustration of typical sinusoidal MTF curves (curves shown are for an f/3.5 triplet lens).
As expected, the MTF is a function of both field angle (the angle of the chief
ray for a given object point, relative to the optical axis) and spatial frequency. It is
also a function of object orientation, as indicated by the separate solid and dashed
curves for a 20° field angle. In other words, the image of a bar pattern at a particular
fundamental spatial frequency is usually degraded by different amounts if the bar
pattern is oriented horizontally rather than vertically (for off-axis field angles), as
shown in Figure 4-11. The reason for this can be seen in Figure 4-12, in which two
fans of rays from an off-axis object point are directed toward a lens. The rays that lie in the yz plane, which contains the optical axis and the k-vectors of each ray, are termed tangential, or meridional, rays and lie in a plane by the same name. The rays
that lie in the orthogonal xz plane are called sagittal, or radial, rays. The chief ray,
which passes through the center of both the entrance pupil and aperture stop of the
system, is the only ray that is common to both the tangential and sagittal planes. As
seen in the figure, when the object point is off-axis, these two sets of rays “see” a
different lens (even though the lens itself possesses axial symmetry) and thus
experience a different amount of bending through the lens surfaces. The marginal
rays in the tangential plane (the most extreme rays) intersect the lens at different
heights and angles than the marginal rays in the sagittal ray fan. The effect is to
cause the tangential fan of rays to come to a focus closer to the lens (for a positive
lens) than the sagittal ray fan, causing the image of a point object to be blurred into a
horizontal line image at the tangential focus, and into a vertical line image at the
sagittal focus, forming either an elliptical or circular blur spot at points in between.
This particular aberration is termed astigmatism (and is different than visual
astigmatism, which results from physical asymmetries in the surface curvatures of
the corneal and crystalline lenses). Astigmatism is linearly proportional to the
aperture size and to the square of the field angle. Astigmatism can be partially
controlled by placing a stop in the system that is appropriately shifted away from the
lens to limit the footprint of the rays on the lens surface, by varying the lens shape,
and by incorporating additional lens elements. In well-corrected systems, the
Tangential (Meridional) Rays
Sagittal (Radial) Rays
Optical Axis
Off-Axis Object Point
Figure 4-12 Illustration of tangential (or meriodonal) rays and sagittal (or radial)
rays emanating from an off-axis object point on the y-axis.
astigmatism is typically corrected such that the tangential and sagittal foci intersect
at two field angles (often on-axis and at 70% of the maximum field) on a nearly flat
image plane, such that the astigmatism is finite but well controlled between these
two field angles, and rapidly diverges for fields beyond the 70% node [46, 47]. In an
MTF plot, astigmatism results in two distinct curves for each field angle, which are
represented by solid (tangential MTF) and dashed lines (radial, or sagittal, MTF) in
both Code V® and this thesis. In Code V®, the different field angles are
represented by color in the MTF plot, whereas the colors in spot diagrams
correspond to rays traced at red, green, and blue wavelengths.
4.5.2 Imaging performance of IOC lens systems as a function of f-number
Plots of the MTF curves and spot diagrams for the five IOC lens systems
described previously, when focused at a 20 cm object distance, are shown in Figure
4-13. This data may be evaluated using the performance metrics derived in Section
4.3. For these designs to satisfy the imaging requirements of a retinal
microstimulator array with approximately 1000 electrodes, the MTF should be ≥ 0.5
at 25 lp/mm and the RMS spot diameter should be ≤ 30 µm over a 20° field of view
(±10°). It is further desirable to maintain this, or slightly degraded, performance
over twice this field of view, 40° (±20°), to provision for the possibility of a future
retinal prosthesis system that requires information in the periphery.
Not surprisingly, the data in Figure 4-13 shows that the imaging performance
generally improves with increasing f-number. As seen from the ray diagrams of
[Figure 4-13 panels: diffraction MTF (modulation vs. spatial frequency, 5 to 50 cycles/mm; wavelengths 608.9, 559.0, and 513.9 nm weighted 1:2:1) and spot diagrams.
f/0.7 RMS spot diameters: 15 μm at 0°, 15 μm at ±5°, 25 μm at ±10°, 67 μm at ±15°, 152 μm at ±20°.
f/1.0 RMS spot diameters: 15 μm at 0°, 17 μm at ±5°, 21 μm at ±10°, 28 μm at ±15°, 44 μm at ±20°.]
Figure 4-13 Polychromatic MTF curves and spot diagrams of five IOC systems
optimized to operate at f-numbers ranging from f/0.7 to f/2.8 (focused on an object
at 20 cm). Colored lines in the MTF plots represent different semi-field angles (red:
0°, green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the tangential
rays, and dashed lines for radial (or sagittal) rays. The colors in the spot diagrams
correspond to red, green, and blue wavelengths. (Continued on next page.)
[Figure 4-13 panels, continued:
f/1.4 RMS spot diameters: 8 μm at 0°, 11 μm at ±5°, 17 μm at ±10°, 28 μm at ±15°, 49 μm at ±20°.
f/2.0 RMS spot diameters: 11 μm at 0°, 11 μm at ±5°, 13 μm at ±10°, 18 μm at ±15°, 27 μm at ±20°.
f/2.8 RMS spot diameters: 3 μm at 0°, 4 μm at ±5°, 6 μm at ±10°, 8 μm at ±15°, 21 μm at ±20°.]
Figure 4-13, Continued.
these five lens systems in Figure 4-8, the diameter of the entrance aperture is smaller
at higher f-numbers. The bundle of rays passing through the system at each field
angle is therefore more paraxial, which results in lower aberrations (at a cost in
image brightness, as discussed previously). At f/2.0 and f/2.8, the spot diameters
remain below 30 μm across the entire ±20° field of view. At f/1.0 and f/1.4, this
performance is maintained over ±15°, and at f/0.7 the performance rapidly degrades
and is unacceptable outside ±10° (which may in fact still be sufficient for a system
intended to only match the field of view of current retinal implants). A more
detailed analysis of the specific types of aberrations underlying the imaging
performance of IOC lens designs similar to these will be covered in the next chapter.
To evaluate the MTF curves shown in Figure 4-13, the values at 25 lp/mm
were extracted and plotted for each IOC system in Figure 4-14. An MTF ≥ 0.5
produces a good, if not excellent, image at that spatial frequency (i.e., with easily
discernable features). It is clear from this plot that a general feature of this single-
lens system is that the imaging quality rapidly degrades from ±15° to ±20°.
Nevertheless, at f/2.0 and f/2.8, the imaging is completely acceptable at ±20°, and at
f/1.0 and f/1.4, the MTF is between 0.1 and 0.4 at ±20°, which is likely sufficient for
imaging in the periphery where the acceptable resolution is lower (just as it is in the
human eye). It is also notable that the MTF remains above the specified value of 0.5
out to ±15° for all cases at or above f/1.0, which is larger than the ±10° field of view
of the current retinal prosthesis, indicating there is some performance margin in these
designs.
[Figure 4-14: radial and tangential MTF at 25 line-pairs/mm vs. f-number (f/0.7 to f/2.8) at semi-field angles of 0°, ±5°, ±10°, ±15°, and ±20°.]

Figure 4-14 Radial and tangential MTF at 25 lp/mm as a function of f-number and field angle (data points are from the MTF curves shown in Figure 4-13).
It should be noted that although these imaging results provide accurate
information about the overall imaging performance of the IOC in the window-plus-
polymer-lens configuration analyzed here, the details are specific to these particular
lens designs at each f-number. This is because the results obtained after optimizing a
given lens system are specific to the starting point, the construction of the error
function, and the ordering of the optimizations performed along the way (i.e., the
order in which lens parameters are frozen or allowed to vary with subsequent
optimizations as the design is being refined). Any of these lens designs could be re-
optimized to fine-tune the balance of the performance across the field of view in
different ways, though large-scale changes would not be achievable without varying
the overall constraints. In the next chapter, we will examine how the imaging
changes if the f/1 camera is instead optimized over half this field of view, 20° (±10°),
to simply match the stimulator array with a reduced emphasis on the peripheral field
resolution.
Considering all of the data shown above for the five IOC designs with a
polymer aspherical lens placed in air behind a glass window, the f/1 case appears to
represent the best tradeoff among resolution, brightness, and field of view given the
unique goals of the intraocular camera as part of the retinal prosthesis system.
4.6 Summary
In summary, the unique goals and requirements for an intraocular camera for
current and future retinal prosthesis systems were discussed. An accurate schematic
eye model was implemented in software to aid with the design and analysis of
intraocular camera lens systems. The tradeoffs between image brightness and
aberrations were evaluated for realistic IOC lens designs at different f-numbers. It
was determined that the incorporation of a fused silica window at the front of the
optical system provides several advantages including refraction at the air-lens
interface. This led to a significant reduction in mass by incorporating a polymer
lens, placed in air behind the window, rather than using a high-index, high-density
glass lens in contact with the aqueous humor. The use of polymer lens materials also
has certain manufacturing, cost, and prototyping advantages since they can be
diamond-turned during early phases of prototype evaluation (unlike glass lenses) and
injection-molded later to produce large quantities at low cost once a design is
finalized. This windowed configuration further allows for an aperture stop to be
placed on the window, separate from the lens, which provides an extra degree of
freedom for correcting off-axis aberrations such as astigmatism and coma, as
discussed further in the following chapter. Ultimately, the results shown in this
chapter provide insight into the tradeoffs among f-number, illumination, and
aberrations across different fields of view for an IOC lens system designed to meet
the unique requirements of retinal prostheses. Lessons learned from optimizing
these IOC systems indicated that aberrations are well controlled (relative to the target
resolution of future microstimulator arrays) over a 30° (±15°) field of view for
f-numbers ranging from f/1.0 to f/2.8, with slightly degraded performance at ±20°.
This is twice the ±10° field of view covered by the retinal microstimulator array and
demonstrates that the IOC can be designed to provide for wider fields of view as well
as the possibility of performing image processing functions on the peripheral visual
field. These design experiments therefore enable us to make well-informed
decisions about which parameters to target for IOC lens designs intended to work
with specific retinal prosthesis arrays and visual processing systems.
Chapter 4 References
[1] S. Kuiper and B. H. W. Hendriks, “Variable-focus liquid lens for miniature
cameras,” Applied Physics Letters, vol. 85, no. 7, pp. 1128-1130, 2004.
[2] Marshall Electronics, “Miniature lens products for CCD/CMOS cameras,”
Available online: http://www.mars-cam.com/optical.html, as of October, 2007.
[3] Cellphone Lens Products from Largan Precision Co., Ltd., Available online:
http://www.largan.com.tw/products_cellphone_en.htm, as of October, 2007.
[4] MicroJet CMOS Camera Modules, Available online:
http://www.microjet.com.tw/english/product_cmos.htm, as of October, 2007.
[5] STMicroelectronics, “Data Brief: VS6525 VGA single-chip camera module,”
January 2007. Available online: http://www.st.com/stonline/products/
literature/bd/12918/vs6525.pdf, as of October, 2007.
[6] Several additional examples are available by searching for “mobile phone
camera modules” online at http://www.globalsources.com/.
[7] A. Gullstrand, “The optical system of the eye,” in Appendices to Part 1, Von
Helmholtz Handbook of Physiological Optics, 3rd Ed. Hamburg: Voss, 1909,
pp. 350-358.
[8] H.-L. Liou and N. A. Brennan, “Anatomically accurate, finite model eye for
optical modeling,” Journal of the Optical Society of America A, vol. 14, no. 8,
pp. 1684-1695, 1997.
[9] I. Escudero-Sanz and R. Navarro, “Off-axis aberrations of a wide-angle
schematic eye model,” Journal of the Optical Society of America A, vol. 16, no.
8, pp. 1881-1891, 1999.
[10] J. E. Greivenkamp, J. Schwiegerling, J. M. Miller, and M. D. Mellinger,
“Visual acuity modeling using optical raytracing of schematic eyes,” American
Journal of Ophthalmology, vol. 120, no. 2, pp. 227-240, 1995.
[11] R. Navarro, J. Santamaria, and J. Bescos, “Accommodation-dependent model
of the human eye with aspherics,” Journal of the Optical Society of America A,
vol. 2, pp. 1273-1281, 1985.
[12] Y.-L. Chen, B. Tan, and W. L. Lewis, “Simulation of eccentric photorefraction
images,” Optics Express, vol. 11, no. 14, pp. 1628-1642, 2006.
[13] Y.-L. Chen, B. Tan, and W. L. Lewis, “Simulation of eccentric photorefraction
images,” Optics Express, vol. 11, no. 14, pp. 1628-1642, 2006.
[14] Y.-L. Chen, B. Tan, and W. L. Lewis, “Simulation of eccentric photorefraction
images,” Optics Express, vol. 11, no. 14, pp. 1628-1642, 2006.
[15] K. Cha, K. Horch, and R. A. Normann, “Simulation of a phosphene-based
visual field: visual acuity in a pixelized vision system,” Annals of Biomedical
Engineering, vol. 20, no. 4, pp. 439-449, 1992.
[16] K. Cha, K. Horch, and R. A. Normann, “Reading speed with a pixelized vision
system,” Journal of the Optical Society of America A, vol. 9, no. 5, pp. 673-
677, 1992.
[17] K. Cha, K. Horch, and R. A. Normann, “Mobility performance with a pixelized
vision system,” Vision Research, vol. 32, no. 7, pp. 1367-1372, 1992.
[18] M. S. Humayun, “Intraocular retinal prosthesis,” Transactions of the American
Ophthalmological Society, vol. 99, pp. 271-300, 2001.
[19] N. R. B. Stiles, M. C. Hauer, P. Lee, P. Nasiatka, J.-C. Lue, J. D. Weiland, M.
S. Humayun, and A. R. Tanguay, Jr., “Intraocular camera for retinal prostheses:
Design constraints based on visual psychophysics,” Frontiers in Optics, The
Annual Meeting of the Optical Society of America, San Jose, CA, 2007, poster
JWC46.
[20] International Congress of Ophthalmology, “Visual standards: Aspects and
ranges of vision loss with emphasis on population surveys,” report prepared for
the 29th International Congress of Ophthalmology, Sydney, Australia, 2002.
Available online: http://www.icoph.org/pdf/visualstandardsreport.pdf.
[21] United States Social Security Administration, “Title XVI - Supplemental
security income for the aged, blind, and disabled,” section 1614: “Meaning of
terms: aged, blind, or disabled individual,” in Compilation of the Social
Security Laws Volume 1, Act 1614, as amended through Jan. 2007.
[22] P. B. Catrysse and B. A. Wandell, “Roadmap for CMOS image sensors: Moore
meets Planck and Sommerfeld,” in Digital Photography, Proceedings of the
SPIE, vol. 5678, San Jose, CA, 2005, pp. 1-13.
[23] A. Davies and P. Fennessy, “Image capture,” in Digital Imaging for
Photographers, 4th Ed. Focal Press, 2001, Chapter 3, pp. 28-32.
[24] B. W. Keelan, “Attributes having multiple perceptual facets,” in Handbook of
Image Quality: Characterization and Prediction, 1st Ed., New York: Marcel
Dekker, 2002, Chapter 18, pp. 257-258.
[25] W. K. Pratt, “Image sampling and reconstruction,” in Digital Image
Processing: PIKS Scientific Inside, 4th Ed., Hoboken: Wiley-Interscience,
2007, Chapter 4, pp. 91-126.
[26] “FDA Summary of Safety and Effectiveness Data for the Implantable
Miniature Telescope (IMT),” VisionCare Ophthalmic Technologies, Inc., PMA
P050034, 2006.
[27] “Pushing the polymer envelope,” Syntec Technologies white paper, 2005.
Available online: www.syntectechnologies.com.
[28] W. S. Beich, “Injection molded polymer optics in the 21st century,” in Tribute
to Warren Smith: A Legacy in Lens Design and Optical Engineering,
Proceedings of the SPIE, vol. 5865, San Diego, CA, 2005, pp. 157-168.
[29] Y. Konishi, T. Sawaguchi, K. Kubomura, and K. Minami, “High performance
cyclo olefin polymer ZEONEX,” in Advancements in Polymer Optics Design,
Fabrication, and Materials, Proceedings of the SPIE, vol. 5872, San Diego,
CA, 2005, pp. 3-8.
[30] K. Carlson, M. Chidley, K. B. Sung, M. Descour, A. Gillenwater, M. Follen,
and R. Richards-Kortum, “In vivo fiber-optic confocal reflectance microscope
with an injection-molded plastic miniature objective lens,” Applied Optics, vol.
44, pp. 1792-1797, 2005.
[31] F. v. Hulst, P. Geelan, A. Gebhardt, and R. Steinkopf, “Diamond tools for
producing micro-optic elements,” Industrial Diamond Review, pp. 58-62,
March, 2005.
[32] M. Riedl, “On point,” Optical Engineering, pp. 26-29, July, 2004.
[33] J. T. McCann, “Applications of diamond turned null reflectors for generalized
aspheric metrology,” SPIE Optical Testing and Metrology III: Recent Advances
in Industrial Optical Inspection, vol. 1332, pp. 843-849, 1990.
[34] J. C. Wyant, “Precision Optical Testing,” Science, vol. 206, pp. 168-172, 1979.
[35] M. V. Klein and T. E. Furtak, Optics, 2nd Ed., New York: Wiley, 1986.
[36] J. W. Goodman, Introduction to Fourier Optics, 3rd Ed., Greenwood Village:
Roberts & Company Publishers, 2004.
[37] Covi Technologies, Inc., “EVQ-1000 Application Note: Lens applications,”
2004. Available online: http://www.covitechnologies.com/user_files/pdf/
lens_applications.pdf, as of November, 2006.
[38] J. Pollack, “Displays of a different stripe,” IEEE Spectrum Magazine, pp. 41-
44, August, 2006.
[39] W. J. Smith, “Radiometry and Photometry,” in Modern Optical Engineering,
3rd Ed., New York: McGraw-Hill, 2000, Chapter 8.
[40] T. Lule, S. Benthien, H. Keller, F. Mutze, P. Rieve, K. Seibel, M. Sommer, and
M. Bohm, “Sensitivity of CMOS based imagers and scaling perspectives,”
IEEE Transactions on Electron Devices, vol. 47, no. 11, pp. 2110-2122, 2000.
[41] “Photography-Electronic still-picture cameras-Determination of ISO speed,”
International Organization for Standardization (ISO), 12232, 1998.
[42] A. J. Blanksby and M. J. Loinaz, “Performance analysis of a color CMOS
photogate image sensor,” IEEE Transactions on Electron Devices, vol. 47, no.
1, pp. 55-64, 2000.
[43] Omnivision Technology, “Omnivision OV6650/OV6151 CIF CMOS Sensor
Datasheet,” 2007. Available online: www.ovt.com.
[44] “Geometrical and diffraction analysis,” in Code V® 9.70 Reference Manual,
Optical Research Associates, Pasadena, CA, 2006, Chapter 19.
[45] J. W. Goodman, “Frequency analysis of optical imaging systems,” in
Introduction to Fourier Optics, 3rd Ed., Greenwood Village: Roberts &
Company Publishers, 2004, Chapter 6, pp. 129-131.
[46] R. E. Fischer and B. Tadic-Galeb, Optical System Design, 1st Ed., New York:
McGraw-Hill, 2000.
[47] W. J. Smith, Modern Optical Engineering, 3rd Ed., New York: McGraw-Hill,
2000.
Chapter 5
REFRACTIVE LENS SYSTEMS
5.1 Introduction
In this chapter, we examine in detail the imaging characteristics of several
potential refractive lens designs for the intraocular camera using the imaging goals,
constraints, and camera configuration developed in Chapter 4. The target f-number
for the IOC optical systems evaluated in this chapter is f/1, and each is constrained
to a total optical system length of 3.5 mm. A single aspherical lens design,
optimized over a 40° (±20°) field of view, is selected for fabrication and use in the
next generation IOC prototype. The optimization process used to develop this
particular lens design is discussed to provide insights into the design process, and to
illustrate the effects of making different choices when constructing a system error
function. In addition to assessing the overall imaging performance of a given lens
design using MTF curves and spot diagrams, the underlying ray aberrations are
examined using ray intercept graphs, astigmatic field curves, and distortion plots. To
aid in this discussion, a review of the seven basic classes of optical ray aberrations is
presented along with an explanation of the common diagnostic plots used to evaluate
them. The performance of this “wide-field” IOC design, which is optimized to cover
twice the field of view of current retinal prostheses (to accommodate retinal
prostheses with wider fields of view, as well as the potential use of peripheral field
image processing algorithms), is compared with a “narrow-field” design that is
optimized to cover only 20° (±10°), directly corresponding to the legal blindness
limit (with respect to visual field angle).
As the IOC is a fixed focus camera to be implanted for long term operation in
the eyes of different individuals, its imaging performance is subject to variations in
surgical placement and differences in the curvature of each individual’s corneal lens.
As a consequence, the sensitivity of the camera’s performance to shifts in placement
relative to the cornea and to individual differences in corneal curvature is
investigated. Furthermore, knowing that the end user is being provided with limited
prosthetic vision, it is not clear that the camera’s focus should simply be set at the
hyperfocal distance in order to provide the optimal depth of field. For this reason,
the depth of field as a function of focus distance is analyzed. A tolerancing study is
also presented to determine the robustness of the imaging performance to
manufacturing and alignment errors that are typical for small diameter polymer
optics and silica windows. Finally, we explore the possibility of incorporating
multiple refractive lenses within the confined optical space of the intraocular camera
to gain improved imaging performance at the expense of added cost, complexity, and
alignment steps.
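As background for the depth-of-field question raised above, the hyperfocal distance can be estimated from the standard formula H = f²/(N·c) + f, in which N is the f-number and c is the acceptable blur-circle diameter. The sketch below is illustrative only: it assumes the nominal IOC values (f ≈ 2.1 mm at f/0.96) and a 30 μm blur tolerance, which are assumptions for this example rather than results of the depth-of-field analysis presented later in this chapter.

```python
def hyperfocal_distance_mm(focal_length_mm, f_number, blur_circle_mm):
    """Standard hyperfocal distance: H = f^2 / (N * c) + f."""
    return focal_length_mm ** 2 / (f_number * blur_circle_mm) + focal_length_mm

# Nominal IOC values (assumed for illustration): f = 2.1 mm, f/0.96,
# 30-um acceptable blur-circle diameter.
H = hyperfocal_distance_mm(2.1, 0.96, 0.030)
print(round(H, 1))  # ~155 mm; focusing near H keeps roughly H/2 to infinity acceptably sharp
```

Note that doubling the acceptable blur diameter halves the hyperfocal distance, which is why the choice of focus setting interacts with the limited resolution of prosthetic vision.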
5.2 Single-element, aspherical refractive lens design at f/1 over a 40° field of view
Following from the analyses and simulation results presented in Chapter 4, a
more detailed optimization of an f/1 optical system was performed. We determined
that this would be the target lens configuration for fabrication and use in the next
generation intraocular camera prototype, to be integrated and tested with small form
factor, low power image sensor arrays. The resulting lens system is shown in Figure
5-1, with a set of reference rays traced through the system at five field angles from 0°
to 20° (40° FOV). This lens design is similar, but not identical, to the f/1 design
shown in Chapter 4. After completing the analyses in Chapter 4, the f/1 IOC design
was revisited and further efforts were made to fine tune the imaging performance to
[Figure 5-1 diagram: cornea (0.5 mm) and aqueous humor (2.00 mm), followed by the window and aspherical lens in air; 2.00 mm aperture; total optical length 3.51 mm; rays traced at field angles from 0° to 20°; maximum image height 0.84 mm]
Figure 5-1 Custom designed IOC lens system with a Zeonex E48R polymer
aspherical lens located in air, behind a fused silica window. A 2-mm diameter
aperture stop is located on the rear surface of the window. The system operates at
f/0.96 with a focal length of 2.1 mm and is optimized over a ±20° field of view. The
IOC is focused at a 20-cm object distance. The lens is 2.22 mm thick and weighs
9 mg (by calculation) with a 2.5 mm edge diameter. The image distance is 0.94 mm,
measured from the rear vertex of the lens.
increase the MTF at 25 lp/mm for the radial field at ±20°. The details of this
optimization process are discussed later.
A list of the surface parameters for the lens system, including the cornea and
aqueous humor, is shown in Table 5-1. The aspherical profiles of the front and rear
surfaces of the polymer lens are specified by their radii of curvature, conic constants,
and higher-order aspheric coefficients. The formula defining the shape, or “sag,” of
each aspherical surface, with respect to the lens vertex, is [1]:
$$
z(r) = \frac{c r^{2}}{1 + \sqrt{1 - (1 + K)\,c^{2} r^{2}}} + A r^{4} + B r^{6} + C r^{8} + D r^{10} + E r^{12} + \cdots, \qquad (5\text{-}1)
$$
where z is the longitudinal distance from the vertex of the lens to the lens surface, c is
the spherical curvature at the vertex equal to 1/R, in which R is the spherical radius of
curvature, K is the conic constant, r is the radial height coordinate on the lens, and A,
B, C, and the like are the higher order aspheric coefficients. In Table 5-1, the
thickness of a given surface indicates the axial distance between that surface and the
next.
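As a concrete illustration of Equation (5-1), the sag of the front lens surface can be evaluated directly from the coefficients listed in Table 5-1 (a minimal sketch; the higher-order coefficients C, D, and E are taken as zero since they are not listed for this design):

```python
import math

def asphere_sag(r, R, K, A=0.0, B=0.0, C=0.0, D=0.0, E=0.0):
    """Sag z(r) of an aspherical surface per Equation (5-1); R and r in mm."""
    c = 1.0 / R  # vertex curvature c = 1/R
    conic_term = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + K) * c**2 * r**2))
    return conic_term + A * r**4 + B * r**6 + C * r**8 + D * r**10 + E * r**12

# Front surface of the Zeonex E48R lens (coefficients from Table 5-1)
print(asphere_sag(1.0, R=1.7620, K=-1.2562, A=0.0016, B=-0.0002))  # ~0.28 mm at r = 1 mm
```

At the vertex (r = 0) the sag is zero by construction, and the surface departs increasingly from a sphere as r approaches the clear-aperture edge.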
The optical system length of the IOC from the front of the window to the
image plane is 3.513 mm, very close to the desired value of 3.5 mm. Recall that this
constraint was derived from the requirement for the total package length to be
≤ 4.5 mm, in order to leave sufficient margin to house the sensor array and
packaging walls. The maximum clear aperture diameter on the front surface of the
lens for rays entering at angles up to ±20° is 2.341 mm, and is 2.255 mm on the rear
Surface            Type     Radius (mm)                  Thickness (mm)   Material        Refractive Index (559 nm)
Anterior Cornea    Conic    R = 7.77, K = −0.18          0.500            Cornea          1.376
Posterior Cornea   Conic    R = 6.40, K = −0.60          2.000            Aqueous humor   1.336
Window Front       Plane    Infinity                     0.250            Fused silica    1.460
Window Rear        Plane    Infinity                     0.100            Air             1.000
Lens Front         Asphere  R = 1.7620, K = −1.2562,     2.224            Zeonex E48R     1.533
                            A = 0.0016, B = −0.0002
Lens Rear          Asphere  R = −1.5086, K = −5.0290,    0.939            Air             1.000
                            A = 0.0066, B = 0.0039
Image              Plane    Infinity                     −                −               −
Table 5-1 Surface parameters for the IOC system shown in Figure 5-1 when
focused at a 20 cm object distance. “Thickness” indicates the axial distance from the
vertex of the current surface to the vertex of the next surface. “Material” refers to
the optical medium following the surface. The quantities R, K, A, and B are defined
in the aspherical surface sag formula shown in Equation (5-1).
surface of the lens. This implies that the physical edge diameter of the lens can be as
small as 2.4 to 2.5 mm. This will easily fit within one of the housing concepts for the IOC, which has a cylindrical cross section 3.18 mm in diameter, though the lens may have to be slightly smaller to fit within a 2.5 mm × 2.5 mm × 4.5 mm housing.
diameter of 2.5 mm, the fused silica window has a calculated mass of 3 mg and the
polymer lens has a calculated mass of 9 mg for a combined total of 12 mg for the
optical system. If the diameters are increased to 2.75 mm to fit within the inner
walls of the larger packaging concept, then the total weight increases to 14 mg.
Thus, this IOC lens design is extremely lightweight and leaves considerable margin
for the masses of other key IOC components.
The first order, or paraxial, properties of the lens system are listed in Table
5-2. The f-number is controlled with a physical aperture stop that is 2 mm in
diameter and is located on the rear surface of the fused silica window. The entrance
pupil is the image of the aperture stop toward object space [2], in this case, the
physical aperture stop as imaged through the cornea. The diameter of the entrance
pupil (rather than the physical stop) is used to compute the f-number of the system.
As shown in the table, the entrance pupil for this system is located 2.232 mm behind
the posterior cornea and is 2.189 mm in diameter.
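The statement above can be checked numerically: the system f-number follows from the effective focal length divided by the entrance pupil diameter, both taken from Table 5-2 (a sketch of the arithmetic, not a substitute for the full ray-trace computation):

```python
effective_focal_length = 2.106   # mm (Table 5-2)
entrance_pupil_diameter = 2.189  # mm (Table 5-2)

# f-number is defined from the entrance pupil, not the physical stop
f_number = effective_focal_length / entrance_pupil_diameter
print(round(f_number, 2))  # 0.96
```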
First Order Parameter Value
Effective focal length 2.106 mm
System f-number 0.96
Entrance pupil location (Cornea to EnP) 2.232 mm
Entrance pupil diameter 2.189 mm
Exit pupil location (ExP to Image) 3.443 mm
Exit pupil diameter 4.560 mm
Front nodal distance (Cornea to NP1) 3.327 mm
Rear nodal distance (NP2 to Image) 2.100 mm
Paraxial image height (at ±20°) 0.770 mm
Actual image height (at ±20°) 0.840 mm
Image distance (lens rear to image) 0.939 mm
Table 5-2 First order optical parameters for the IOC system shown in Figure 5-1.
The effective focal length of the lens system (when placed behind the cornea,
as shown in Figure 5-1) is 2.106 mm, resulting in an f-number of f/0.96. The rear
nodal distance of the system is 2.100 mm, and is approximately equal to the effective
focal length of 2.106 mm. As discussed in Chapter 4, this focal length implies a
target optical resolution of 25 lp/mm to meet the requirements of future 32 × 32
microstimulator arrays. While a resolution of 25 lp/mm analytically corresponds to a
spot diameter of 20 μm (one line pair spans 2 × 0.020 mm = 0.040 mm, corresponding to 25 lp/mm), it was
found that lenses with RMS spot diameters up to approximately 30 μm have MTF
values ≥ 0.5 at 25 lp/mm. Thus, spot diameters less than 30 μm can be
accommodated. The paraxial image height for a 20° entrance angle is 0.770 mm.
The image height indicated in the ray diagram shown in Figure 5-1 is 0.840 mm, due to
aberrations and pincushion distortion, which will be discussed later. This is the
height of the most extreme ray at the image plane, which, in this case, corresponds to
the lower marginal ray for the 20° field angle (the ray entering at 20° that just passes
through the lower edge of the aperture stop). This means that the image of a ±20°
field of view will fill a 1.68 mm × 1.68 mm area on the sensor. If the sensor is
smaller than this, it will act as a field stop, and the sensor size itself will determine
the system’s field of view. Note that the height of the chief ray at the image plane is
more commonly used to define the real image height, particularly for quantifying
distortion. In this case, the height of the most extreme ray at 0.840 mm was used in
order to indicate the maximum height of the image, including all of the rays passing
through the aperture over a ±20° field of view.
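As a quick check on the numbers above, the paraxial image height follows from the rear nodal (effective focal) distance via h′ = f·tan θ, and the maximum ray height sets the sensor footprint. This is a sketch using the values quoted in the text; small rounding differences from the exact paraxial trace are expected:

```python
import math

focal_length = 2.106  # mm, effective focal length (Table 5-2)
paraxial_height = focal_length * math.tan(math.radians(20.0))
print(round(paraxial_height, 3))  # ~0.767 mm, vs. 0.770 mm from the exact paraxial trace

max_ray_height = 0.840  # mm, most extreme ray at +/-20 degrees (from the text)
sensor_side = 2.0 * max_ray_height
print(sensor_side)  # 1.68: the image fills a 1.68 mm x 1.68 mm area on the sensor
```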
The clearance from the rear vertex of the lens to the image plane is
0.939 mm. It should be noted that several commercial imagers are fabricated with a
coverglass on the face of the sensor for protection and, often, with a
birefringent filter that optically pre-blurs the image to reduce aliasing effects from
undersampling by the sensor’s pixel array. The use of such a coverglass is presently
not included in the IOC designs because of the limited space available, though it may
be unavoidable for near term prototypes that use commercially available image
sensors.
5.2.1 Lens optimization and imaging performance
The entrance and exit pupils are key parameters for an optical system, and are
used extensively in ray tracing analyses. As mentioned above, the size and location
of the entrance pupil are determined by imaging the aperture into object space (i.e.,
as viewed through the window, aqueous humor, and corneal lens preceding it).
Likewise, the exit pupil is determined by imaging the aperture into image space (i.e.,
as viewed through the lens following it) [2]. In ray tracing software, it is common to
trace sets of rays originating from a rectangular grid of points at the entrance
pupil to the exit pupil to support many analytical functions and to perform lens
design optimizations. It is important that this set of rays completely fills the entrance
pupil (and, likewise, the aperture stop) in order to achieve an accurate evaluation of
the imaging performance of the system. Since first order computations do not
account for the effects of vignetting, iterative calculations are performed to adjust the
precise entrance beam size at each field angle to ensure this condition [3].
Geometric wavefront aberrations may be calculated from the RMS optical path
differences (measured in waves) between the actual wavefront at the exit pupil and a
reference wavefront, which is represented by a converging spherical wave centered
on the ideal image point [4]. Diffraction based analyses, such as the MTF and point
spread function, are also computed by tracing a set of rays from the entrance to the
exit pupil and by then computing the appropriate discrete Fourier transforms to
determine the spatial frequency response of the system. Spot diagrams and both
longitudinal and transverse ray aberrations are computed by tracing the set of rays
through to the image plane, where geometric differences in the intersections of rays
from common field points can be evaluated.
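As a point of reference for the "diffraction limit" curves that appear in the MTF plots below, the incoherent diffraction-limited MTF of a circular pupil has a standard closed form. The formula itself is a textbook Fourier-optics result, not specific to Code V; the f/0.96 and 559 nm values plugged in below are taken from this design:

```python
import math

def diffraction_limited_mtf(nu_lpmm, wavelength_mm, f_number):
    """Incoherent diffraction-limited MTF of a circular pupil at spatial
    frequency nu (lp/mm): (2/pi) * (acos(s) - s*sqrt(1 - s^2)), s = nu/nu_c."""
    nu_cutoff = 1.0 / (wavelength_mm * f_number)  # cutoff frequency in lp/mm
    s = min(nu_lpmm / nu_cutoff, 1.0)             # normalized spatial frequency
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

# IOC design values: f/0.96, center wavelength 559 nm = 559e-6 mm
print(diffraction_limited_mtf(25.0, 559e-6, 0.96))  # ~0.98
```

The cutoff frequency here is roughly 1860 lp/mm, so at the 25 lp/mm target the diffraction limit is nearly unity and the achievable performance is dominated by geometrical aberrations, consistent with the plots.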
The imaging performance of the lens system shown in Figure 5-1 is
summarized in the MTF plot and polychromatic spot diagram shown in Figure 5-2.
[Figure 5-2 plots: diffraction MTF at f/0.96, 20-cm focus (wavelengths 513.9, 559.0, and 608.9 nm, weighted 1:2:1), with spot diagrams showing RMS spot diameters of 12 μm at 0°, 20 μm at ±5°, 29 μm at ±10°, 33 μm at ±15°, and 42 μm at ±20°]
Figure 5-2 MTF and polychromatic spot diagram with RMS spot diameters for
the lens system shown in Figure 5-1, focused at a 20 cm object distance. The
colored lines in the MTF plot represent different semi-field angles (red: 0°, green:
5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the tangential rays, and
dashed lines for the radial, or sagittal, rays. The colors in the spot diagrams
correspond to red, green, and blue wavelengths.
The IOC is focused and optimized at a 20 cm object distance over a 40° (±20°) field
of view. Given the very large depth of field of the intraocular camera, the focus
distance used during optimization has a negligible effect on the resulting lens design
and performance. Once optimized, the lens may be refocused for a different object
distance to obtain the desired depth of field, as discussed in detail in Section 5.4.
The values of the RMS spot diameters are shown in Figure 5-2, and the MTF values
for the radial and tangential ray fans at each field angle at 25 lp/mm are shown in
Table 5-3 below.
Field:   0°             ±5°            ±10°           ±15°           ±20°
         Rad.   Tan.    Rad.   Tan.    Rad.   Tan.    Rad.   Tan.    Rad.   Tan.
MTF:     0.808  0.808   0.716  0.518   0.633  0.347   0.538  0.433   0.159  0.440
Table 5-3 MTF values of the radial and tangential ray fans in Figure 5-2.
This lens system was initially optimized to minimize transverse ray
aberrations, meaning that the error function was constructed to minimize the spot
diameter at each wavelength and field angle. It was found that this error function
produced optimized lenses with a severe amount of astigmatism at the ±20° field
angle (i.e., widely disparate tangential and radial MTF curves, with the radial curve
in particular falling to zero near 25 lp/mm). Near the end of the optimization
process, the design was fine-tuned to reduce this astigmatism by using an error function based on achieving targeted tangential and radial MTF values for each field
angle. This is illustrated in the MTF plots in Figure 5-3, which were generated at
various stages during the optimization process (and are shown for an infinite object
distance in this case). The MTF curves in Figure 5-3(a) and (c) show the
performance after optimizing the system to minimize transverse ray aberrations (i.e.,
to minimize polychromatic blur spot diameters). The MTF curves in Figure 5-3(b)
and (d) show the performance after additional optimizations were made with an error
[Figure 5-3(a): diffraction MTF after transverse ray aberration optimization, field angles 0° to ±15°]
[Figure 5-3(b): diffraction MTF after MTF-based optimization, field angles 0° to ±15°]
Figure 5-3 Example of MTF curves generated during the lens optimization
process. The curves on the left (a and c) show the performance after optimizing the
system to minimize transverse ray aberrations. The curves on the right (b and d)
show the performance after additional optimizations were made to improve the
MTF (the object distance is infinity). (Figure continued on next page.)
[Figure 5-3(c): diffraction MTF after transverse ray aberration optimization, ±20° field]
[Figure 5-3(d): diffraction MTF after MTF-based optimization, ±20° field]
Figure 5-3, Continued.
function designed to specifically target an improvement in the MTF (using the
results from the minimization of transverse ray aberrations as a starting point). In
particular, these additional optimizations served to reduce the astigmatism across the
field and to raise the modulation for the radial rays at ±20°. For clarity, the curves
for the ±20° field case are graphed separately in the bottom two plots, where it can
be seen that the radial ray curve was raised after the MTF optimization, at the cost of
some performance for the tangential rays.
During the optimization process, the diameter of the aperture stop was fixed
at 2 mm to force a near unity f-number. The optical system length was constrained
to ≤ 3.5 mm, and the thickness of the fused silica window was fixed at 0.250 mm.
The spacing between the window and the lens, the spacing between the lens and the
image plane, and the lens thickness were allowed to vary during optimization. The
aspherical curvatures of the front and rear lens surfaces were allowed to vary
gradually, during successive steps in the optimization process. For example, the first
optimization might be performed with a purely spherical lens to obtain a general
solution that has nearly the right focal and optical system length, but is highly
aberrated. The conic constants of the two surfaces are next allowed to vary to yield a
solution with better imaging over a wider field of view. From there, higher-order
aspherical coefficients for the lens curvatures are included in the optimization. The
specific ordering of the steps and the choice of initial lens curvatures can make a difference in the end result, and both experience and trial-and-error play key roles in
developing an optimal lens design. For this IOC design, several different starting
points and successive optimization strategies were attempted until it was clear that a
general limit of imaging performance had been reached. This is the point at which
the design could be fine-tuned to alter the balance of image quality across the field,
but with no overall improvement in imaging performance (i.e., as one characteristic
is improved, another declines).
5.2.2 Geometrical ray aberrations
Up to this point, we have used spot diagrams and MTF plots to evaluate the
imaging performance of different IOC lens designs, and for good reason, as these
tools provide a direct indication of the net performance of the system in terms that
are immediately understandable, such as blur spot diameter and image resolution.
The specific geometrical ray aberrations that underlie the level of image quality
achieved by the IOC can be further evaluated with a set of field curves and ray
aberration plots (or “ray intercept curves”). These diagnostic tools indicate the
specific types of aberrations that are present in the optical system, and how they vary
with field angle and wavelength. Before showing the aberration details of the IOC
lens, a description of the main classes of geometrical ray aberrations in optical
imaging systems is presented.
The errors, or optical path differences, between the optical wavefront located
at the exit pupil of an optical system and a best-fit spherical wavefront converging
toward the image point can be expressed mathematically in the form of a polynomial
(for rotationally symmetric systems). This wavefront aberration polynomial, W, may
be expressed as a Taylor series expansion in field angle and pupil coordinates:
$$
\begin{aligned}
W ={} & W_{020}\,r^{2} && \text{Defocus} \\
{}+{} & W_{111}\,h'\,r\cos\phi && \text{Change in scale} \\
{}+{} & W_{040}\,r^{4} && \text{Spherical aberration} \\
{}+{} & W_{131}\,h'\,r^{3}\cos\phi && \text{Coma} \\
{}+{} & W_{222}\,h'^{2}\,r^{2}\cos^{2}\phi && \text{Astigmatism} \\
{}+{} & W_{220}\,h'^{2}\,r^{2} && \text{Field curvature} \\
{}+{} & W_{311}\,h'^{3}\,r\cos\phi && \text{Distortion} \\
{}+{} & \cdots && \text{Higher-order aberrations}
\end{aligned}
\qquad (5\text{-}2)
$$
in which h′ is the paraxial image height, r and φ are the pupil coordinates, and W is
the optical path difference (OPD) measured in waves [4-6]. The subscripts of the W
coefficients indicate the term dependence on the powers of h′, r, and cosφ,
respectively [7]. The relevant coordinate system is illustrated in Figure 5-4.
[Figure 5-4 diagram: object plane (object height h), lens system, exit pupil with pupil coordinates r and φ, and image plane showing the chief-ray height h′ and the transverse ray errors Δx and Δy of a general (skew) ray]
Figure 5-4 Coordinate system showing a chief ray and a general (skew) ray from a
common object point at height h intersecting the image plane at different points. The
chief ray intersects the image plane at height h′.
The first two terms in Equation (5-2) represent axial and lateral variations in
focus, respectively. Strictly, these are not aberrations since they can be corrected by
refocusing the system. However, if these two terms vary with wavelength, then they
represent the two forms of first-order chromatic aberration, termed axial and lateral
color. The next five terms represent the five third-order monochromatic aberrations,
also called “Seidel” aberrations. While these five wavefront aberration terms each
depend to fourth order on the sum of the powers of h′ and r, the transverse ray
aberrations in the image plane, Δx and Δy, are proportional to the partial derivatives
of the wave aberration polynomial with respect to the x and y coordinates (for which
x² + y² = r²) [5, 8]. Thus, the transverse ray aberrations depend to the third order on
the sum of the powers of h′ and r. The third-order aberrations are also called
“primary” aberrations, as they tend to dominate in systems with modest apertures
and fields of view. The remaining terms in the Taylor expansion represent higher-
order aberrations.
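The proportionality just described can be verified numerically for the spherical-aberration term: differentiating W₀₄₀r⁴ = W₀₄₀(x² + y²)² with respect to y gives a transverse aberration that is third order in the aperture. The coefficient value below is an arbitrary illustrative choice, not a value from the IOC design:

```python
W040 = 0.25  # arbitrary illustrative wavefront coefficient

def W(x, y):
    """Spherical-aberration wavefront term W040 * r^4 with r^2 = x^2 + y^2."""
    return W040 * (x**2 + y**2)**2

def dW_dy(x, y, eps=1e-6):
    """Central-difference estimate of the partial derivative dW/dy."""
    return (W(x, y + eps) - W(x, y - eps)) / (2 * eps)

# Analytically, dW/dy = 4 * W040 * (x^2 + y^2) * y: third order in the aperture,
# so doubling the aperture coordinates multiplies the transverse aberration by 8.
x, y = 0.3, 0.5
analytic = 4 * W040 * (x**2 + y**2) * y
print(abs(dW_dy(x, y) - analytic) < 1e-6)  # True
print(dW_dy(2 * x, 2 * y) / dW_dy(x, y))   # ~8.0
```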
Those aberrations that vary with object or image height (h, h′ ≠ 0) are
suitably termed off-axis aberrations and include all but spherical aberration, which is
only defined for axial rays (from an on-axis point at either a finite or infinite object
distance). All five aberrations have a dependence on aperture, or pupil height, r
(note that “aperture height” and “aperture radius” are often used interchangeably).
Annular rings at different aperture heights are often called “zones.” It is common for
optical designers to discuss how aberrations vary by “zone,” or, for example, to refer
to “zone rays” that lie in the tangential or sagittal ray fans at the same pupil height.
The three aberrations that vary with cosφ, namely coma, astigmatism, and distortion,
are nonsymmetrical about the optical axis. The seven basic classes of aberrations
corresponding to the terms in Equation (5-2), and common means to control them,
are described as follows [6-9]:
1. Axial Color – Variation in the axial focus with wavelength (also called
longitudinal chromatic aberration, or simply, chromatic aberration). Axial
color results from the natural chromatic dispersion of the lens materials used
in the system (variations in the refractive index with wavelength). This
aberration is typically corrected by combining positive and negative lenses in
such a way that the positive chromatic dispersion generated by the positive
lens element(s) (in which blue light rays are bent toward the optical axis at
larger angles than red light rays) is compensated by the negative dispersion in
the negative lens element(s) while maintaining an overall positive focal
length (for an imaging system).
2. Lateral Color – Variation in the image height with wavelength (also called
transverse chromatic aberration, or chromatic difference in magnification).
This aberration causes color fringing in the image that is worse near the field
edges and disappears at the image center. This aberration increases linearly
with field angle, or object height, and is difficult to correct in wide-angle
systems without the use of anomalous dispersion materials or diffractive
elements (both of which bend long wavelength rays at larger angles than
short wavelength rays, which is physically opposite to how light refracts
through conventional optical glasses and polymers). Lateral color is worse
for lenses that are far from the aperture stop because the chief rays from off-
axis ray bundles will intersect these lenses closer to the edge, usually at high
angles of incidence. The short wavelength rays will then refract significantly
more than the long wavelength rays, resulting in different image heights for
each wavelength at the focal plane.
3. Spherical Aberration – Variation in the axial focus with aperture height.
This aberration derives from a cubic dependence on aperture height (r³), and
results in marginal rays coming to a focus either in front of or behind the
paraxial image point. If the difference in focus is measured along the optical
axis, it is called longitudinal spherical aberration. If the difference in ray
intercepts is measured in the lateral or transverse direction in the image plane,
it is referred to as transverse spherical aberration. Spherical aberration can be
controlled by bending the lens into a shape that minimizes the angles of
incidence and refraction for the rays at each lens surface. For example, it can
be shown that a convex-plano lens will exhibit significantly less spherical
aberration than a plano-convex lens with the same aperture and focal length.
4. Coma – Variation in the image height from zone to zone of the aperture. The
wavefront distortion due to coma is linearly proportional to the field of view
(object or image height) and varies quadratically with aperture height. This
aberration is characterized by a wavefront distortion that varies with cosφ
such that rays crossing through the aperture at an angle φ cross at an angle 2φ
at the image plane. Each circular zone of rays in the aperture forms an image
of a “comatic circle” at the image surface. The 2φ dependence implies that
the tangential rays from an annular zone (±y, φ = 0°) will intersect at the top
of the circle, and the sagittal zone rays (±x, φ = 90°) will form the point at the
bottom of the circle, while the rest of the circle is formed by the remaining
skew rays at other crossing angles. This is an unusual effect since the sagittal
rays end up in the tangential plane at the image surface. The final image is
the combination of all such comatic circles, which increase in diameter with
the zone radius, resulting in a comet-like image that gives the aberration its
name. Like spherical aberration, coma varies with lens shape and is reduced
in cases with smaller angles of incidence, meaning that for a single lens of a
given refractive power, the shape that minimizes spherical aberration is
nearly the same as the shape that minimizes coma. Coma may also be
controlled by shifting the aperture stop away from the lens to force the bundle
of rays from each field angle to intersect the lens above or below the vertex,
such that the angles of incidence and refraction for each ray bundle are
smaller. A lens system that is corrected for both spherical aberration and
coma is termed aplanatic.
5. Astigmatism – This aberration can be combined with field curvature
(described below) to generate the term (W₂₂₂cos²φ + W₂₂₀)h′²r², since both
of these wavefront aberrations increase quadratically with image height and
aperture radius and result in a curved image plane (after differentiation, the
transverse aberrations increase linearly with aperture). Unlike field
curvature, astigmatism is not symmetric about the optical axis. When
astigmatism is present in an optical system, the shape of the curved image
plane is different for tangential and sagittal ray fans. This effect was
discussed in Chapter 4 when introducing the concept of tangential and
sagittal rays. In essence, a cone of rays from an off-axis object point will
intersect a lens obliquely, and the extreme tangential rays will see a different
curvature than the extreme sagittal rays, causing the two orthogonal ray fans
to come to a focus at different distances behind the lens. As this aberration
varies quadratically with image height, the curved image planes will take on
paraboloidal shapes (in the absence of higher order aberrations), and the best
image plane will be a compromise between the two. Astigmatism is
eliminated when the tangential and sagittal image surfaces are made to
coincide.
6. Field Curvature – In the absence of astigmatism, the focal plane of a positive
lens will bend inwards in the form of a parabola (and will bend outwards for
a negative lens). This curved focal plane is called the Petzval surface and is
dependent only on the focal power and indices of refraction of the lens
elements [6-9]. For a single element lens, the field curvature is inversely
proportional to the product of its refractive index and focal length. For a lens
with multiple elements, the field curvature is the sum of the Petzval surfaces
of each component,
11
,
ii P
nf R
=
∑
in which R
P
is the radius of curvature (at
the vertex) of the cumulative Petzval surface. A flat image plane may
therefore be obtained by adding one or more negatively powered elements to
the system to force the Petzval sum to zero. These negative elements are
typically placed at locations where the marginal rays from all field angles
will intersect the surfaces at relatively low heights (such as near an aperture
stop or close to the image plane). This way, the negative powers of the
elements contribute strongly to the Petzval sum, but very little to diverging
the rays away from their focal points. Clearly, the Petzval sum cannot be
zero for a single lens. However, the field may be artificially flattened in a
single lens system by shifting the lens away from the aperture stop so that the
chief rays for off-axis field angles no longer intersect the center of the lens.
For a given lens, there will exist a stop position for which the sagittal and
tangential astigmatic image surfaces curve in opposite directions so that the
surface of best focus lies on a flat plane between them [6-9].
7. Distortion – Variation in lateral magnification for object points at increasing
distances from the optical axis. In the absence of all preceding aberrations, a
lens with distortion will produce an image that is clearly in focus, but appears
either nonuniformly stretched or nonuniformly compressed relative to the
object scene. If the magnification increases with increasing distance from the
optical axis, then positive or pincushion distortion results. If instead the
magnification decreases with increasing distance from the optical axis, the
result is negative or barrel distortion. Since the wavefront distortion
contributing to this aberration increases with the cube of the field of view, the
corners of a rectilinear image suffer the largest amount of distortion as they
are farthest from the optical axis. The distortion for a given field height is
calculated as the lateral displacement of the chief ray from the corresponding
paraxial ray of the orthoscopic case in the image plane as a percentage of the
orthoscopic paraxial ray height:
$$
\mathrm{Distortion}\;(\%) = \frac{y_{c} - y_{p}}{y_{p}} \times 100, \qquad (5\text{-}3)
$$
in which y_c and y_p are the image heights of the chief and orthoscopic paraxial
rays, respectively. Variations in lens thickness and aperture position affect
the amount of distortion in a system. For a thin lens with the aperture stop at
the lens, the distortion is zero. If the aperture is placed at some distance in
front of the lens, then the bundle of rays from an off-axis object point will
intersect the lens off center at a high enough angle of incidence that the actual
rays will bend more than a paraxial ray would in the orthoscopic case, and
will intersect the image plane below the paraxial focus, resulting in barrel
distortion. Another way to think of this is that, as the aperture is shifted
farther away from the lens, the off-axis ray bundle effectively travels a
greater distance from the object to the image (due to the increasing obliquity
of the rays going from the stop to the lens), causing the lateral magnification
to decrease. Alternatively, if the aperture is located some distance behind the
lens, the image will show pincushion distortion.
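The two quantities discussed in items 6 and 7 can be sketched numerically. The following Python fragment is illustrative only; the element indices, focal lengths, and ray heights are hypothetical and are not taken from the IOC design. It evaluates the thin-lens Petzval sum, P = Σ 1/(n_i f_i), and the distortion percentage of Equation (5-3):

```python
def petzval_sum(elements):
    """Thin-lens Petzval sum P = sum(1/(n_i * f_i)); the Petzval surface
    has radius of curvature -1/P, so a smaller P means a flatter field."""
    return sum(1.0 / (n * f) for n, f in elements)

def distortion_percent(y_chief, y_paraxial):
    """Equation (5-3): lateral distortion as a percentage of the orthoscopic
    paraxial image height (positive: pincushion; negative: barrel)."""
    return 100.0 * (y_chief - y_paraxial) / y_paraxial

# A single positive lens has a nonzero Petzval sum; adding a hypothetical
# negative element (n = 1.60, f = -8 mm) pulls the sum toward zero:
print(petzval_sum([(1.533, 2.1)]))                 # ~0.31 mm^-1
print(petzval_sum([(1.533, 2.1), (1.60, -8.0)]))   # ~0.23 mm^-1

# Hypothetical chief-ray height 3% beyond the paraxial height:
print(distortion_percent(1.03, 1.00))              # ~3.0 (pincushion)
```

Note that the negative element reduces the Petzval sum while removing comparatively little focusing power, which is why such elements are effective field flatteners when placed where ray heights are low.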
In general, one way to construct a well corrected optical system is to ensure
that the optical rays intersect each surface at small angles of incidence. This
guarantees that the paraxial, or small angle, condition is maintained throughout the
system, even for rays that intersect surfaces far from the optical axis
(sin θ ≈ tan θ ≈ θ). If a low f-number is required, then this strategy will inevitably
require many optical surfaces to gradually bend the rays at small angles at each
successive surface to ultimately achieve the overall sharp angles, and short back
focal length, that are needed. Alternatively, a well corrected lens system may be
constructed in which the aberrations are significant at individual surfaces within the
system, but the curvatures and spacings are chosen in order to balance the positive
aberrations of some surfaces against the negative aberrations of others across the
field of view and wavelength spectrum [5]. This too requires many surfaces if the
seven basic classes of aberrations are to be corrected over a finite field of view. In
addition to using multiple lenses, varying the placement of the aperture stop is
another important technique for controlling off-axis aberrations such as coma,
astigmatism, and distortion. In this case, the lens elements are often arranged to
achieve symmetry about the aperture stop, which tends to reduce angles of incidence
at the lens surfaces and cancel off-axis aberrations. These common approaches to
aberration correction can be readily seen by examining several classic, well-
corrected lens designs such as the Cooke triplet, Petzval lens, and Double-Gauss
Lens, which have three, four, and six lens elements, respectively, as shown in several
texts on optical system design [6, 8, 10].
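The breakdown of the paraxial condition can be quantified directly. This short Python check (illustrative only, not part of the design process described here) shows how quickly sin θ and tan θ depart from θ as the angle of incidence grows:

```python
import math

# Percentage error of the small-angle approximations sin(t) ~ t and tan(t) ~ t:
for deg in (1, 5, 10, 20, 30):
    t = math.radians(deg)
    sin_err = 100 * abs(math.sin(t) - t) / t
    tan_err = 100 * abs(math.tan(t) - t) / t
    print(f"{deg:2d} deg: sin {sin_err:.3f}%, tan {tan_err:.3f}%")
```

At 1° both approximations hold to about 0.01%, while at 20° the tangent already deviates by roughly 4%, which is why a fast lens that bends rays steeply at a single surface departs so strongly from paraxial behavior.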
For the refractive IOC design presented here, a single refractive lens with a
low f-number and ultrashort optical system length is required due to space
constraints. Little can be done beyond varying the two aspherical surface curvatures
and the lens thickness to correct individual aberrations. The first aspherical surface
is, by necessity, close to the aperture stop, meaning that the chief ray for each field
angle nearly intersects the center, or vertex, of the lens surface. This means that
changes in the shape of this surface will not significantly affect the bending of the
chief rays, making it difficult to control off-axis aberrations at this surface.
However, varying the aspherical shape of the front surface can largely affect the
spherical aberration. Since the lens is thick, the rear surface is significantly shifted
away from the stop and the chief ray at different field angles will intersect the rear
surface at very different heights. This means that variations in the aspherical shape
of the rear surface will significantly affect the bending of the bundle of rays from
each field angle in different ways, allowing for some correction of both on- and
off-axis aberrations. Together, both surfaces act to balance the ray aberrations in the
system while simultaneously bending the light from each object point toward its
conjugate point in the image plane, less than a millimeter behind the rear surface of
the lens. Without additional system length and degrees of freedom, this single
refractive aspherical lens system is far from aberration free, but, perhaps
surprisingly, provides an adequate level of imaging for the retinal prosthesis
application.
5.2.3 Ray aberration plots and field curves
Insight into the details of optical ray aberrations can be gleaned from a set of
ray intercept curves, also called ray aberration plots. This concept is illustrated in
Figure 5-5. A ray aberration curve is a plot of the ray intercept height at the image
plane, relative to the chief ray, as a function of ray height at the pupil (or aperture
stop). In other words, the horizontal axis represents the position of the ray in the
entrance pupil (with the chief ray at the origin), and the vertical axis represents the
displacement at the image plane relative to the position of the chief ray. An ideal
Figure 5-5 Concept of a ray intercept curve (showing third-order spherical
aberration for a lens focused at the paraxial image plane).
image point would therefore be represented by a flat, horizontal line. The illustration
in Figure 5-5 shows a ray intercept curve that indicates the presence of third-order
spherical aberration in a spherical lens. The central region of the ray intercept curve
is flat, showing that the paraxial rays (the set of rays that are close to the optical axis)
come to a common focus, while the marginal rays at the upper and lower edges of
the pupil intersect the image plane below and above the chief ray, respectively.
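The characteristic shapes of these curves follow from the polynomial form of the transverse ray aberration. As a sketch (with arbitrary, hypothetical coefficients rather than values fitted to the IOC lens), the dominant terms can be tabulated against the normalized pupil height ρ:

```python
# Transverse ray-aberration terms versus normalized pupil height rho
# (rho = -1 at the lower pupil rim, 0 at the chief ray, +1 at the upper rim):
def defocus(rho, a=1.0):        # straight line through the origin
    return a * rho

def coma_3rd(rho, a=1.0):       # parabola, symmetric about the chief ray
    return a * rho**2

def spherical_3rd(rho, a=1.0):  # cubic: flat near the axis, steep at the rims
    return a * rho**3

pupil = [i / 10 for i in range(-10, 11)]
curve = [spherical_3rd(r) for r in pupil]
# Paraxial rays share a common focus (curve ~ 0 near rho = 0), while the
# marginal rays miss the chief-ray intercept on opposite sides:
print(curve[10], curve[0], curve[20])   # 0.0 -1.0 1.0
```

Plotting `curve` against `pupil` reproduces the S-shaped curve of Figure 5-5; summing terms with different coefficients reproduces the combined shapes of Figure 5-6.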
Typical shapes of ray intercept curves that result from the presence of
defocus, coma, astigmatism, and third-order spherical aberration are shown in Figure
5-6. The curves for both tangential (Δy) and sagittal (Δx) ray fans are shown so that
[Figure panels: Defocus; 3rd-Order Spherical Aberration; 3rd-Order Spherical Aberration at Best Focus; Astigmatism; 3rd-Order Coma; Coma Plus Astigmatism, each showing tangential (Δy) and sagittal (Δx) ray fans.]
Figure 5-6 Transverse ray intercept curves showing the shapes associated with
certain individual and combined aberrations (After [11]).
the effects of astigmatism can be seen. For an axially symmetric lens system, only
the upper half of the pupil is drawn in the sagittal plots, as the lower half will be
identical in magnitude. Any real optical system will possess a combination of these
aberrations that varies with wavelength and field angle. By viewing the ray intercept
curves at each field angle, it is often possible to see which types of aberrations are
present and dominant in a system by simply observing the shapes of these curves.
The ray intercept curves for the IOC optical system of Figure 5-1 are shown
in Figure 5-7 at field angles of 0°, ±5°, ±10°, ±15°, and ±20°. Each plot
shows the aberration curve for one of three wavelengths within the visible spectrum
(514 nm, 559 nm, and 609 nm). The three-wavelength spectrum used during the
optimization of this lens design approximates the photopic response curve of the eye,
which peaks at 555 nm, and is about twice as sensitive to green light as it is to red
and blue light [12, 13]. The vertical axes in the ray intercept curves in Figure 5-7
range from -0.025 to +0.025 mm. Differences in the curves at each wavelength
represent the amount of lateral chromatic aberration across this spectrum. The
chromatic aberration is actually quite small due to the relatively large Abbe value of
the Zeonex lens material (V_d = 55.8). An analysis of the performance of this lens
over a wider spectrum, from 400 to 700 nm, that has been appropriately weighted to
match the combined transmittance of the cornea and the spectral response of typical
RGB and monochrome CMOS sensors is presented in Section 5.8, along with a lens
design that has been optimized over this extended spectrum.
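The role of the Abbe value can be made concrete. The Abbe number is defined as V_d = (n_d − 1)/(n_F − n_C), where n_d, n_F, and n_C are the refractive indices at the helium d line (587.6 nm) and the hydrogen F (486.1 nm) and C (656.3 nm) lines; a larger V_d means lower dispersion and hence less chromatic aberration. The index values in the sketch below are hypothetical, chosen only so that the result lands near the quoted Zeonex value; the manufacturer's data sheet should be consulted for actual numbers.

```python
def abbe_number(n_d, n_F, n_C):
    """V_d = (n_d - 1) / (n_F - n_C); larger values mean lower dispersion."""
    return (n_d - 1.0) / (n_F - n_C)

# Hypothetical indices in the neighborhood of Zeonex E48R (quoted V_d = 55.8):
v = abbe_number(n_d=1.531, n_F=1.53625, n_C=1.52673)
print(round(v, 1))
```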
Figure 5-7 Ray intercept curves for the IOC lens design shown in Figure 5-1. The
horizontal axis is the ray height at the entrance pupil, and the vertical axis is the ray
intersection at the image plane.
The on-axis plot in Figure 5-7 (0.00 Relative Field Height, 0.000°) shows a
small amount of spherical aberration (SA) at best focus. The Code V® optical design
software used to produce these plots also provides metrics of the amount of higher-
order aberrations present in a lens system. These metrics indicated that the spherical
aberration shown in Figure 5-7 is largely due to residual 5th-order spherical
aberration. The 3rd-order SA is essentially cancelled by nearly equal but opposite-
sign contributions from the front and rear surfaces of the aspherical lens, whereas
both surfaces have a small amount of negative 5th-order SA. At all off-axis field
angles, residual coma and astigmatism are present, though the astigmatism is largely
corrected at the ±20° field edge (both the tangential and sagittal curves slope
downward and to the right before coma causes the marginal rays to rapidly diverge at
the plot edges). The spherical-aberration-like undulation that begins to appear at
±15°, and becomes significantly worse at ±20°, is indicative of fifth-order coma,
which is also called oblique spherical aberration.
The two plots in Figure 5-8 provide additional information about the system
aberrations that is difficult to ascertain from the ray intercept plots shown above. On
the left is a set of astigmatic field curves at the center wavelength (559 nm), which
shows the specific curvatures of the sagittal and tangential image planes. The origin
in both of these plots represents the location of the image plane, which is set at the
best-compromise focus over the field of view and wavelength spectrum. The vertical
axis in both plots indicates the semi-field angle from 0° to 20°. The horizontal axis
in the field curves plot indicates longitudinal distance (focus distance) of the actual
focal plane at each field angle relative to the image plane. The rightmost figure is a
plot of the distortion in the image over the field of view. The distortion at ±20° is
3.1%, indicating that there is a small amount of pincushion distortion at the edges of
the image that drops to less than 1.3% over the central field of view, ±10°.
(a) (b)
Figure 5-8 (a) Astigmatic field curves (“S” is for sagittal and “T” is for tangential)
and (b) distortion plot as a function of semi-field angle from 0° to 20° at 559 nm for
the lens design shown in Figure 5-1. The horizontal axis range for the astigmatic
field plot is ±0.08 mm, and is ±5% for the distortion plot.
The field curves can be better understood by referring to Figure 5-9, which
shows a zoomed-in image of the tangential ray fans at the image plane of the IOC.
In the diagram, the rays have been extended slightly beyond the image plane so that
it is easier to see how and where the rays at each field angle come to a focus. It is
not obvious from the ray diagram alone, but the best focus position for each field
angle corresponds to the image distance at which the RMS wavefront error is
minimized (or, likewise, where the polychromatic MTF is maximized), which does
not necessarily correspond to the minimum beam diameter for each fan of rays. This
is because the ray diagram does not accurately indicate the energy distribution in the
focal plane. For instance at a given image point, the marginal rays may form a large
spot diameter, but nonetheless account for a very small amount of the total energy in
the spot. However, if enough rays are traced, then the cross-sectional density of the
rays can provide a meaningful estimate of the energy distribution. In such a case, it
is possible to see how the tangential field curve shown in Figure 5-8 above follows
the location of the best focus spot for each field angle, as shown with a dotted line in
Figure 5-9. The location of the flat image plane is a calculated compromise among
the best focal spots for the tangential and sagittal ray fans across the field of view
and wavelength spectrum. This typically results in the image plane being placed at
the best focus for a relative field height of 0.7. This can be seen in the astigmatic
field curves in Figure 5-8, in which the sagittal and tangential curves are about
Figure 5-9 Field curvature concept. The dotted line at the location of the image
plane is drawn to roughly indicate the tangential astigmatic field curve shown in
Figure 5-8(a). The location of the image plane is a compromise among the best focal
spots (at which the RMS wavefront errors are a minimum) for the tangential and
sagittal ray fans across the field of view and wavelength spectrum.
equidistant from, and on opposite sides of, the image plane at a relative semi-field
angle of 0.7 (~14°).
Altogether, the previous three plots indicate that field curvature due to the
combined effects of astigmatism and the Petzval sum contribute significantly to
increasing spot diameters as a function of field angle in this IOC lens system.
Attempts to incorporate a manufacturable negative lens element to reduce the Petzval
sum were unsuccessful because of the limited space within the package. The
possibility of using two positive lenses with an appropriately positioned aperture stop
to artificially flatten the field by obtaining sagittal and tangential image planes that
curve in opposite directions is considered in Section 5.7.
In summary, while the field curvature is difficult to compensate for
completely in this design, the distortion and spherical aberration are low. The
aberrations are surprisingly acceptable over the desired ±20° field of view for the
prosthetic vision application considered herein. Overall, the key IOC design goals
have been met successfully with the lens design shown in Figure 5-1.
5.2.4 Fabrication and test of polymer aspherical lens
Based on the above analyses, the aspherical polymer lens shown in Figure
5-1 was selected for fabrication. Prototype quantities of the lens were fabricated by
G-S Plastic Optics, Rochester, NY, using single-point diamond turning. Both sides
of each lens were antireflection coated to a specification of < 1% reflectivity at
normal incidence across the visible spectrum. Test results on the fabricated lenses
showed that the reflectivity is below 0.5% at normal incidence from 425 to 700 nm.
A CAD drawing of the lens is shown in Figure 5-10(a), and a photograph of the
profile of one of the fabricated lenses is shown in Figure 5-10(b). The prototype lens
includes a 5-mm diameter flange for mounting on an optical bench for lens testing.
The measured mass of this lens, with the 5-mm flange, is 27.7 mg, in close
agreement with an analytically calculated value of 30 mg. Final versions of the lens
for packaging within the IOC will not include the flange, will be < 2.8 mm in
diameter, and will have a mass closer to 11 mg, with the actual size and mass
determined by which packaging concept is finally selected.
(a) (b)
Figure 5-10 (a) CAD drawing used for fabrication of three custom polymer IOC
lenses for use in the next generation IOC prototype (from G-S Plastic Optics,
Rochester, NY). (b) Photograph of one of the fabricated lenses, showing the residual
5-mm diameter flange used in fabrication and maintained for lens testing purposes.
The fabricated lenses meet all of their required specifications. The lens
surfaces have substantially less than 1 wave of surface figure error (deviation from
the specified aspherical curvature) and 0.5 waves of surface irregularity, as shown in
the profilometer plots for the front and rear surfaces of one of the fabricated lenses in
Figure 5-11. The surface roughness is ≤ 70 angstroms, the wedge is < 0.010 mm,
and the scratch/dig values are less than 40/10. These parameters and typical
Figure 5-11 Profilometer test results for the front and rear aspherical surfaces of one
of three fabricated polymer lenses, showing that the surface figure (power) error and
surface irregularity are much less than the specified 1λ and 0.5λ, respectively (at
633 nm). These data curves are a measure of the deviation from the specified
aspherical profiles of the two surfaces. The vertical axis is scaled in units of waves
(λ) at 633 nm, such that 1.00λ = 0.633 μm. The horizontal axis is scaled in units of
mm, covering the full 2.5 mm clear aperture of the lens.
tolerances for polymer molded and diamond-turned lenses are described in detail in
Section 5.6, in which the associated sensitivity and tolerancing analyses are carried
out for this lens system.
Two experimental setups were constructed to characterize the lens. In the
first setup, the lens is tested alone in air, using a 10×, 0.25 NA microscope objective
to relay the image plane onto a commercial sensor that is connected to a video frame
grabber card in a computer. A custom microlens holder with a 2-mm aperture stop
was machined to mount the lens on a computer controlled positioning stage. The
fused silica window in the IOC design shown in Figure 5-1 was not included in this
test apparatus. In this setup, the field of view and the numerical aperture of the
microscope objective limit the extent to which the full set of rays exiting the lens can
be captured. The images shown in Figure 5-12 were captured using this test
configuration, which covers a full field of view of ~12°. Another test configuration
is being developed to characterize this lens over ±20° when placed behind a fused
silica optical window and within a physical eye model composed of lenses that
combine to mimic the refraction of the biological cornea and aqueous humor anterior
to the IOC.
In the second experimental setup, the aim was to integrate the lens with a
chip-scale-packaged image sensor array for the first time. Although a customized
wide dynamic range sensor may be required for the IOC, there has been a shift in the
commercial market during the past year toward providing very small, low power,
Figure 5-12 Grayscale and color images captured with the prototype lens shown in
Figure 5-10, when placed in air behind a 2-mm aperture stop. The images formed by
the lens were relayed by a 10×, 0.25 NA microscope objective to a commercial
sensor that was connected to a frame grabber card in a computer.
low resolution camera chips that come close to meeting the requirements of the IOC.
OmniVision Technologies, Santa Clara, CA, released the first of a new class of
CMOS image sensors designed for extremely constrained space and electrical power
requirements. Designated the OV6920, this analog NTSC camera chip measures
2.1 mm by 2.3 mm when packaged, has a 1/18” format sensor area, a mass of
7.8 mg, and only requires a 9-pin I/O [14]. Of the initial 1,025 worldwide pre-
production fabrication run, our team secured 12 units (#1013-1025).
After designing the support electronics (two capacitors and one crystal
oscillator), an optical setup was constructed to characterize the image sensor’s
performance when used in conjunction with the custom designed polymer lens. In
this setup, the lens was placed alone in air, behind a 2-mm aperture stop, and imaged
through a coverglass onto the sensor array. One of the first images obtained with
this test configuration is shown in Figure 5-13.
Figure 5-13 Grayscale video image frame of text on a book cover captured by the
lens-sensor combination. The sensor exhibited lower than specified contrast.
The sensor dissipated 37 mW with normal room illumination (400 lux),
mainly due to the analog output buffering on the sensor. Normal sensor operations
were observed after lowering the input voltage slightly, reducing the power
dissipation to below 30 mW. However, in all of the images obtained using this
sensor, the contrast ratio was lower than specified for each sensor operating
condition. This contrast reduction was isolated to the sensor because it was also
observed when coupled with a high-performance 35-mm camera lens (rather than the
prototype IOC lens). Whether this is due to an issue with the early pre-production
fabrication run of the sensor or the supporting electronic circuit is under
investigation.
The custom aspherical lens is designed for optimal imaging when placed
behind a fused silica optical window within the aqueous humor, and at the proper
distance from the biological cornea. The experimental configuration described
above, in which the lens is placed alone in air in front of the OmniVision
Technologies OV6920 image sensor, is significantly different from the as-designed
optical system. Nonetheless, these images may be compared with predicted results
using the Monte-Carlo ray tracing function in Code V®, as described earlier in
Chapters 2 and 4. A ray diagram of the experimental lens configuration used for this
purpose is shown in Figure 5-14, in which the custom aspherical lens is located in
air, behind a 2-mm aperture stop, and images through a coverglass on the sensor.
Figure 5-14 Ray diagram of the custom aspherical lens in air, imaging through the
coverglass on the OmniVision Technologies OV6920 CMOS sensor array. Note that
this is different from the as-designed lens configuration, in which the lens is located
behind the biological cornea, aqueous humor, and a fused silica window, and without
a coverglass on the image sensor array.
Using the lens model shown in Figure 5-14, Monte-Carlo ray tracing methods
were used to simulate the imaging of a portion of a USAF resolution chart that was
also captured experimentally (the source image of the resolution chart was provided
by Optical Research Associates with the Code V® optical design software). The
simulated and experimentally captured images are shown in Figure 5-15. The low
contrast exhibited by the OV6920 limited the quality of the experimental image.
There was also more distortion in the experimental image, which was isolated to
alignment issues in the optical setup. Nonetheless, the resolution of the simulated
and experimental images is remarkably similar, indicating that the lens is
performing as designed with respect to resolution. The experimentally captured
images are expected to more closely match predicted (simulated) images after the
sensor’s contrast is raised to its specified levels.
Experiment Simulation
Figure 5-15 Comparison of an experimentally obtained image of a USAF resolution
chart using the custom IOC lens and OV6920 sensor (left), with the image predicted
by a Monte-Carlo ray-traced simulation (right) using the in-air lens model shown in
Figure 5-14.
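The Monte-Carlo image simulation idea can be illustrated with a toy model. The sketch below is not the Code V® algorithm, which traces real rays through the actual lens prescription; instead, each source pixel simply launches a number of random "rays" whose landing points are scattered by a fixed Gaussian blur, reproducing the sampling noise that comes from tracing a finite number of rays.

```python
import random

def simulate(source, rays_per_pixel=200, blur=0.7):
    """Toy Monte-Carlo imaging: scatter rays_per_pixel rays from each source
    pixel about its paraxial conjugate with a Gaussian blur (in pixels)."""
    h, w = len(source), len(source[0])
    image = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for _ in range(rays_per_pixel):
                xi = round(x + random.gauss(0.0, blur))
                yi = round(y + random.gauss(0.0, blur))
                if 0 <= xi < w and 0 <= yi < h:
                    image[yi][xi] += source[y][x] / rays_per_pixel
    return image

random.seed(0)                      # deterministic for this demonstration
src = [[0.0] * 9 for _ in range(9)]
src[4][4] = 1.0                     # a single point source
img = simulate(src)
# The point spreads into a blur spot; energy is conserved up to rays that
# fall off the array edges:
print(round(sum(map(sum, img)), 2))
```

In the real simulation, the blur applied to each ray comes from the traced aberrations of the lens model of Figure 5-14 and varies with field angle and wavelength, but the finite-ray sampling noise visible in the simulated images arises exactly as in this sketch.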
Although the OmniVision OV6920 image sensor array is ideal in terms of its
size for an eye implantable device, the power dissipation is higher than expected due
to the analog sensor output. Also, the requirement for an off-chip crystal oscillator
precludes its incorporation into an implantable device. As such, additional sensors,
both commercially available and custom-designed, are still being considered for the
IOC.
5.3 Comparison of f/1 systems optimized over 20° and 40° fields of view
The IOC lens system detailed above was optimized over a 40° (±20°) field of
view, despite the fact that current retinal microstimulator arrays are designed to only
cover a 20° (±10°) visual field. As mentioned previously, the desire to provide an
extended field of view in the IOC is to facilitate retinal microstimulator arrays with
even wider fields of view, as well as the potential to incorporate image processing
techniques in the visual processing unit (VPU) that make use of information in the
periphery. An extended field of view would also allow for the incorporation of
enlarged peripheral electrodes in the microstimulator array. For these reasons, the
40° FOV design shown above was chosen for use in the next-generation IOC
prototype. However, it is still of interest to determine how the imaging performance
might be improved if the lens were instead optimized to cover only the central 20°
field of view corresponding to current retinal microstimulator arrays.
In addition, for future high-density retinal microstimulator arrays, it may be
desirable to employ an IOC with imaging performance that exceeds the resolution
required by the prosthesis in the central field of view. This extra margin may allow
for additional image processing techniques to be employed before the image
frames are downsampled to match the resolution of the prosthesis.
For these reasons, an IOC was designed using the same configuration,
constraints, and optimization techniques described above, except that the field of
view was restricted to 20° (±10°). The imaging performance over this optimized
field of view, as well as at semi-field angles out to 40°, was evaluated for direct
comparison with the 40° (±20°) FOV lens design described in the previous section.
A ray diagram of the lens system optimized over a 20° FOV is shown in
Figure 5-16. Though the lens was optimized over ±10°, the figure shows rays traced
through the system at five field angles from 0° to +20°. It is clear from the figure
that the rays traced at 0°, 5°, and 10° come to a common focus (approximately),
whereas the rays at 15° and 20° do not coincide at the same point on the image plane,
as would be expected of a lens that is only optimized over ±10°.
Figure 5-16 Custom designed f/1 aspherical lens optimized over a 20° (±10°) FOV
(rays are traced in the figure out to +20° for comparison with the previous design,
shown in Figure 5-1, which was optimized over a 40° FOV).
It is interesting to note that, in this case, the rear surface of the lens optimized
to a more radical aspherical shape than the lens shown in Figure 5-1. With a smaller
field of view to accommodate, smaller spot sizes could be achieved with a rear
surface whose curvature begins to invert near the edges. In essence, the degrees of
freedom of the rear aspherical surface proved to be of more use in reducing off-axis
aberrations in the system with a smaller field of view. As in the previous case,
however, only two higher-order aspherical terms were necessary to achieve the given
performance, and adding additional terms resulted in negligible improvement. The
surface parameters for this lens system are shown below in Table 5-4.
Table 5-4 Surface parameters for the IOC system shown in Figure 5-16.
The first order properties of the two lens systems (40° FOV and 20° FOV)
are shown in Table 5-5 for comparison. For the most part, the parameters are
similar. The 20° FOV system has a slightly longer effective focal length at 2.2 mm
and a shorter distance to the image plane at 0.67 mm, and the f-number is exactly
Surface          | Type    | Radius (mm)                                        | Thickness (mm) | Material      | Refractive Index (559 nm)
Anterior Cornea  | Conic   | R = 7.77, K = −0.18                                | 0.500          | Cornea        | 1.376
Posterior Cornea | Conic   | R = 6.40, K = −0.60                                | 2.000          | Aqueous humor | 1.336
Window Front     | Plane   | Infinity                                           | 0.250          | Fused silica  | 1.460
Window Rear      | Plane   | Infinity                                           | 0.133          | Air           | 1.000
Lens Front       | Asphere | R = 1.4175, K = −0.5520, A = −0.0002, B = 0.0036   | 2.443          | Zeonex E48R   | 1.533
Lens Rear        | Asphere | R = −2.1884, K = −15.9455, A = 0.1333, B = −0.0124 | 0.674          | Air           | 1.000
Image            | Plane   | Infinity                                           | −              | −             | −
f/1.0. The rear nodal distance is 2.2 mm, which is 0.1 mm longer than in the
previous (40° FOV) case. As the required RMS spot diameter scales with nodal
distance, this implies a very slight increase in the desired spot diameter to meet the
imaging requirements (of about 5%, or just over 1 μm), but is not significant enough
to matter in this lens system.
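Two of the numbers above can be checked with a short sketch. The first function evaluates the even-asphere sag using the R, K, A, and B values of Table 5-4, under the assumption (standard in optical design codes, though not restated here) that A and B multiply r⁴ and r⁶; the final lines verify the roughly 5% spot-diameter scaling implied by the two rear nodal distances of Table 5-5.

```python
import math

def asphere_sag(r, R, K, A=0.0, B=0.0):
    """Even-asphere sag z(r) = c r^2 / (1 + sqrt(1 - (1+K) c^2 r^2)) + A r^4 + B r^6,
    with c = 1/R.  The r^4 and r^6 roles of A and B are an assumption here."""
    c = 1.0 / R
    return c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + K) * c**2 * r**2)) \
        + A * r**4 + B * r**6

# Sag of the front surface of the 20-degree-FOV lens at a 1.0 mm zone height:
z = asphere_sag(1.0, R=1.4175, K=-0.5520, A=-0.0002, B=0.0036)
print(f"front-surface sag at r = 1.0 mm: {z:.3f} mm")

# The required RMS spot diameter scales linearly with rear nodal distance:
scale = 2.221 / 2.100   # 20-deg-FOV vs. 40-deg-FOV rear nodal distances (mm)
print(f"allowable spot diameter increase: {100 * (scale - 1):.1f}%")
```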
First Order Parameter                   | 40° FOV System | 20° FOV System
Effective focal length                  | 2.106 mm       | 2.198 mm
System f-number                         | 0.96           | 1.00
Entrance pupil location (Cornea to EnP) | 2.232 mm       | 2.232 mm
Entrance pupil diameter                 | 2.189 mm       | 2.189 mm
Exit pupil location (ExP to Image)      | 3.443 mm       | 3.000 mm
Exit pupil diameter                     | 4.560 mm       | 3.635 mm
Front nodal distance (Cornea to NP1)    | 3.327 mm       | 3.107 mm
Rear nodal distance (NP2 to Image)      | 2.100 mm       | 2.221 mm
Paraxial image height (at ±20°)         | 0.770 mm       | 0.805 mm
Actual image height (at ±20°)           | 0.840 mm       | 0.880 mm
Image distance (lens rear to image)     | 0.939 mm       | 0.674 mm
Table 5-5 Comparison of the first order optical properties of the IOC optical system
optimized over a 40° FOV (Figure 5-1) with the IOC optical system optimized over a
20° FOV (Figure 5-16).
After the lens system was optimized with rays traced over a 20° FOV (±10°),
the system was evaluated with rays traced over a 40° FOV (±20°) to see how the
performance would look if rays beyond the optimized FOV were allowed to pass
through the system (and for direct comparison with the previous IOC system that
was optimized over a 40° FOV). With the wider field-angle rays included, the clear
aperture diameter on the front surface (the maximum footprint of rays on the lens
surface) is 2.56 mm, which implies that the physical lens diameter would exceed the
rectangular IOC packaging concept of 2.5 mm on a side (to avoid vignetting at this
surface). Since this lens is not actually designed for this field of view, this would not
be an issue because a field stop would likely be employed at the image plane to
eliminate rays outside the desired visual field. This could be done, for example, by
selecting a sensor of the appropriate size to cover a 20° FOV, or by reading out only
the pixels corresponding to the desired field of view, or by placing a physical
aperture at the sensor plane. If the field of view is restricted to the designed 20°,
then the required clear aperture diameter is only 2.18 mm, which leaves sufficient
room to fit in the smaller of the two envisioned housing concepts described earlier.
Nevertheless, the clear aperture of the lens was not restricted in this analysis in order
to facilitate a comparison of the imaging performance of the narrow and wide FOV
lens systems over a full 40° (±20°) FOV.
The imaging performance of the narrow FOV system is shown in Figure 5-17
in terms of the MTF and polychromatic spot diagram with RMS spot diameters at
five field angles from 0 to ±20°. The RMS spot diameters over the designed FOV
are all less than 10 μm, and the MTF remains well above 0.5 out to 50 lp/mm. A
spatial frequency of 50 lp/mm corresponds to better than twice the resolution needed
for future envisioned retinal prosthesis arrays with 625 to 1000 electrodes over a
±10° FOV. Thus, if the aim is to design the IOC optical system to simply match the
needed resolution and visual field of the prosthesis, then this is quite feasible at f/1.
In fact, the system can be designed with significant margin to provide added
flexibility for image processing steps that may be applied before converting the
image into a set of neuronal stimulus pulses.
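The claim that 50 lp/mm provides better than twice the needed resolution can be estimated as follows. This is a back-of-the-envelope sketch; the square electrode geometry, the one-line-pair-per-two-pitches criterion, and the use of the 2.221 mm rear nodal distance from Table 5-5 are assumptions for illustration. An N × N electrode array spanning a ±10° field subtends an image-plane width of 2 × 2.221 × tan(10°) ≈ 0.78 mm, so:

```python
import math

def required_lp_per_mm(n_across, half_fov_deg, rear_nodal_mm):
    """Spatial frequency needed to resolve one line pair per two electrode
    pitches, for n_across electrodes spanning +/- half_fov_deg."""
    span_mm = 2.0 * rear_nodal_mm * math.tan(math.radians(half_fov_deg))
    pitch_mm = span_mm / n_across
    return 1.0 / (2.0 * pitch_mm)

for n in (25, 32):   # roughly 625- and 1000-electrode square arrays
    f = required_lp_per_mm(n, 10.0, 2.221)
    print(f"{n}x{n} array: {f:.1f} lp/mm needed; 50 lp/mm margin = {50.0 / f:.1f}x")
```

Under these assumptions, a 32 × 32 array requires roughly 20 lp/mm, so 50 lp/mm indeed corresponds to better than twice the needed resolution.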
[Figure panels: diffraction MTF (0 to 50 cycles/mm, semi-field angles 0° to 20°, wavelengths 513.9, 559.0, and 608.9 nm) and polychromatic spot diagrams. RMS spot diameters: 4.5 μm at 0°, 7.2 μm at ±5°, 9.1 μm at ±10°, 36 μm at ±15°, 113 μm at ±20°.]
Figure 5-17 MTF and polychromatic spot diagram with RMS spot diameters for
the lens system shown in Figure 5-16 focused at a 20 cm object distance. The
colored lines in the MTF plot represent different semi-field angles (red: 0°, green:
5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the tangential rays, and
dashed lines for the radial, or sagittal, rays. The colors in the spot diagrams
correspond to red, green, and blue wavelengths.
A qualitative comparison of how the two IOC lens systems would image a
USAF resolution chart is illustrated in Figure 5-18 (the source image of the
resolution chart was provided by Optical Research Associates with the Code V®
optical design software). The full image shown covers a 2 × 2 mm region at the
image plane (including the black areas surrounding the chart), while the actual area
covered by the resolution chart is 1.6 × 1.6 mm, which corresponds to a 40°
horizontal and vertical field of view. Using the process described previously, these
167
Lens designed for
20° (±10°) FOV
Lens designed for
40° (±20°) FOV
Figure 5-18 Simulated images of a USAF resolution chart that spans a 40°
horizontal and vertical field of view for the two IOC lens design cases.
168
images were obtained by performing a Monte-Carlo ray trace from the source image
to the image plane. The noise present in the images is due to sampling a finite
number of rays at the image plane. The top image shows the ray trace through the
narrow FOV system, optimized over 20° (±10°), and the bottom image is for the
wide FOV lens system, optimized over 40° (±20°). The resolution at the sensor
plane of the line pairs in Element 6 of each of the four groups shown in the USAF
resolution chart is approximately 19 lp/mm for group 0, 38 lp/mm for group 1,
76 lp/mm for group 2, and 152 lp/mm for group 3. All of the line pairs in groups 2
and 3 are above 40 lp/mm, which exceeds the resolution of the envisioned 32 × 32
retinal microstimulator arrays (for which the imaging goal for the central ±10° field
of view is good contrast at 25 lp/mm to just match the resolution of the retinal
microstimulator array). For reference, the line pairs in Group 1, Element 3 have a
resolution of approximately 26 lp/mm at the sensor plane, and therefore represent the
approximate level of detail that could be just resolved by the retinal microstimulator
array. Clearly, either of these two IOC lens systems could provide more than
sufficient contrast at this resolution over the central field of view. The narrow FOV
lens system (top image in Figure 5-18) produces high-contrast, sharp images in the
central portion of the chart where groups 2 and 3 reside, but shows severe pincushion
distortion and blur in the peripheral field where the lens system was not optimized.
In contrast, the wide FOV system (bottom image) sacrifices some sharpness in the
central field to obtain much better performance over the full 40° field of view with
little distortion at the edges.
169
5.4 Optimal focus distance to provide best depth of field for retinal prosthesis
subjects
The following three sections focus on the depth of field, sensitivity, and
tolerances of the IOC under various conditions to evaluate the robustness of this
fixed focus system to variations in object distance, surgical placement, corneal
curvature, and manufacturing and alignment errors. The wide field of view lens
system discussed in Section 5.2, and shown in Figure 5-1, is used for each of these
analyses. As mentioned previously, this system was optimized over a 40° (±20°)
field of view and is nominally focused at a 20 cm object distance (unless stated
otherwise).
One of the more notable features of this short focal length camera is that it
possesses an extended depth of field, as introduced in Chapter 2. This performance
characteristic was initially recognized during experiments with the first generation
IOC prototype. Extended depth of field is an extremely important feature of the
intraocular camera, as it implies that images will remain in focus for the retinal
prosthesis subject over a wide range of object distances even as compared with the
normal human visual system. As discussed in Chapter 2, the unusually short focal
length and small aperture of the IOC in combination are responsible for this large
depth of field. Under these conditions, the hyperfocal object distance is very close to
the lens system, and varies inversely with the allowable blur circle diameter. When a
lens is focused at the hyperfocal distance, the depth of field extends from half this
distance to infinity.
170
For the custom IOC lens design presented here, the allowable blur circle
diameter is approximately 30 μm, the focal length is 2.1 mm, and the diameter of the
entrance pupil is approximately 2.2 mm. This places the hyperfocal distance at
approximately 15 cm, meaning that the blur spot diameter should remain less than
30 μm from about 7.5 cm to infinity. However, the hyperfocal distance is derived
assuming on-axis objects, so it is not clear exactly how the blur spot diameters will
vary as a function of object distance for off-axis field angles. Also, while it seems
sensible to simply focus the camera at the hyperfocal distance and claim that this will
provide the “best” depth of field, a closer or farther focus distance might in fact
provide a more “optimal” depth of field. A more complete analysis might take into
account the ±10° field of view of the retinal microstimulator array, the needs of
retinal prosthesis subjects, and the desire for decent imaging out to ±20° for
peripheral field image processing or peripheral microstimulator electrodes.
It may be desirable to focus the IOC in front of the hyperfocal distance to
allow a retinal prosthesis subject to bring objects with fine details, such as words on
a page or numbers on a key, extremely close to their eyes. This would allow them to
enlarge the image on the retina and read the information with clarity (something that
is not possible for sighted individuals, as the image becomes too blurred when
brought within a few centimeters of the eye). If this can be done while maintaining a
reasonable amount of blur at mid-range and far distances, then it may be an excellent
idea for IRP subjects. However, since subjects will often be focusing on common
171
objects such as other people, furniture, signs, and cars, which are located at mid to
far distances, it would not be sensible to focus the camera very close to the eye if it
significantly sacrifices resolution at these more common object distances.
Thus, there are three focus regions to consider for the IOC: closer than the
hyperfocal distance, at the hyperfocal distance (15 cm), and beyond the hyperfocal
distance. The plots in Figure 5-19 show the RMS spot diameter as a function of
object distance across a ±20° field of view for six focus distances: (a) 5 cm, (b)
10 cm, (c) 15 cm (hyperfocal), (d) 20 cm, (e) 50 cm, and (f) 1 m. The vertical scale
ranges from 0 to 140 μm, with a horizontal dotted line indicating the desired 30 μm
spot diameter (for the central ±10° to match the resolution and FOV of current retinal
microstimulator arrays).
Clearly, the blur spot diameters at very near distances are lower when the
camera is focused at 5 or 10 cm, which is in front of the hyperfocal distance.
However, in these cases, the blur spots remain below 30 μm over only a narrow
region of object distances near 8 to 10 cm, and become unacceptably large for
distances beyond half a meter. This choice of focus meets the goal of being able to
bring objects within a few centimeters of the eye to see small details, but fails to
maintain reasonable blur for viewing objects at more common mid-range distances
for the IRP subject.
172
IOC Focused at 5 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
± 15° field
± 20° field
100 m
15 cm
(a)
IOC Focused at 10 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
± 15° field
± 20° field
100 m
15 cm
(b)
Figure 5-19 Evaluation of the depth of field as a function of the distance at which
the camera is focused for the f/0.96 IOC system optimized over ±20°. The plots
show the RMS spot diameter as a function of object distance across a ±20° field for
six focus distances: (a) 5 cm, (b) 10 cm, (c) 15 cm, (d) 20 cm, (e) 50 cm, and (f) 1 m.
The hyperfocal distance for a 30 μm allowable blur spot diameter is 15 cm (Case
(c)). (Continued on next page.)
173
IOC Focused at 15 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
± 15° field
± 20° field
100 m
15 cm
(c)
IOC Focused at 20 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
± 15° field
± 20° field
100 m
15 cm
(d)
Figure 5-19, Continued.
174
IOC Focused at 50 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
± 15° field
± 20° field
100 m
15 cm
(e)
IOC Focused at 1 m
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
± 15° field
± 20° field
100 m
15 cm
(f)
Figure 5-19, Continued.
When focused at the hyperfocal distance of 15 cm, the depth of field for on-
axis rays ranges from approximately 7 cm to infinity, which matches our analysis.
This can be seen in Figure 5-19(c), in which the 0° curve remains below the 30 μm
175
allowable blur line over this range of distances. However, the curves for ±5° and
±10° begin to rise for object distances from 15 cm to 100 cm, eventually crossing
above the 30 μm line, meaning that the resolution may drop below the optimal value
for mid to far object distances over the central field of view. The 20-cm focus
distance case may provide a better compromise: the blur spot diameters at close
distances (2.5 to 15 cm) are nearly the same as in the hyperfocal case, but the upward
drift of the central field curves is less dramatic, with the ±5° curve remaining well
below the 30 μm line and the ±10° curve remaining essentially flat, and just above
the line, from 15 cm to 100 m. This flattening of the curves for distances beyond 15
cm is even more pronounced for the 50 cm and 1 m focus cases, but at a cost in the
nearest distance at which objects remain within focus. It is for these reasons that the
20 cm case has been used as the nominal focus distance in the analyses presented in
most of this thesis.
If the narrow-FOV IOC design is considered instead (as described in Section
5.3), then we would expect that the spot diameter curves over ±10° would be shifted
well below the 30 μm line, since this design has smaller spot diameters in the central
field of view (the curves at ±15° and ±20° would be worse, but the assumption here
is that these fields would not be imaged if this design were utilized). This would
allow the camera to be focused at a closer distance (either at or just below the
hyperfocal distance) to accommodate the desire for viewing objects near to the eye,
while maintaining the desired blur for mid to far object distances. This can be seen
176
in the plots for the narrow-FOV camera in Figure 5-20. It is still impractical to focus
the camera very close to the eye, at 5 cm, for example, as shown in Figure 5-20(a),
but it is clear that focusing the camera somewhere between 10 and 15 cm would
meet the desired imaging performance for object distances between about 5 cm to
infinity. This is an extremely large depth of field that would allow the subject to
bring objects with fine detail very close to their eyes for inspection.
Overall, these analyses demonstrate the tradeoffs with respect to depth of
field for different choices of focus distance. They show that the best focus distance
for IRP subjects is not necessarily at the hyperfocal distance of 15 cm. For the wide-
FOV IOC, it appears optimal to focus the camera at a few centimeters beyond the
hyperfocal distance (at approximately 20 cm), and at a few centimeters in front of it
(approximately 10 cm) for the narrow-FOV camera. These results also reveal that
this may be a rich area for study using visual psychophysics experiments to better
determine the optimal depth of field for prosthetic vision subjects, based on their
most common activities and needs, so that the best focus position can be set during
fabrication and alignment of an IOC.
177
Narrow-FOV IOC Focused at 5 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
100 m
15 cm
(a)
Narrow-FOV IOC Focused at 10 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
100 m
15 cm
(b)
Figure 5-20 Evaluation of the depth of field as a function of the distance at which
the camera is focused for the narrow FOV IOC system optimized over ±10°. The
plots show the RMS spot diameter as a function of object distance across a ±10° field
for four focus distances: (a) 5 cm, (b) 10 cm, (c) 15 cm, and (d) 20 cm. (Continued
on next page.)
178
Narrow-FOV IOC Focused at 15 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
100 m
15 cm
(c)
Narrow-FOV IOC Focused at 20 cm
0
20
40
60
80
100
120
140
1 10 100 1000 10000
Object Distance (cm)
RMS Spot Diameter (microns)
2.5 cm
5 cm
10 cm
20 cm
50 cm
1 m 5 m 10 m
± 0° field
± 5° field
± 10° field
100 m
15 cm
(d)
Figure 5-20, Continued.
5.5 Sensitivity to variations in surgical placement
In the analyses presented to this point, the front of the IOC optical system has
been nominally located 2 mm behind the posterior cornea. This separation is to
179
provide sufficient clearance to avoid contact with the corneal endothelium during
and after surgical implantation, and constrains the length of the IOC package and
mounting location within the crystalline lens sac, as described in Section 2.9.
However, the exact placement and alignment of the IOC relative to the corneal lens
will vary from this nominal value due to differences in individual eyes and surgeries.
The camera could be slightly closer or farther from the cornea, decentered vertically
or horizontally, or may be tilted. Since the IOC views the world through the aqueous
humor and corneal lens, it is of interest to determine how sensitive the camera’s
imaging performance is to variations in the position of the camera within the eye.
At approximately 32 diopters (using the parameters of the Liou schematic eye
model [15]), the corneal power is only 7% of the refractive power of the IOC (476
diopters). For this reason, the imaging performance of the intraocular camera is
expected to be highly tolerant to variations in placement and tilt with respect to the
corneal lens. A quantitative analysis will provide insight into the range over which
various placement errors can be tolerated without significant compromise of imaging
performance.
The xyz coordinate system used for the sensitivity analyses in this section is
the same as shown previously in Figure 5-4. As usual, the term vertical corresponds
to the y direction, horizontal to the x direction, and longitudinal to the z direction.
The f/0.96 lens system optimized over ±20° is used for these analyses, and the object
distance from the anterior cornea is fixed at 20 cm for all cases. For each plot,
180
results for five semi-field angles from 0° to +20° are shown (representing a full 40°
FOV). It is important to note that the set of rays traced at each of these field angles
is tilted along the vertical (y) dimension (within the yz plane, about the x axis), as
illustrated in all of the lens diagrams in this thesis. This is important because we will
see that the tangential MTF appears to be more sensitive to vertical shifts and tilts in
the IOC placement than the radial (or sagittal) MTF is to horizontal variations. This
may seem odd at first, but it is because vertically tilted field angles are being
considered. If the ray fans are instead traced with the field angles varying along the
horizontal dimension (within the xz plane), then the radial MTF shows the same kind
of sensitivity to horizontal shifts and tilts that the tangential MTF shows to vertical
placement errors.
The sensitivity of the IOC imaging performance to shifts in longitudinal
placement relative to the cornea is shown in Figure 5-21. Variations in the RMS
spot diameter and the tangential and radial MTF values at 25 lp/mm are plotted as
the distance between the posterior cornea and the front of the optical window is
varied from 0.5 to 4 mm. The upper plot in each pair [Figure 5-21(a), (c), and (e)]
shows the variation when the camera’s focus is fixed for the nominal case of a 2 mm
cornea−IOC distance. This represents the case in which the camera is focused a
priori with the assumption that the IOC will reside approximately 2 mm from the
cornea after implantation.
181
Alternatively, the lower plot in each pair [Figure 5-21(b), (d), and (f)] shows
the results when the camera is refocused for each cornea−IOC distance. Since the
nominal distance of 2 mm behind the cornea is subject to change as our
understanding of the packaging and surgical issues evolve, these plots show that
virtually any distance from the cornea may be chosen without any loss of
performance. This is not meant to imply that the IOC focus is adjusted in an already
designed device, on a surgery-by-surgery basis. Rather, if the surgical implantation
studies with current, or future, prototypes indicate a different nominal placement
than 2 mm from the cornea, the IOC package can be designed accordingly so that the
lens has the best focus for this revised condition. As such, there is no apparent need
to design a new custom lens, in spite of the fact that the lens shown in Figure 5-1 was
designed assuming a 2 mm distance from the cornea. For the range of cornea−IOC
distances shown, 0.5 to 4 mm, the image plane only needs to be moved by +8 to -11
μm, respectively, relative to the nominal focus position for the 2 mm case. This also
indicates that there is significant tolerance to focusing errors in the IOC.
Aside from the +20° field, it is immediately clear from Figure 5-21 that the
imaging performance is highly tolerant to longitudinal translation errors. A
longitudinal shift of ±1.5 mm toward or away from the cornea, with respect to the
nominal location, is easily tolerated. The spot diameters remain within 10% of
nominal at 0°, within 5% at 5°, 10°, and 15° degrees, and within 18% at 20°. Not
182
0
10
20
30
40
50
60
0.0 1.0 2.0 3.0 4.0 5.0
Distance from posterior cornea to IOC (mm)
RMS Spot Diameter (microns)
0° field
5° field
10° field
15° field
20° field
IOC focus fixed for 2 mm distance from cornea
(a)
0
10
20
30
40
50
60
0.0 1.0 2.0 3.0 4.0 5.0
Distance from posterior cornea to IOC (mm)
RMS Spot Diameter (microns)
0° field
5° field
10° field
15° field
20° field
IOC refocused for each distance from cornea
(b)
Figure 5-21 Sensitivity of the IOC imaging performance as a function of the
distance between the posterior cornea and the front of the camera (for the f/0.96 ±20°
IOC optical system). (a) and (b) RMS spot diameter; (c) and (d) tangential MTF at
25 lp/mm; (e) and (f) radial MTF at 25 lp/mm. Semi-field angles from 0° to 20°
along the vertical (y) axis are shown (note that the radial and tangential MTF
performance essentially reverse if the field angles are oriented along the horizontal
(x) axis instead, as the system is rotationally symmetric). (Continued on next page.)
183
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.0 1.0 2.0 3.0 4.0 5.0
Distance from posterior cornea to IOC (mm)
Tangential MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Tangential MTF
IOC focus fixed for 2 mm distance from cornea
(c)
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.0 1.0 2.0 3.0 4.0 5.0
Distance from posterior cornea to IOC (mm)
Tangential MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
IOC refocused for each distance from cornea
Tangential MTF
(d)
Figure 5-21, Continued.
184
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.0 1.0 2.0 3.0 4.0 5.0
Distance from posterior cornea to IOC (mm)
Radial MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Radial MTF
IOC focus fixed for 2 mm distance from cornea
(e)
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.0 1.0 2.0 3.0 4.0 5.0
Distance from posterior cornea to IOC (mm)
Radial MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
IOC refocused for each distance from cornea
Radial MTF
(f)
Figure 5-21, Continued.
surprisingly, the imaging at ±20° is more sensitive than at the other field angles.
This is a very large range, as it varies from having the camera placed only 0.5 mm
from the cornea (which would be dangerously close to the corneal endothelium), to
185
having it 3.5 mm from the cornea, which would put the front of the camera within
the crystalline lens sac. Thus, this range is expected to far exceed any variations due
to surgical placement.
The sensitivity of the IOC imaging performance to shifts in the vertical and
horizontal placement within the eye is shown in the three pairs of plots in Figure
5-22. A series of lens diagrams showing the IOC being vertically displaced by 0 to -
1.5 mm relative to the cornea is shown at the top of Figure 5-22(a). Incoming rays at
0°, 10°, and 20° within the yz plane are drawn in each of these diagrams. A similar
illustration for horizontal displacements is not shown, but is easily imagined, as the
IOC position would be translated in and out of the page in this case. Variations in
the RMS spot diameter, tangential MTF, and radial MTF at 25 lp/mm as a function
of the vertical displacement are shown in the upper plot of each pair [Figure 5-22(a),
(c), and (e)]. The variations for horizontal displacements are shown in the lower plot
of each pair [Figure 5-22(b), (d), and (f)].
The nominal camera placement is represented by the rightmost point in each
plot, at 0.0 mm. Since the system is symmetric, these errors would be the same for
positive shifts in the x and y directions. A shift in the negative y direction is
potentially the most likely displacement for the IOC, as the weight of the camera
may cause it to sag slightly over time. The plots show that tangential rays are the
most sensitive to such a displacement, as is expected since these rays are no longer
186
0
10
20
30
40
50
60
-2.00 -1.75 -1.50 -1.25 -1.00 -0.75 -0.50 -0.25 0.00
Vertical displacement relative to cornea (mm)
RMS Spot Diameter (microns)
0° field
5° field
10° field
15° field
20° field
(a) Vertical displacement, vertical fields
0
10
20
30
40
50
60
-2.00 -1.75 -1.50 -1.25 -1.00 -0.75 -0.50 -0.25 0.00
Horizontal displacement relative to cornea (mm)
RMS Spot Diameter (microns)
0° field
5° field
10° field
15° field
20° field
(b) Horizontal displacement, vertical fields
Figure 5-22 Sensitivity of the IOC imaging performance to shifts in placement
along the vertical (y) and horizontal (x) axes. The conditions are the same as stated
in the caption for Figure 5-21. (Continued on next page.)
187
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
-2.00 -1.50 -1.00 -0.50 0.00
Vertical displacement relative to cornea (mm)
Tangential MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Tangential MTF
(c) Vertical displacement, vertical fields
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
-2.00 -1.50 -1.00 -0.50 0.00
Horizontal displacement relative to cornea (mm)
Tangential MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Tangential MTF
(d) Horizontal displacement, vertical fields
Figure 5-22, Continued.
188
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
-2.00 -1.50 -1.00 -0.50 0.00
Vertical displacement relative to cornea (mm)
Radial MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Radial MTF
(e) Vertical displacement, vertical fields
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
-2.00 -1.50 -1.00 -0.50 0.00
Horizontal displacement relative to cornea (mm)
Radial MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Radial MTF
(f) Horizontal displacement, vertical fields
Figure 5-22, Continued.
centered about the optical axis of the cornea. Nevertheless, because the cornea
contributes little to the bending of the rays in the IOC, a rather large range of
displacement is again quite tolerable. On average, a shift of ±0.75 mm in the vertical
189
or horizontal placement results in less than a 15% change in the spot diameter and
MTF across the ±20° field.
The sensitivity of the IOC performance to vertical and horizontal tilts relative
to the cornea is shown in Figure 5-23. Again, a series of lens diagrams illustrating
the vertical tilt of the IOC is shown at the top Figure 5-23(a). A corresponding series
of diagrams for horizontal tilts can be imagined with the IOC tilting in and out of the
page. A rather large range of tilts from 0° to 14° with respect to the optical axis of
the cornea is considered. The nominal value of 0° is represented by the leftmost
point in each plot. Since the cornea is essentially spherical, tilting the camera such
that it “looks out” through a different portion of the cornea is not expected to cause
much variation in the imaging characteristics. Accordingly, the results show that, on
average, a horizontal or vertical tilt of as much as ±8° results in less than a 15%
change in spot diameter and MTF across the ±20° field.
190
0
10
20
30
40
50
60
02468 10 12 14 16
Vertical tilt (yz plane) relative to cornea (deg.)
RMS Spot Diameter (microns)
0° field
5° field
10° field
15° field
20° field
(a) Vertical tilt, vertical fields
0
10
20
30
40
50
60
0 2 4 6 8 10 12 14 16
Horizontal tilt (xz plane) relative to cornea (deg.)
RMS Spot Diameter (microns)
0° field
5° field
10° field
15° field
20° field
(b) Horizontal tilt, vertical fields
Figure 5-23 Sensitivity of the IOC imaging performance to vertical tilts (within the
yz plane, about the x axis) and horizontal tilts (within the xz plane, about the y axis).
The conditions are the same as stated in the caption for Figure 5-21. (Continued on
next page.)
191
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
024 68 10 12 14 16
Vertical tilt (yz plane) relative to cornea (deg.)
Tangential MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Tangential MTF
(c) Vertical tilt, vertical fields
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
024 68 10 12 14 16
Horizontal tilt (xz plane) relative to cornea (deg.)
Tangential MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Tangential MTF
(d) Horizontal tilt, vertical fields
Figure 5-23, Continued.
192
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
024 68 10 12 14 16
Vertical tilt (yz plane) relative to cornea (deg.)
Radial MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Radial MTF
(e) Vertical tilt, vertical fields
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
024 68 10 12 14 16
Horizontal tilt (xz plane) relative to cornea (deg.)
Radial MTF at 25 lp/mm
0° field
5° field
10° field
15° field
20° field
Radial MTF
(f) Horizontal tilt, vertical fields
Figure 5-23, Continued.
5.6 Tolerances to manufacturing and alignment errors and corneal variations
Additional causes for the imaging performance to vary from the designed
values include manufacturing errors, optical misalignments, and, in the case of an
193
intraocular camera, deviations in the curvature and aberrations of individual corneas.
A tolerancing analysis will show the degree to which each of these errors, alone and
in combination, can be tolerated before a given drop in the imaging performance is
observed.
With regard to manufacturing errors, the method of fabrication is important.
Prototype lenses will, in many cases, be diamond turned. Once a design is finalized,
it will likely be more economical to fabricate a mold to make larger quantities of the
lens at a lower cost per lens. For this reason, typical tolerance values for molded
polymer optics will be used to perform the analyses presented herein. Typical
tolerances for molded polymer lenses, as provided by G-S Plastic Optics (Rochester,
NY), the vendor that was selected to fabricate our custom aspherical lenses for the
Lens Parameter Value Tolerance Comments
Radius of curvature
(front surface)
1.7620 mm ± 5 % = ± 0.088 mm ± 1% is typical
Radius of curvature
(rear surface)
−1.5086 mm ± 5 % = ± 0.075 mm ± 1% is typical
Center thickness 2.310 mm ± 1 % = ± 0.023 mm
Clear aperture 2.500 mm ± 0.020 mm
Edge diameter 5.000 mm ± 1.000 mm
Wedge (TIR) in element - < 0.010 mm
Surface figure error - < 2 fringes 2 fringes = 1 wave
Surface irregularity - < 1 fringe/inch 2 fringes = 1 wave
Scratch-dig - <40/10
40 μm scratch,
100 μm dig
Surface roughness (RMS) - < 50 angstroms
Table 5-6 Typical manufacturing tolerances for molded polymer optics (from G-S
Plastic Optics, Rochester, NY, and [16]). The second column, labeled “Value,”
contains the corresponding values for the custom polymer lens shown in Figure
5-10.
194
next-generation IOC prototype, are shown in Table 5-6. These values, and further
details on molded polymer optics, are available in a paper authored by William S.
Beich [16].
The fused silica optical windows for use in the next-generation IOC prototype were
provided by Valley Design in Santa Cruz, CA and have the manufacturing tolerances
listed in Table 5-7 [17].
Window Parameter Tolerance Comments
Thickness ± 5 % 0.250 mm ± 0.013 mm
Wedge (TIR) in element < 0.010 mm
Tilt < 0.001 rad horiz. and vert. tilt
Surface irregularity < 8 fringes/inch 2 fringes = 1 wave
Scratch-dig <10/5
10 μm scratch,
50 μm dig
Table 5-7 Typical manufacturing tolerances for the fused silica optical window
(from Valley Design Corp., Santa Cruz, CA [17]).
Tolerances on parameters such as surface radius, center thickness, and clear
aperture have self-evident meanings, while other tolerances such as wedge, surface
figure and irregularity, and scratch-dig are less obvious. Wedge is a type of
centering tolerance, and is best illustrated with a diagram, as shown in Figure 5-24.
While the wedge could be measured as either a decenter or tilt of one of the surfaces
relative to the other (as wedge is the result of the two surfaces having different
optical axes; or laterally displaced centers of curvature), it is often specified in
optical shops by the difference in edge thickness at the clear aperture. The value for
wedge is often specified in millimeters and is given as a “total indicator runout,” and
is therefore abbreviated as TIR [18, 19].
195
A
B
A
TIR = (A − B)
Figure 5-24 Diagram illustrating wedge tolerance in terms of TIR (total indicated
reading).
Surface figure (or power) error and irregularity describe deviations of the
surface from the ideal shape specified in the design. For a spherical surface, a
precision test plate is fabricated with the inverse curvature of the surface to be tested.
When the surface under test is brought into contact with the test plate and viewed in
nearly monochromatic light, a set of circular optical fringes is observed. The number
of visible fringes results from differences between the radii, or optical power, of the
two surfaces. Each fringe corresponds to a half wave of difference and the total
number of rings is the surface figure error. Surface irregularity is a measure of the
non-uniformity, or imperfections, in the optical fringes. These concepts are
illustrated in Figure 5-25. For an aspherical surface, it is more common to use an
interferometric test to measure surface errors. In an interferometric “null test,” the
optical wavefront transmitted by the test lens is compared with a reference wavefront
that is reflected from a precision element designed to null (i.e., to flatten or undo) the
variations in the wavefront induced by the test element. Thus, any deviations from
the interference pattern of two plane wavefronts at the output of the interferometer
reveal surface figure errors and irregularities. There are several other interferometric
test configurations for measuring surface accuracy, and the general principle of
comparing the test wavefront against a reference wavefront applies in all of them
[20-23].
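The half-wave-per-fringe rule above amounts to a one-line conversion from a fringe count to a surface height error. A sketch follows; the test wavelength is an assumption (559 nm, the design center wavelength used throughout this chapter, rather than a specific test-plate illumination line).

```python
# Hedged sketch: converting a test-plate fringe count to a surface height
# error using the half-wave-per-fringe rule described in the text.
# The 559 nm test wavelength is an assumption.

def surface_error_um(n_fringes, wavelength_nm=559.0):
    """Each fringe corresponds to lambda/2 of surface height difference."""
    return n_fringes * (wavelength_nm * 1e-3) / 2.0

# The Table 5-7 irregularity note "2 fringes = 1 wave" falls out directly:
two_fringes = surface_error_um(2)   # one full wave, ~0.559 um
three_fringes = surface_error_um(3) # ~0.84 um of surface error
```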
[Diagram: ½ fringe of local irregularity; 3 fringes of power.]
Figure 5-25 Surface figure (or power) and irregularity as measured by fringes either
with a test plate or an interferometric test (after [21, 23]).
The scratch-dig rating (“scratch/dig”) indicates the allowable sizes of visible
defects on the optical surfaces, as defined by U.S. Military Standard MIL-PRF-
13830B [24]. The scratch value refers to the maximum allowable width in microns
of any scratches, marks, or tears on the surface. The dig value indicates the
maximum allowable diameter of any pit or bubble in hundredths of a millimeter.
Thus, a scratch-dig rating of 40/10 would permit scratches up to 40 μm wide and
dig diameters of 0.10 mm (100 μm). For surfaces that are not near the image plane,
scratches and digs are unlikely to affect the image quality, but can affect the contrast
ratio and signal-to-noise ratio, as they contribute scattered light in the image plane.
In practice, it is typically not worthwhile for an optical shop to bother with scratches
and digs that are not visible to the eye [25].
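The rating convention just described can be captured in a few lines. This is a hedged sketch of the convention as stated in the text; the helper name and parsing are hypothetical, not part of MIL-PRF-13830B itself.

```python
# Hedged sketch of the scratch/dig rating convention described above.

def scratch_dig_limits_um(rating):
    """Return (max scratch width, max dig diameter) in micrometers.

    Per the text: the scratch number is a width in micrometers; the dig
    number is a diameter in hundredths of a millimeter.
    """
    scratch, dig = (int(x) for x in rating.split("/"))
    return scratch, dig * 10  # dig: 0.01 mm units -> micrometers

assert scratch_dig_limits_um("40/10") == (40, 100)  # the example in the text
assert scratch_dig_limits_um("10/5") == (10, 50)    # the window spec in Table 5-7
```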
Using the manufacturing tolerances listed above for the polymer lens and
silica window, an analysis was performed to determine the impact of varying each
parameter over its full (±) tolerance range on the performance in terms of the change
in MTF at 25 lp/mm at field angles from 0° to ±20°. In addition to the parameters
shown in the tables above, the spacing between the window and the lens was varied
by ±0.025 mm, and the window and lens surfaces were tilted by ±0.001 rad in the
vertical and horizontal directions. The distance to the image plane was used as a
“compensator,” meaning that the camera was refocused in each case to partially
compensate for the degradations in image performance. This simulates the focus
adjustments that could be made during assembly and alignment of the IOC.
The sensitivity analysis feature of Code V® was used to vary each of these
parameters to determine the impact on the system MTF as well as to predict the
probable change in MTF for a statistical combination of these parameter variations
[26, 27].
The results of the sensitivity analysis showed that errors in the radius of
curvature on the front lens surface have the largest impact on the performance,
followed by errors in the radius on the rear lens surface. All of the remaining
parameters had virtually no impact on the tangential and radial MTFs across the field
of view (each error typically caused a change in the MTF of less than ±0.01). This
represents one advantage of a single element design: variations due to
manufacturing and alignment errors are minimized. The largest change in
performance was a drop of 0.14 in the tangential MTF for the 0° field angle, which
occurred when the front surface radius varied by −0.088 mm (−5%), after refocusing
the image plane by 59 μm. According to the manufacturer, a ±5% error in the radius
of curvature is extremely conservative for small diameter lenses and a tolerance of
±1% is more commonly achieved.
The probable change in the tangential and radial MTFs for a statistical
combination of these parameter variations was also computed. Again, the worst-case
result is for the on-axis field, for which there is a predicted 98% probability that the
reduction in MTF will be less than or equal to 0.19, assuming that the image plane
can be refocused within a ±70 μm range (the reduction in MTF at other field angles
was between 0.04 and 0.16).
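Code V computes the probable combined change with its own statistical model [26, 27]. As a hedged illustration of why one dominant tolerance controls the combined result, a simple root-sum-square (RSS) estimate for independent perturbations can be sketched; the input values below are illustrative only, echoing the dominant −0.14 front-radius contribution and many ~0.01 contributors, and this is not the Code V algorithm.

```python
import math

# Hedged sketch (not the Code V statistical model): root-sum-square
# combination of independent tolerance-induced MTF drops.

def rss_mtf_drop(individual_drops):
    """Combine independent MTF reductions by root-sum-square."""
    return math.sqrt(sum(d * d for d in individual_drops))

# Illustrative values: one dominant front-radius term plus ten small ones.
drops = [0.14] + [0.01] * 10
combined = rss_mtf_drop(drops)  # ~0.144: barely above the dominant term alone
```

The RSS estimate shows the combined drop is dominated almost entirely by the largest single contributor, which is why tightening the lens radius tolerances is the effective remedy discussed below.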
A change of −0.2 in the MTF for on-axis rays is fairly significant. This
would reduce the MTF from 0.8 to 0.6 at 0°. The reduction at the remaining field
angles is closer to −0.1, which is also a little larger than desired, especially at ±20°,
where the already low MTF values at 25 lp/mm would drop to 0.34 tangential and
0.09 radial.
Since the main contributor to this change in performance is the ±5%
tolerance on the lens radii of curvature, which is very conservative, it is of interest to
determine how much this tolerance would need to be tightened to achieve a reduction
in MTF of less than −0.05 across the field of view. This would represent a tolerable
level of image degradation.
To this end, the tolerancing capability in Code V® was used to determine a
new set of tolerances for the optical system such that the probable change in the
tangential MTF is less than −0.05 across the field of view. The results of this
analysis showed that the tolerance on the radius of the front surface of the lens needs
to be tightened from ±0.088 mm (±5%) to ±0.020 mm (±1.1%), and that the tolerance
on the rear lens radius needs to be tightened from ±0.075 mm (±5%) to ±0.040 mm
(±2.7%). However, the lens thickness tolerance can be relaxed from ±0.022 mm to
±0.080 mm, the irregularity from 2 fringes to 3 fringes, and the tilt
tolerance from ±0.001 rad to ±0.005 rad, while the wedge tolerance should remain the
same at 0.010 mm.
Since the polymer lens manufacturer stated that they typically achieve ±1%
for the radius of curvature for small diameter lenses, this new set of tolerances would
be reasonable to specify when ordering lenses, and would result in a lens system that
is quite robust with respect to manufacturing tolerances and alignment errors.
The final consideration to be made is the effect of possible variations in the
curvature of different individuals' corneas on the imaging performance of the IOC.
In Liou and Brennan’s paper on an anatomically accurate model eye [15, section
3.B], the authors provide an extensive summary of numerous measurements of the
anterior and posterior corneal shapes of hundreds of human eyes by several
researchers including Clark [28], Kiely, et al. [29], Guillon, et al. [30], and Royston,
et al. [31]. In accordance with Liou and Brennan, the more recent results on 220
eyes from Guillon, et al. were used to tolerance the shape of the anterior cornea in
the present analysis: 7.77 ± 0.25 mm for the radius of curvature, and −0.18 ± 0.15 for
the conic constant. The same tolerances were used for the posterior cornea:
6.40 ± 0.25 mm for the radius of curvature, and −0.16 ± 0.15 for the conic constant.
In addition, the thickness of the cornea was toleranced to 0.5 ± 0.1 mm. Since there
is currently no envisioned method for adjusting the focus of the IOC after it is
implanted, the image plane was not allowed to be refocused to compensate for
changes in performance that resulted from variations in the corneal lens.
The results of the sensitivity analysis showed that variations in the anterior
corneal radius of curvature have the greatest impact on the system MTF. A variation
of ±0.25 mm in the anterior radius results in a maximum change in the radial MTF of
−0.06, and −0.10 for the tangential MTF, at 25 lp/mm. Individual changes in the
corneal thickness, posterior radius, and the two conic constants resulted in variations
of less than 0.01 in the MTF across the field of view. These are the worst case
results when the corneal lens parameters vary by their maximum tolerance values. A
statistical combination of these tolerances predicts that the radial MTF would vary
by −0.05, and the tangential MTF by −0.08, on average across the field of view. As
expected, since the corneal lens curvature contributes so little refractive power to the
total system, the IOC is highly tolerant to expected variations in corneal shape across
individuals.
5.7 Incorporation of multiple refractive lens elements
This section explores the possibility of incorporating more than one lens into
the confined space of the IOC package to better control aberrations relative to the
single lens design optimized over a ±20° field of view. The price of doing so is added
system complexity, cost, and alignment requirements. Within the extremely short
3.5 mm constraint on the optical system length, it is not clear that it would be
feasible to incorporate additional refractive elements that would be both
manufacturable and produce a significantly improved image.
In Section 5.2.3, attempts to incorporate a negative lens element to reduce the
Petzval sum (and thereby reduce the overall field curvature) were described that
proved to be unsuccessful, primarily due to space limitations within the IOC
housing. Several unsuccessful attempts were made to optimize the design with a
negative lens element placed at different locations within the system (especially near
the image plane, where field flattening lenses are often located). The addition of a
negative lens reduces the power of the optical system and consumes precious space
needed by the positive aspherical lens, which simply cannot be afforded in this case.
It thus appears to be impractical to add negative power to such a high focal power,
short optical length system.
Within the limited system length available, designs with two positive
refractive lenses were explored. The addition of a third lens was also considered, but
led to extremely thin lenses or lenses with impractically thin edge thicknesses that
would be costly and difficult to manufacture. As an indicator of what is
manufacturable, commercial lenses are available with clear apertures of 2.0 to
2.5 mm and center thicknesses of 0.8 mm (and are typically aimed at on-axis
focusing applications such as laser diode coupling) [32].
During the design process, several two-lens configurations, all with very
similar performance characteristics, were obtained and evaluated. The optical
systems were initially optimized using conic surfaces, in which each surface is
specified by a spherical radius of curvature and a conic constant. Higher order aspheric
coefficients were then incorporated gradually. Different constraints on the minimum
lens thickness, from ≥ 0.5 mm to ≥ 1.0 mm, were attempted. It was found that the
optimizations tended to select the minimum allowable lens thickness, and so a
constraint of ≥ 0.75 mm was ultimately selected to ensure manufacturability. After
many two-lens starting points and optimization sequences were attempted, it was
apparent that the two-lens optimizations tended to converge on a similar layout with
the same general level of performance across the field of view. Better performance
at ±20° was always achieved with thinner lenses, as more room was then allowed
between the lenses for the rays to travel. The lens diagram for one of the better
performing lens configurations is shown in Figure 5-26, with rays traced through the
system at five field angles from 0° to 20° (40° FOV). In this lens system, the
f-number was set at f/1 by using a 1.73 mm diameter aperture stop at the rear surface
of the first lens.
[Lens layout: cornea, aqueous humor, window, and two aspherical lenses in air;
rays traced at 0° and 20°; aperture stop (1.73 mm ∅); dimension labels 3.50 mm,
2.00 mm, 0.5 mm, and 0.78 mm.]
Figure 5-26 Custom designed IOC lens system with two Zeonex E48R polymer
aspherical lenses located in air, behind a fused silica window. A 1.73 mm diameter
aperture stop is located at the rear surface of the first lens. The system operates at f/1
with a focal length of 2.1 mm and is optimized over a ±20° field of view. The IOC is
focused at a 20 cm object distance. Both lenses are 0.75 mm thick and have masses
of 3 mg and 2 mg, respectively (calculated). The image distance is 1.13 mm,
measured from the rear vertex of the second lens to the image plane. The fused silica
window is 2.8 mm in diameter and the two lenses are 2.5 mm in diameter (see text
for explanation).
The surface parameters for the lens system are listed below in Table 5-8. The
back focal length is 1.13 mm, and the lenses are separated by an axial air space of
0.62 mm. As noted above, the two lenses optimized to the minimum allowed center
thickness of 0.75 mm. The maximum used clear aperture on the lens surfaces is
approximately 2.2 mm for rays over the ±20° field of view (on the front surface of
lens 1 and rear surface of lens 2). However, the rays at ±20° intersect a clear
aperture of 2.6 mm on the front surface of the silica window, and 2.4 mm on the
window’s rear surface. This means that to avoid vignetting at ±20°, the
window requires a physical diameter > 2.6 mm. The rays at ±15° intersect a clear
aperture of 2.4 mm on the window, so it may be acceptable to simply clip some of
the rays at ±20° to obtain the desired package size.
                                                                       Refractive
Surface            Type      Radius (mm)     Thickness (mm)  Material       Index (559 nm)
Anterior Cornea    Conic     R = 7.77,       0.500           Cornea         1.376
                             K = −0.18
Posterior Cornea   Conic     R = 6.40,       2.000           Aqueous humor  1.336
                             K = −0.60
Window Front       Plane     Infinity        0.250           Fused silica   1.460
Window Rear        Plane     Infinity        0.000           Air            1.000
Lens 1 Front       Asphere   R = 2.3348,     0.7500          Zeonex E48R    1.533
                             K = −2.8901,
                             A = −0.0115,
                             B = 0.0033
Lens 1 Rear        Asphere   R = 9.7433,     0.0458          Air            1.000
                             K = −8.8846,
                             A = −0.0309,
                             B = 0.03389
Aperture Stop      Plane     Infinity        0.5731          Air
Lens 2 Front       Asphere   R = 5.5340,     0.7500          Zeonex E48R    1.533
                             K = −7.9777,
                             A = −0.1210,
                             B = 0.0537
Lens 2 Rear        Asphere   R = −1.3810,    1.131           Air            1.000
                             K = −1.1413,
                             A = −0.0257,
                             B = −0.0002
Image              Plane     Infinity        −               −              −
Table 5-8 Surface parameters for the two-lens IOC system shown in Figure 5-26,
when focused at a 20 cm object distance. “Thickness” indicates the axial distance
from the vertex of the current surface to the vertex of the next surface. “Material”
refers to the optical medium following the surface. The quantities R, K, A, and B are
defined in the aspherical surface sag formula shown in Equation (5-1).
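The sag of each surface can be evaluated directly from the tabulated coefficients. The sketch below assumes Equation (5-1) is the standard even-asphere form, with c = 1/R the vertex curvature and A and B multiplying r⁴ and r⁶ respectively; that term assignment is an assumption about the equation, which is defined elsewhere in the chapter.

```python
import math

# Hedged sketch: evaluating an even-asphere sag z(r) from Table 5-8 values.
# Assumed form: z = c*r^2 / (1 + sqrt(1 - (1+K)*c^2*r^2)) + A*r^4 + B*r^6,
# with c = 1/R.

def asphere_sag(r, R, K, A=0.0, B=0.0):
    c = 1.0 / R
    conic = c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + K) * c * c * r * r))
    return conic + A * r**4 + B * r**6

# Lens 1 front surface from Table 5-8, evaluated at a 1.0 mm semi-aperture.
z = asphere_sag(1.0, R=2.3348, K=-2.8901, A=-0.0115, B=0.0033)  # ~0.19 mm
```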
The first order properties of the f/1 two-lens IOC system are listed in Table
5-9. The effective focal length and rear nodal distance are about equal to those of the
single lens design shown in Figure 5-1, meaning that the requirements on spot
diameter and MTF remain the same for this two-lens system in order to match the
resolution of current and future retinal microstimulator arrays.
First Order Parameter Value
Effective focal length 2.107 mm
System f-number 1.00
Entrance pupil location (Cornea to EnP) 2.986 mm
Entrance pupil diameter 2.100 mm
Exit pupil location (ExP to Image) 1.898 mm
Exit pupil diameter 3.003 mm
Front nodal distance (Cornea to NP1) 3.620 mm
Rear nodal distance (NP2 to Image) 2.124 mm
Paraxial image height (at ±20°) 0.772 mm
Actual image height (at ±20°) 0.780 mm
Image distance (lens rear to image) 1.131 mm
Table 5-9 First order optical parameters for the two-lens IOC system shown in
Figure 5-26.
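Two of the tabulated first-order quantities can be cross-checked against each other. A hedged sketch, assuming the paraxial image height is referred to the rear nodal point (h = rear nodal distance × tan θ) and that percent distortion is the fractional difference between the actual and paraxial heights; small discrepancies against the tabulated 0.772 mm and the 0.94% quoted later reflect rounding in the table.

```python
import math

# Hedged sketch: first-order consistency checks on Table 5-9 values.

def paraxial_image_height(rear_nodal_mm, semi_field_deg):
    """Paraxial height referred to the rear nodal point: d_N' * tan(theta)."""
    return rear_nodal_mm * math.tan(math.radians(semi_field_deg))

def distortion_percent(actual_mm, paraxial_mm):
    """Percent distortion: 100 * (actual - paraxial) / paraxial."""
    return 100.0 * (actual_mm - paraxial_mm) / paraxial_mm

h = paraxial_image_height(2.124, 20.0)  # ~0.773 mm vs. 0.772 mm tabulated
d = distortion_percent(0.780, 0.772)    # ~1.0 %, near the ~0.94 % quoted later
```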
The imaging performance over ±20° of this two-lens design is shown in the
MTF curves and spot diagrams in Figure 5-27. For all of the two-lens designs
attempted, the radial MTF at ±20° was severely degraded relative to the performance
at all other field angles. Attempts to optimize the system to improve the radial MTF
at ±20° resulted in an intolerable decrease in performance at all other field angles.
Thus, there is significant astigmatism at ±20° (and also at ±15°), though slightly
smaller RMS spot diameters are achieved across the entire field of view in
comparison with the single lens case.
[MTF plot (diffraction MTF vs. spatial frequency, 5–50 cycles/mm, semi-field
angles 0° to 20°, photopic weights 513.9/559.0/608.9 nm at 1/2/1) and spot
diagrams with RMS spot diameters: 8.1 μm at 0°, 8.4 μm at ±5°, 11 μm at ±10°,
18 μm at ±15°, and 33 μm at ±20°.]
Figure 5-27 MTF and polychromatic spot diagram with RMS spot diameters for
the two-lens IOC system shown in Figure 5-26, focused at a 20 cm object distance.
The colored lines in the MTF plot represent different semi-field angles (red: 0°,
green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the tangential rays,
and dashed lines for the radial, or sagittal, rays. The colors in the spot diagrams
correspond to red, green, and blue wavelengths.
The set of ray intercept plots for the two-lens system is shown in Figure 5-28.
In comparison with the same set of plots for the single-lens design shown in Figure
5-7, it is clear that the two lens system provides much better correction of off-axis
aberrations such as third-order coma. The undulations about the axis for the
tangential field at ±20° indicate that the aberrations are controlled with a balance of
first and higher-order aberrations. As noted in the MTF plots, astigmatism
dominates the aberrations at ±15° and ±20°. As we will see, though, this
astigmatism at large off-axis field angles actually provides an artificial flattening of
the field.
[Ray intercept plots (ray aberrations in millimeters, vertical scale ±0.025 mm)
at 513.9, 559.0, and 608.9 nm for relative field heights 0.00 (0°), 0.24 (5°),
0.48 (10°), 0.74 (15°), and 1.00 (20°), tangential and sagittal.]
Figure 5-28 Ray intercept curves for the two-lens IOC design shown in Figure 5-26.
The horizontal axis is the ray height in the entrance pupil, and the vertical axis is the
ray intersection at the image plane.
The astigmatic field curves presented in Figure 5-29(a) show that the sagittal
and tangential image planes begin to curve away from each other at about ±5°, and
are significantly separated at ±20°. The “compromise” image plane of best focus,
however, lies on a nearly flat plane. The large astigmatism noted in the MTF curves
and ray aberration plots above is offset by this effective field flattening, explaining to
a degree why such small spot diameters are achieved in spite of this astigmatism.
The distortion of the two-lens system is shown in Figure 5-29(b), on the same
scale as the plot for the single lens system in Figure 5-8(b). The distortion is only
0.94% at the field edge, in comparison with 3.1% for the single lens case.
Figure 5-29 (a) Astigmatic field curves (“S” is for sagittal and “T” is for tangential)
and (b) distortion plot as a function of semi-field angle from 0° to 20° at 559 nm for
the lens design shown in Figure 5-26. The range for the horizontal axis of the
astigmatic field plot is ±0.08 mm, and is ±5% for the distortion plot.
To qualitatively compare the performance of the single-lens and two-lens
IOC systems, Figure 5-30 shows the results of imaging a USAF resolution chart
through both designs. This is the same chart, covering the same field of view, used
previously to compare the narrow-FOV and wide-FOV single-lens systems in Figure
5-18. While both systems meet the relaxed imaging requirements of the retinal
prosthesis system, the two-lens design produces a noticeably clearer image across
the central field of view, and is slightly worse in the peripheral field.
[Image comparison: two lens design (Figure 5-26) vs. single lens design (Figure 5-1).]
Figure 5-30 Comparison of imaging a USAF resolution chart that spans 40°
horizontal and vertical fields of view for the two-lens design shown in Figure 5-26,
with the single-lens design shown in Figure 5-1.
This level of
improvement may not be substantial enough to warrant the added cost and
complexity of fabricating and aligning a multiple-lens system in the current
prototype design, especially given that the single-lens system satisfies the current
imaging goals.
5.8 Consideration of the spectral transmittance of the eye and the response of
CMOS imaging sensors
The lens designs presented up to this point were optimized and evaluated
using a three-wavelength spectrum that approximates the photopic response curve of
the eye, which peaks at 555 nm, and is about twice as sensitive to green light as it is
to red and blue light (the response falls to 50% of the peak value at 510 nm on the
blue side of the spectrum and at 610 nm on the red end) [12, 13]. This is a common
spectral weighting to use for the design of visible light imaging systems as these
systems are often engineered to approximate the human visual response (using
specific patterns of color filters on the sensor array and image processing algorithms)
[11, 33-37]. Since it is known that the IOC will be placed behind the cornea and
will likely incorporate a color or monochrome CMOS sensor, it makes sense to
reevaluate the performance of our lens systems over a spectrum that has been
appropriately weighted to account for these specific conditions. If the performance
fails to meet our imaging criteria over this extended spectrum, it is of interest to
determine if the lens system could be reoptimized to meet our requirements over this
spectrum.
An evaluation of the spectral transmittance of the cornea and aqueous humor,
along with the response of typical CMOS sensors resulted in the selection of the set
of wavelengths and weights shown in Table 5-10 for use in our optical design
software (Code V®). The spectral transmittance of the combination of the cornea
and aqueous humor was estimated from values reported in references [38-42]. It
should be noted that there are large discrepancies in the literature for the corneal
transmittance at the blue end of the spectrum, ranging from 0.1 to 0.8 at 400 nm. For
this reason, a mid-range value of 0.5 was assumed for the transmittance at 400 nm.
The spectral response of typical CMOS sensors is shown in references [36, 37]. The
combined responses of the ocular media and CMOS sensor were normalized and
converted to integer values for use in the Code V® optical design software. The
table shows the values for a CMOS sensor with a color filter array.

Wavelength   Cornea-Aqueous     CMOS CFA      Product               Code V®
(nm)         Transmittance (1)  Response (2)  Normalized to 1.00    Spectrum (3)
400          0.50               0.20          0.16                  2
450          0.70               0.50          0.56                  6
550          0.90               0.70          1.00                  10
650          0.95               0.60          0.90                  9
700          0.95               0.50          0.75                  8

(1) Values estimated from Refs. [38-42]. Note that there is some discrepancy in the
references for the transmittance near 400 nm and so a worst case (large value) was
assumed.
(2) Values estimated from Refs. [36, 37]. CFA: Color Filter Array. It is also
assumed that an IR cutoff filter is incorporated to absorb light above 700 nm.
(3) Code V® software requires integer values for wavelength weights.

Table 5-10 Set of wavelengths and weights used for the extended wavelength
spectral analyses performed in Code V® software. The weights shown are based on
estimations of the combination of the spectral transmittance of the cornea and
aqueous humor, and a typical CMOS imaging sensor with a color filter array.

A set of weights for the combination of the cornea and a monochrome CMOS sensor was also
determined, and is very similar: 3, 6, 10, 8, and 7 at wavelengths 400, 450, 550, 650,
and 700 nm, respectively (assuming an IR cutoff filter is used to absorb light above
700 nm). For comparison, the set of three photopic wavelengths used previously
were from a built-in spectrum in the Code V® software and are: 513.9, 559.0, and
608.9 nm, with weights of 1, 2, and 1, respectively.
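The weight computation behind Table 5-10 can be reproduced from its input columns: multiply the transmittance and sensor response at each wavelength, normalize to the peak, and convert to integers. The scale factor of 10 before rounding is an assumption about how the tabulated integers were produced, though it reproduces them exactly.

```python
# Sketch reproducing the Table 5-10 weight computation: product of the
# estimated cornea + aqueous transmittance and the CMOS color-filter-array
# response, normalized to 1.00, then scaled (by an assumed factor of 10)
# and rounded to the integer weights Code V requires.

wavelengths_nm = [400, 450, 550, 650, 700]
cornea_T = [0.50, 0.70, 0.90, 0.95, 0.95]   # cornea-aqueous transmittance
cmos_cfa = [0.20, 0.50, 0.70, 0.60, 0.50]   # CMOS CFA response

products = [t * r for t, r in zip(cornea_T, cmos_cfa)]
peak = max(products)
normalized = [p / peak for p in products]       # 0.16, 0.56, 1.00, 0.90, 0.75
weights = [round(10 * n) for n in normalized]   # 2, 6, 10, 9, 8 as in the table
```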
A comparison of the polychromatic MTF curves and spot diagrams for the
lens shown in Figure 5-1, using the original photopic spectral weights and the CMOS
spectral weights, is shown in Figure 5-31. For the latter case, the imaging plane was
adjusted back by 21 μm to refocus the lens for the new spectrum. The additional
chromatic aberration resulting from the wider spectrum affects the central ±10° field
of view more than the outer field angles, where non-chromatic aberrations dominate.
The impact is largest for the on-axis image point (at 0°, the RMS spot diameter
increases from 12 to 19 μm and the MTF at 25 lp/mm decreases from 0.8 to 0.6).
The spot diameters at off-axis field angles do not change significantly, but the MTF
drops below the desired value of 0.5 at 25 lp/mm for the tangential rays from 5° to
20°. The sagittal rays remain just above this value for all but the 20° field.
Altogether, the result of using the CMOS spectrum is a bunching of the MTF
performance curves around those at the central field angle, ±10°. This means the
performance of this lens system with these spectral weights is at the threshold of our
imaging criteria for the case of the 32 × 32 microstimulator array at the retina
covering a 20° field of view.
[Two MTF plots (diffraction MTF vs. spatial frequency, 5–50 cycles/mm) with spot
diagrams for the f/0.96 lens focused at 20 cm:
(a) With photopic spectral weights (same as in Figure 5-2): 513.9/559.0/608.9 nm
at weights 1/2/1; RMS spot diameters 12 μm at 0°, 20 μm at ±5°, 29 μm at ±10°,
33 μm at ±15°, and 42 μm at ±20°.
(b) With cornea and color CMOS spectral weights: 400/450/550/650/700 nm at
weights 2/6/10/9/8; RMS spot diameters 19 μm at 0°, 25 μm at ±5°, 33 μm at ±10°,
36 μm at ±15°, and 42 μm at ±20°.]
Figure 5-31 Comparison of the MTF and polychromatic spot diagrams for the lens
system shown in Figure 5-1 using (a) the photopic spectrum, and (b) the extended
wavelength spectrum. The colored lines in the MTF plot represent different semi-
field angles (red: 0°, green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for
the tangential rays, and dashed lines for the radial, or sagittal, rays. The colors in the
spot diagrams correspond to red, green, and blue wavelengths.
The above lens system was reoptimized using the extended wavelength
spectrum to see if the image quality could be restored to a level similar to that seen
with the photopic spectrum. A drawing of the resulting f/1 lens system is shown in
Figure 5-32 with a set of rays traced at five field angles from 0° to 20°.
Figure 5-32 Custom designed f/1 aspherical IOC lens system optimized over a
40° (±20°) FOV using an extended wavelength spectrum that has been weighted to
approximate the combination of the spectral transmittance of the cornea and a
typical color CMOS imaging sensor (for comparison with the previous design
shown in Figure 5-1, which was optimized using photopic spectral weights). A
2-mm aperture stop is located on the rear surface of the window (not shown).
The detailed surface parameters for the reoptimized system are shown below
in Table 5-11. The effective focal length is 2.2 mm and the clearance between the
lens and the image plane is 0.716 mm. Note that the aspherical shape of the rear lens
surface inverts by a fairly significant amount near the edge of the clear aperture for
this surface. This was necessary to obtain the desired imaging performance across
the spectrum, but is well within the range of manufacturable lens shapes via diamond
turning and/or molding.
                                                                       Refractive
Surface            Type      Radius (mm)     Thickness (mm)  Material       Index (559 nm)
Anterior Cornea    Conic     R = 7.77,       0.500           Cornea         1.376
                             K = −0.18
Posterior Cornea   Conic     R = 6.40,       2.000           Aqueous humor  1.336
                             K = −0.60
Window Front       Plane     Infinity        0.250           Fused silica   1.460
Window Rear        Plane     Infinity        0.100           Air            1.000
Lens Front         Asphere   R = 1.4980,     2.435           Zeonex E48R    1.533
                             K = −0.6017,
                             A = −0.0126,
                             B = −0.0263,
                             C = 0.0281,
                             D = −0.0080
Lens Rear          Asphere   R = −1.9539,    0.716           Air            1.000
                             K = −7.7977,
                             A = 0.0586,
                             B = 0.1217
Image              Plane     Infinity        −               −              −

Table 5-11 Surface parameters for the IOC system shown in Figure 5-32 when
focused at a 20 cm object distance. “Thickness” indicates the axial distance from the
vertex of the current surface to the vertex of the next surface. “Material” refers to
the optical medium following the surface. The quantities R, K, A, B, C, and D are
defined in the aspherical surface sag formula shown in Equation (5-1).

The MTF and spot diagrams across the field of view for the reoptimized lens
system are shown in Figure 5-33. By allowing the rear surface to take on a more
severe aspherical curvature, this design is able to compensate for the extended
wavelength spectrum and even shows general improvement over the field of view
relative to the photopic-weighted design, with the exception of the on-axis field
point and the sagittal rays at ±20°. Attempts to increase the MTF for these rays
always came at a cost in performance at the other field angles. Aside from the
sagittal rays at ±20°, the MTF is at or above the desired value of 0.5 at 25 lp/mm
over the entire field of
view.
[MTF plot (diffraction MTF vs. spatial frequency, 5–50 cycles/mm, cornea and
RGB CMOS weights 400/450/550/650/700 nm at 2/6/10/9/8) and spot diagrams
with RMS spot diameters: 17 μm at 0°, 17 μm at ±5°, 22 μm at ±10°, 27 μm at
±15°, and 39 μm at ±20°.]
Figure 5-33 MTF and polychromatic spot diagram with RMS spot diameters for
the lens system shown in Figure 5-32, focused at a 20 cm object distance. The
colored lines in the MTF plot represent different semi-field angles (red: 0°, green:
5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the tangential rays, and
dashed lines for the radial, or sagittal, rays. The colors in the spot diagrams
correspond to different wavelengths.
5.9 Summary
This chapter presented the detailed imaging characteristics and tradeoffs of
several potential refractive lens designs for the intraocular camera. The central
design featured in this chapter has a 2.1 mm focal length, operates at f/0.96 over a
40° (±20°) field of view, has a mass of approximately 14 mg, fits within a 2.5 mm
diameter package, and spans only 3.5 mm in length from the front of the system to
the image plane. Prototype quantities of the polymer lens from this design were
fabricated, and initial tests were performed of the lens alone in air and coupled with a
small form factor image sensor array. AR coating and profilometer results showed
that the fabricated lenses meet or exceed the specified manufacturing tolerances.
The lens design process was discussed to provide insight into the sequence of
optimization steps used to arrive at a final design. A review of aberration theory and
some associated diagnostic tools such as ray intercept plots and field curves was
given to better understand the dominant sources of image degradation in IOC optical
systems. It was shown that field curvature and coma are the dominant aberrations
preventing improved image sharpness at off-axis field angles, especially over the
wide 40° field of view. A narrower FOV camera was designed to demonstrate that
the imaging could be significantly improved over the central 20° (±10°) visual field
covered by the retinal prosthesis by sacrificing performance in the periphery. These
results showed that an f/1 camera with excellent imaging that exceeds the required
resolution of envisioned retinal prostheses can be designed over ±10°. This
additional resolution in the captured visual scene may provide added flexibility in the
image processing steps that are applied before down sampling and converting the
scene into a set of neuronal stimulus pulses. This, in turn, could be useful for future
retinal prosthesis systems.
An extensive sensitivity analysis of the IOC optical system showed that the
imaging performance is exceptionally tolerant to variations in surgical placement and
differences in corneal curvatures, largely due to the fact that the corneal lens
contributes less than 10% to the refractive power of the overall IOC lens system.
The performance of the IOC optical system remains within tolerable limits for
typical manufacturing and alignment errors, as demonstrated by simulation
experiments. A tolerancing analysis was used to develop a set of tolerance
specifications that, if met, would predict a drop of less than 0.05 in the MTF across
the field of view. These specifications are not only achievable for diamond-turned
and molded polymer optics, but in many cases are relaxed from the tolerances that
are typically achieved in optical shops.
To evaluate the feasibility of incorporating additional refractive surfaces
within the confined space of the IOC, a two-lens system operating at f/1 was
designed and analyzed. This design produced a sharper image over the central field
of view in comparison with a single lens IOC optical system, and was slightly less
sharp in the peripheral field of view. Since the single lens design satisfies the
relaxed imaging requirements of the retinal prosthesis application, the improvement
in imaging obtained with the multiple lens system may not be worth the associated
increase in manufacturing cost and alignment complexity.
The chapter concludes with an analysis of the wide field, single lens IOC
design over an extended wavelength spectrum that was weighted to approximate the
combination of the spectral transmittance of the cornea and the spectral response of a
typical color CMOS imaging sensor. With this extended spectrum, the imaging
performance of the lens, which was originally designed using a narrower, photopic
wavelength spectrum, degrades over the central ±10° field of view. The lens system
was therefore reoptimized using the extended wavelength spectrum, resulting in a
design with imaging performance that meets or exceeds the requirements for a
32 × 32 retinal prosthesis system over the central field of view. The performance of
the reoptimized design shows a slightly lower MTF on-axis due to chromatic
aberration (in comparison with the design that was optimized using a photopic
spectrum). The next chapter will explore the possibility of incorporating diffractive
optical elements into the IOC optical system to correct for chromatic aberration (as
well as other aberrations) without adding size or mass to the lens system.
Chapter 6
INCORPORATION OF DIFFRACTIVE LENSES
6.1 Introduction
In this chapter, we explore the addition of diffractive optical (lens) elements
to the optical imaging system of the intraocular camera. Since diffractive lenses are
typically constructed with wavelength-deep surface reliefs on planar or curved
surfaces, they offer the potential for further reducing the size and mass of the IOC.
Their thin size permits us to examine lens arrangements and lens combinations that
are not feasible in a purely refractive system. These additional degrees of freedom
may also possibly be exploited to achieve improved imaging performance across the
field of view.
The specific class of diffractive optical elements that are treated in this
chapter can be categorized as rotationally symmetric kinoform lenses. To first order,
a kinoform lens is a thin diffractive optical element with a phase function that is
derived from an equivalent (spherical or aspherical) thin lens. The diffractive
surface relief of a kinoform lens maintains curvature within each individual segment,
and hence ought to behave similarly to the bulk aspherical lens from which it was
derived. In this sense, kinoforms sit at the boundary between bulk lenses and
more general diffractive optical elements, as the limiting case of both. However,
because kinoform lenses are thin, they do not behave identically to their refractive
counterparts in cases where the equivalent bulk lenses would be thick. Also, the
aberrational properties of kinoform lenses are known to be somewhat different from
those of a refractive thin lens. For example, studies of the aberrational properties of
kinoform lenses reveal that the third-order (Seidel) coefficients for distortion and
Petzval field curvature are zero for a planar diffractive lens [1, 2], which is certainly
not the case for a thin refractive lens.
It would also appear that kinoform lenses would not have any additional
degrees of freedom relative to the aspherical lenses from which they are derived and
that, in this sense, they lack some of the key flexibility afforded to true diffractive
optical elements. However, for diffractive optical elements that are designed to
perform a lensing function, kinoforms are theoretically able to produce the highest
possible throughput into the focal spot (in theory, kinoforms obtain 100% diffraction
efficiency at the design wavelength). This implies that kinoforms are generally
preferable for imaging applications, like the IOC, because any stray light that is
produced by a diffractive optical element with less than 100% efficiency will reduce
the contrast in the image. In a more general diffractive optical (lens) element, the
phase levels are typically quantized to a discrete number of values, which leads to
lower diffraction efficiencies. This is true even if the encoding of the discrete phase
values for a diffractive lens is computer optimized, rather than determined by a
direct step approximation to the nearest related kinoform [3].
In summary, due to the large amount of space that bulk lenses require, a
purely refractive IOC lens design is limited to a single refractive lens. If
several bulk lenses could be used instead, as in other high performance optical
systems, the field could be flattened and chromatic aberration corrected more
effectively. Thus, the goal here is to investigate the use of kinoform lenses as
a first, best-case approximation to multiple diffractive optical elements (DOEs)
acting as manufacturable multi-lens systems in place of multiple bulk lenses.
Two options for incorporating kinoform lenses into the IOC are considered
herein. The first is a hybrid refractive/diffractive lens design in which a kinoform
lens is used in combination with a refractive lens to provide additional degrees of
freedom for aberration correction without adding size or mass to the system. This
design configuration uses a low-power kinoform lens and is best suited for correction
of chromatic aberration, though small corrections of other aberrations are also
possible. The second option considered is a purely diffractive optical system. The
severe chromatic dispersion of diffractive optical elements typically prevents a
purely diffractive optical system from being an option for visible light imaging
applications. However, by combining a wide field diffractive lens configuration,
which was originally introduced for monochromatic applications, with a higher-order
kinoform lens (discussed in more detail below) to enable polychromatic operation,
an optical system with more than adequate visible light imaging performance for the
IOC is obtained. Specific designs and analyses of these two diffractive lens
configurations for the IOC are presented in Sections 6.3 and 6.4, respectively,
following a background section on the relevant theory of diffractive lenses.
6.2 Kinoform lens theory and modeling
A kinoform lens is a thin diffractive optical element with a surface relief
profile that is designed to modulate the phase of an incoming wave in order to bring
it to a focus [4]. The reduction of a refractive lens to a diffractive kinoform lens is
illustrated in Figure 6-1.
This figure illustrates how a diffractive kinoform lens can be constructed by
first dividing a refractive lens into slices that increase in relative phase from 0 to 2π,
and then removing portions of the lens material in multiples of 2π phase such that all
of the remaining segments vary in phase from 0 to 2π. Since coherent wave
interference effects are governed only by the phase difference between waves,
additional phase differences of 2π are inconsequential, and the kinoform lens will
focus light with 100% efficiency just like the refractive lens (for the case of
normal incidence).

Figure 6-1 Reduction of a refractive lens to a kinoform by removing multiples of
2π phase at the design wavelength, λ_0.

From another point of view, the emerging wavefront from both the
refractive lens and the kinoform lens will be identical, and as such, indistinguishable
at the design wavelength.
Following from the concept of phase-based Fresnel zone plates, each facet of
the surface relief profile occupies a separate zone [5]. The zone boundaries are
chosen such that the optical path length from the axial focal point to the edge of each
subsequent zone differs by an integer multiple of the design wavelength, as shown in
Figure 6-2 below. The image forming properties of the kinoform lens result from the
coherent superposition of the portions of the incident wavefront that pass through
each zone. The maximum relief depth of each zone is typically set to introduce a
phase delay of 2π radians at the design wavelength, λ_0. The kinoform concept thus
allows one to make exceptionally thin and lightweight lenses. Kinoform lenses are
also known in the literature as phase Fresnel lenses, or, simply as diffractive lenses
[5]. In this thesis, the terms kinoform lens, diffractive lens, and kinoform are used
interchangeably.
Since the behavior of kinoform lenses is governed by coherent interference
effects, their focal properties are wavelength dependent. A kinoform lens with a
maximum phase depth of 2π radians will behave identically to its refractive
counterpart only at the design wavelength (and its harmonics). At other
wavelengths, the zones provide some fraction of a wave in phase delay, resulting in a
change of focal length that is directly proportional to the change in wavelength. This
is quite different from the case of a refractive lens, for which the focal length at
different wavelengths varies more gradually with the wavelength dispersion of the
refractive index of the lens material. Even for the most dispersive flint glasses
(V_d ≈ 20), the longitudinal chromatic aberration of a refractive lens is about seven
times less than that of a diffractive lens [2]. As shown in the next section, this fact
makes diffractive lenses an excellent choice for chromatic correction of a refractive
lens system, but often prohibits their consideration for purely diffractive imaging
applications that have an appreciable optical bandwidth.
The maximum phase depth of the kinoform zones can also be set to an
integer multiple of 2π radians. In this case, the kinoform is termed a higher-order
diffractive lens. This type of lens exhibits interesting polychromatic properties that
may be advantageous for the IOC, and will be covered further in Section 6.4.
Kinoform lenses may be fabricated using various mechanical, lithographic, or
direct writing techniques, such as diamond turning, grayscale lithography, laser
direct writing, electron beam lithography, and electron beam direct writing. An
excellent review of these and other fabrication methods is presented by Fleming et
al. [6]. Further details and examples of individual methods may be found in
references [5, 7-17]. The choice of fabrication method for a given kinoform design
primarily depends on the required substrate material, minimum zone size, and
surface relief depth.
The lithographic techniques of binary optics may also be used to form a
staircase approximation to the continuous relief profile of a kinoform lens, at a cost
in the diffraction efficiency of the focal spot. For example, the diffraction
efficiency of a two-level approximation to a kinoform lens is limited to 40.5%.
This increases to a theoretical maximum of 81% for four levels and 99% for
16 levels [5, 18]. These maximum efficiencies assume that a constant number of
levels is used across the entire lens diameter. A more practical encoding
approach for manufacturing a multi-level diffractive lens, however, is to use a
larger number of levels near the lens center and fewer levels near the edge [19],
especially given that the zones in a diffractive lens generally decrease in size
from the center to the edge of the lens.
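These N-level efficiencies follow from the standard staircase result
η_N = sinc²(1/N) at the design wavelength. A quick numerical check (a sketch,
not code from this thesis):

```python
import math

def sinc(x: float) -> float:
    """Normalized sinc, sin(pi x) / (pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def staircase_efficiency(n_levels: int) -> float:
    """First-order diffraction efficiency of an N-level staircase
    approximation to a kinoform at the design wavelength."""
    return sinc(1.0 / n_levels) ** 2

for n in (2, 4, 16):
    print(f"{n:>2} levels: {100 * staircase_efficiency(n):.1f}%")
# 2 levels -> 40.5%, 4 levels -> 81.1%, 16 levels -> 98.7% (quoted as 99%)
```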
The performance of a diffractive lens depends primarily on the geometry of
the surface relief profile rather than on the material’s refractive index. The local
periods of the zones determine the diffraction angles, and thus, the focal length of the
lens; whereas the surface shapes and phase depths of the zones determine the
diffraction efficiency.
The locations of the zone boundaries for a single-order kinoform lens are
determined by requiring that the optical path length from the axial focus to the
boundary of the qth zone is equal to f_0 + qλ_0, in which f_0 is the focal
length at the design wavelength λ_0, as shown in Figure 6-2. Solving for r_q
from the geometry shown in the diagram leads to the following equation defining
the radial transition points for the kinoform surface relief [20]:

    r_q² = 2qλ_0 f_0 + (qλ_0)² ,    (6-1)

in which q = 0, 1, 2, …. In the paraxial region, qλ_0 << f_0, and this reduces to

    r_q,paraxial = sqrt(2qλ_0 f_0) .    (6-2)

At the edge of the lens, where r_max = D/2, it can be shown that the minimum
zone size, Δr_min, can be approximated as [2]

    Δr_min ≈ 2λ_0 (f/#) .    (6-3)

In the visible spectrum, λ_0 ≈ 0.5 μm, and the minimum zone size in microns is
therefore approximately equal to the f-number of the lens. The implication of
this for the IOC, which operates at or near f/1, is that the zones would be on
the order of 1 μm wide near the edge of the lens, assuming that a conventional
kinoform was used directly in place of the refractive lens. This places a
severe constraint on the manufacturing options for the lens. A solution to this
problem is to use a higher-order kinoform with zones that are multiple
wavelengths deep. This option is considered for a purely diffractive
implementation of the IOC in Section 6.4.

Figure 6-2 Determination of the zone boundaries for a kinoform lens with focal
length f_0 at the design wavelength, λ_0.
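Equations (6-1) through (6-3) can be illustrated numerically. The sketch below
assumes the IOC-like values f_0 = 2.1 mm and λ_0 = 0.5 μm with an f/1 aperture
(illustrative parameters, not a specific design from this chapter):

```python
import math

lam0 = 0.5e-6   # design wavelength, m
f0 = 2.1e-3     # focal length, m
D = 2.1e-3      # aperture diameter for an f/1 lens, m

def zone_radius(q: int) -> float:
    """Exact radius of the q-th zone boundary, Eq. (6-1)."""
    return math.sqrt(2 * q * lam0 * f0 + (q * lam0) ** 2)

# Number of full zones inside the aperture: largest q with r_q <= D/2.
q_max = 0
while zone_radius(q_max + 1) <= D / 2:
    q_max += 1

# Outermost (minimum) zone width, and the approximation of Eq. (6-3).
dr_min = zone_radius(q_max) - zone_radius(q_max - 1)
dr_approx = 2 * lam0 * (f0 / D)          # Eq. (6-3): ~ 2 lambda_0 (f/#)

print(f"zones: {q_max}, outermost width {dr_min*1e6:.2f} um, "
      f"Eq. (6-3) estimate {dr_approx*1e6:.2f} um")
```

The outermost zone comes out on the order of 1 μm wide, consistent with the
manufacturing constraint discussed above.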
The transmission function of a kinoform lens can be derived from the
non-paraxial phase function of a thin lens that transforms an incoming plane
wave into a spherical wave converging toward the focal point, f_0 [21],

    φ_λ0(r) = −(2π/λ_0) [ (f_0² + r²)^(1/2) − f_0 ] .    (6-4)

In the paraxial region, f_0 >> r, and this reduces to

    φ_paraxial(r) = (2π/λ_0) ( −r² / (2 f_0) ) .    (6-5)

In commercial optical design software, such as Code V®, a binomial expansion in
r² of the non-paraxial phase function in Equation (6-4) is often used for the
design of a rotationally symmetric kinoform lens,

    φ_λ0(r) = (2π/λ_0) ( a_1 r² + a_2 r⁴ + a_3 r⁶ + … ) .    (6-6)

Using this equation to define the phase function of the lens, the coefficients,
a_i, can be optimized for a given lens design [22]. In the paraxial region, all
of the terms above the quadratic term, a_1 r², are zero. By comparison with
Equation (6-5), we can see that the effective focal length of the kinoform lens
is inversely related to the coefficient of the quadratic term by
f_0 = −1/(2 a_1). Thus, the a_1 term is easily derived from the effective focal
length of the lens.
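The agreement between the exact phase of Equation (6-4) and the quadratic term
of Equation (6-6) with a_1 = −1/(2f_0) is easy to verify numerically; this
sketch uses illustrative values (f_0 = 2.1 mm, λ_0 = 0.5 μm), not a design
quoted in the text:

```python
import math

lam0 = 0.5e-6   # design wavelength, m
f0 = 2.1e-3     # focal length, m

def phi_exact(r: float) -> float:
    """Non-paraxial thin-lens phase, Eq. (6-4)."""
    return -(2 * math.pi / lam0) * (math.hypot(f0, r) - f0)

def phi_poly(r: float) -> float:
    """Quadratic term of the polynomial phase, Eq. (6-6), with a1 = -1/(2 f0)."""
    a1 = -1.0 / (2 * f0)
    return (2 * math.pi / lam0) * a1 * r ** 2

# Near the axis the two agree closely; far from the axis the higher-order
# terms a2, a3, ... of Eq. (6-6) would be needed.
for r in (10e-6, 100e-6, 1.0e-3):
    rel_err = abs(phi_poly(r) - phi_exact(r)) / abs(phi_exact(r))
    print(f"r = {r*1e6:7.1f} um, relative error of quadratic term: {rel_err:.2e}")
```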
As mentioned previously, we can also construct the kinoform using an integer
number, p, of 2π phase shifts per zone, with the single-order (or
“conventional”) case corresponding to p = 1. A comparison of a single-order and
a 5th-order construction of a kinoform lens is shown in Figure 6-3. Because one
of the cases described in this chapter employs higher-order diffractive lenses
for use in the IOC (p > 1), the variable p is included in the following
equations for the depth profile and transmission function of a kinoform lens.

Figure 6-3 Comparison of a conventional, single-order diffractive lens with a
higher-order version of the same lens that is constructed with zones that are
p = 5 waves deep.

The physical depth profile of the kinoform lens is obtained by performing a
modulo 2πp operation on the phase function, and then converting to a physical
path difference. This yields the following equation describing the surface
relief profile of the kinoform:

    d_p(r) = ( λ_0 / (2π Δn(λ_0)) ) [ φ_λ0(r) mod 2πp ] ,    (6-7)

in which Δn(λ) = n(λ) − 1 and p = 1, 2, 3, …. This expression assumes that the
kinoform is fabricated in a material with refractive index n(λ_0) at the design
wavelength, and is surrounded by air (n_air = 1). The maximum relief depth of
each zone occurs when the phase is equal to 2πp,

    d_max = p λ_0 / Δn(λ_0) .    (6-8)
The transmission function of the lens at the propagating wavelength λ (which
may be different from λ_0) can then be written as

    t(r) = exp( i (2π Δn(λ)/λ) d_p(r) ) .    (6-9)

The complex field at the aperture plane is the product of the optical input
illumination field and the kinoform transmission function. The input field,
U(r), is typically modeled as a plane wave or a Gaussian beam, which may be
tilted at an angle, θ, relative to the optical axis,

    U(r) = U_0(r) exp( i 2π r sin(θ)/λ ) × exp( i (2π Δn(λ)/λ) d_p(r) ) ,    (6-10)

in which the first two factors represent the incident wave and the final factor
is the kinoform transmission function. Here U_0(r) is unity for a plane wave,
and is equal to exp(−r²/(2σ²)) for a Gaussian-amplitude incident wave with a
beam waist proportional to σ. The Gaussian beam case is often used when the
source is a laser. For the imaging problems considered herein for the IOC,
incident plane waves are assumed.
To analytically compute the diffraction efficiency of the kinoform lens as a
function of the diffraction order, m, the kinoform order, p, and the
wavelength, λ, the transmission function in Equation (6-9) can be expanded in a
Fourier series [1, 21] as

    t(r) = Σ_{m=−∞}^{∞} exp[ iπ(αp − m) ] sinc(αp − m)
           × exp[ i (2π/λ_0)(m/p) Σ_{i=1}^{I} a_i r^{2i} ] ,    (6-11)

in which m = 0, ±1, ±2, …, and α is a wavelength detuning parameter,

    α = (λ_0/λ) [ (n(λ) − 1) / (n(λ_0) − 1) ] .    (6-12)

The sinc function is defined as

    sinc(x) = sin(πx) / (πx) .    (6-13)

In the paraxial domain, only the a_1 term is nonzero and f_0 = −1/(2 a_1). The
transmission function then reduces to

    t_paraxial(r) = Σ_{m=−∞}^{∞} exp[ iπ(αp − m) ] sinc(αp − m)
                    × exp[ −iπ r² / ( λ (p λ_0/(m λ)) f_0 ) ] .    (6-14)

A comparison of Equations (6-5) and (6-14) shows that a kinoform lens has an
infinite set of focal lengths that are inversely proportional to the
propagation wavelength and diffraction order,

    f_m = ( p λ_0 / (m λ) ) f_0 .    (6-15)

The diffraction efficiency into any order, m, at wavelength, λ, is the square
of the amplitude of the mth-order coefficient in the transmission function in
Equations (6-11) and (6-14),

    η_m = sinc²(αp − m) .    (6-16)

For a higher-order kinoform lens design, p and λ_0 are construction parameters,
and Equation (6-15) shows that the set of wavelengths that satisfy mλ = pλ_0
will come to a common focus at f_0. The diffraction efficiency at this focal
point will reach unity when αp = m. The presence of the wavelength detuning
parameter, α [as defined in Equation (6-12)], in this expression accounts for
the longitudinal chromatic aberration that results from the material dispersion
in a higher-order kinoform lens, which becomes a more significant effect at
greater values of p [1].
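These spectral properties are easy to tabulate. The sketch below neglects
material dispersion (so that α ≈ λ_0/λ) and uses illustrative parameters
p = 5 and λ_0 = 0.55 μm (assumed values, not a design from this chapter) to
list the orders m whose wavelengths share the common focus f_0 per
Equation (6-15):

```python
import math

lam0 = 0.55e-6   # design wavelength, m (illustrative)
p = 5            # kinoform order (illustrative)

def sinc(x: float) -> float:
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def efficiency(m: int, lam: float) -> float:
    """Eq. (6-16) with alpha ~ lam0/lam (material dispersion neglected)."""
    alpha = lam0 / lam
    return sinc(alpha * p - m) ** 2

# Wavelengths that satisfy m * lam = p * lam0 share the focal length f0
# and reach unity efficiency there, per Eqs. (6-15) and (6-16).
for m in range(4, 8):
    lam_m = p * lam0 / m
    print(f"m = {m}: lam = {lam_m*1e9:.0f} nm, "
          f"efficiency = {efficiency(m, lam_m):.3f}")
```

For p = 5 this places several visible wavelengths (roughly 393 nm to 688 nm for
m = 7 down to m = 4) at the same focus, which is the basis of the polychromatic
behavior exploited in Section 6.4.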
For evaluation of the diffractive lens designs that will be described in later
sections, we are interested in computing the intensity distribution at the image plane
for incident plane waves at different field angles and wavelengths. To accomplish
this, we perform a numerical computation of the Rayleigh-Sommerfeld diffraction
integral [23] using a fast-Fourier-transform-based direct integration method
(FFT-DI), as described by Shen and Wang [24]. This paper shows that the Rayleigh-
Sommerfeld diffraction integral can be written as a convolution of the complex field
at the aperture plane, U(r, z = 0), and the impulse response, h(r, z), of the
linear, homogeneous, isotropic medium between the aperture and the observation
plane. The impulse response of a two-dimensional linear isotropic medium is

    h(r, z) = ( jkz / 2R ) H_1^(1)(kR) ,    (6-17)

in which R = sqrt(r² + z²), k = 2π/λ, and H_1^(1)(kR) is a first-order Hankel
function of the first kind. Note that R is the distance from a point in the
aperture at radial height, r, to an axial point located a distance, z, from the
aperture. The complex electromagnetic field can then be written as

    U(s, z) = U(r, 0) ⊗ h(r, z)
            = ∫_A U(r, 0) h(s − r, z) dr
            = ∫_A U(r, 0) ( jkz / 2R ) H_1^(1)(kR) dr ,    (6-18)

in which s is the radial coordinate at the image plane, R = sqrt((s − r)² + z²)
in this case, and the integral is computed over the aperture, A. For our
rotationally symmetric lens, the aperture extends from r = −D/2 to +D/2. As
described in Reference [24], this integral can be computed numerically using
FFTs by implementing the following equation in a computer program:

    S = IFFT[ FFT(U) · FFT(H) ] × Δs ,    (6-19)

in which IFFT denotes the inverse fast Fourier transform, U is a vector
containing discrete samples of the complex field at the aperture plane, H is a
vector containing discrete samples of the impulse response h(r, z), and Δs is
the radial sampling interval along the aperture in units of distance. The
resulting vector, S, is proportional to the complex optical field at the image
plane. The intensity of the optical field at the image plane is then
proportional to the square of the magnitude of S.
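The FFT evaluation of Equation (6-19) amounts to a discrete linear convolution
carried out with zero-padded FFTs. The sketch below is a minimal illustration of
that numerical step only, not the thesis code: it substitutes the large-argument
asymptotic form of H_1^(1) for the exact Hankel function (valid here since
kR >> 1), uses assumed geometry values, and checks the FFT result against
direct summation of the same convolution.

```python
import numpy as np

lam = 0.5e-6                    # wavelength, m
k = 2 * np.pi / lam
z = 2.1e-3                      # propagation distance, m (illustrative)
D = 0.1e-3                      # aperture width, m (illustrative)

N = 256
r = np.linspace(-D / 2, D / 2, N)        # aperture samples
ds = r[1] - r[0]

U = np.ones(N, dtype=complex)            # incident plane wave over the aperture

def h(x):
    """Impulse response of Eq. (6-17), using the asymptotic form
    H_1^(1)(v) ~ sqrt(2/(pi v)) * exp(i(v - 3 pi/4)) for v >> 1."""
    R = np.sqrt(x ** 2 + z ** 2)
    H1 = np.sqrt(2 / (np.pi * k * R)) * np.exp(1j * (k * R - 3 * np.pi / 4))
    return 1j * k * z / (2 * R) * H1

# Kernel sampled at every possible offset s - r between grid points.
x_kernel = np.arange(-(N - 1), N) * ds   # 2N - 1 samples
H = h(x_kernel)

# Eq. (6-19): zero-pad to the full linear-convolution length to avoid wrap-around.
M = N + x_kernel.size - 1                # 3N - 2
conv = np.fft.ifft(np.fft.fft(U, M) * np.fft.fft(H, M)) * ds
S = conv[N - 1:2 * N - 1]                # field at the N points s = r

# Direct summation of the same convolution, for comparison.
S_direct = np.array([np.sum(U * h(s - r)) * ds for s in r])

print("max |FFT-DI - direct| =", np.max(np.abs(S - S_direct)))
```

The FFT route and the direct double summation agree to machine precision; the
FFT version simply reduces the cost from O(N²) to O(N log N).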
The diffraction efficiency within a given spot size can be calculated by
integrating the intensity over the desired spot width and dividing by the total
intensity of the input field. This yields the absolute diffraction efficiency within the
spot, neglecting any Fresnel reflection and absorption losses. Alternatively, the
diffraction efficiency can be computed relative to the total intensity in the output
field integrated over a finite image plane. This is useful when it is desirable to
calculate the efficiency in a given spot diameter relative to the total power incident
on a sensor of a given size. The relative diffraction efficiency is useful, for example,
in an analysis of the contrast ratio on an image sensor array. In this case, we assume
that any light diffracted outside the sensor area may be neglected, either because it is
truly negligible, or because we assume it to be absorbed by the packaging walls. For
cases in which the diffraction efficiency is high, the two methods of calculation
produce essentially identical results.
6.3 Hybrid refractive/diffractive lens design
One possible method for incorporating diffractive optical elements into the
intraocular camera is to add a diffractive surface to the refractive lens system. In the
IOC, a diffractive profile could be added, for example, to the front or rear surface of
the refractive lens, or to one of the planar surfaces of the fused silica window. A lens
containing both refractive and diffractive optical elements is called a hybrid
refractive/diffractive lens, and will be referred to as a “hybrid lens” hereafter.
Hybrid lenses are often used to correct chromatic aberration in a lens system by
taking advantage of the physically opposite wavelength dispersion characteristics of
diffractive and refractive optical elements. The fundamental concept of how a
hybrid lens may be used to correct chromatic aberration is illustrated in Figure 6-4.
The positive dispersion of the refractive lens results in a shorter focal length for blue
light than for red light. In contrast, the diffractive lens has negative dispersion and
focuses red light nearer than blue light. When the two optical elements are combined
to form a hybrid lens, this chromatic dispersion can be cancelled.
Figure 6-4 Concept of color correction in a hybrid refractive/diffractive lens.
As discussed in Section 4.4, a common measure of the chromatic dispersion
of a refractive lens is the Abbe V-number, which is based on the refractive indices of
the lens material at three wavelengths within the spectrum of interest. For visible
light, the Abbe V-number is defined as,
    V_refr = ( n_d − 1 ) / ( n_F − n_C ) ,    (6-20)

in which n_d, n_F, and n_C are the indices of refraction at three
characteristic wavelengths, λ_d = 587.6 nm (helium yellow), λ_F = 486.1 nm
(hydrogen blue), and λ_C = 656.3 nm (hydrogen red). The definition of the Abbe
V-number derives from a calculation of the lens power (the inverse of the focal
length in meters) at the central wavelength divided by the difference in the
power of the lens at the short and long wavelengths [2]:

    V = P_d / ΔP = P_d / ( P_F − P_C ) .    (6-21)

This notion allows us to define an equivalent V-number for a diffractive lens.
It can be seen from Equation (6-15) that the power of a diffractive lens varies
linearly with wavelength, irrespective of the lens material. As such, the
V-number of a diffractive lens is simply the ratio of the central wavelength to
the difference between the short and long wavelengths:

    V_diff = λ_d / ( λ_F − λ_C ) = −3.45 .    (6-22)
This value is directly comparable with the refractive Abbe V-numbers. Since the
chromatic aberration of diffractive optical elements is physically opposite of the
chromatic aberration of refractive elements, the combination of these two elements
can be used to correct chromatic aberration in a lens system [25, 26]. By adding
aspherical terms to the diffractive phase profile, spherical aberration may also be
corrected. For the case of a diffractive surface in contact with the refractive lens (as
shown in Figure 6-4), the total power of the hybrid lens is the sum of the individual
lens powers:
    P_t = P_refr + P_diff,    (6-23)

in which P_t is the total power of the hybrid lens, and P_refr and P_diff are the powers of
the refractive and diffractive lenses, respectively. For an individual lens (refractive
or diffractive), the difference in the lens power at the short and long wavelengths
[ΔP in Equation (6-21) above] is a measure of the chromatic aberration of the lens.
To correct for chromatic aberration in a hybrid lens, the sum of the individual
chromatic aberrations of the lenses should cancel:
    ΔP_refr + ΔP_diff = 0.    (6-24)
By rewriting Equation (6-24) in terms of the ratios of the respective lens powers to
their Abbe V-numbers [from Equation (6-21)], we can see that in order to correct
chromatic aberration in a hybrid lens, the following equation must be satisfied:
    P_refr / V_refr + P_diff / V_diff = 0.    (6-25)
Equations (6-23) and (6-25) can thus be used to solve for the individual refractive
and diffractive focal powers required to obtain an achromatic hybrid lens with a
specified total power and with a particular lens material of a given Abbe V-number.
Since the negative dispersion of a diffractive optical element is so strong, very little
diffractive power is required to achromatize the lens. For example, the desired
power of the IOC lens is approximately 500 diopters (assuming a 2 mm focal length),
with an aperture diameter of approximately 2 mm (to obtain an f/1 system). If the lens
material is Zeonex, which has an Abbe V-number of 55.8, the required diffractive
lens power is only 29.1 diopters (f = 34.3 mm, f-number = 17.2), whereas the
required refractive lens power is 471 diopters, with a focal length of 2.1 mm and an
f-number of 1.06.
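As a check on this worked example, Equations (6-23) and (6-25) can be solved in closed form for the two component powers. The sketch below (the function name and structure are illustrative, not from the dissertation) reproduces the 29.1-diopter and 471-diopter values quoted above:

```python
# Hedged sketch: solving Eqs. (6-23) and (6-25) for an achromatic hybrid lens.
# Inputs (P_total = 500 D, Zeonex V = 55.8) are taken from the text.

def achromat_powers(p_total, v_refr, lam_d=587.6, lam_f=486.1, lam_c=656.3):
    """Return (P_refr, P_diff) in diopters for an achromatic hybrid lens."""
    v_diff = lam_d / (lam_f - lam_c)   # Eq. (6-22): about -3.45 for visible light
    # Eq. (6-25) gives P_refr = -P_diff * V_refr / V_diff; substituting into
    # Eq. (6-23), P_refr + P_diff = P_total, yields:
    p_diff = p_total / (1.0 - v_refr / v_diff)
    p_refr = p_total - p_diff
    return p_refr, p_diff

p_refr, p_diff = achromat_powers(500.0, 55.8)    # Zeonex E48R
print(f"P_refr = {p_refr:.0f} D (f = {1000/p_refr:.1f} mm)")  # 471 D, f = 2.1 mm
print(f"P_diff = {p_diff:.1f} D (f = {1000/p_diff:.1f} mm)")  # 29.1 D, f = 34.3 mm
```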
The example computed above is for the case of the two elements in contact.
The same effect can also be achieved by placing the diffractive lens elsewhere in the
optical system and modifying the design equations accordingly, or by using optical
design software to optimize the imaging performance over the desired wavelength
spectrum. For the IOC, the refractive surfaces require fairly extreme aspherical
shapes to meet the imaging requirements, so it makes sense from a manufacturing
point of view to place the diffractive surface on the rear side of the fused silica
window rather than on the aspherical lens itself. This surface is a good choice
because it is both flat and protected from the ocular environment (since it faces the
inside of the IOC package). The example computed above shows that the required
f-number of the dispersion-compensating diffractive surface is very low (~ f/17).
This means that the surface will require relatively few zones, each with a sufficiently
large width, so that the diffractive optical element can be easily manufactured as a
single-order kinoform.
The lens diagram in Figure 6-5(a) shows an optimized hybrid lens system for
the IOC with a diffractive lens placed on the rear surface of the window, coplanar
with the aperture stop. This system was optimized using the five-wavelength
spectrum described in Section 5.8 that approximates the combined response of the
cornea, aqueous humor, and a CMOS sensor with a color filter array. Aspherical
coefficients in the phase function describing the diffractive surface were permitted to
vary during the optimization. The relief profile of the resulting single-order
kinoform lens is shown in Figure 6-5(b). The lens has 17 zones, the smallest of
which is approximately 40 μm wide and 1.2 μm deep, assuming a central wavelength
of 550 nm in fused silica.
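The relief depth and zone layout of a single-order kinoform follow from simple relations: the relief depth is λ0/(n − 1), and, for a purely spherical phase profile, the m-th zone boundary falls at r_m = √(2 m λ0 f). The sketch below (names are illustrative; the paraxial zone formula only approximates the optimized aspheric surface described above, which has 17 zones) reproduces the ~1.2 μm depth quoted for fused silica at 550 nm:

```python
# Hedged sketch: first-order kinoform geometry. The depth formula
# d = lambda0 / (n - 1) reproduces the ~1.2 um relief depth quoted in the
# text for fused silica (n = 1.460 at 550 nm). Zone radii use the standard
# paraxial spherical-phase approximation r_m = sqrt(2*m*lambda0*f).
import math

def kinoform_depth(lam0, n):
    return lam0 / (n - 1.0)              # single-order (p = 1) relief depth

def zone_radii(lam0, f, r_max):
    """Zone boundary radii of a spherical-phase kinoform inside r_max."""
    radii, m = [], 1
    while True:
        r = math.sqrt(2 * m * lam0 * f)  # m-th zone boundary radius
        if r > r_max:
            break
        radii.append(r)
        m += 1
    return radii

d = kinoform_depth(550e-9, 1.460)
print(f"relief depth = {d*1e6:.2f} um")  # ~1.20 um
```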
Figure 6-5 (a) Hybrid IOC lens system with a diffractive lens placed at the location
of the 2-mm diameter aperture stop on the rear surface of the fused silica window.
The system operates at f/1 with a focal length of 2.2 mm and is optimized over a
±20° field of view. (b) Diffractive lens relief profile from the center to the edge of
the lens.
The surface parameters for this lens system are shown below in Table 6-1.
The coefficients describing the aspherical phase profile of the diffractive lens are
defined in Equation (6-6), and the coefficients for the refractive surfaces are defined
in Equation (5-1).
Surface            Type          Radius / Profile                Thickness (mm)   Material        Refractive Index (550 nm)
Anterior Cornea    Conic         R = 7.77, K = −0.18             0.500            Cornea          1.376
Posterior Cornea   Conic         R = 6.40, K = −0.60             2.000            Aqueous humor   1.336
Window Front       Plane         Infinity                        0.250            Fused silica    1.460
Window Rear        Plane,        Order m = 1, λ0 = 550 nm,       0.000            Air             1.000
                   Diffractive   a1 = −0.0118, a2 = 0.0088,
                                 a3 = −0.0233, a4 = 0.0264,
                                 a5 = −0.0097
Lens Front         Asphere       R = 1.5810, K = −0.1599,        2.572            Zeonex E48R     1.533
                                 A = 0.0116, B = −0.0486,
                                 C = 0.0549, D = −0.0249,
                                 E = 0.0033
Lens Rear          Asphere       R = −1.8489, K = −8.0000,       0.6359           Air             1.000
                                 A = 0.2167, B = −0.0779,
                                 C = −0.0254, D = 0.2127
Image Plane        —             Infinity                        −                −               −
Table 6-1 Surface parameters for the IOC system shown in Figure 6-5 when
focused at a 20 cm object distance. “Thickness” indicates the axial distance from the
vertex of the current surface to the vertex of the next surface. “Material” refers to
the optical medium following the surface. The quantities R, K, and A–E are defined
in the aspherical surface sag formula shown in Equation (5-1).
The first order properties of the hybrid lens system are shown in Table 6-2.
These properties are similar to those of the purely refractive lens systems described
in Chapter 5. With a rear nodal distance of approximately 2.1 mm, the imaging
metrics used in Chapter 5 still apply. These metrics include an MTF ≥ 0.5 at
25 lp/mm and RMS spot diameters < 30 μm across the central ±10° field of view,
and decent imaging from ±10° to ±20°.
First Order Parameter Value
Effective focal length 2.199 mm
System f-number 1.0
Entrance pupil location (Cornea to EnP) 2.232 mm
Entrance pupil diameter 2.189 mm
Exit pupil location (ExP to Image) 3.249 mm
Exit pupil diameter 3.866 mm
Front nodal distance (Cornea to NP1) 3.210 mm
Rear nodal distance (NP2 to Image) 2.153 mm
Paraxial image height (at ±20°) 0.805 mm
Actual image height (at ±20°) 0.890 mm
Image distance (lens rear to image) 0.636 mm
Table 6-2 First order optical parameters for the IOC system shown in Figure 6-5.
The size of this hybrid refractive/diffractive optical system is similar to that
of the purely refractive system shown in Figure 5-32. The optical system length of
the hybrid design is 0.05 mm shorter than the purely refractive design, but the center
thickness of the refractive lens in the hybrid design is 0.137 mm larger than in the
purely refractive design (consequently, the image distance in the hybrid design is
slightly shorter). Because the mass of the optical system is dominated by the size of
the refractive lens, the incorporation of a low-power diffractive surface in this hybrid
IOC design does not result in a significant change in the size and mass of the optical
system relative to the designs shown in Chapter 5.
The MTF and spot diagrams for the hybrid lens system, when focused at a
20 cm object distance, are shown in Figure 6-6. In comparison with the results
shown in Section 5.8 (Figure 5-33) for a purely refractive IOC design, the hybrid
system shows significant improvement in the image quality over ±15°. The RMS
spot diameters for the hybrid system are less than 20 μm over ±15°, which is 33%
smaller than the specification of 30 μm over ±10°. In comparison with the RMS spot
diameters shown in Figure 5-33 for the purely refractive lens design, the RMS spot
diameters in this hybrid design are smaller by 59%, 41%, 32%, 30%, and 5% at 0°,
±5°, ±10°, ±15°, and ±20°, respectively.
[Figure: polychromatic diffraction MTF plot (modulation versus spatial frequency,
5 to 50 lp/mm, tangential and radial curves at semi-field angles of 0°, 5°, 10°, 15°,
and 20°; wavelength weights 700 nm: 8, 650 nm: 9, 550 nm: 10, 450 nm: 6,
400 nm: 2) and spot diagrams with RMS spot diameters of 7 μm at 0°, 10 μm at ±5°,
15 μm at ±10°, 19 μm at ±15°, and 37 μm at ±20°.]
Figure 6-6 MTF and polychromatic spot diagram with RMS spot diameters for
the lens system shown in Figure 6-5 when focused at a 20 cm object distance. The
colored lines in the MTF plot represent different semi-field angles (red: 0°, green:
5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the tangential rays, and
dashed lines for the radial, or sagittal, rays. The colors in the spot diagrams
represent different wavelengths.
The MTF of this hybrid lens design is > 0.5 at 25 lp/mm for all but the
sagittal (radial) rays at 20°, and remains above 0.5 out to 50 lp/mm for the central
±5°. A comparison of the MTF values at 25 lp/mm for the hybrid and refractive
designs is shown in Table 6-3. A significant increase in both the radial and
tangential MTF at semi-field angles up to ±10° for the hybrid design can be seen in
the table, as well as a significant increase in the tangential MTF out to ±20°.
Field (MTF)        0°            ±5°           ±10°          ±15°          ±20°
                   Rad.   Tan.   Rad.   Tan.   Rad.   Tan.   Rad.   Tan.   Rad.   Tan.
Refractive IOC     0.701  0.701  0.722  0.639  0.729  0.535  0.539  0.483  0.066  0.508
Hybrid IOC         0.925  0.925  0.901  0.820  0.848  0.614  0.509  0.659  0.033  0.610
Table 6-3 Comparison of the MTF values of the radial and tangential ray fans at
25 lp/mm for the purely refractive (Figure 5-33) and hybrid refractive/diffractive
(Figure 6-6) IOC designs.
As with the lens systems described in Chapter 5, there is significant
astigmatism in the hybrid refractive/diffractive design, especially for field angles
greater than ±10°. However, with proper placement of the image plane, this
astigmatism can be exploited to flatten the field at ±15° while maintaining good
imaging at lower field angles. At ±20°, though, the astigmatism and field curvature
are too severe to be corrected. This can be seen in Figure 6-7, which shows a plot of
the polychromatic MTF at 25 lp/mm as a function of defocus position. Notice that
the optimal focus position for the on-axis rays is located near the midpoint between
the best focus positions for the sagittal and tangential rays at 15°. From 15° to 20°,
the ideal image plane curves inward toward the lens, as is usual, and the separation
between the best focus positions for the sagittal and tangential rays increases rapidly,
making it impossible to improve the image quality at 20° without severely degrading
it at smaller field angles. Nevertheless, the spot diameter and MTF at 20° are
sufficient to provide decent imaging for spatial frequencies up to approximately
15 lp/mm. This same set of circumstances occurs for the refractive designs, except
that in the hybrid system, the additional degrees of freedom afforded by the
diffractive surface result in smaller spot sizes and higher MTF values at each field
angle.
[Figure: polychromatic MTF at 25 lp/mm versus defocus position (−0.07 to
+0.07 mm), tangential and radial curves at semi-field angles of 0°, 5°, 10°, 15°, and
20°; wavelength weights 700 nm: 8, 650 nm: 9, 550 nm: 10, 450 nm: 6, 400 nm: 2.]
Figure 6-7 Polychromatic MTF at 25 lp/mm as a function of defocus distance for
the tangential and sagittal (radial) rays at each field angle. The nominal focus
position at 0.0 corresponds to an image distance of 0.6359 mm (used for the MTF
curves and spot diagrams plotted in Figure 6-6).
The improvement in aberration correction achieved by incorporating a
diffractive lens is best seen by comparing the ray intercept curves for the two
systems, as shown in Figure 6-8. In these plots, the line colors correspond to the five
weighted wavelengths used in the design spectrum. While both sets of curves take
on very similar shapes, it is clear that the hybrid IOC has far less chromatic spread
than the refractive design. The curves for the hybrid design are also flatter, in
general, than those of the refractive design, indicating that the addition of the
diffractive surface and consequent modification of the refractive surface provides
some degree of correction for other aberrations as well.
This design shows an increase in distortion at ±20° relative to the purely
refractive design shown in Figure 5-32. It can be seen in Table 6-2 that the paraxial
image height is 0.805 mm at 20°. The real chief ray for this field angle actually
intersects the image plane at 0.896 mm at 550 nm. This indicates a pincushion
distortion of 11.2%, whereas the refractive design shown in Figure 5-32 has a
distortion of 6.4%.
In relation to the principal goals for incorporating diffractive surfaces into the
IOC, namely, the potential for lighter mass, a more compact design, and improved
imaging, this hybrid system meets one of these goals. Adding a diffractive surface to
the refractive design does not reduce the size or mass of the system in any substantial
way, but it does provide additional aberration correction without adding any size or
mass. Furthermore, the diffractive surface is highly manufacturable as a kinoform
and can be fabricated on the silica window, a component that already exists in the
refractive system. Since a hybrid refractive/diffractive design has the potential to
provide additional margin between the principal IOC imaging requirements and the
performance of the optical imaging system, with little added complexity, it presents a
worthwhile design approach to pursue for future IOC prototypes.
[Figure: ray aberration (ray intercept) plots, ±0.025 mm vertical scale, for the
purely refractive IOC (tangential and sagittal fans) and the hybrid IOC (Y-fan and
X-fan) at relative field heights of 0.00 (0°), 0.24 (5°), 0.48 (10°), 0.74 (15°), and
1.00 (20°), for wavelengths of 700, 650, 550, 450, and 400 nm.]
Figure 6-8 Comparison of the ray intercept curves for the purely refractive (Figure
5-32) and hybrid (Figure 6-5) IOC lens designs. The horizontal axis is the ray height
at the entrance pupil, and the vertical axis is the ray intersection at the image plane
(the range is ±0.025 mm). The line colors correspond to different wavelengths.
6.4 Purely diffractive lens design
In the hybrid IOC design presented above, the refractive lens provides the
majority of the focusing power of the system, and is therefore just as bulky as in the
purely refractive designs presented in Chapter 5. Replacing the refractive lens in
either the purely refractive or hybrid refractive/diffractive designs with a low
f-number diffractive optical element would be a significant advance for the IOC
because of the large reduction in mass it would afford. Furthermore, because
diffractive lenses (on existing optical windows) are only a few microns thick, and
even if fabricated on a substrate are only 100 to 250 μm thick, this leaves room in the
IOC package to consider alternative lens configurations that provide better aberration
correction over the field of view.
At first glance, the idea of replacing the refractive lens in the IOC with a low
f-number, short focal length diffractive optical element is problematic. Purely
diffractive lens systems are not typically considered for visible light imaging
applications because of their extreme chromatic dispersion. Also, as shown in
Equation (6-3), conventional diffractive lenses operating at low f-numbers in the
visible spectrum have zone widths approaching the wavelength of light, which places
an increased demand on the fabrication approach, tolerances, and costs. In addition,
the assumptions of scalar diffraction theory begin to lose validity in this domain, and
a full vector-based approach is required for the optimization and analysis of the
diffractive lens system.
To circumvent these issues, we are pursuing two approaches to develop a
purely diffractive IOC lens system that exhibits both lower mass and improved
image quality over the field of view. One approach involves the incorporation of a
stratified volume diffractive optical element (SVDOE), a novel device based on a
concept originally introduced by Johnson and Tanguay [27, 28]. In this concept,
multiple planar diffracting layers are appropriately spaced and co-optimized to
obtain a single diffractive optical element that produces a desired diffraction pattern
with high efficiency [29]. Our application of this concept to the IOC is currently in
progress, and is one of the key future research directions discussed in Chapter 7.
The other approach involves the combination of a wide field diffractive lens
configuration, originally introduced for monochromatic applications, with a higher-
order kinoform lens. This enables polychromatic operation across the visible
spectrum, and eases the fabrication issues associated with low f-number diffractive
lenses. We also consider implementing the higher-order kinoform lens as an all-
diffractive achromatic compound lens, as introduced in a paper by Roncone and
Sweeney [30]. In this concept, a single-order kinoform is superimposed on the
higher-order diffractive lens to cancel the effects of material dispersion. With this
addition, the imaging performance of the IOC lens system is limited only by the
residual diffractive dispersion at non-resonant wavelengths [i.e., the wavelengths that
do not come to a common focus in a higher-order diffractive lens, as expressed in
Equation (6-15)].
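For a higher-order kinoform with p design waves of phase depth, the wavelengths that share a common focus (the "resonant" wavelengths) satisfy λ_m = p λ0 / m for integer diffraction orders m, per the relation referenced above as Equation (6-15). The following sketch is illustrative only (names are not from the dissertation); it enumerates the resonant wavelengths in the visible band for the p = 32, λ0 = 550 nm design adopted later in this chapter:

```python
# Hedged sketch: resonant wavelengths lambda_m = p*lambda0/m of a
# higher-order kinoform, restricted to a visible band of 400-700 nm.
# p = 32 and lambda0 = 550 nm match the design used later in the chapter.

def resonant_wavelengths(p, lam0_nm, band=(400.0, 700.0)):
    lams = []
    m = 1
    while p * lam0_nm / m > band[0]:         # stop once below the band
        lam = p * lam0_nm / m
        if lam <= band[1]:
            lams.append((m, lam))
        m += 1
    return lams

for m, lam in resonant_wavelengths(32, 550.0):
    print(f"m = {m}: {lam:.1f} nm")
# The resonances lie a few tens of nm apart across the visible band
# (e.g., m = 32 -> 550.0 nm, m = 33 -> 533.3 nm), so the residual
# dispersion between them stays small.
```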
6.4.1 Wide field diffractive lens configuration and monochromatic
performance
In the literature, diffractive lenses are most often analyzed under the
assumptions of on-axis, monochromatic illumination, in which the diffractive lens is
the only element in the optical system. In 1989, Buralli and Morris strayed from this
convention and introduced a lens configuration in which a remote aperture stop is
placed in the front focal plane of a diffractive lens [20]. Using the stop-shift
equations from third-order aberration theory [31, 32], they showed that this two-
element purely-diffractive lens system produces nearly diffraction limited imaging
over a relatively wide field of view at the design wavelength. They termed this
configuration a “wide field diffractive landscape lens” and showed that the
monochromatic performance of an f/5.6 implementation of this lens was superior to
that of a conventional f/5.6 Cooke triplet. This is remarkable because the Cooke
triplet is a three-element refractive lens system that serves as the classic example of a
thin lens system that contains the minimum number of design variables (degrees of
freedom) needed to fully correct all first and third order aberrations. The imaging
performance of a Cooke triplet is ultimately limited by higher order aberrations. As
such, they are known to provide good correction for apertures up to f/3.5 and for
fields of view up to ±20° [33].
The configuration of the wide field diffractive landscape lens, as it would be
applied to the IOC, is illustrated in Figure 6-9. A remote aperture stop is placed in
the front focal plane of a spherical diffractive lens. Buralli and Morris assumed a
single-order diffractive lens for their analysis and showed that this simple lens
arrangement is corrected for coma, astigmatism, and field curvature at the design
wavelength. Outside the design wavelength, the imaging performance degrades
rapidly. To obtain polychromatic operation for the IOC, the diffractive lens can be
implemented as a higher-order kinoform, as indicated in the figure. This
configuration also includes an aspherical corrector plate located at the aperture stop
to correct for spherical aberration [20]. The required aspherical corrector plate has
very low power, may be implemented with either a diffractive or refractive surface,
and will be a thin element in either case.
Placing the aperture stop in the front focal plane of the diffractive lens has
important implications for the possible focal lengths and f-numbers in an IOC design.
The previous IOC lens designs all had focal lengths of approximately 2.1 mm, with a
maximum system length of 3.5 mm, and were designed to operate at f/1. As seen in
Figure 6-9, this wide field diffractive lens configuration is 2f long, implying that a
focal length of ≤ 1.75 mm is needed to hold the overall length to 3.5 mm or less.
The finite thicknesses of the fused silica window and the diffractive lens substrate
will also have to be taken into account in determining the focal length. A shorter
focal length will result in a smaller image height and a tighter specification on the
spot diameters needed to meet the requirements of a 32 × 32 microstimulator array
covering ±10°.
Figure 6-9 Wide field diffractive lens configuration with an aspherical corrector plate
placed at the aperture stop, which is located in the front focal plane of a spherical
diffractive lens (after [20]). For the IOC application, the diffractive lens may be
constructed as a higher-order kinoform for polychromatic imaging.
The f-number is determined in part by the diameter of the aperture stop:
f/#_system = f / D_stop. The field of view is then limited by the diameter of the
diffractive lens. Note that in order to image over any finite field of view, the
f-number of the diffractive lens taken alone must always be lower than the f-number
of the system as a whole. This is because the diameter of the diffractive lens must
necessarily be larger than the diameter of the front aperture stop, as suggested in
Figure 6-9. The equation relating the f-numbers of the system and the diffractive
lens to the maximum unvignetted semi-field of view is [20]
    θ_unvignetted = arctan[(1/2)(1/(f/#_Lens) − 1/(f/#_System))].    (6-26)
The term “unvignetted” here means that the marginal rays, which just pass through
the edge of the aperture stop, are not clipped by the edge of the diffractive lens. A
calculation of the required f-numbers of the diffractive lens in order to achieve an
overall system f-number of either f/1.0, f/1.4, or f/2.0, without clipping any rays at
field angles from ±10° to ±20°, is shown in Table 6-4 (neglecting the effects of the
small divergence of rays produced by the aspherical corrector plate at the stop).
Clearly, the values for both the focal length and the f-number of the IOC need
to be reconsidered for this diffractive lens configuration. We would prefer to limit
the f-number of the diffractive lens element to f/1 or greater, as this is already low for
a diffractive imaging lens and implies relatively large angles of incidence and
diffraction. This limitation obtains partly because stray light (light that is diffracted
f/# System    f/# Lens    Unvignetted Semi-Field of View
1.0           0.74        10°
1.0           0.65        15°
1.0           0.58        20°
1.4           0.94        10°
1.4           0.80        15°
1.4           0.69        20°
2.0           1.17        10°
2.0           0.97        15°
2.0           0.81        20°
Table 6-4 Required f/# of the diffractive lens to achieve a given system f/# and
unvignetted semi-field of view, following from Equation (6-26).
into undesired orders and reduces contrast at the image plane) is more prevalent in
very low f-number diffractive lenses. An f/1 lens is also near the boundary at which
scalar diffraction theory produces accurate results for the diffraction efficiency at the
focal plane [34].
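Equation (6-26) can also be inverted to obtain the lens f-number required for a given system f-number and unvignetted semi-field of view, which is how the entries of Table 6-4 can be regenerated. A minimal sketch under that assumption (function names are illustrative):

```python
# Hedged sketch: Equation (6-26) and its inversion, used to regenerate the
# entries of Table 6-4. Function names are illustrative.
import math

def unvignetted_semifov_deg(fnum_lens, fnum_system):
    # Eq. (6-26): theta = arctan[(1/2)(1/f#_lens - 1/f#_system)]
    return math.degrees(math.atan(0.5 * (1 / fnum_lens - 1 / fnum_system)))

def required_lens_fnum(fnum_system, semifov_deg):
    # Eq. (6-26) inverted for the lens f-number at a given system f# and field
    return 1.0 / (2 * math.tan(math.radians(semifov_deg)) + 1 / fnum_system)

print(f"{required_lens_fnum(1.4, 20):.2f}")   # 0.69, matching Table 6-4
```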
According to Table 6-4, limiting the diffractive lens to f/1 implies that the
system would need to be at least f/2 to avoid vignetting over ±10° (and slightly
higher to completely avoid it over ±20°). However, with the relaxed imaging
constraints of the IOC, it might be reasonable to allow for some vignetting in the
peripheral field in order to allow for a lower system f-number. A good compromise
would be to lower the system f-number from f/2.0 to f/1.4, while maintaining the
f-number of the diffractive lens at f/1, and to accept the vignetting that will result
from 10 to 20 degrees. This is illustrated in the ray diagrams in Figure 6-10, which
compare cases with and without vignetting. Both diagrams are of a system operating
at f/1.4 with a 1.0 mm diameter aperture stop and a 1.4 mm focal length. The top
diagram shows that the diffractive lens must be f/0.7 to avoid vignetting across the
Figure 6-10 Ray diagrams illustrating the issue of vignetting in an f/1.4 wide field
diffractive lens system with a 1.4 mm focal length. (a) To achieve no vignetting, the
diffractive lens must operate at f/0.7. (b) If the diffractive lens diameter is limited so
that it operates at f/1, then significant vignetting occurs for rays incident between 10°
and 20°.
field of view (i.e., an equally filled entrance aperture for all semi-field angles). The bottom
diagram shows the vignetting that results for rays near the edge of the aperture at
incident angles from 10° to 20° when the diffractive lens diameter is limited so that it
operates at f/1 (i.e., the entrance aperture cannot be fully filled for the larger
semi-field angles). As these ray diagrams are produced from actual lens configurations
optimized in Code V®, they include the effects of the small divergence of rays from
the aspherical corrector plate.
In anticipation of the configuration most suited to the IOC optical system and
housing, the aspherical surface is placed on the rear side of a fused silica window
that is 0.25 mm thick (so that it will interface with the air inside the IOC housing,
rather than with the aqueous humor at the front of the window). The diffractive lens
is located on the rear surface of a fused silica substrate that is 0.1 mm thick. These
design thicknesses were chosen in view of the assumption that the window at the
front of the package needs additional thickness to provide sufficient structural
support and sealing capability, whereas the internal diffractive lens can be fabricated
on a thinner substrate. The distance from the aspherical surface to the diffractive
surface is equal to the focal length, 1.4 mm, and includes the 0.1 mm thickness of the
diffractive lens substrate. The total length of the system is 3.05 mm, which is 13%
shorter than in the previous refractive and hybrid refractive/diffractive IOC designs.
For the lens shown in the bottom diagram of Figure 6-10, the computed
values of the relative illumination at the image plane for the five field angles shown,
0°, 5°, 10°, 15°, and 20°, are 1.0, 1.0, 0.92, 0.77, and 0.59, respectively, assuming
uniform, diffuse illumination of the aperture stop. For the diffractive IOC design
presented below, we will adopt this configuration, thereby assuming that this level of
vignetting is tolerable. The specific parameters of the CMOS image sensor used in
the IOC will play an important role in determining if this level of vignetting is indeed
tolerable (e.g., if it can be compensated for with a field-angle-dependent gain
adjustment), or if the parameters of this lens design need to be reevaluated and
adjusted to provide more uniform image brightness across the field.
In order to adapt this wide field diffractive lens system to the IOC, the system
was optimized within a schematic eye model using Code V® optical design software
(using the same schematic eye model used for the IOC optical system designs
presented in Chapters 4 and 5, as well as in the hybrid refractive/diffractive lens
design shown previously in this chapter). The resulting design is shown in Figure
6-11(a). It is very similar to the one depicted above in Figure 6-10(b), except that
the presence of the corneal lens and aqueous humor results in small differences in the
shape of the aspherical surface and in the best focus position for the image plane.
The overall length of this f/1.4 system is 3.19 mm, which is still shorter than the
previous refractive and hybrid refractive/diffractive IOC designs. A 1-mm diameter
aperture stop is collocated with the aspherical lens on the rear surface of the IOC
window, and the clear aperture of the diffractive lens is similarly limited to 1.4 mm.
Thus, the diameter of this optical system is approximately 1 mm smaller than in the
refractive cases, potentially resulting in a significant reduction in IOC diameter and
volume.
The aspherical surface at the aperture stop may be optimized using either a
refractive or a diffractive surface (both cases were attempted and produced
essentially identical imaging results at the design wavelength). In the design shown
Figure 6-11 (a) Wide field diffractive IOC lens diagram showing rays traced
through the system at 0°, 5°, 10°, 15°, and 20°. The system operates at f/1.4 with a
focal length of 1.4 mm, and is optimized over a ±20° field of view. (b) and (c)
Physical relief profiles from the center to the edge of the lens for the aspherical
corrector plate (1 mm diameter clear aperture) and the higher-order diffractive lens
(1.4 mm diameter clear aperture).
in Figure 6-11, the aspherical surface was specified and optimized using a diffractive
phase function, as described by Equation (6-6), which explains why it appears as a
nearly flat surface in the ray diagram. The power of this surface is low enough that,
even if it is constructed with a “single zone” (such that the phase depth of the
kinoform is taken to the limit at which it becomes a completely refractive surface),
the maximum lens sag is just over 40 μm deep, as shown in Figure 6-11(b). Since
this lens is so thin, there is nothing to be gained by constructing it as a kinoform. In
fact, leaving it as a single-zone refractive surface avoids the potential for this surface
to contribute any diffractive sources of chromatic aberration or stray light to the system.
The f/1 lens, on the other hand, must be diffractive in order to be thin. The
surface relief profile of the diffractive lens, assuming that it is constructed as a
higher-order kinoform with p = 32 waves, is shown in Figure 6-11(c). The
implications of choosing different values for the parameter p will be discussed in the
next section. To obtain a focal length of 1.4 mm, the diffractive lens has a spherical
phase function with coefficient a_1 = −1/(2f) = −0.357 mm⁻¹. With p = 32, the
lens has 10 zones with a maximum relief depth of approximately 40 μm. This is too
deep to manufacture by means of direct writing or etching in photoresist, but is
amenable to manufacturing with precision micromachining techniques such as
single-point diamond turning and replication [7, 11].
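The quoted zone count and relief depth follow from the standard higher-order kinoform relations: zone boundaries at r_m = √(2 m p λ0 f), so that roughly r_max²/(2 p λ0 f) zones fit inside the aperture, and a relief depth of p λ0/(n − 1). A minimal sketch under these assumptions (variable names are illustrative):

```python
# Hedged sketch: geometry of the p = 32 higher-order kinoform described in
# the text (f = 1.4 mm, 1.4 mm clear aperture, fused silica, lambda0 = 550 nm).
import math

p, lam0, f, n = 32, 550e-9, 1.4e-3, 1.460
r_max = 0.7e-3                                   # 1.4 mm diameter aperture

a1 = -1.0 / (2 * f)                              # spherical phase coefficient, 1/m
zones = math.ceil(r_max**2 / (2 * p * lam0 * f)) # zones inside aperture,
                                                 # counting the partial outer zone
depth = p * lam0 / (n - 1.0)                     # higher-order relief depth

print(f"a1 = {a1/1e3:.3f} /mm")                  # -0.357 /mm, as in the text
print(f"zones = {zones}, depth = {depth*1e6:.1f} um")  # 10 zones, ~38 um
```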
In order to illustrate the upper performance limit of this diffractive IOC
configuration, the monochromatic MTF curves from 0° to 20° for the design
wavelength of 550 nm are shown in Figure 6-12. These curves show the nearly
diffraction limited performance of this lens design, as originally reported by Buralli
and Morris [20]. The spot diagrams plotted next to the MTF curves show that the
RMS spot diameters (which are entirely ray-based and do not account for diffraction
effects) are at or below the diffraction limit of 1.9 μm (computed as
() 2.44 # df λ = [35]). These analyses using Code V® assume the ideal case of
100% diffraction efficiency into the first diffraction order (m = 1) from the kinoform
lens. The ray tracing algorithm for diffractive surfaces in Code V® is limited to
tracing rays into one diffraction order at a time, and does not presently allow for the
[Figure: diffraction MTF of the f/1.4 diffractive IOC at 550 nm (modulation versus
spatial frequency, 5 to 50 cycles/mm, with tangential and radial curves for semi-field
angles of 0°, 5°, 10°, 15°, and 20°, and the diffraction limit overlaid), alongside spot
diagrams with RMS diameters of 1.5 μm at 0°, 1.2 μm at ±5°, 0.9 μm at ±10°,
1.7 μm at ±15°, and 1.9 μm at ±20°, all diffraction limited.]
Figure 6-12 Monochromatic MTF and spot diagram showing the RMS spot
diameters for the lens system shown in Figure 6-11 when focused at a 20 cm object
distance. The colored lines in the MTF plot represent different semi-field angles
(red: 0°, green: 5°, blue: 10°, orange: 15°, pink: 20°) with solid lines for the
tangential rays, and dashed lines for the radial, or sagittal, rays. The color in the
spot diagram represents the design wavelength of 550 nm.
analysis of multi-order, polychromatic kinoforms. This analysis will be performed
using scalar diffraction theory in the next section.
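The 1.9 μm diffraction limit quoted above follows directly from the Airy-disk relation d = 2.44 λ (f/#); a quick arithmetic check (Python, with the values from the text):

```python
# Airy-disk diameter d = 2.44 * lambda * (f/#) for the f/1.4 system
lam = 550e-9      # design wavelength (m)
f_number = 1.4    # system f-number from the text

d = 2.44 * lam * f_number
print(round(d * 1e6, 2))  # 1.88 um, i.e. the ~1.9 um quoted in the text
```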
6.4.2 Incorporation of a higher-order diffractive lens with material dispersion
compensation for polychromatic operation
The purpose of this section is to evaluate the performance of the wide field
diffractive lens configuration when the diffractive surface is constructed as a higher-
order kinoform for polychromatic operation. While the MTF curves and spot
diagrams in the preceding section indicate exceptional performance at the design
wavelength, the intraocular camera application requires analysis of how this lens
configuration performs in broadband illumination. This concept was introduced in
Section 6.2, where the corresponding equations were developed to describe the
properties, the surface relief profile, and the transmittance function of a higher-order
kinoform lens. Various aspects of the polychromatic imaging and aberrational
properties of higher-order kinoforms have been reported previously, both in the
literature [1, 36-42] and in various patents [43-46]. As can be seen from the titles of
these references, there is no consensus in the literature as to what to call these lenses,
and they are alternatively termed multi-order diffractive (MOD) lenses, harmonic
lenses, superzone lenses, modulo Nλ₀ lenses, phase-matched Fresnel lenses, and
planar Fresnel lenses with multiple phase jumps. In these previous reports, the
higher-order kinoform lens is the only element in the system, and the imaging
performance across a wide field of view is not considered. Faklis and Morris,
however, suggest (in Reference [36]) that the wide field landscape lens (from
Reference [20]) could be made polychromatic by using a multi-order diffractive lens.
The goal of this section is to extend this previous research to evaluate the visible
light imaging performance of the wide field lens system, comprising a higher-order
kinoform with parameters chosen specifically for the IOC application.
The first consideration is the choice of the construction parameter p. Using
Equations (6-12) and (6-16), the spectral properties of the diffraction efficiency of a
higher-order kinoform can be evaluated as a function of the diffraction order, m, and
the kinoform order, p, in a given substrate material with chromatic dispersion, n(λ).
For the present case, a fused silica substrate is assumed for the diffractive lens. A
plot of the refractive index of fused silica as a function of wavelength across the
visible spectrum is shown in Figure 6-13 [47]. The values for this plot were
computed using the Sellmeier equation,

n²(λ) = A + B / (1 − C/λ²) + D / (1 − E/λ²), (6-27)

with λ expressed in micrometers and with coefficients A = 1.31216, B = 0.79252,
C = 0.01100, D = 0.91169, and E = 100 for fused silica at 26 °C [47].
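A minimal numerical sketch of Equation (6-27) follows (Python; the coefficients are those quoted above, with λ in micrometers as these coefficients require):

```python
import math

def n_fused_silica(lam_um):
    """Refractive index of fused silica from the two-term Sellmeier fit
    of Equation (6-27), with wavelength in micrometers."""
    A, B, C, D, E = 1.31216, 0.79252, 0.01100, 0.91169, 100.0
    n_sq = A + B / (1 - C / lam_um**2) + D / (1 - E / lam_um**2)
    return math.sqrt(n_sq)

# Values matching the dispersion curve of Figure 6-13:
print(round(n_fused_silica(0.400), 4))  # ~1.4703
print(round(n_fused_silica(0.550), 4))  # ~1.4601 (design wavelength)
print(round(n_fused_silica(0.700), 4))  # ~1.4555
```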
The diffraction efficiency as a function of wavelength for a higher-order
kinoform on a fused silica substrate is shown in Figure 6-14 for two different values
of p. These plots show the spectral echelle characteristics of higher-order kinoform
lenses, with multiple maxima corresponding to each diffracted order. The diffraction
efficiency maxima correspond to the set of wavelengths within the spectral band at
which pα = m. Note that the falloff in diffraction efficiency away from each peak
condition is sharper for larger values of m. The peaks are spaced closer together and
also have narrower bandwidths for larger values of p and m.
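In scalar theory, the diffraction efficiency of a p-th order kinoform in order m takes the familiar sinc² form, η_m(λ) = sinc²(pα(λ) − m), with detuning parameter α(λ) = λ₀[n(λ) − 1] / (λ[n(λ₀) − 1]); this is presumably the content of Equations (6-12) and (6-16). A small Python sketch of this expression (using the Sellmeier fit quoted above for the fused silica dispersion):

```python
import math

def n_silica(lam_um):
    # Two-term Sellmeier fit for fused silica (coefficients from the text)
    A, B, C, D, E = 1.31216, 0.79252, 0.01100, 0.91169, 100.0
    return math.sqrt(A + B / (1 - C / lam_um**2) + D / (1 - E / lam_um**2))

def efficiency(lam_um, m, p, lam0_um=0.550):
    """Scalar diffraction efficiency of a p-th order kinoform in order m."""
    # Detuning parameter: alpha = 1 at the design wavelength
    alpha = (lam0_um * (n_silica(lam_um) - 1)) / (lam_um * (n_silica(lam0_um) - 1))
    x = p * alpha - m
    return 1.0 if x == 0 else (math.sin(math.pi * x) / (math.pi * x))**2

# At the design wavelength, the order m = p is fully efficient:
print(efficiency(0.550, 32, 32))        # 1.0
# Between resonances the efficiency in a fixed order falls off sharply:
print(efficiency(0.520, 32, 32))
```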
[Figure: refractive index of fused silica decreasing from approximately 1.470 at
400 nm to approximately 1.455 at 700 nm.]
Figure 6-13 Chromatic dispersion of fused silica across the visible spectrum at
26 °C (after [47]).
The best choice for the construction value p depends on the particular
application. For the IOC application, the incoming light is spread across the visible
[Figure: diffraction efficiency versus wavelength (400–700 nm) in fused silica;
panel (a) p = 13, with peaks for orders m = 11 through 17; panel (b) p = 32, with
peaks for orders m = 27 through 40.]
Figure 6-14 Diffraction efficiency as a function of wavelength and diffraction order,
m, for (a) a 13th-order kinoform lens (p = 13) and (b) a 32nd-order kinoform lens
(p = 32), in fused silica with λ₀ = 550 nm.
spectrum with a weighted distribution that is defined by the transmittance of the
cornea and the response of a CMOS sensor with a color filter array. This spectrum
has peaks near 450 nm, 550 nm, and 650 nm, so it makes sense to choose a value of
p that produces diffraction efficiency peaks at these wavelengths. The two cases
shown in Figure 6-14, for p = 13 and p = 32, meet this criterion. These two cases
allow us to evaluate the differences between choosing a lower-order or a higher-
order kinoform for the IOC.
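Neglecting material dispersion, the resonant wavelengths of a p-th order kinoform fall at λ_m ≈ pλ₀/m; a quick check (Python, an approximation since dispersion shifts these values slightly) shows that both p = 13 and p = 32 place efficiency peaks close to the 450 nm, 550 nm, and 650 nm spectral maxima noted above:

```python
# Approximate resonant wavelengths lam_m = p * lam0 / m (dispersion neglected)
lam0 = 550.0  # design wavelength in nm

for p in (13, 32):
    peaks = [p * lam0 / m for m in range(1, 101) if 400 <= p * lam0 / m <= 700]
    # Nearest resonant wavelength to each spectral maximum of interest
    nearest = {target: min(peaks, key=lambda lam: abs(lam - target))
               for target in (450, 550, 650)}
    print(p, {t: round(lam, 1) for t, lam in nearest.items()})
```

For p = 13 the nearest peaks are at roughly 447, 550, and 650 nm; for p = 32 they are at roughly 451, 550, and 652 nm, i.e., within a few nanometers of the desired wavelengths in both cases.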
For broadband imaging applications, it has been shown that the
polychromatic imaging performance improves with increasing values of p, meaning
that a thicker, higher-order kinoform is the preferred embodiment [38, 42]. This can
be seen in the polychromatic MTF curves plotted in Figure 6-15 for an f/1.4
diffractive singlet in fused silica with p = 13 and p = 32. The values for these curves
were computed using an analytical expression for the paraxial optical transfer
function (OTF) of a higher-order kinoform, as described in Reference [36]. The
OTF at each spatial frequency was integrated over the first 100 diffraction orders
(m = 1 to 100) for each of 100 equally spaced wavelengths between 400 nm and
700 nm. The magnitudes of the resulting OTF values were then averaged over
wavelength to produce the on-axis polychromatic MTF curves shown in Figure 6-15.
Comparing these curves with the monochromatic MTF plot shown in Figure 6-12
reveals how much degradation occurs due to the presence of stray light at the focal
plane from the non-resonant wavelengths in the band (i.e., the wavelengths that do
not reach a diffraction efficiency peak in Figure 6-14). These non-resonant
wavelengths focus at planes other than the nominal image plane and reduce the
overall diffraction efficiency in the focal spot. This situation improves for higher
values of p because there are more resonant wavelengths within the visible band.
Figure 6-15 Analytically computed on-axis polychromatic MTF curves for a higher-
order diffractive singlet lens with (a) p = 13 and (b) p = 32.
Another factor to consider in selecting a particular value for the kinoform
order p is the fabrication options. Since the diffractive lens needed for the IOC
operates at f/1, it is advantageous to use a large value for p because of the associated
increase in the widths of the kinoform zones. However, as the zone width increases
by a factor of p, so does the zone depth. An analysis presented by Rossi, et al.,
shows that the sensitivity of the focusing properties of higher-order kinoform lenses
to depth errors increases linearly with the value of the parameter p [41]. The
accuracy in the fabrication of the desired surface relief profile is therefore more
important for larger values of p. Nevertheless, higher-order kinoforms with relief
depths on the order of tens of microns have been successfully fabricated for imaging
applications in the past. For example, in a paper published in 1995, precision
diamond turning and replication from a mold was used to fabricate a higher-order
kinoform lens with 50-μm zone depths to an accuracy of better than 0.1 μm with an
RMS surface roughness of 45 to 55 Angstroms [30]. Another report from the same
year describes the fabrication of a 6-mm diameter, f/4.8 diffractive lens with p = 21
and 23.4-μm zone depths [42]. In this case, diamond turning was used to produce an
aluminum master from which an acrylic lens was injection-molded. This lens was
designed for visible light imaging. The paper shows qualitatively decent images of a
dollar bill and an outdoor scene produced by this lens, along with experimentally
obtained polychromatic MTF values that closely matched theoretical predictions. As
will be shown later, the required surface relief profiles of the higher-order diffractive
lenses for the IOC designs are similar in scale to these previously reported cases.
Since micromachining tools have continued to advance in the years since these
reports, manufacturing limitations are not expected to be a significant issue for the
higher-order kinoform surfaces required for the IOC.
A computer model was developed in MATLAB to evaluate the performance
of the wide field diffractive lens configuration when the diffractive surface is
constructed as a higher-order kinoform. As in the analyses above, two choices
(relevant to the IOC application) for the kinoform order were used for comparison:
p = 13 and p = 32. In this model, Equations (6-17) through (6-19) were implemented
to compute the complex field and the intensity distribution at the image plane for the
two-element system, which includes the aspherical corrector plate and the higher-
order diffractive lens (as discussed in more detail below). This is accomplished with
a repeated application of the Rayleigh-Sommerfeld diffraction integral, as described
in Section 6.2. The corneal lens, aqueous humor, and finite thicknesses of the fused
silica substrates were not included in the model for simplicity. These surfaces may
be safely neglected because the imaging performance of the system is dominated by
the combination of the aspherical surface at the aperture stop and the f/1 diffractive
lens surface. A ray diagram of the lens configuration used for this analysis is shown
in Figure 6-16.
Figure 6-16 Ray diagram of the wide field diffractive lens configuration used for
the scalar diffraction theory analyses.
The system was first optimized in Code V® to obtain the higher-order
aspherical coefficients for the aspherical corrector plate surface and to determine the
best focus distance for the image plane. These values were then used to define the
lens surfaces and the location of the image plane in the MATLAB computer model.
The surface relief profiles for the aspherical corrector plate and the f/1 diffractive
lens, when constructed as a higher-order kinoform with either p = 13 or p = 32 at a
design wavelength of 550 nm, are shown in Figure 6-17.
Figure 6-17 Surface relief profiles of the two optical elements in the system shown
in Figure 6-16: (a) the aspherical corrector plate, (b) the higher-order diffractive lens
when constructed with p = 13 waves, (c) the higher-order diffractive lens when
constructed with p = 32 waves. In each case, only half the lens diameter is shown.
The general procedure for computing the intensity distribution at the image
plane for the wide field diffractive lens system shown in Figure 6-16 is as follows:
The input field at the aperture stop is computed as the product of the complex fields
representing the incident plane wave and the phase function of the aspherical
corrector plate, at a given propagation wavelength. A sampling period of λ₀/20 is
used to numerically define this array of complex field values. Then, as described in
Section 6.2, a Rayleigh-Sommerfeld diffraction integral is computed using the FFT-
based direct integration method to determine the complex field values at the input
plane to the next surface, which is the higher-order diffractive lens (located at a
distance z = 1.4 mm from the aperture stop). This array of complex field values is
then multiplied by the transmittance function of the diffractive lens. A second
Rayleigh-Sommerfeld diffraction integral is then computed to determine the final
output field along a 2 mm wide segment of the image plane, centered on the optical
axis. The intensity distribution at the image plane is proportional to the square of the
magnitude of the complex output field. A 2-mm image plane size is used because it
is large enough to capture nearly all of the diffracted energy for the cases considered
herein, and also because it is at least as large as the photosensitive region of any
image sensor that is being considered for the IOC.
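The propagate–multiply–propagate flow described above can be sketched in one dimension. The stand-in below (Python/NumPy) uses an angular-spectrum transfer function rather than the FFT-based Rayleigh–Sommerfeld direct integration of the actual MATLAB model, and an ideal thin-lens phase in place of the kinoform transmittance; it is a structural sketch only, with the grid and aperture parameters assumed for illustration:

```python
import numpy as np

def propagate(u, dx, lam, z):
    """1-D angular-spectrum propagation over a distance z (a structural
    stand-in for the FFT-based Rayleigh-Sommerfeld direct integration
    used in the actual model)."""
    fx = np.fft.fftfreq(u.size, d=dx)
    kz = 2 * np.pi * np.sqrt((1.0 / lam**2 - fx**2).astype(complex))
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * z))

lam = 550e-9   # design wavelength (m)
dx = 0.5e-6    # sampling period (much coarser than the lam0/20 of the text)
N = 4096
x = (np.arange(N) - N // 2) * dx

# Field at the aperture stop: a unit-amplitude plane wave truncated by the
# 1 mm stop (the actual model also multiplies by the aspherical corrector
# phase function at this plane).
u = (np.abs(x) < 0.5e-3).astype(complex)

# Propagate 1.4 mm to the diffractive lens plane, then apply the lens
# transmittance: an ideal thin-lens phase with f = 1.4 mm standing in for
# the higher-order kinoform transmittance, over its 1.4 mm clear aperture.
u = propagate(u, dx, lam, 1.4e-3)
u *= np.exp(-1j * np.pi * x**2 / (lam * 1.4e-3)) * (np.abs(x) < 0.7e-3)

# Propagate to the image plane; the intensity is |U|^2.
intensity = np.abs(propagate(u, dx, lam, 1.4e-3))**2
print(int(np.argmax(intensity)) == N // 2)  # focal peak lands on the axis
```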
It should be noted that, with the kinoform facets facing the image plane, the
f/1 diffractive lens used in the present design is limited by total internal reflection
(TIR) to ±25° (meaning that rays at this angle of incidence, external to the fused
silica substrate, reach TIR at the edge of the clear aperture of the lens). Fortunately,
the ±20° field of view specified for the IOC design falls well within this limit.
For diffraction efficiency computations, an appropriate spot width must be
chosen. The focal length of this diffractive lens system is 1.4 mm, which is shorter
than the 2.1-mm focal length of the refractive and hybrid refractive/diffractive IOC
optical systems shown previously. The required spot diameters corresponding to
32 × 32 pixels over ±10° must therefore be recomputed for this case. The full image
height of this optical system is approximately 1 mm for a ±20° field of view. To
meet the IOC specification, 32 discernable spots should be contained in half this
image height. The required spot diameter for this case is therefore approximately
16 μm (0.5 mm/32), or 31 lp/mm in terms of spatial frequency.
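The spot-size bookkeeping above reduces to two lines of arithmetic (Python, with the values from the text):

```python
# 32 resolvable spots across half of the ~1 mm full image height (+/-20 deg FOV)
half_image_height_mm = 0.5
spots = 32

spot_diameter_mm = half_image_height_mm / spots          # 15.6 um, i.e. ~16 um
spatial_freq_lp_per_mm = 1 / (2 * spot_diameter_mm)      # one line pair = 2 spots

print(round(spot_diameter_mm * 1000, 1))   # 15.6
print(round(spatial_freq_lp_per_mm, 1))    # 32.0
```

The exact 15.6 μm spot corresponds to 32 lp/mm; the text's 31 lp/mm follows from the rounded 16 μm spot diameter (1/(2 × 16 μm) ≈ 31 lp/mm).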
The plots in Figure 6-18 show the computed on-axis diffraction efficiency
within a 16 μm spot diameter as a function of wavelength for the p = 13 and p = 32
wide field diffractive lens configurations. As expected, the curves in these plots
essentially follow the envelope of the analytically computed plots in Figure 6-14
except that the diffraction efficiency tails off at the short-wavelength end of the
spectrum. This is due to the material dispersion of the fused silica.
Several reports on the aberrational properties of higher-order kinoform lenses
show that there is an underlying material dispersion in these lenses that ultimately
limits their imaging performance in broadband applications, much like in their
[Figure: two panels of on-axis diffraction efficiency versus wavelength
(400–700 nm), for p = 13 and p = 32.]
Figure 6-18 On-axis diffraction efficiency within a 16 μm spot diameter as a
function of wavelength.
refractive counterparts [1, 30, 41, 42]. For example, Sweeney et al. show that the
polychromatic performance of a higher-order lens with p = 30 matches that of a
refractive lens made of the same material, when material dispersion is considered
[42]. In another paper, Roncone and Sweeney suggest that the methods used to
achromatize hybrid refractive/diffractive lenses can also be used to cancel material
dispersion in higher-order kinoforms [30]. They accomplish this by superimposing a
single-order kinoform lens onto the surface relief of a higher-order kinoform lens.
The broadband imaging performance of the resulting compound lens is then only
limited by the residual “diffractive dispersion” at the non-resonant wavelengths in
the spectrum.
This technique can be applied to our diffractive lens system to further
optimize the imaging performance across the visible band. Figure 6-19 shows a
portion of the surface relief profile of the compound surface required to achromatize
the p = 32 kinoform lens in our system. The required focal power of the dispersion-
compensating single-order kinoform was computed analytically using Equations
(6-23) and (6-25) from Section 6.3, neglecting the contribution of the aspherical
corrector plate. This is sufficient to show the general improvement obtained with
this concept, but a more careful design would include the contributions of the
aspherical plate when optimizing the dispersion-compensating surface. The
“material compensating grooves” of the compound lens are approximately 1.2 μm
deep.
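The quoted groove depth is consistent with a single-order (p = 1) kinoform, whose relief is exactly one wave of optical path at the design wavelength; a quick check (Python, with an assumed fused silica index of 1.460 at 550 nm):

```python
# Relief depth of a single-order (p = 1) kinoform: one wave of optical path
lam0 = 550e-9  # design wavelength (m)
n0 = 1.460     # assumed index of fused silica at 550 nm

depth = lam0 / (n0 - 1)
print(round(depth * 1e6, 2))  # ~1.2 um: the "material compensating grooves"
```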
Figure 6-19 Surface relief profile of a portion of the compound surface required to
achromatize the p = 32 diffractive lens in the system shown in Figure 6-16.
In the paper cited above ([30]), the authors manufactured a surface similar to
the one shown in Figure 6-19 using diamond turning and replication from a mold. A
UV-curable photopolymer with an Abbe V-number of 57.4 was used to cast the
compound lens. Their experimental measurements of the focal length of the
compound lens as a function of wavelength across the visible spectrum showed a
negligible amount of longitudinal chromatic aberration. From this data, they
computed that their compound lens had an effective V-number of 213 (in theory it
should be infinite, but minor differences between the V-number used in their design
calculations and the V-number of the actual material used in fabrication limited their
results).
The on-axis intensity distributions (i.e., the point spread functions) over a
20 μm width at the image plane for the p = 13 and p = 32 cases, with and without the
dispersion-compensating diffractive surface, are shown in Figure 6-20. The p = 13
plots show an overlay of the intensity distributions for 17 discrete wavelengths.
These correspond to the nine peak-efficiency wavelengths and eight minimum-
efficiency wavelengths within the visible band for the 13th-order kinoform (p = 13).
Likewise, the p = 32 plots include the point spread functions for the 40 wavelengths
corresponding to the efficiency maxima and minima within the visible band for the
32nd-order kinoform. The curves are plotted with different line colors to aid in
distinguishing the curves that nearly overlap; a legend is not included due to the
Figure 6-20 Comparison of the on-axis point spread functions for the p = 13 and
p = 32 wide field diffractive lens configurations, with and without the addition of a
material dispersion compensating surface. Plots for the individual wavelengths are
overlaid. The set of wavelengths plotted corresponds to the maximum and minimum
diffraction efficiencies within the visible band (17 wavelengths for the p = 13 case
and 40 wavelengths for the p = 32 case).
large number of curves plotted. In the cases without dispersion compensation, the
highest intensity peak occurs at the design wavelength of 550 nm. At this
wavelength, the nearly diffraction limited performance of this lens configuration can
be seen. The full width between the first zeros in the point spread function at
550 nm is approximately 2 μm. For the p = 13 case, no other wavelengths in the
visible band come to a focus in the same plane as the design wavelength because of
material dispersion. The addition of a dispersion-compensating diffractive surface
improves the situation significantly, as seen in the upper right plot in the figure. In
this plot, it is clear that the power in the sidelobes of the point spread functions is
reduced as the various point spread functions come into a common focus with the
design wavelength. A similar improvement is seen for the p = 32 case. The plots
also show that the higher-order p = 32 lens system produces slightly narrower point
spread functions than the p = 13 lens system. In either case, it is remarkable to note
that a majority of the intensity appears to be focused within a ±4 μm width at the
image plane.
Plots of the polychromatic point spread functions for plane waves incident at
field angles of 0°, 10°, and 20° are shown in Figure 6-21 for the p = 32 case, with
and without material dispersion compensation. The purpose of this series of plots is
to evaluate the imaging performance of the wide field diffractive lens configuration
across the field of view. To obtain these plots, the point spread functions were
averaged over wavelength and spectrally weighted using the set of wavelengths and
weights shown in Section 5.8 that represents the combined response of the cornea
Figure 6-21 Comparison of the polychromatic point spread functions as a function of
field angle for the p = 32 diffractive lens configuration, for cases with and without the
addition of a material dispersion compensating surface.
and a CMOS sensor with a color filter array. The plots were normalized after
averaging. The reduction of the sidelobe levels in the point spread functions when
the dispersion compensating surface is added is even more apparent in these figures.
The most remarkable feature of this lens configuration, relative to all the previous
IOC designs considered, is the consistency of the imaging performance over the
entire ±20° field of view. There is an asymmetry in the point spread functions at off-
axis field angles that becomes more apparent as the semi-field angle increases, but it
is not significant for the IOC application because the optical power is still well
contained within a small spot diameter.
The diffraction efficiencies within a 16 μm spot diameter relative to the total
power across the 2-mm image plane for the cases shown in Figure 6-21 are listed in
Table 6-5.
Diffraction Efficiency in a 16 μm Spot Diameter

Field Angle    Without Dispersion Compensation    With Dispersion Compensation
0°             82%                                82%
10°            82%                                81%
20°            79%                                78%

Table 6-5 Diffraction efficiencies within a 16 μm spot diameter for the cases plotted
in Figure 6-21.
The results presented in Table 6-5 are important because they show that, for a
16 μm spot diameter, the efficiency is essentially the same for cases with and
without dispersion compensation. The implication of this for the IOC is that it may
not be worthwhile after all to add the complexity of superimposing a dispersion
compensating surface on the higher-order diffractive lens.
While the spot diameters and efficiencies relative to the image plane for this
diffractive lens configuration remain fairly constant over the full field of view, the
effects of vignetting must still be considered. Recall that in order to limit the
f-number of the diffractive lens to f/1 in the f/1.4 wide field diffractive lens system,
vignetting was allowed to occur from ±10° to ±20°. This implies that the total power
contained within a given spot diameter falls off with increasing semi-field angle.
This cannot be seen in the normalized point spread functions shown above.
However, the amount of vignetting can be determined by comparing the total power
contained in a 16 μm spot diameter at each field angle relative to the total power
present at the input aperture of the lens system. On axis, 99% of the input power is
present in a 16-μm diameter spot at 550 nm. At 10°, this drops to 89%, and at 20°,
66% of the input power is present in the focal spot. This is actually quite acceptable
given that the vignetting is only 11% at ±10°, which is the field of view that
corresponds to current retinal microstimulator arrays. For comparison, as much as
30% to 40% vignetting at the edge of the field of view is often allowed in
photographic and projection systems in order to obtain smaller, thinner lenses in the
optical design [48].
6.5 Summary
In this chapter, two options were presented for incorporating diffractive
optical elements into the IOC. The first case was a hybrid refractive/diffractive
system in which a low focal power diffractive lens surface was added to the fused
silica optical window in an otherwise refractive IOC optical system design. The
refractive element provided the majority of the focusing power in this hybrid
refractive/diffractive optical system, and so the size and mass remained essentially
the same as in the purely refractive designs shown in Chapter 5. However, the
addition of the diffractive surface provided significant correction of chromatic
aberration and some minor correction of other aberrations across the field of view.
The optimized diffractive surface in this design had relatively large and shallow zone
widths and would easily be manufacturable as a kinoform lens. Since this hybrid
refractive/diffractive optical system has the potential to improve the image quality of
the IOC without adding any size or mass to the system, it would be a worthwhile
option to pursue for future prototype designs.
The remainder of the chapter focused on a wide field, polychromatic
diffractive lens configuration that was adapted to the IOC from a combination of
several prior ideas from the literature. Using a higher-order kinoform in the
diffractive wide field landscape lens configuration originally conceived by Buralli
and Morris for monochromatic applications, it was shown that excellent multi-
wavelength imaging can be obtained across a ±20° field of view at f/1.4. The system
was evaluated for both a 13th-order and a 32nd-order kinoform. These analyses
indicated that the 32nd-order kinoform was preferable to the 13th-order kinoform
because it produces a larger number of phase-matched wavelengths within the visible
band while still remaining manufacturable. A discussion of previously reported
experimental results indicated that the higher-order kinoforms presented in this
chapter could be made to sufficiently high precision using diamond turning and
replication from a mold. Numerical computations of a Rayleigh-Sommerfeld
diffraction integral were used to model the system and showed that the diffraction
efficiency is approximately 80% within a 16 μm spot size across the ±20° field of
view, relative to the total power on a 2-mm wide imaging plane. The idea of
constructing a compound diffractive lens in which a single-order diffractive surface
is superimposed on a higher-order kinoform was also considered to correct for
material dispersion in the system. While this technique proved to be effective for
bringing the various wavelengths across the visible band to a tighter common focus,
the efficiency remained unchanged over the 16-μm spot size of interest for the IOC.
It was therefore concluded that the additional complexity of superimposing a single-
order kinoform surface onto the higher-order kinoform is unnecessary for the IOC
application.
A tolerable amount of vignetting was permitted in the wide field diffractive
lens system in order to limit the aperture of the diffractive lens (taken alone) to f/1.
The resulting lens system had an effective focal length of 1.4 mm and a system
f-number of f/1.4. The total length of the wide field diffractive lens system, when
optimized for placement in the eye, was approximately 3.2 mm. The largest
diameter element in the optical system is the diffractive lens, with a clear aperture
diameter of 1.4 mm. Altogether, this optical system is nearly 1 mm smaller in
diameter and approximately 0.3 mm shorter than the refractive and hybrid
refractive/diffractive IOC designs. Since the entire wide field diffractive lens system
requires only two thin fused silica flats, the total mass is expected to be less
than 6 mg (from a calculation using the volume and density of the fused silica optical
flats). In conclusion, the wide field diffractive IOC lens design has the potential to
provide significantly better, and more consistent imaging across a ±20° field of view
in an extremely compact, lightweight package that can be fabricated using
commercially available techniques.
Chapter 6 References
[1] L. N. Hazra and C. A. Delisle, “Higher order kinoform lenses: diffraction
efficiency and aberrational properties,” Optical Engineering, vol. 36, p. 1500,
1997.
[2] D. C. O’Shea, T. J. Suleski, A. D. Kathman, and D. W. Prather, “Diffractive
lens design,” in Diffractive Optics: Design, Fabrication, and Test,
Bellingham: SPIE Press, 2004, Chapter 4, pp. 57-82.
[3] W. H. Welch, J. E. Morris, and M. R. Feldman, “Iterative discrete on-axis
encoding of radially symmetric computer-generated holograms,” Journal of
the Optical Society of America A, vol. 10, no. 8, pp. 1729-1738, 1993.
[4] J. A. Jordan Jr, P. M. Hirsch, L. B. Lesem, and D. L. Van Rooy, “Kinoform
lenses,” Applied Optics, vol. 9, no. 8, pp. 1883-1887, 1970.
[5] D. C. O’Shea, T. J. Suleski, A. D. Kathman, and D. W. Prather, Diffractive
Optics: Design, Fabrication, and Test, Bellingham: SPIE Press, 2004.
[6] M. B. Fleming and M. C. Hutley, “Blazed diffractive optics,” Applied Optics,
vol. 36, no. 4, pp. 4635-4643, 1997.
[7] F. v. Hulst, P. Geelan, A. Gebhardt, and R. Steinkopf, “Diamond tools for
producing micro-optic elements,” Industrial Diamond Review, pp. 58-62,
March, 2005.
[8] V. P. Korolkov, R. K. Nasyrov, and R. V. Shimansky, “Zone-boundary
optimization for direct laser writing of continuous-relief diffractive optical
elements,” Applied Optics, vol. 45, no. 1, pp. 53-62, 2006.
[9] L. Li, A. Y. Yi, C. Huang, D. A. Grewell, A. Benatar, and Y. Chen,
“Fabrication of diffractive optics by use of slow tool servo diamond turning
process,” Optical Engineering, vol. 45, p. 113401, 2006.
[10] C. Ribot, P. Lalanne, M. S. L. Lee, B. Loiseaux, and J. P. Huignard,
“Analysis of blazed diffractive optical elements formed with artificial
dielectrics,” Journal of the Optical Society of America A, vol. 24, no. 12, pp.
3819-3826, 2007.
[11] M. Riedl, “On point,” Optical Engineering, pp. 26-29, July, 2004.
[12] M. Rossi and I. Kallioniemi, “Micro-optical modules fabricated by high-
precision replication processes,” Diffractive Optics and Micro-Optics, vol.
75, pp. 108-110.
[13] M. Shan and J. Tan, “Modeling focusing characteristics of low F-number
diffractive optical elements with continuous relief fabricated by laser direct
writing,” Optics Express, vol. 15, no. 25, pp. 17032-17037, 2007.
[14] T. J. Suleski and R. D. T. Kolste, “Fabrication trends for free-space
microoptics,” Journal of Lightwave Technology, vol. 23, no. 2, pp. 633-646,
2005.
[15] M. R. Wang and H. Su, “Laser direct-write gray-level mask and one-step
etching for diffractive microlens fabrication,” Applied Optics, vol. 37, no. 32,
pp. 7568-7576, 1998.
[16] J. W. Sung, H. Hockel, J. D. Brown, and E. G. Johnson, “Development of a
two-dimensional phase-grating mask for fabrication of an analog-resist
profile,” Applied Optics, vol. 45, no. 1, pp. 33-43, 2006.
[17] C. G. Blough, M. Rossi, S. K. Mack, and R. L. Michaels, “Single-point
diamond turning and replication of visible and near-infrared diffractive
optical elements,” Applied Optics, vol. 36, no. 20, pp. 4648-4654, 1997.
[18] G. J. Swanson, “Binary optics technology: The theory and design of multi-
level diffractive optical elements,” Technical Report 854 (MIT Lincoln
Laboratory, Cambridge, MA), 1989.
[19] M. Kuittinen and H. P. Herzig, “Encoding of efficient diffractive
microlenses,” Optics Letters, vol. 20, pp. 2156-2158, 1995.
[20] D. A. Buralli and G. M. Morris, “Design of a wide field diffractive landscape
lens,” Applied Optics, vol. 28, no. 18, pp. 3950-3959, 1989.
[21] D. A. Buralli, G. M. Morris, and J. R. Rogers, “Optical performance of
holographic kinoforms,” Applied Optics, vol. 28, no. 5, pp. 976-983, 1989.
[22] “Entering surface shape and position,” in Code V® 9.70 Reference Manual,
Optical Research Associates, Pasadena, CA, 2006, Chapter 4.
[23] J. W. Goodman, “Foundations of scalar diffraction theory,” in Introduction to
Fourier Optics, 3rd Ed., Greenwood Village: Roberts & Company
Publishers, 2004, Chapter 3, pp. 46-49.
[24] F. Shen and A. Wang, “Fast-Fourier-transform based numerical integration
method for the Rayleigh-Sommerfeld diffraction formula,” Applied Optics,
vol. 45, pp. 1102-1110, 2006.
[25] N. Davidson, A. A. Friesem, and E. Hasman, “Analytic design of hybrid
diffractive-refractive achromats,” Applied Optics, vol. 32, no. 25, pp. 4770-
4774, 1993.
[26] T. Stone and N. George, “Hybrid diffractive-refractive lenses and
achromats,” Applied Optics, vol. 27, no. 14, pp. 2960-2971, 1988.
[27] R. V. Johnson and A. R. Tanguay Jr., “Stratified volume holographic optical
elements,” Optics Letters, vol. 13, no. 3, pp. 189-191, 1988.
[28] G. P. Nordin, R. V. Johnson, and A. R. Tanguay Jr., “Diffraction properties
of stratified volume holographic optical elements,” Journal of the Optical
Society of America A, vol. 9, no. 12, pp. 2206-2217, 1992.
[29] D. M. Chambers and G. P. Nordin, “Stratified volume diffractive optical
elements as high-efficiency gratings,” Journal of the Optical Society of
America A, vol. 16, no. 5, pp. 1184-1192, 1999.
[30] R. L. Roncone and D. W. Sweeney, “Cancellation of material dispersion in
harmonic diffractive lenses,” in Diffractive and Holographic Optics
Technology II, vol. 2404. San Jose, CA, USA: SPIE, 1995, pp. 81-88.
[31] W. T. Welford, “Calculation of the Seidel aberrations,” in Aberrations of
Optical Systems, Bristol: Hilger, 1986, Chapter 8, pp. 148-152.
[32] W. J. Smith, “Stop shift equations,” in Modern Lens Design, 2nd Ed., New
York: McGraw-Hill, 2005, Chapter 24, pp. 600-601.
[33] W. T. Welford, “Thin lens aberrations,” in Aberrations of Optical Systems,
Bristol: Hilger, 1986, Chapter 12, pp. 226-236.
[34] J. Liu, B. Y. Gu, J. S. Ye, and B. Z. Dong, “Applicability of improved
Rayleigh-Sommerfeld method 1 in analyzing the focusing characteristics of
cylindrical microlenses,” Optics Communications, vol. 261, no. 1, pp. 187-
198, 2006.
[35] W. J. Smith, “Stops and apertures,” in Modern Optical Engineering, 3rd Ed.,
New York: McGraw-Hill, 2000, Chapter 6, pp. 160-162.
[36] D. Faklis and G. M. Morris, “Spectral properties of multiorder diffractive
lenses,” Applied Optics, vol. 34, no. 14, pp. 2462-2468, 1995.
[37] J. A. Futhey, M. Beal, and S. Saxe, “Superzone diffractive optics,” OSA
Annual Meeting, Washington, D.C., 1991, paper TuS2.
[38] M. T. Gruneisen, R. C. Dymale, and M. B. Garvin, “Wavelength-dependent
characteristics of modulo Nλ₀ optical wavefront control,” Applied Optics,
vol. 45, no. 17, pp. 4075-4083, 2006.
[39] Z. Jaroszewicz, R. Staronski, J. Sochacki, and G. Righini, “Planar Fresnel
lens with multiple phase jump,” Pure and Applied Optics: Journal of the
European Optical Society Part A, vol. 3, pp. 667-677, 1994.
[40] J. C. Marron, D. K. Angell, and A. M. Tai, “Higher-order kinoforms,” in
Proceedings of the SPIE, Computer and Optically Formed Holographic
Optics, vol. 1211, 1990, pp. 62-66.
[41] M. Rossi, R. E. Kunz, and H. P. Herzig, “Refractive and diffractive
properties of planar micro-optical elements,” Applied Optics, vol. 34, no. 26,
pp. 5996-6007, 1995.
[42] D. W. Sweeney and G. E. Sommargren, “Harmonic diffractive lenses,”
Applied Optics, vol. 34, no. 14, pp. 2469-2475, 1995.
[43] J. A. Futhey, “Diffractive lens,” US Patent 4,936,666, 1990.
[44] D. Faklis and G. M. Morris, “Polychromatic diffractive lens,” US Patent
5,589,982, 1996.
[45] R. J. Federico, D. A. Buralli, and G. M. Morris, “Bifocal multiorder
diffractive lenses for vision correction,” US Patent 6,951,391, 2005.
[46] R. J. Federico, D. A. Buralli, and G. M. Morris, “Diffractive lenses for vision
correction,” US Patent 7,156,516, 2007.
[47] G. Ghosh, M. Endo, and T. Iwasaki, “Temperature-dependent Sellmeier
coefficients and chromatic dispersions for some optical fiber glasses,”
Journal of Lightwave Technology, vol. 12, no. 8, pp. 1338-1342, 1994.
[48] R. E. Fischer and B. Tadic-Galeb, “Stops and pupils and other basic
principles,” in Optical System Design, 1st Ed., New York: McGraw-Hill,
2000, Chapter 2, pp. 30-33.
Chapter 7
SUMMARY AND FUTURE RESEARCH DIRECTIONS
7.1 Summary
The research presented in this thesis focused on the analysis and design of
potential refractive and diffractive lens systems for an intraocular camera for retinal
prostheses. The novelty of this research lies in the unique constraints placed on the
imaging system for such a camera. The desired properties of the lens system are
uniquely influenced by several factors including surgical implantability, placement
in the crystalline lens sac, the psychophysics of prosthetic vision, and the functional
needs of low vision subjects. These factors and others, as described in detail in
Chapters 2 and 4, indicated that an extremely compact, ultra-lightweight optical
system was required.
The tradeoffs among the design parameters of a lens system for an
intraocular camera had not previously been explored in detail. For this particular
application, some design constraints and requirements for the system were
established by the ocular environment. For example, the need for low heat
dissipation and a stable location for physically mounting the camera led to the
decision to implant the intraocular camera within the crystalline lens sac. The
unique optical design space of this application with its particular requirements was
explored and evaluated in detail, ultimately arriving at a reasonable set of
specifications. These specifications were the result of evaluating the tradeoffs
between focal length, f-number, aberrations, field of view, the mass of the optical
system, and image intensity.
For this application, some optical system requirements were relaxed as
compared to a traditional camera system. Because low-resolution imagery is
acceptable in the retinal prosthesis application, some resolution could be traded for a
reduction in f-number. A low f-number allows the system to capture enough light to
be suitable across a breadth of real-world environments, including low-light conditions.
The limited space of the crystalline lens sac presented unique challenges,
affecting both the focal length and the size and number of optical system elements.
It was established that an ultra-short focal length optical system operating at a low
f-number with a relatively wide field of view was necessary. The limited space and
mounting constraints within the crystalline lens sac ruled out a multiple-lens system
due to size and mass concerns. To meet the imaging requirements within these
size and mass limitations, a single aspherical lens with an optical window proved
most suitable.
Given the ultra-short focal length target, and considering the constraints of
reasonable aberrations and dynamic range, f/1 was selected as an appropriate
aperture for the optical system of the intraocular camera.
The lens was designed to account for the refractive power and aberrations of
the corneal surface and aqueous humor. An extensive sensitivity analysis showed
that the imaging performance is exceptionally tolerant to variations in surgical
placement and corneal curvatures and aberrations, largely because the corneal lens
contributes less than 10% to the refractive power of the overall IOC optical system.
An advantage of the compact optical system was a greatly extended depth of
field, eliminating the need for a focusing mechanism to accommodate near and
far objects. This had two benefits: it helped keep the overall mass and system
complexity constrained, and, because the large depth of field far exceeded that of the
normal human eye, a subject implanted with an IOC would be able to bring finely
detailed objects extremely close to the eye, effectively magnifying the image
without losing focus. Given the target resolution, the ability to inspect objects close
up is extremely valuable.
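The scale of this extended depth of field can be illustrated with a simple thin-lens hyperfocal-distance estimate. The focal length and f-number below are those of the IOC design; the circle-of-confusion value is a hypothetical choice, sized to roughly one coarse pixel of the 32 × 32 sampled image, not a parameter taken from the design itself.

```python
# Thin-lens hyperfocal-distance estimate for the IOC optics.
# f and N are the design values; c is a HYPOTHETICAL circle of confusion,
# sized to roughly one coarse pixel of the 32 x 32 sampled image.

f = 2.1      # focal length, mm
N = 1.0      # f-number (f/1)
c = 0.05     # circle of confusion, mm (assumed, not a design figure)

H = f + f**2 / (N * c)   # hyperfocal distance, mm
near_limit = H / 2.0     # nearest acceptably sharp object when focused at H

print(f"Hyperfocal distance: {H:.1f} mm")
print(f"Acceptably sharp from about {near_limit:.0f} mm to infinity")
```

With these illustrative numbers the hyperfocal distance is about 9 cm, so objects from roughly 4.5 cm to infinity remain acceptably sharp, consistent with the close-inspection capability described above.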
The next-generation epiretinal microstimulator array is expected to provide a
32 × 32 resolution over a ±10° field of view. It was shown that sufficient imaging
performance can be obtained to satisfy this resolution and field of view with a single
aspherical lens system. However, we also showed that the field of view can be
extended to ±20°, twice that needed for current retinal prostheses, while still meeting
the image resolution requirements over the central ±10° and maintaining acceptable
imaging out to ±20°.
The IOC optical specifications arrived at were: 2.1 mm focal length, ±20°
field of view, f/1, 2.5 mm maximum lens-edge diameter, and a span of only 3.5 mm
from the front of the optical system to the image plane. A single aspherical lens
design that met these specifications by incorporating a thin fused silica window at
the front of the optical system and a polymer aspherical lens had a total predicted
system mass of only 14 mg.
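A few first-order quantities implied by these specifications can be checked with paraxial relations. This is only a sketch: it uses the thin-lens approximation and the specification values quoted above (2.1 mm focal length, f/1, ±20° half field of view), with no account of aberrations, distortion, or pupil shifts.

```python
# First-order (paraxial) quantities implied by the IOC specifications.
# Thin-lens sketch only: no aberrations, distortion, or pupil shifts.
import math

f = 2.1              # focal length, mm (from the specifications above)
N = 1.0              # f-number (f/1)
half_fov = 20.0      # half field of view, degrees

pupil = f / N                                      # entrance pupil diameter, mm
image_half = f * math.tan(math.radians(half_fov))  # image half-height, mm

print(f"Entrance pupil diameter: {pupil:.2f} mm")
print(f"Image half-height at +/-{half_fov:.0f} deg: {image_half:.3f} mm")
```

The resulting ~0.76 mm image half-height sits comfortably inside the 2.5 mm maximum lens-edge diameter, consistent with the compact 3.5 mm optical span quoted above.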
Prototype quantities of a custom polymer aspherical lens based on these
specifications were fabricated and characterized, with results that met requirements.
Color and grayscale images captured with this lens in two different experimental
configurations showed that the resolution across the field of view was a close match
to that predicted by ray-traced models of the lens.
A tolerancing analysis of the refractive IOC optical lens design showed that
the imaging performance remained within tolerable limits for manufacturing and
alignment errors that are typical for diamond-turned and molded polymer optics.
In an effort to keep the mass of the system low while increasing the degrees
of freedom to further correct optical aberrations, we investigated the possibility of
incorporating thin diffractive optical elements. Both a hybrid lens design made up of
a diffractive and a refractive lens, and a system entirely composed of diffractive
lenses were considered and evaluated. The hybrid design remained essentially
unchanged in size and mass from the purely refractive design, but the incorporation
of a diffractive lens into the optical system provided additional correction of chromatic
aberration that resulted in improved image quality over a ±15° field of view. Better
still, the purely diffractive optical design resulted in an overall reduction in size and
mass as compared to the purely refractive and hybrid systems, while providing
superior imaging performance over a ±20° field of view, according to simulation
results.
7.2 Future research directions
Future research directions include further lens and IOC optical system
characterization efforts; additional analyses of the optical system designs; a more
detailed evaluation of the higher-order kinoform lens design, together with potential
improvements to that design; and incorporation of stratified volume diffractive optical
elements into the IOC optical system.
The lens could be further characterized using a more sophisticated optical
bench setup with a high-NA microscope objective to capture the full extent of the
rays exiting the f/1 lens. Both in-house and commercial options for expanding the
IOC optical test apparatus are being explored. One potential commercial option
under consideration is the Optikos™ QC Bench [1], which includes software for
measuring the point spread function and MTF of short-focal-length microlenses at
on- and off-axis angles. Initial demonstration tests performed by the vendor using this
equipment with one of the IOC prototype lenses look promising, though the
development of an in-house setup with similar features is also a possibility.
We are also evaluating integration of the fabricated custom polymer lens with
an optical window and a front-mounted lens system designed to model the cornea and
aqueous humor, for testing with off-the-shelf and custom CMOS image sensor arrays.
Further lens analyses could be performed on the IOC designs to address other
potential areas of concern, such as the generation of stray light and ghost images.
Given the relatively few optical surfaces in the IOC designs, though, and the fact that
they will be AR coated, the appearance of ghost images due to multiple reflections in
the system is not expected to be an issue. The non-optical internal surfaces of the
IOC package can be coated with, or fabricated from, materials with low reflectance
coefficients in the visible spectrum to avoid stray light. However, a CAD analysis
that combines optical ray tracing with realistic layouts of various IOC packaging
designs would be useful in tailoring the package so that stray light does not degrade
the image contrast.
In the analyses of the purely diffractive IOC designs using higher-order
kinoforms presented in Chapter 6, the finite thickness of the facets in the kinoform
surface relief profiles was not taken into account (the kinoforms were assumed to be
infinitesimally thin). For low f-number kinoform lenses, it has been shown that the
local slopes of the incident and output light beams at each facet and the local
curvature of the surface relief should be considered in order to accurately predict the
diffraction efficiency at the image plane [2, 3]. Further evaluation of the purely
diffractive IOC designs would therefore include a model that accounts for the finite
thickness of the kinoform facets. Such a model could also allow for the optimization
of the exact facet shapes in order to maximize the diffraction efficiency in the focal
plane.
One possible way to accomplish this would be to use ray tracing software
such as Code V® to trace a set of rays through the schematic eye model to the
entrance plane of the higher-order kinoform (i.e., through the cornea, aqueous
humor, aspherical corrector plate, and finally ending at the entrance plane to the
kinoform substrate). The ray tracing software could compute the angles and relative
phase values of this set of rays (by keeping track of their relative optical path lengths
as they propagate through the optical system). The obliquity of the rays at each point
could then be used to compute the relative magnitudes of the optical field at this
plane. This set of magnitude and phase values could be interpolated and used as the
input for a rigorous vector-based diffraction model of the higher-order kinoform
lens. This model could then be used to propagate the field through the kinoform
without any approximations aside from the numerical sampling of the optical field.
Such a model could provide the field values at a plane lying just beyond the output
of the kinoform, and a Rayleigh-Sommerfeld diffraction integral could be used to
propagate the field from the kinoform output to the image plane. Using these steps
to optimize the exact shapes and blaze angles of the individual kinoform facets
could be a challenge, however. It may instead be sufficient to determine the optimal
higher-order kinoform profile for a specific design analytically, once the local
incident ray angles are known at each facet [2, 3].
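The final propagation step in the pipeline sketched above, carrying the field from the kinoform output plane to the image plane with a Rayleigh-Sommerfeld integral, can be prototyped with an FFT-based angular-spectrum propagator. The sketch below is illustrative only: the grid size, sample pitch, aperture diameter, wavelength, and propagation distance are arbitrary placeholder values, and a uniform circular aperture stands in for the field that a rigorous kinoform model would actually supply.

```python
# Sketch of the final propagation step: given a sampled complex field just
# past the kinoform, carry it a distance z with the angular-spectrum form
# of the Rayleigh-Sommerfeld transfer function.  All numbers are
# illustrative placeholders, not IOC design values.
import numpy as np

def rs_propagate(field, wavelength, dx, z):
    """Angular-spectrum propagation of a sampled complex field over z.

    Equivalent to the first Rayleigh-Sommerfeld solution for the
    propagating spectrum; evanescent components are suppressed.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    h = np.where(arg > 0.0, np.exp(1j * kz * z), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Toy input: a uniform circular "exit pupil" under plane-wave illumination.
n, dx = 256, 2e-6                      # 256 x 256 samples, 2 um pitch
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x, indexing="ij")
field0 = (np.hypot(xx, yy) <= 0.2e-3).astype(complex)  # 0.4 mm aperture

field_z = rs_propagate(field0, wavelength=550e-9, dx=dx, z=2.0e-3)

# A pure-phase transfer function conserves the power in the propagating
# spectrum, which provides a quick sanity check on the implementation.
power_in = np.sum(np.abs(field0) ** 2)
power_out = np.sum(np.abs(field_z) ** 2)
print(f"Relative power after propagation: {power_out / power_in:.4f}")
```

In the full model described above, `field0` would instead be the interpolated magnitude and phase values handed off by the ray trace and the rigorous kinoform computation.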
The numerical approach described above for rigorously modeling higher-
order kinoform designs for the IOC could also be used to compute the sensitivity of
the higher-order kinoform IOC designs to various manufacturing and alignment
errors. Such analyses would, in turn, allow us to determine which fabrication
technologies are best suited for the purely diffractive IOC lens designs.
To potentially improve the imaging performance of the higher-order
kinoform designs, we could imagine incorporating a multi-layer thin film filter with
a transmission function that is designed to match the periodic diffraction efficiency
function of a specified higher-order kinoform (such as the functions shown in Figure
6-14). This would result in a loss of light at the non-resonant wavelengths between
filter peaks, but would increase the contrast and sharpness of the image because the
focal spots would not be blurred by stray light from non-resonant wavelengths in the
band. However, while this may be a good idea in general for improving the
imaging of an ultra-compact camera with this design configuration, it may be
problematic for the IOC: as discussed in Chapter 2, a certain amount of optical
pre-blur is actually desired in the IOC system to avoid aliasing artifacts when the
image is downsampled to the coarse pixellation levels required for the retinal
prosthesis. It is also unclear whether the design of a comb filter to match such a
large number of diffraction efficiency peaks across the visible spectrum would be
practical.
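For reference, the periodic efficiency function that such a comb filter would have to track can be computed from the standard scalar-theory expression for a multiorder (higher-order) diffractive lens, η_m(λ) = sinc²(pλ₀/λ − m), where p is the design order and λ₀ the design wavelength (as in the multiorder-lens analyses cited in Chapter 6). The sketch below uses illustrative values, p = 10 and λ₀ = 550 nm, that are not taken from the IOC designs.

```python
# Resonant peaks of the periodic diffraction-efficiency function of a
# multiorder (higher-order) kinoform.  p and lam0 are illustrative values,
# not parameters of the IOC designs.
import numpy as np

p = 10            # design order of the higher-order kinoform (illustrative)
lam0 = 550e-9     # design wavelength, m (illustrative)

def efficiency(lam, m):
    """Scalar diffraction efficiency into integer order m at wavelength lam."""
    # np.sinc(x) = sin(pi*x)/(pi*x), so this is sinc^2(p*lam0/lam - m).
    return np.sinc(p * lam0 / lam - m) ** 2

orders = np.arange(8, 14)      # orders whose peaks land in or near the visible
peaks = p * lam0 / orders      # wavelengths where eta_m reaches unity
for m, lam in zip(orders, peaks):
    print(f"order {m:2d}: peak at {lam * 1e9:6.1f} nm, "
          f"eta = {efficiency(lam, m):.3f}")
```

Even over this small set of orders the unit-efficiency peaks fall only a few tens of nanometers apart across the visible band, which illustrates why matching all of them with a practical thin-film comb filter is an open question.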
Further research could be conducted on the incorporation of stratified volume
diffractive optical elements (SVDOEs) into the IOC lens system. These devices are
based on a concept for stratified volume holographic optical elements (SVHOEs)
that originated at USC nearly twenty years ago with work done by Richard V.
Johnson and Armand R. Tanguay, Jr. [4]. They introduced a new class of devices in
which thin phase modulation layers separated by homogeneous buffer layers could
reproduce the diffraction properties characteristic of a distributed bulk grating,
namely Bragg angular selectivity and high diffraction efficiency. They showed that
the thin holographic layers performed the function of optical phase and/or amplitude
modulation, while the buffer layers allowed diffraction to occur between the
modulation layers. They also showed that SVHOEs have unique diffraction
properties, such as the angular periodicity of the +1 and -1 diffracted beams,
whereas conventional volume holograms produce only the first-order diffracted beam
at Bragg incidence. This concept was later extended to the domain of diffractive
optical elements by Diana Chambers and Gregory Nordin [5]. This is an important
advance because diffractive elements can be fabricated using conventional and
repeatable semiconductor manufacturing techniques. These devices are accordingly
named stratified volume diffractive optical elements (SVDOEs). Chambers and
Nordin showed that high efficiency diffractive gratings could be produced with as
few as three grating layers for cases that are otherwise difficult or impossible to
produce in a single layer, because of fabrication limits on minimum achievable
feature sizes (for blazed gratings) and aspect ratios (for deep gratings).
While previous uses of this technology primarily focused on the fabrication
of high efficiency gratings, the SVDOE concept may enable the development of
semiconductor manufacturable low f-number diffractive lens designs for the IOC.
This may be achievable by co-optimizing the diffracting layers and the buffer
thicknesses between them to bring the incident light to a focus at different
wavelengths and field angles. It may also be feasible to use this technique to
multiplex additional functions into the optical system. For example, we could
envision designing an SVDOE to direct incident light from an infrared laser located
on a pair of eyeglasses worn by the retinal prosthesis subject to a photocell that is
used to power a rechargeable battery in the IOC. In this case, the SVDOE would
simultaneously be designed to either pass the visible band to subsequent optical
lenses or to perform the lensing function itself. Other functions could also be
envisioned, such as providing the necessary optical functions to enable bidirectional
optical data communication, or directing a portion of the incident ambient light to a
particular region of the sensor that is used to automatically adjust the sensor’s
electronic gain in different lighting conditions.
Chapter 7 References
[1] Optikos QC Bench Data Sheet, available online:
http://www.optikos.com/Pdf_files/QCB.pdf, as of March 2008.
[2] M. A. Golub, “Generalized conversion from the phase function to the blazed
surface-relief profile of diffractive optical elements,” Journal of the Optical
Society of America A, vol. 16, no. 5, pp. 1194-1201, 1999.
[3] Y. Han, L. N. Hazra, and C. A. Delisle, “Exact surface-relief profile of a
kinoform lens from its phase function,” Journal of the Optical Society of
America A, vol. 12, no. 3, pp. 524-529, 1995.
[4] R. V. Johnson and A. R. Tanguay Jr., “Stratified volume holographic optical
elements,” Optics Letters, vol. 13, no. 3, pp. 189-191, 1988.
[5] D. M. Chambers and G. P. Nordin, “Stratified volume diffractive optical
elements as high-efficiency gratings,” Journal of the Optical Society of
America A, vol. 16, no. 5, pp. 1184-1192, 1999.
Bibliography
P. V. Algvere, P. A. Torstensson, and B. M. Tengroth, “Light transmittance of ocular
media in living rabbit eyes,” Investigative Ophthalmology and Visual Science, vol.
34, no. 2, pp. 349-354, 1993.
W. Ambach, M. Blumthaler, T. Schöpf, E. Ambach, F. Katzgraber, F. Daxecker, and
A. Daxer, “Spectral transmission of the optical media of the human eye with respect
to keratitis and cataract formation,” Documenta Ophthalmologica, vol. 88, no. 2, pp.
165-173, 1994.
D. A. Atchison and G. Smith, “Light and the eye,” in Optics of the Human Eye, 1st
Ed., Edinburgh: Butterworth-Heinemann, 2000, Chapter 11.
W. S. Beich, “Injection molded polymer optics in the 21st century,” in Tribute to
Warren Smith: A Legacy in Lens Design and Optical Engineering, Proceedings of
the SPIE, vol. 5865, San Diego, CA, 2005, pp. 157-168.
A. J. Blanksby and M. J. Loinaz, “Performance analysis of a color CMOS photogate
image sensor,” IEEE Transactions on Electron Devices, vol. 47, no. 1, pp. 55-64,
2000.
C. G. Blough, M. Rossi, S. K. Mack, and R. L. Michaels, “Single-point diamond
turning and replication of visible and near-infrared diffractive optical elements,”
Applied Optics, vol. 36, no. 20, pp. 4648-4654, 1997.
M. Born and E. Wolf, “Geometrical theory of optical imaging,” in Principles of
Optics, 7th Ed., Cambridge, UK: Cambridge University Press, 1999, Chapter 4, pp.
199-201.
D. A. Buralli and G. M. Morris, “Design of a wide field diffractive landscape lens,”
Applied Optics, vol. 28, no. 18, pp. 3950-3959, 1989.
D. A. Buralli, G. M. Morris, and J. R. Rogers, “Optical performance of holographic
kinoforms,” Applied Optics, vol. 28, no. 5, pp. 976-983, 1989.
K. Carlson, M. Chidley, K. B. Sung, M. Descour, A. Gillenwater, M. Follen, and R.
Richards-Kortum, “In vivo fiber-optic confocal reflectance microscope with an
injection-molded plastic miniature objective lens,” Applied Optics, vol. 44, pp. 1792-
1797, 2005.
P. B. Catrysse and B. A. Wandell, “Roadmap for CMOS image sensors: Moore
meets Planck and Sommerfeld,” in Digital Photography, Proceedings of the SPIE,
vol. 5678, San Jose, CA, 2005, pp. 1-13.
K. Cha, K. Horch, and R. A. Normann, “Simulation of a phosphene-based visual
field: visual acuity in a pixelized vision system,” Annals of Biomedical Engineering,
vol. 20, no. 4, pp. 439-449, 1992.
K. Cha, K. Horch, and R. A. Normann, “Mobility performance with a pixelized
vision system,” Vision Research, vol. 32, no. 7, pp. 1367-1372, 1992.
K. Cha, K. Horch, and R. A. Normann, “Reading speed with a pixelized vision
system,” Journal of the Optical Society of America A, vol. 9, no. 5, pp. 673-677,
1992.
D. M. Chambers and G. P. Nordin, “Stratified volume diffractive optical elements as
high-efficiency gratings,” Journal of the Optical Society of America A, vol. 16, no. 5,
pp. 1184-1192, 1999.
Y.-L. Chen, B. Tan, and W. L. Lewis, “Simulation of eccentric photorefraction
images,” Optics Express, vol. 11, no. 14, pp. 1628-1642, 2006.
A. Y. Chow and N. S. Peachey, “The subretinal microphotodiode array retinal
prosthesis,” Ophthalmic Research, vol. 30, no. 3, pp. 195-196, 1998.
A. Y. Chow, V. Y. Chow, K. H. Packo, J. S. Pollack, G. A. Peyman, and R.
Schuchard, “The artificial silicon retina microchip for the treatment of vision loss
from retinitis pigmentosa,” Archives of Ophthalmology, vol. 122, no. 4, pp. 460-469,
2004.
B. A. J. Clark, “Variations in corneal topography,” Australian Journal of Optometry,
vol. 56, pp. 399-413, 1973.
C. A. Curcio, N. E. Medeiros, and C. L. Millican, “Photoreceptor loss in age-related
macular degeneration,” Investigative Ophthalmology and Visual Science, vol. 37, no.
7, pp. 1236-1249, 1996.
N. Davidson, A. A. Friesem, and E. Hasman, “Analytic design of hybrid diffractive-
refractive achromats,” Applied Optics, vol. 32, no. 25, pp. 4770-4774, 1993.
A. Davies and P. Fennessy, “Image capture,” in Digital Imaging for Photographers,
4th Ed. Focal Press, 2001, Chapter 3, pp. 28-32.
J. L. Demer, F. I. Porter, J. Goldberg, H. A. Jenkins, and K. Schmidt, “Adaptation to
telescopic spectacles: Vestibulo-ocular reflex plasticity,” Investigative
Ophthalmology and Visual Science, vol. 30, pp. 159-170, 1989.
J. Dillon, L. Zheng, J. C. Merriam, and E. R. Gaillard, “The optical properties of the
anterior segment of the eye: Implications for cortical cataract,” Experimental Eye
Research, vol. 68, pp. 785-795, 1999.
R. Eckmiller, M. Becker, and R. Hunermann, “Dialog concepts for learning retina
encoders,” in Proceedings of the IEEE International Conference on Neural
Networks, vol. 4, 1997, pp. 2315-2320.
A. El Gamal, “Trends in CMOS image sensor technology and design,” in Electron
Devices Meeting (IEDM) Technical Digest, 2002, pp. 805-808.
A. El Gamal and H. Eltoukhy, “CMOS image sensors,” IEEE Circuits and Devices
Magazine, vol. 21, no. 3, pp. 6-20, 2005.
I. Escudero-Sanz and R. Navarro, “Off-axis aberrations of a wide-angle schematic
eye model,” Journal of the Optical Society of America A, vol. 16, no. 8, pp. 1881-
1891, 1999.
D. Faklis and G. M. Morris, “Spectral properties of multiorder diffractive lenses,”
Applied Optics, vol. 34, no. 14, pp. 2462-2468, 1995.
D. Faklis and G. M. Morris, “Polychromatic diffractive lens,” US Patent 5,589,982,
1996.
R. J. Federico, D. A. Buralli, and G. M. Morris, “Bifocal multiorder diffractive
lenses for vision correction,” US Patent 6,951,391, 2005.
R. J. Federico, D. A. Buralli, and G. M. Morris, “Diffractive lenses for vision
correction,” US Patent 7,156,516, 2007.
R. E. Fischer and B. Tadic-Galeb, “Stops and pupils and other basic principles,” in
Optical System Design, 1st Ed., New York: McGraw-Hill, 2000, Chapter 2, pp. 30-
33.
R. E. Fischer and B. Tadic-Galeb, Optical System Design, 1st Ed., New York:
McGraw-Hill, 2000.
R. E. Fischer and B. Tadic-Galeb, “The concept of optical path difference,” in
Optical System Design, 1st Ed., New York: McGraw-Hill, 2000, Chapter 4.
R. E. Fischer and B. Tadic-Galeb, “Computer performance evaluation,” in Optical
System Design, 1st Ed., New York: McGraw-Hill, 2000, Chapter 10.
M. B. Fleming and M. C. Hutley, “Blazed diffractive optics,” Applied Optics, vol.
36, no. 4, pp. 4635-4643, 1997.
E. Fossum, “Visible light CMOS image sensors,” in Proceedings of the International
Workshop on Semiconductor Pixel Detectors for Particles and X-Rays (PIXEL
2002), Carmel, CA, 2002.
J. A. Futhey, “Diffractive lens,” US Patent 4,936,666, 1990.
J. A. Futhey, M. Beal, and S. Saxe, “Superzone diffractive optics,” OSA Annual
Meeting, Washington, D.C., 1991, paper TuS2.
G. M. Gauthier and D. A. Robinson, “Adaptation of the human vestibular ocular
reflex to magnifying lenses,” Brain Research, vol. 92, pp. 331-335, 1975.
G. Ghosh, M. Endo, and T. Iwasaki, “Temperature-dependent Sellmeier coefficients
and chromatic dispersions for some optical fiber glasses,” Journal of Lightwave
Technology, vol. 12, no. 8, pp. 1338-1342, 1994.
M. A. Golub, “Generalized conversion from the phase function to the blazed surface-
relief profile of diffractive optical elements,” Journal of the Optical Society of
America A, vol. 16, no. 5, pp. 1194-1201, 1999.
J. W. Goodman, “Frequency analysis of optical imaging systems,” in Introduction to
Fourier Optics, 3rd Ed., Greenwood Village: Roberts & Company Publishers, 2004,
Chapter 6, pp. 129-131.
J. W. Goodman, “Foundations of scalar diffraction theory,” in Introduction to
Fourier Optics, 3rd Ed., Greenwood Village: Roberts & Company Publishers, 2004,
Chapter 3, pp. 46-49.
J. W. Goodman, Introduction to Fourier Optics, 3rd Ed., Greenwood Village:
Roberts & Company Publishers, 2004.
K. Gosalia, J. Weiland, M. Humayun, and G. Lazzi, “Thermal elevation in the
human eye and head due to the operation of a retinal prosthesis,” IEEE Transactions
on Biomedical Engineering, vol. 51, no. 8, pp. 1469-1477, 2004.
A. R. Greenleaf, Photographic Optics, New York: The MacMillan Company, 1950,
pp. 25-27.
J. E. Greivenkamp, J. Schwiegerling, J. M. Miller, and M. D. Mellinger, “Visual
acuity modeling using optical raytracing of schematic eyes,” American Journal of
Ophthalmology, vol. 120, no. 2, pp. 227-240, 1995.
P. Griffith and A. Marien, “Optical fabrication relies on tried and true methods,”
Laser Focus World, October, 1997.
M. T. Gruneisen, R. C. Dymale, and M. B. Garvin, “Wavelength-dependent
characteristics of modulo Nλ₀ optical wavefront control,” Applied Optics, vol. 45,
no. 17, pp. 4075-4083, 2006.
M. Guillon, D. P. Lydon, and C. Wilson, “Corneal topography: a clinical model,”
Ophthalmic and Physiological Optics, vol. 6, no. 1, pp. 47-56, 1986.
A. Gullstrand, “The optical system of the eye,” in Appendices to Part 1, Von
Helmholtz Handbook of Physiological Optics, 3rd Ed. Hamburg: Voss, 1909, pp.
350-358.
J. Ham, H. A. Mueller, R. C. Williams, and W. J. Geeraets, “Ocular hazard from
viewing the sun unprotected and through various windows and filters,” Applied
Optics, vol. 12, no. 9, p. 2122, 1973.
Y. Han, L. N. Hazra, and C. A. Delisle, “Exact surface-relief profile of a kinoform
lens from its phase function,” Journal of the Optical Society of America A, vol. 12,
no. 3, pp. 524-529, 1995.
M. C. Hauer, P. Nasiatka, N. R. B. Stiles, J.-C. Lue, R. N. Agrawal, J. D. Weiland,
M. S. Humayun, and A. R. Tanguay, Jr., "Intraocular camera for retinal prostheses:
Optical Design," Frontiers in Optics, The Annual Meeting of the Optical Society of
America, San Jose, CA, 2007, paper FThT1.
L. N. Hazra and C. A. Delisle, “Higher order kinoform lenses: diffraction efficiency
and aberrational properties,” Optical Engineering, vol. 36, p. 1500, 1997.
H. L. Hoover, “Solar ultraviolet irradiation of human cornea, lens, and retina:
equations of ocular irradiation,” Applied Optics, vol. 25, pp. 359-368, 1986.
F. v. Hulst, P. Geelan, A. Gebhardt, and R. Steinkopf, “Diamond tools for producing
micro-optic elements,” Industrial Diamond Review, pp. 58-62, March, 2005.
M. S. Humayun, “Intraocular retinal prosthesis,” Transactions of the American
Ophthalmological Society, vol. 99, pp. 271-300, 2001.
M. S. Humayun, J. D. Weiland, B. Justus, C. Merrit, J. Whalen, D. Piyathaisere, S. J.
Chen, E. Margalit, G. Fujii, R. J. Greenberg, E. de Juan, Jr., D. Scribner, and W. Liu,
“Towards a completely implantable, light-sensitive intraocular retinal prosthesis,” in
Proceedings of the Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC), vol. 4, 2001, pp. 3422-3425.
M. S. Humayun, J. D. Weiland, G. Y. Fujii, R. Greenberg, R. Williamson, J. Little,
B. Mech, V. Cimmarusti, G. Van Boemel, G. Dagnelie, and E. de Juan Jr., “Visual
perception in a blind subject with a chronic microelectronic retinal prosthesis,”
Vision Research, vol. 43, no. 24, pp. 2573-2581, 2003.
M. S. Humayun, E. de Juan, Jr., and R. J. Greenberg, “Visual prosthesis and method
of using same,” US Patent 5,935,155, 1999.
Z. Jaroszewicz, R. Staronski, J. Sochacki, and G. Righini, “Planar Fresnel lens with
multiple phase jump,” Pure and Applied Optics: Journal of the European Optical
Society Part A, vol. 3, pp. 667-677, 1994.
R. V. Johnson and A. R. Tanguay Jr., “Stratified volume holographic optical
elements,” Optics Letters, vol. 13, no. 3, pp. 189-191, 1988.
B. W. Jones, C. B. Watt, and R. E. Marc, “Retinal remodeling,” Clinical and
Experimental Optometry, vol. 88, no. 5, pp. 282-291, 2005.
J. A. Jordan Jr, P. M. Hirsch, L. B. Lesem, and D. L. Van Rooy, “Kinoform lenses,”
Applied Optics, vol. 9, no. 8, pp. 1888-1887, 1970.
B. W. Keelan, “Attributes having multiple perceptual facets,” in Handbook of Image
Quality: Characterization and Prediction, 1st Ed., New York: Marcel Dekker, 2002,
Chapter 18, pp. 257-258.
M. J. Kidger, “Principles of lens design,” in Lens Design, SPIE Critical Review
Series, vol. CR41, 1992, pp. 30-53.
P. M. Kiely, G. Smith, and L. G. Carney, “The mean shape of the human cornea,”
Journal of Modern Optics, vol. 29, no. 8, pp. 1027-1040, 1982.
R. Kingslake, Lens Design Fundamentals, 1st Ed., New York: Academic Press,
1978.
M. V. Klein and T. E. Furtak, Optics, 2nd Ed., New York: Wiley, 1986.
Y. Konishi, T. Sawaguchi, K. Kubomura, and K. Minami, “High performance cyclo
olefin polymer ZEONEX,” in Advancements in Polymer Optics Design, Fabrication,
and Materials, Proceedings of the SPIE, vol. 5872, San Diego, CA, 2005, pp. 3-8.
V. P. Korolkov, R. K. Nasyrov, and R. V. Shimansky, “Zone-boundary optimization
for direct laser writing of continuous-relief diffractive optical elements,” Applied
Optics, vol. 45, no. 1, pp. 53-62, 2006.
S. Kuiper and B. H. W. Hendriks, “Variable-focus liquid lens for miniature
cameras,” Applied Physics Letters, vol. 85, no. 7, pp. 1128-1130, 2004.
M. Kuittinen and H. P. Herzig, “Encoding of efficient diffractive microlenses,”
Optics Letters, vol. 20, pp. 2156-2158, 1995.
M. Laikin, “The method of lens design,” in Lens Design, 4th Ed., Boca Raton:
Taylor and Francis CRC Press, 2006, Chapter 1, pp. 22-23.
M. Land, N. Mennie, and J. Rusted, “The roles of vision and eye movements in the
control of activities of daily living,” Perception, vol. 28, no. 11, pp. 1311-1328,
1999.
305
S. S. Lane, B. D. Kuppermann, I. H. Fine, M. B. Hamill, J. F. Gordon, R. S. Chuck,
R. S. Hoffman, M. Packer, and D. D. Koch, “A prospective multicenter clinical trial
to evaluate the safety and effectiveness of the implantable miniature telescope,”
American Journal of Ophthalmology, vol. 137, no. 6, pp. 993-1001, 2004.
L. Li, A. Y. Yi, C. Huang, D. A. Grewell, A. Benatar, and Y. Chen, “Fabrication of
diffractive optics by use of slow tool servo diamond turning process,” Optical
Engineering, vol. 45, p. 113401, 2006.
H.-L. Liou and N. A. Brennan, “Anatomically accurate, finite model eye for optical
modeling,” Journal of the Optical Society of America A, vol. 14, no. 8, pp. 1684-
1695, 1997.
I. Lipshitz, A. Loewenstein, M. Reingerwitz, and M. Lazar, “An intraocular
telescopic lens for macular degeneration,” Ophthalmic Surgery and Lasers, vol. 28,
pp. 513-517, 1997.
J. Liu, B. Y. Gu, J. S. Ye, and B. Z. Dong, “Applicability of improved Rayleigh-
Sommerfeld method 1 in analyzing the focusing characteristics of cylindrical
microlenses,” Optics Communications, vol. 261, no. 1, pp. 187-198, 2006.
W. Liu, K. Vichienchom, M. Clements, S. C. DeMarco, C. Hughes, E. McGucken,
M. S. Humayun, E. de Juan, Jr., J. D. Weiland, and R. Greenberg, “A neuro-stimulus
chip with telemetry unit for retinal prosthetic device,” IEEE Journal of Solid-State
Circuits, vol. 35, no. 10, pp. 1487-1497, 2000.
W. Liu and M. S. Humayun, “Retinal prosthesis,” in Digest of Technical Papers,
IEEE International Solid-State Circuits Conference (ISSCC), 2004, pp. 218-219.
J. I. Loewenstein, S. R. Montezuma, and J. F. Rizzo III, “Outer retinal degeneration
An electronic retinal prosthesis as a treatment strategy,” Archives of Ophthalmology,
vol. 122, no. 4, pp. 587-596, 2004.
T. Lule, S. Benthien, H. Keller, F. Mutze, P. Rieve, K. Seibel, M. Sommer, and M.
Bohm, “Sensitivity of CMOS based imagers and scaling perspectives,” IEEE
Transactions on Electron Devices, vol. 47, no. 11, pp. 2110-2122, 2000.
D. Malacara, Optical Shop Testing, 3rd Ed., Hoboken: John Wiley and Sons, 2007.
E. Margalit and S. R. Sadda, “Retinal and optic nerve diseases,” Artificial Organs,
vol. 27, no. 11, pp. 963-974, 2003.
J. C. Marron, D. K. Angell, and A. M. Tai, “Higher-order kinoforms,” in
Proceedings of the SPIE, Computer and Optically Formed Holographic Optics, vol.
1211, 2005, pp. 62-66.
306
J. T. McCann, “Applications of diamond turned null reflectors for generalized
aspheric metrology,” SPIE Optical Testing and Metrology III: Recent Advances in
Industrial Optical Inspection, vol. 1332, pp. 843-849, 1990.
M. J. McMahon, A. Caspi, J. D. Dorn, K. H. McClure, M. S. Humayun, and R. J.
Greenberg, “Quantitative assessment of spatial vision in Second Sight retinal
prosthesis subjects,” Frontiers in Optics, The Annual Meeting of the Optical Society
of America, San Jose, CA, 2007, paper FThP1.
P. Nasiatka, A. Ahuja, N. R. B. Stiles, M. C. Hauer, R. N. Agrawal, R. Freda, D.
Guven, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr., “Intraocular camera
for retinal prostheses,” Frontiers in Optics, The Annual Meeting of the Optical
Society of America, Tucson, AZ, 2005, paper FThI4.
P. Nasiatka, A. Ahuja, N. R. B. Stiles, M. C. Hauer, R. N. Agrawal, R. Freda, D.
Guven, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr., “Intraocular camera
for retinal prostheses,” Annual Meeting of the Association for Research in Vision and
Ophthalmology (ARVO), Ft. Lauderdale, FL, 2005, poster B480.
P. Nasiatka, M. C. Hauer, N. R. B. Stiles, J.-C. Lue, S. Takahashi, R. N. Agrawal, R.
Freda, M. S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr., “Intraocular camera
for retinal prostheses,” Annual Meeting of the Association for Research in Vision and
Ophthalmology (ARVO), Ft. Lauderdale, FL, 2006, poster B554.
P. Nasiatka, M. C. Hauer, N. R. B. Stiles, J.-C. Lue, S. Takahashi, R. N. Agrawal, M.
S. Humayun, J. D. Weiland, and A. R. Tanguay, Jr., “An intraocular camera for
retinal prostheses,” Frontiers in Biomedical Devices Conference (BioMed), Irvine,
CA, 2007, paper 38109.
P. Nasiatka, M. C. Hauer, N. R. B. Stiles, N. Suwanmonkha, M. Leighton, J.-C. Lue,
M. S. Humayun, and A. R. Tanguay, Jr., “An intraocular camera for provision of
natural foveation in retinal prostheses,” Annual Fall Meeting of the Biomedical
Engineering Society (BMES), Los Angeles, CA, 2007, poster P6.96.
R. Navarro, J. Santamaria, and J. Bescos, “Accommodation-dependent model of the
human eye with aspherics,” Journal of the Optical Society of America A, vol. 2, pp.
1273-1281, 1985.
G. P. Nordin, R. V. Johnson, and A. R. Tanguay Jr., “Diffraction properties of
stratified volume holographic optical elements,” Journal of the Optical Society of
America A, vol. 9, no. 12, pp. 2206-2217, 1992.
S. Norrby, “Using the lens haptic plane concept and thick-lens ray tracing to
calculate intraocular lens power,” Journal of Cataract and Refractive Surgery, vol.
30, no. 5, pp. 1000-1005, 2004.
307
S. Norrby, E. Lydahl, G. Koranyi, and M. Taube, “Clinical application of the lens
haptic plane concept with transformed axial lengths,” Journal of Cataract and
Refractive Surgery, vol. 31, no. 7, pp. 1338-1344, 2005.
D. C. O’Shea, T. J. Suleski, A. D. Kathman, and D. W. Prather, “Diffractive lens
design,” in Diffractive Optics: Design, Fabrication, and Test, Bellingham: SPIE
Press, 2004, Chapter 4, pp. 57-82.
D. C. O’Shea, T. J. Suleski, A. D. Kathman, and D. W. Prather, Diffractive Optics:
Design, Fabrication, and Test, Bellingham: SPIE Press, 2004.
F. L. Pedrotti and L. S. Pedrotti, “Aberration theory,” in Introduction to Optics, 2nd
Ed., Englewood Cliffs, NJ: Prentice-Hall, 1987, Chapter 5, p. 91.
E. Peli, E. Fine, and A. Labianca, “The detection of moving features on a display:
The interaction of direction of motion, orientation, and display rate,” in Technical
Digest of Papers, Society for Information Display, SID-98, 1998, pp. 1033-1036.
E. Peli, I. Lipshitz, and G. Dotan, “Implantable miniaturized telescope (IMT) for low
vision,” in Vision Rehabilitation: Assessment, Intervention and Outcomes, C. Stuen,
A. Arditi, A. Horowitz, M. A. Lang, B. Rosenthal, and K. Seidman, Eds., Lisse:
Swets & Zeitlinger, 2000, pp. 200-203.
E. Peli, “The optical functional advantages of an intraocular low-vision telescope,”
Optometry and Vision Science, vol. 79, no. 4, pp. 225-233, 2002.
A. Pete, “Understanding optical specifications,” Edmund Optics, Inc. White Paper.
Available online: http://www.edmundoptics.com/techSupport/
DisplayArticle.cfm?articleid=252.
D. V. Piyathaisere, E. Margalit, S. J. Chen, J. S. Shyu, S. A. D’Anna, J. D. Weiland,
R. R. Grebe, L. Grebe, G. Fujii, S. Y. Kim, R. J. Greenberg, E. de Juan, Jr., and M.
S. Humayun, “Heat effects on the retina,” Ophthalmic Surgery, Lasers and Imaging,
vol. 34, no. 2, pp. 114-120, 2003.
J. Pollack, “Displays of a different stripe,” IEEE Spectrum Magazine, pp. 41-44,
August, 2006.
M. Pop, Y. Payette, and M. Mansour, “Ultrasound biomicroscopy of the Artisan
phakic intraocular lens in hyperopic eyes,” Journal of Cataract and Refractive
Surgery, vol. 28, no. 10, pp. 1799-1803, 2002.
W. K. Pratt, “Image sampling and reconstruction,” in Digital Image Processing:
PIKS Scientific Inside, 4th Ed., Hoboken: Wiley-Interscience, 2007, Chapter 4, pp.
91-126.
308
C. Ribot, P. Lalanne, M. S. L. Lee, B. Loiseaux, and J. P. Huignard, “Analysis of
blazed diffractive optical elements formed with artificial dielectrics,” Journal of the
Optical Society of America A, vol. 24, no. 12, pp. 3819-3826, 2007.
M. Riedl, “On point,” Optical Engineering, pp. 26-29, July, 2004.
R. L. Roncone and D. W. Sweeney, “Cancellation of material dispersion in harmonic
diffractive lenses,” in Diffractive and Holographic Optics Technology II, vol. 2404.
San Jose, CA, USA: SPIE, 1995, pp. 81-88.
M. Rossi and I. Kallioniemi, “Micro-optical modules fabricated by high-precision
replication processes,” Diffractive Optics and Micro-Optics, vol. 75, pp. 108-110.
M. Rossi, R. E. Kunz, and H. P. Herzig, “Refractive and diffractive properties of
planar micro-optical elements,” Applied Optics, vol. 34, no. 26, pp. 5996-6007,
1995.
J. M. Royston, M. C. Dunne, and D. A. Barnes, “Measurement of the posterior
corneal radius using slit lamp and Purkinje image techniques,” Ophthalmic and
Physiological Optics, vol. 10, no. 4, pp. 385-388, 1990.
M. Schwarz, B. J. Hosticka, R. Hauschild, W. Mokwa, M. Scholles, and H. K. Trieu,
“Hardware architecture of a neural net based retina implant for patients suffering
from retinitis pigmentosa,” in Proceedings of the IEEE International Conference on
Neural Networks, vol. 2, 1996, pp. 653-658.
M. Schwarz, R. Hauschild, B. J. Hosticka, J. Huppertz, T. Kneip, S. Kolnsberg, L.
Ewe, and H. K. Trieu, “Single-chip CMOS image sensors for a retina implant
system,” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal
Processing, vol. 46, no. 7, pp. 370-377, 1999.
M. Shan and J. Tan, “Modeling focusing characteristics of low F-number diffractive
optical elements with continuous relief fabricated by laser direct writing,” Optics
Express, vol. 15, no. 25, pp. 17032-17037, 2007.
F. Shen and A. Wang, “Fast-Fourier-transform based numerical integration method
for the Rayleigh-Sommerfeld diffraction formula,” Applied Optics, vol. 45, pp. 1102-
1110, 2006.
S. Siik, “Lens autofluorescence: in aging and cataractous human lenses, clinical
applicability.” Ph.D. Dissertation, University of Oulu, 1999.
M. Sivaprakasam, L. Wentai, M. S. Humayun, and J. D. Weiland, “A variable range
bi-phasic current stimulus driver circuitry for an implantable retinal prosthetic
device,” IEEE Journal of Solid-State Circuits, vol. 40, no. 3, pp. 763-771, 2005.
W. J. Smith, Modern Optical Engineering, 3rd Ed., New York: McGraw-Hill, 2000.
309
W. J. Smith, “Radiometry and Photometry,” in Modern Optical Engineering, 3rd
Ed., New York: McGraw-Hill, 2000, Chapter 8.
W. J. Smith, “Stops and apertures,” in Modern Optical Engineering, 3rd Ed., New
York: McGraw-Hill, 2000, Chapter 6, pp. 160-162.
W. J. Smith, “Stop shift equations,” in Modern Lens Design, 2nd Ed., New York:
McGraw-Hill, 2005, Chapter 24, pp. 600-601.
L. A. Spitzberg, R. T. Jose, and C. L. Kuether, “Behind the lens telescope: A new
concept in bioptics,” Optometry and Vision Science, vol. 66, pp. 616-620, 1989.
M. J. Stafford, “The histology and biology of the lens,” Optometry Today, pp. 23-30,
2001.
N. R. B. Stiles, M. C. Hauer, P. Lee, P. Nasiatka, J.-C. Lue, J. D. Weiland, M. S.
Humayun, and A. R. Tanguay, Jr., “Intraocular camera for retinal prostheses: Design
constraints based on visual psychophysics,” Frontiers in Optics, The Annual Meeting
of the Optical Society of America , San Jose, CA, 2007, poster JWC46.
T. Stone and N. George, “Hybrid diffractive-refractive lenses and achromats,”
Applied Optics, vol. 27, no. 14, pp. 2960-2971, 1988.
T. J. Suleski and R. D. T. Kolste, “Fabrication trends for free-space microoptics,”
Journal of Lightwave Technology, vol. 23, no. 2, pp. 633-646, 2005.
J. W. Sung, H. Hockel, J. D. Brown, and E. G. Johnson, “Development of a two-
dimensional phase-grating mask for fabrication of an analog-resist profile,” Applied
Optics, vol. 45, no. 1, pp. 33-43, 2006.
G. J. Swanson, “Binary optics technology: The theory and design of multi-level
diffractive optical elements,” Technical Report 854 (MIT Lincoln Laboratory,
Cambridge, MA), 1989.
D. W. Sweeney and G. E. Sommargren, “Harmonic diffractive lenses,” Applied
Optics, vol. 34, no. 14, pp. 2469-2475, 1995.
M. R. Wang and H. Su, “Laser direct-write gray-level mask and one-step etching for
diffractive microlens fabrication,” Applied Optics, vol. 37, no. 32, pp. 7568-7576,
1998.
J. D. Weiland, D. Yanai, M. Mahadevappa, R. Williamson, B. V. Mech, G. Y. Fujii,
J. Little, R. J. Greenberg, E. de Juan Jr., and M. S. Humayun, “Visual task
performance in blind humans with retinal prosthetic implants,” in Proceedings of the
Annual International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC), vol. 2, 2004, pp. 4172-4173.
310
J. D. Weiland, W. Liu, and M. S. Humayun, “Retinal prosthesis,” Annual Reviews of
Biomedical Engineering, vol. 7, pp. 361-401, 2005.
W. H. Welch, J. E. Morris, and M. R. Feldman, “Iterative discrete on-axis encoding
of radially symmetric computer-generated holograms,” Journal of the Optical
Society of America A, vol. 10, no. 8, pp. 1729-1738, 1993.
W. T. Welford, “Calculation of the Seidel aberrations,” in Aberrations of Optical
Systems, Bristol: Hilger, 1986, Chapter 8, pp. 148-152.
W. T. Welford, “Thin lens aberrations,” in Aberrations of Optical Systems, Bristol:
Hilger, 1986, Chapter 12, pp. 226-236.
J. C. Wyant, “Precision Optical Testing,” Science, vol. 206, pp. 168-172, 1979.
E. Zrenner, A. Stett, S. Weiss, R. B. Aramant, E. Guenther, K. Kohler, K.-D.
Miliczek, M. J. Seiler, and H. Haemmerle, “Can subretinal microphotodiodes
successfully replace degenerated photoreceptors?,” Vision Research, vol. 39, no. 15,
pp. 2555-2567, 1999.
Covi Technologies, Inc., “EVQ-1000 Application Note: Lens applications,” 2004.
Available online: http://www.covitechnologies.com/user_files/pdf/
lens_applications.pdf, as of November, 2006.
“CIE 1931 standard colorimetric observer,” Commission Internationale de
l’Eclairage (CIE) (a.k.a. International Commission on Illumination), 1931.
Available online: http://www.cie.co.at/index_ie.html.
International Congress of Ophthalmology, “Visual standards: Aspects and ranges of
vision loss with emphasis on population surveys,” report prepared for the 29
th
International Congress of Ophthalmology, Sydney, Australia, 2002. Available
online: http://www.icoph.org/pdf/visualstandardsreport.pdf.
“Photography-Electronic still-picture cameras-Determination of ISO speed,”
International Organization for Standardization (ISO), 12232, 1998.
“Color correction for image sensors,” Kodak Application Note, MTD-PS-0534-2,
Rev. 2.0, 2003.
Marshall Electronics, “Miniature lens products for CCD/CMOS cameras,” Available
online: http://www.mars-cam.com/optical.html, as of October, 2007.
“Vision problems in the U.S. - Prevalence of adult vision impairment and age-related
eye diseases in America,” National Eye Institute, 2002.
Omnivision Technology, “Omnivision OV6650/OV6151 CIF CMOS Sensor
Datasheet,” 2007. Available online: www.ovt.com.
311
“Defining Tolerances,” in Code V® 9.70 Reference Manual, Optical Research
Associates, Pasadena, CA, 2006, Chapter 8.
“Entering System Data: Specifying Vignetting,” in Code V® 9.70 Reference Manual,
Optical Research Associates, Pasadena, CA, 2006, Chapter 3.
“Geometrical and diffraction analysis,” in Code V® 9.70 Reference Manual, Optical
Research Associates, Pasadena, CA, 2006, Chapter 19.
“LUM - Illumination analysis,” in Code V® 9.70 Reference Manual, Optical
Research Associates, Pasadena, CA, 2006, Chapter 23.
“Tolerancing,” in Code V® 9.70 Reference Manual, Optical Research Associates,
Pasadena, CA, 2006, Chapter 20.
“Entering surface shape and position,” in Code V® 9.70 Reference Manual, Optical
Research Associates, Pasadena, CA, 2006, Chapter 4.
Second Sight Medical Products, “Ending the journey through darkness: Innovative
technology offers new hope for treating blindness due to retinitis pigmentosa,” Press
Release, January 2007. Available online: http://www.2-sight.com/
Argus_II_IDE_pr.htm.
Second Sight Medical Products, “Second Sight completes U.S. phase I enrollment
and commences European clinical trial for the Argus II retinal implant,” Press
Release, February 2008. Available online: http://www.2-sight.com/press-release2-
15-final.html.
STMicroelectronics, “Data Brief: VS6525 VGA single-chip camera module,”
January 2007. Available online: http://www.st.com/stonline/products/
literature/bd/12918/vs6525.pdf, as of October, 2007.
“Pushing the polymer envelope,” Syntec Technologies white paper, 2005. Available
online: www.syntectechnologies.com.
MIL-PRF-13830B, "General specification governing the manufacture, assembly, and
inspection of optical components for fire control instruments," U.S. Military
Specifications and Standards, Jan. 1997. Available online: http://assist.daps.dla.mil/
quicksearch.
United States Social Security Administration, “Title XVI - Supplemental security
income for the aged, blind, and disabled,” section 1614: “Meaning of terms: aged,
blind, or disabled individual,” in Compilation of the Social Security Laws Volume 1,
Act 1614, as amended through Jan. 2007.
Valley Design Corporation, Santa Cruz, CA. Available online:
http://www.valleydesign.com, as of Nov. 2007.
312
“FDA Summary of Safety and Effectiveness Data for the Implantable Miniature
Telescope (IMT),” VisionCare Ophthalmic Technologies, Inc., PMA P050034, 2006.
Abstract
The focus of this thesis is the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and that is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearance.

The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements, together with relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity.

Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.
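For context, the nearly-infinite-depth-of-field claim can be made concrete with the standard hyperfocal-distance relation from textbook geometrical optics (this is background material, not a result of the thesis, and the numerical values below are illustrative assumptions rather than the IOC's actual design parameters):

```latex
% Hyperfocal distance H for focal length f, f-number N,
% and acceptable circle of confusion c:
H = \frac{f^{2}}{N\,c} + f
% Objects from H/2 to infinity are then acceptably in focus.
```

As an illustrative example, a short focal-length lens with f = 2 mm at N = 1 and a circle of confusion of c = 20 µm (a blur tolerance comparable to the pixel pitch implied by a low-pixellation stimulator array) gives H ≈ 0.2 m, so everything from roughly 0.1 m to infinity is rendered acceptably sharp with no focusing mechanism at all.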