ANALYTICAL TOOLS FOR COMPLEX MULTIDIMENSIONAL BIOLOGICAL IMAGING DATA
by
Eun Sang (Daniel) Koo
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(BIOMEDICAL ENGINEERING)
December 2022
Copyright 2022 Eun Sang (Daniel) Koo
Acknowledgements
My PhD adventure has taken me many years to complete, but I believe that time has truly been
well spent. This expedition has been an extensive accumulation of both setbacks and
accomplishments, all of which have added to my knowledge and skills. The most valuable aspect
of my PhD, however, has been the many long-lasting connections I have made along the way.
There have been a great many people, including mentors, colleagues, and friends, who have made
this journey possible, and I hope this short message conveys even a fraction of my immense
gratitude to each and every one of them.
I want to begin by thanking Dr. Scott Fraser, my PhD advisor, for affording me the opportunity
to join the Fraser Lab and providing such an outstanding and enjoyable environment to learn and
mature as a researcher, a scientist, and, most importantly, an individual. I very much appreciate
the freedom he granted me to pursue the litany of research topics and projects that I was
interested in while steering me away from any major pitfalls and preventing me from diving too
deep into the holes that I would dig for myself. I know that without his professional insight and
constructive criticism into my doctoral research, I would not have been able to become the
scientific investigator that I am today.
I would also like to thank Dr. Le Trinh and Dr. Francesco Cutrale, my PhD mentors, for all their
training, guidance, and support over these past many years, throughout all of the projects that
we have worked on. Le was the one who first took a chance on me, bringing me in to work with
her in the Fraser Lab and further pushing me to commence my PhD journey. She introduced me
to the intricate and fascinating world of embryonic cardiac development and taught me how to
be a critical thinker, while also helping me to develop my skills in molecular and developmental
biology. Without her, I would never have taken that first step, and I am infinitely grateful to her
for that. Francesco and I almost spontaneously started to work together because of our shared
interest in image processing, computer programming, and computational analytical methods.
Through his direction, I have considerably increased my ability to comprehend and utilize
computational techniques in analyzing and handling microscopy data. His innovative ideas,
straightforward advice, and great humor have expanded my thoughts on how to become a better
scientist, especially in applying image processing for translational science. I believe my future is
brighter because of Le and Francesco, and I will be forever thankful to them for their lengthy
care, effusive encouragement, and steadfast mentorship.
During my PhD, I have had the privilege of collaborating with my fellow doctoral colleagues
and lab members, Dr. Wen Shi and Dr. Rose Chiang, on several projects. I would like to express
my deep gratitude for all their hard work and effort on our projects during these past several
years and congratulate them on also finishing their dissertations. More than our collaborations,
though, I am extremely glad to have built long-lasting and meaningful friendships with them.
They have greatly enriched my life through their bright enthusiasm, heart-warming kindness, and
clear wit. These past many years have definitely been made all the better by having them be a
large part of it, and I am joyfully looking forward to the paths we all will tread, individually and
together, in the future.
There have been many people in the Fraser Lab who have helped me as I persevered
throughout my PhD. This includes Dr. Cosimo Arnesano, Dr. Jason Junge, and Dr. Simon Restrepo,
whose insights, discussions, and technical expertise have aided me in resolving numerous issues,
expanding my knowledge, and opening new avenues of thought within my research. I am also
grateful to the rest of my fellow Fraser Lab graduate students, Liana, Valerie, Marilena, Shanshan,
Peter, Peiyu, and Pu, with whom I shared all of our ups and downs.
I would also like to mention and thank several people who influenced my early years of
science and research and led me onto my path to earning this PhD. Dr. Yun Kee, Dr. Marianne
Bronner, Dr. Tatjana Sauka-Spengler, Dr. Tatiana Hochgreb-Haegele, and Dr. Marcos Simoes-
Costa all introduced me to the academic research environment and guided me in shaping my
future goals.
Last but most definitely not least, I would like to thank my entire family, my mother Sun and
father Yong, my aunt Esther and uncle Paul, and my brother Arthur and cousin Paul Jr., for all
their love and continuous support during these many years. Without them, I would never have
been able to endure and arrive at the end of this journey.
I once again thank everyone, and I look forward to the next step of this crazy little adventure
called life with all who have made this journey such a worthwhile one up to now.
Table of Contents
Acknowledgements ....................................................................................................................................... ii
List of Tables ................................................................................................................................................ ix
List of Figures ................................................................................................................................................ x
Abstract ....................................................................................................................................................... xiv
Chapter 1 Introduction .............................................................................................................................. 1
1.1 Holistic imaging is necessary for biological systems research ...................................................... 1
1.1.1 Biological systems are complex and require multidimensional insight ....................................... 1
1.1.2 Fluorescent labels and optical microscopy enable biological imaging ........................................ 2
1.1.3 Laser scanning microscopy extends fluorescence microscopy to 4D .......................................... 6
1.1.4 Hyperspectral imaging provides high resolution 5D imaging ...................................................... 9
1.1.5 Quality of data with respect to multidimensionality ................................................................. 13
1.2 Multifaceted data extraction and analysis tools ......................................................................... 16
1.2.1 Segmentation drives analysis and understanding of biological images .................................... 16
1.2.2 Fluorescence microscopy data are represented as grayscale images ....................................... 18
1.2.3 Increasing complexity with increasing dimensions hinders segmentation ............................. 18
1.3 Virtual and optical segmentation approaches for 5D data extraction ....................................... 20
Chapter 2 4D Bilayer tissue segmentation reveals new spatially synchronized patterns of actin
dynamics during cardiac organogenesis ..................................................................................................... 23
2.1 Summary ..................................................................................................................................... 23
2.2 Introduction ................................................................................................................................ 24
2.2.1 Context for the LPM and heart embryogenesis ......................................................................... 24
2.2.2 Structural imaging of the LPM in the literature has been limited ................................... 25
2.2.3 A fresh perspective on the dynamics of the developing LPM ................................................... 25
2.3 Results ......................................................................................................................................... 26
2.3.1 Actin organization demarcates developmental features in the LPM during heart tube
morphogenesis .......................................................................................................................... 26
2.3.2 Bilayer nature of the LPM requires a three-dimensional analysis of the dynamics .................. 29
2.3.3 Layer segmentation reveals distinct cell arrangements in dorsal and ventral LPM layers ........ 32
2.3.4 Distinct actin cable orientations emerge between LPM layers across multiple time points . 35
2.3.5 Endodermless mutants reveal that midline fusion does not inherently affect dorsal-lateral
actin cable formation and movement ....................................................................... 38
2.3.6 Defects in left-right asymmetry disrupt the shape and movement of supercellular actin
cables in LPM layers ................................................................................................................... 42
2.4 Discussion .................................................................................................................................... 47
2.4.1 Complex multidimensional LPM topography necessitates a custom segmentation workflow . 47
2.4.2 Hypothetical model of the physical mechanics of heart tube formation .................................. 48
2.5 Methods ...................................................................................................................................... 51
2.5.1 Zebrafish lines ............................................................................................................................ 51
2.5.2 Sample preparation ................................................................................................................... 51
2.5.3 Image Acquisition .................................................................................................... 52
2.5.4 3D Image Registration ................................................................................................................ 52
2.5.5 Actin Cable Marking and Visualization ....................................................................................... 53
2.5.6 Layer Segmentation ................................................................................................................... 53
2.5.7 Quantitative Measures .............................................................................................................. 53
2.6 Supplementary Material ............................................................................................................. 55
2.6.1 Supplementary Figures .............................................................................................................. 55
2.6.2 Supplementary Notes ................................................................................................................ 58
2.7 Chapter Conclusion ..................................................................................................................... 61
Chapter 3 Scalable, real-time, multidimensional visualization with SEER ............................................... 63
3.1 Author Contribution .................................................................................................................... 63
3.2 Chapter Summary ....................................................................................................................... 64
3.3 Introduction ................................................................................................................................ 64
3.4 Results ......................................................................................................................................... 68
3.4.1 Spectrally Encoded Enhanced Representations (SEER) ............................................................. 68
3.4.2 Standard reference maps ........................................................................................................... 71
3.4.3 Tensor map ................................................................................................................................ 75
3.4.4 Modes (scale and morph) .......................................................................................................... 75
3.4.5 Color maps enhance different spectral gradients...................................................................... 78
3.4.6 SEER improvements provide clear visualization of intrinsic signals .......................................... 83
3.4.7 Spectral differences visualized in combinatorial approaches .................................................... 87
3.5 Discussion .................................................................................................................................... 88
3.6 Methods ...................................................................................................................................... 92
3.6.1 Simulated hyperspectral test chart ............................................................................................ 92
3.6.2 Standard RGB visualizations ....................................................................................................... 93
3.6.3 Spectral separation accuracy calculation ................................................................................... 95
3.6.4 Compressive spectral algorithm and map reference design: phasor calculations .................... 96
3.6.5 Standard map reference ............................................................................................................ 96
3.6.6 Tensor map ................................................................................................................................ 97
3.6.7 Scale mode ................................................................................................................................. 98
3.6.8 Morph mode ............................................................................................................................ 100
3.6.9 Color image quality calculations: colorfulness ......................................................................... 102
3.6.10 Sharpness ............................................................................................................................... 102
3.6.11 Contrast .................................................................................................................................. 103
3.6.12 Color Quality Enhancement ................................................................................................... 103
3.6.13 Mouse lines ............................................................................................................................ 104
3.6.14 Zebrafish lines ........................................................................................................................ 104
3.6.15 Plasmid constructions ............................................................................................................ 105
3.6.16 Microinjection and screening of transgenic zebrafish lines .................................................. 106
3.6.17 Sample preparation and multispectral image acquisition and instrumentation ................... 106
3.6.18 Non-de-scanned (NDD) multiphoton fluorescence lifetime imaging (FLIM) and analysis. ... 107
3.7 Supplementary Material ........................................................................................................... 109
3.7.1 Supplementary Tables ............................................................................................................. 109
3.7.2 Supplementary Figures ............................................................................................................ 113
3.7.3 Supplementary Notes .............................................................................................................. 141
3.8 Chapter Conclusion ................................................................................................................... 145
Chapter 4 Quantitative data extraction through Hybrid Unmixing (HyU) ............................................. 146
4.1 Author Contribution .................................................................................................................. 146
4.2 Chapter Summary ..................................................................................................................... 146
4.3 Introduction .............................................................................................................................. 147
4.4 Results ....................................................................................................................................... 150
4.4.1 Architecture of HyU in comparison to traditional linear unmixing ......................................... 150
4.4.2 Quantitative assessment of HyU compared to LU ................................................................... 152
4.4.3 Increased accuracy and sensitivity ........................................................................................... 159
4.4.4 Increased sensitivity further extends analysis to intrinsic signals ........................................... 161
4.5 Discussion .................................................................................................................................. 166
4.5.1 Advantages of HyU over LU ..................................................................................................... 166
4.5.2 Effects of applying the phasor encoder for HyU ...................................................................... 168
4.6 Methods .................................................................................................................................... 169
4.6.1 Zebrafish lines .......................................................................................................................... 169
4.6.2 Sample preparation ................................................................................................................. 171
4.6.3 Image acquisition ..................................................................................................................... 171
4.6.4 Hyperspectral Fluorescence Image Simulation ........................................................................ 172
4.6.5 Independent Spectral Signatures ............................................................................................. 172
4.6.6 Phasor analysis ......................................................................................................................... 173
4.6.7 Linear Unmixing ....................................................................................................................... 174
4.6.8 Hybrid Unmixing - Linear Unmixing ......................................................................................... 175
4.6.9 HyU Algorithm .......................................................................................................................... 176
4.6.10 Other unmixing algorithms .................................................................................................... 177
4.6.11 Data visualization ................................................................................................................... 177
4.6.12 Box Plot Generation ............................................................................................................... 177
4.6.13 Timelapse registration ........................................................................................................... 178
4.6.14 Timelapse statistics ................................................................................................................ 178
4.6.15 Quantification with Mean Square Error ................................................................................. 178
4.6.16 Residuals ................................................................................................................................ 179
4.6.17 Image Contrast ....................................................................................................................... 181
4.7 Supplementary Material ........................................................................................................... 182
4.7.1 Supplementary Tables ............................................................................................................. 182
4.7.2 Supplementary Figures ............................................................................................................ 183
4.7.3 Supplementary Notes .............................................................................................................. 196
4.8 Chapter Conclusion ................................................................................................................... 198
Chapter 5 Fluorescence Microscopy Data Simulation Framework........................................................ 200
5.1 Author Contribution .................................................................................................................. 200
5.2 Chapter Summary ..................................................................................................................... 200
5.3 Introduction .............................................................................................................................. 201
5.3.1 Importance of source data for developing image processing algorithms ............................... 201
5.3.2 The challenges of source data collection in fluorescent microscopy ...................................... 202
5.3.3 A new framework to facilitate extraction and analysis ........................................................... 203
5.4 Results ....................................................................................................................................... 203
5.4.1 Generation of simulated fluorescence spectra ........................................................................ 203
5.4.2 Quantitative comparison of experimental and simulated spectral signals ............................. 207
5.4.3 Comparison of experimental and simulated biologically relevant images .............................. 209
5.5 Discussion .................................................................................................................................. 212
5.6 Methods .................................................................................................................................... 215
5.6.1 Fluorescence spectrum and hyperspectral dataset simulation ............................................... 215
5.6.2 Conversion rate measurement ................................................................................................ 220
5.6.3 Reference spectra collection.................................................................................................... 220
5.6.4 Simulation framework accessibility ......................................................................................... 221
5.7 Supplementary Material ........................................................................................................... 221
5.7.1 Supplementary Figures ............................................................................................................ 221
5.7.2 Supplementary Notes .............................................................................................................. 225
5.8 Chapter Conclusion ................................................................................................................... 228
Chapter 6 Thesis Conclusion and Past and Future Perspectives ........................................................... 230
References ................................................................................................................................................ 234
List of Tables
Supplementary Table 3.1: Parameters for in vivo imaging. All data points are 16-bit depth, acquired
using an LD C-Apochromat 40x/1.1 W lens. ......................................... 110
Supplementary Table 3.2: Processing time comparison SEER vs Independent Component Analysis
(scikit-learn implementation) for Figures 3.4-3.7. ......................................... 111
Supplementary Table 3.3: Average colorfulness, contrast and sharpness score across Figures 3.4-3.7
for different visualization methods ................................................................ 111
Supplementary Table 3.4: Color Quality Enhancement score for datasets in Figures 3.4-3.7.
Parameter calculations are reported in the Methods section. ........................... 112
Supplementary Table 3.5: Primer list for plasmid constructions ............................................................. 112
Supplementary Table 4.1: Imaging parameters for all datasets ............................................................... 183
List of Figures
Figure 1.1. Overview of common fluorophore excitation and emission spectra ......................................... 4
Figure 1.2. Fluorescence imaging reveals a specific target of interest ......................................................... 5
Figure 1.3. Single photon excitation volumes are much greater than for two-photon. ............................... 8
Figure 1.4. Simultaneous signal acquisition of multiple fluorophores results in spectral overlap ............. 12
Figure 1.5. Hyperspectral acquisition records the wavelength dimension ................................................ 13
Figure 2.1. Actin dynamics reveal key morphological features during embryonic heart tube formation . 28
Figure 2.2. Bilayer characteristics of the LPM present challenges in visualization using the maximum
intensity Z-projection ................................................................................................................ 31
Figure 2.3. Layer segmentation workflow reveals distinct actin patterns for each layer of the LPM ........ 34
Figure 2.4. Actin boundary formations denote differential morphogenetic forces between dorsal and
ventral layers of LPM. ............................................................................................................... 38
Figure 2.5. Shape changes and movement of supracellular actin cables in the endodermless mutant
bilayers are unaffected by the lack of midline fusion ............................................................... 42
Figure 2.6. Disruption to left-right signaling in southpaw right morphants appears to cause
disorganization of actin cable formation and movements with respect to the left-right axis . 46
Supplementary Figure 2.1. Actin cables form across multiple cells during midline fusion ........................ 55
Supplementary Figure 2.2. 3D LPM imaging data are composed of small amounts of LPM signal on 2D
virtual slices. .................................................... 56
Supplementary Figure 2.3. Layer segmented timepoints of the LPM reveal cellular organization and
signal distribution differences between the dorsal and ventral layers over
time ................................................................................................................. 57
Supplementary Figure 2.4. Schematic detailing the creation of the automatic segmentation surface ..... 58
Figure 3.1. Spectrally Encoded Enhanced Representations (SEER) conceptual representation. ............... 71
Figure 3.2. Spectrally Encoded Enhanced Representation (SEER) map designs. ........................................ 74
Figure 3.3. Enhanced contrast modalities. ................................................................................................. 78
Figure 3.4. Autofluorescence visualization comparison for unlabeled freshly isolated mouse tracheal
explant. ..................................................................................................................................... 81
Figure 3.5. Visualization of a single fluorescence label against multiple autofluorescences. .................... 82
Figure 3.6. Triple label fluorescence visualization. ..................................................................................... 86
Figure 3.7. Visualization of combinatorial expression on Zebrabow samples. ....................................... 88
Supplementary Figure 3.1. Computational time comparison of SEER and ICA for different file sizes. .... 113
Supplementary Figure 3.2. Comparison of SEER with visualized HySP results. ..................................... 114
Supplementary Figure 3.3. Simulated Hyperspectral Test Chart I rendered in TrueColor shows nearly
indistinguishable spectra. ............................................................................. 115
Supplementary Figure 3.4. Effect of spectral shape with constant intensities on radial map in absence of
background. .................................................................................................. 116
Supplementary Figure 3.5. Effect of spectrum intensity in presence of background on radial map. ...... 117
Supplementary Figure 3.6. Radial and Angular reference map designs and modes differentiate nearly
indistinguishable spectra (Simulated Hyperspectral Test Chart I)
(Supplementary Figure 3.3). ......................................................................... 118
Supplementary Figure 3.7. Gradient Ascent and Descent reference map designs and modes differentiate
nearly indistinguishable spectra (Supplementary Figure 3.3). ..................... 119
Supplementary Figure 3.8. Simulated Hyperspectral Test Chart II and its standard overlapping
spectra. .......................................................................................................... 120
Supplementary Figure 3.9. Radial and Angular reference map designs and modes rendering standard
overlapping spectra (Simulated Hyperspectral Test Chart II)
(Supplementary Figure 3.8). ......................................... 121
Supplementary Figure 3.10. Gradient ascent and descent reference map designs and modes
differentiation of standard overlapping spectra (Simulated
Hyperspectral Test Chart II). ........................................................................ 122
Supplementary Figure 3.11. Spectral denoising effect on Angular and Radial maps visualization of
standard overlapping spectra (Simulated Hyperspectral Test Chart II,
Supplementary Figure 3.8). ......................................................................... 123
Supplementary Figure 3.12. Spectral denoising effect on Gradient Ascent and Descent maps
visualization of standard overlapping spectra (Simulated Hyperspectral
Test Chart II, Supplementary Figure 3.8)..................................................... 124
Supplementary Figure 3.13. Visualization comparison for autofluorescence with other RGB standard
visualizations. .............................................................................................. 125
Supplementary Figure 3.14. Phasor Fluorescence Lifetime Imaging Microscopy (FLIM) of unlabeled
freshly isolated mouse tracheal explant. .................................................... 126
Supplementary Figure 3.15. Gray scale visualization of a single fluorescence label against multiple
autofluorescences. ...................................................................................... 127
Supplementary Figure 3.16. Visualization comparison for single fluorescent label with other RGB
standard visualizations in presence of autofluorescence. .......................... 128
Supplementary Figure 3.17. Visualization comparison for triple label fluorescence with other RGB
standard approaches. .................................................................................. 129
Supplementary Figure 3.18. SEER of zebrafish volumes in Maximum Intensity Projection (MIP) and
Shadow Projection. ..................................................................................... 130
Supplementary Figure 3.19. Visualization comparison for combinatorial expression with other RGB
standard approaches. .................................................................................. 131
Supplementary Figure 3.20. Processing speed comparison SEER vs Independent Component
Analysis for the datasets of Figures 3.4-3.7. ............................................... 132
Supplementary Figure 3.21. RGB Visualization with multiple modalities under different spectral
overlap and SNR conditions. ....................................................................... 133
Supplementary Figure 3.22. Spectra of extreme conditions in SNR-Overlap simulation. ........................ 134
Supplementary Figure 3.23. Spectral separation accuracy of SEER under different spectral overlap
and SNR conditions. .................................................................................... 135
Supplementary Figure 3.24. Comparison of SEER and ICA spectral image visualization (RGB) under
different spectral overlap and SNR conditions. .......................................... 136
Supplementary Figure 3.25. Quantification of enhancement for Figures 3.4-3.7. .................................. 137
Supplementary Figure 3.26. Visualization of photobleaching with SEER. ................................................ 138
Supplementary Figure 3.27. Morph mode algorithm pictorial abstraction.............................................. 139
Supplementary Figure 3.28. Autofluorescence visualization in volumetric data of unlabeled freshly
isolated mouse tracheal explant. ................................................................ 140
Figure 4.1. Schematic illustrating how Hybrid Unmixing (HyU) enhances analysis of multiplexed
hyperspectral fluorescent signals in vivo. .............................................................................. 152
Figure 4.2: Hybrid Unmixing outperforms standard Linear Unmixing in both synthetic and live
spectral fluorescence imaging. ............................................................................................... 156
Figure 4.3: Hybrid Unmixing enhances unmixing for low-signal in vivo multiplexing and achieves
deeper volumetric imaging. ................................................................................................... 158
Figure 4.4: HyU reveals the dynamics of developing vasculature by enabling multiplexed volumetric
time-lapse. .............................................................................................................................. 160
Figure 4.5: HyU enables identification and unmixing of low photon intrinsic signals in conjunction
with extrinsic signals. .............................................................................................................. 163
Figure 4.6: HyU pushes the upper limits of live multiplexed volumetric timelapse imaging of intrinsic
and extrinsic signals. ............................................................................................................... 165
Supplementary Figure 4.1. HyU unmixing reduces noise and signal bleed-through compared to
traditional bandpass filter imaging. .............................................................. 183
Supplementary Figure 4.2. HyU algorithm outperforms current methods. ............................................. 184
Supplementary Figure 4.3. Comparison of unmixing results for synthetic data at different SNR
demonstrate improved HyU performance. .................................................. 185
Supplementary Figure 4.4. Quantification of HyU vs LU unmixing results for synthetic data highlight
increased HyU performance. ........................................................................ 186
Supplementary Figure 4.5. Residual analysis for synthetic data identifies locations with reduced
algorithm performance. ................................................................................ 187
Supplementary Figure 4.6. Schematic overview of residual calculation. ................................................. 188
Supplementary Figure 4.7. Unmixing of a quadra-transgenic zebrafish with HyU and LU highlights
improvements in contrast and spatial features. ........................................... 189
Supplementary Figure 4.8. Residual analysis of experimental data supports performance
improvement of HyU. ................................................................................... 190
Supplementary Figure 4.9. Application of denoising filters reveals improved results with lower
residuals. ....................................................................................................... 191
Supplementary Figure 4.10. Comparison of residual images for LU and HyU highlights improved HyU
performance. .............................................................................................. 192
Supplementary Figure 4.11. Residual maps facilitate identification of independent spectral
components. ............................................................................................... 193
Supplementary Figure 4.12. HyU analysis of 36 hpf Casper zebrafish demonstrates feasibility of
unmixing only intrinsic signals. ................................................................... 194
Supplementary Figure 4.13. Speed comparison and improvement plots of multiple unmixing
algorithms in their Original form vs HyU encoded. .................................... 195
Supplementary Figure 4.14. Residuals in synthetic data and experimental data. ................................... 196
Figure 5.1: Computational model for spectral detection of fluorescence................................................ 205
Figure 5.2: Simulated spectra can be generated individually or arranged into spatial images ................ 206
Figure 5.3: Simulated spectra are highly similar to experimentally acquired spectra ............................. 208
Figure 5.4: High similarity of mean vs variance plots between experimental and simulated images ..... 209
Figure 5.5: Simulations recapitulate spatial intensity distributions across all channels .......................... 211
Supplementary Figure 5.1. Stochastic emission of fluorophore results in randomized outputs ............. 221
Supplementary Figure 5.2. Example simulation of multiple fluorophores in a single acquisition ........... 222
Supplementary Figure 5.3. Simulations enable objective comparisons using the MSE ........................... 223
Supplementary Figure 5.4. Heatmap of HyU improvement across a multitude of parameters .............. 224
Supplementary Figure 5.5. Schematic for integrating the effects of the PSF to the simulation .............. 225
Abstract
The ability to generate data for biological research has accelerated greatly in the past decade,
especially in the field of optical microscopy, providing a wealth of information about biological
structures and processes which have been instrumental to advances in the field of biology. This
plethora of data, expanded in both quantity and detail, shifts the challenge to analysis and
information extraction. This is especially true for live biological imaging where the increase from
two dimension (x,y) to five dimensions (x,y,z,c (channel),t (time)) greatly increases the complexity
of analysis. Custom analytical tools and workflows are required to effectively analyze these large,
and complex multidimensional data sets from live biological imaging. In this dissertation, I
provide specific workflows and techniques for improving investigation and extraction of data for
two complementary areas of image processing:
(1) dynamic, structurally guided segmentation
(2) scalable multiplexing
For dynamic segmentation, I have determined effective routines to virtually dissect visually
homogenous biological populations within complex non-orthogonal biological structures. For
scalable multiplexing, I aim to increase the multiplexing capabilities of multidimensional
fluorescence microscopy by developing state-of-the-art computational tools to better visualize
and untangle hyperspectral imaging data. The projects developed for these two areas of image
processing provide real-world tools and applications which build on and reinforce one another in
order to overcome the challenges and pitfalls that occur when interacting with experimental
multidimensional biological data, positioning us to better understand the complex nature of
biology.
Chapter 1
Introduction
1.1 Holistic imaging is necessary for biological systems research
1.1.1 Biological systems are complex and require multidimensional insight
The field of modern biological sciences has been an ever-growing investigation into the expanding number of discovered multi-scale interactions across a wide variety of biological
systems. Early research in the life sciences was mainly reductionist in approach, with
scientists breaking down complex systems into singular parts, which are explored in isolation [1–4].
This reductionist approach has provided a great deal of insight into many biological systems and
even now is still used to make useful fundamental discoveries such as the functional
characterization of a newly identified, single gene within a model system [5]. Nevertheless, it is
true that whole system biological processes are composed of many shifting, interconnected
components, which cannot be fully understood by only using reductionist approaches. One
representative example is embryonic development, where a single cell multiplies and transforms into
a complex multi-tissue organism. In embryonic development, there are a large number of
multi-level components and interactions which are deeply interconnected. It would be highly
difficult to understand the entire symphony of biological processes by only studying isolated
portions.
In order to understand these highly intricate biological systems, it is necessary to first capture
as much information as possible about all the parts within a system. The most common method
to collect the requisite multicomponent, spatiotemporal information is through direct optical
illumination and observation. However, for biological processes such as embryonic
development, observation and information collection has several complex considerations
because of the microscopic scale of the processes and the live nature of the observed subject.
Since these biological processes are very small, advanced microscopy techniques and hardware
are necessary to yield images with enough detail from tissue to cell to subcellular scales in order
to describe the relevant processes. Furthermore, these techniques must also account for the
multidimensionality of the various components within the biological systems which influence and
interact with one another across all three dimensions of space as well as time. Thankfully,
advancements over the past several decades have enabled the creation of tools needed to
acquire the essential multidimensional information within the described biological context.
1.1.2 Fluorescent labels and optical microscopy enable biological imaging
Fluorescence microscopy has been one of the most widely used techniques to collect
information within live biological samples due to its ability to acquire highly specific and sensitive
multidimensional images of targeted labels in a complex environment. Fluorescence is the
phenomenon whereby a particular type of molecule, called a fluorophore, absorbs photons within a specified, shorter range of
wavelengths and releases photons at a defined, longer range of wavelengths [6].
Fluorophores with different types of molecular structures each emit
specific wavelength ranges of light, which allow them to be used as labels. Each fluorescent dye
has a distinct excitation (absorption) and emission profile with differing levels of excitation
efficiency at distinct excitation wavelengths (Figure 1.1). The emission profile, also referred to
as the fluorescence spectrum, is a probability density function (PDF) which denotes the chances of
photons being emitted with a specific wavelength. Therefore, when fluorophores are excited by
a laser source with a specific wavelength, the excited fluorophores will stochastically emit
photons within the range of wavelengths denoted by the emission profile. The signal of the
emitted photons can then be recorded as an intensity value by separating out the emitted
photons with an optical filter for the desired range of wavelengths before the photons are finally
captured on a detector or a single camera pixel. The recorded signal corresponds to the
fluorophores’ relative concentration and spatial location [6,7].
For biological studies, molecular
components within either individual cells or throughout an entire model organism can be
specifically targeted by attaching distinct fluorophore groups to specified protein populations
related to the desired process being investigated [8,9].
By exciting and capturing the signal from the
fluorophore, the attached protein’s spatial location is also recorded. Therefore, fluorescence
images denote the spatial distributions and relative concentrations of specific targets of interest
within the desired model system. For example, an experiment may be carried out by imaging a
sample where a commercial fluorescent dye, such as LysoTracker Red, has been applied in order
to stain the cellular components within the sample (Figure 1.2). When comparing the brightfield
image to the fluorescence image, we can see that the fluorescence image provides a clearer,
more representative view of the acidic organelles. Fluorescence microscopy provides a clear
method of specifically distinguishing and capturing biological processes and components within
a complex environment.
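Since the emission profile acts as a probability density function, the stochastic photon emission described above can be sketched with a short simulation. The Gaussian spectrum, its 520 nm peak, and its 25 nm spread are illustrative assumptions for a hypothetical fluorophore, not values for any real dye:

```python
import numpy as np

# Hypothetical fluorophore: Gaussian-shaped emission spectrum peaking at
# 520 nm with a 25 nm spread (illustrative; real spectra are asymmetric).
rng = np.random.default_rng(seed=0)
wavelengths = np.arange(400, 701)                        # nm grid
pdf = np.exp(-0.5 * ((wavelengths - 520.0) / 25.0) ** 2)
pdf /= pdf.sum()                                         # discrete PDF

# Excited fluorophores stochastically emit photons whose wavelengths
# follow the emission PDF.
photon_wavelengths = rng.choice(wavelengths, size=10_000, p=pdf)
print(photon_wavelengths.mean())  # clusters near the 520 nm emission peak
```

Collecting these photons through a bandpass filter then amounts to counting the draws that fall inside the filter's wavelength range.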
Figure 1.1. Overview of common fluorophore excitation and emission spectra
This chart provides a quick overview of a very large number of unique fluorophores, which exist for use in
biological imaging, with a wide variety of excitation and emission spectral profiles. Each row represents
a single fluorophore with a given excitation spectrum (dashed lines) and a characteristic emission
spectrum (solid line with shaded area). The fluorophores are arranged in descending order by peak
emission wavelength and all together span the 300 nm to 850 nm range. The excitation profile provides
a measure of the efficiency of the fluorophore to output photons at the given excitation laser wavelength
at constant power. The emission profile is a probability density function for the wavelengths of the
photons emitted by the fluorophore. (Image source: modified from Abcam [10])
Figure 1.2. Fluorescence imaging reveals a specific target of interest
Fluorescence imaging enables targeted visualization of desired components within a biological sample.
This high specificity in imaging is performed by associating the molecule that emits the fluorescence
(fluorophore) with the spatial location of the target of interest. (Left) In this example of fluorescence
imaging, LysoTracker Red, a commercially available fluorescent dye, stains acidic organelles within cells,
enabling the highly specific imaging of those organelles due to the spatial colocalization of the fluorescent
dye with the organelles. (Right) In contrast, brightfield imaging, which captures all transmitted light, only
captures contrast differences within the image (shadows/refraction/reflection), preventing the capture
of specific cellular structures and components. The outlined region containing fluorescence signal (gray
dotted lines), encircles the same spatial location in both types of imaging. The clearly defined cells of the
outlined region in the LysoTracker Red image (left) are nearly invisible in the brightfield image (right).
Scale bar: 10 µm. (Image source: modified from Ghashghaei, O. et al. [11])
Standard fluorescence microscopy is performed using a widefield microscope with an
unfocused source of excitation light which illuminates the entire sample. By utilizing specific
excitation and emission filters, a widefield fluorescence microscope can acquire single snapshot
images of the specified fluorophore tagged targets within samples for the entire field of view.
However, this type of fluorescence microscopy provides a single overall 2D (x,y) snapshot of any
three-dimensional sample. This means that the signals at any depth (z) within the 3D volume of
the sample would be acquired simultaneously, leading to an obfuscation of the visual details and
an overall low-quality image with degraded features [7]. With the many rapid theoretical and
material developments within the field of photonics, a myriad of techniques including single- and
multi-photon laser scanning microscopy [12–14], multi- or hyper-spectral microscopy [15–17], spinning
disk microscopy [18,19], light-sheet microscopy [20], light-field microscopy [21], and many more have
been implemented to extend the imaging capabilities of a fluorescence microscope to volumetric
data stacks and multiple volumetric stacks over time. In this thesis, we utilize both single- and
multi-photon fluorescence laser scanning microscopy along with hyperspectral detection as
needed for image acquisition.
1.1.3 Laser scanning microscopy extends fluorescence microscopy to 4D
Laser scanning microscopy (LSM) has both greatly increased the amount of resolvable detail
collected along the two standard dimensions, x and y, and further allowed the additional
selective acquisition of signal along the third dimension, z, as compared to widefield microscopy.
In single-photon (1P) LSM, a laser source, focused to a point and tuned to an exact amount of
energy per photon, is used to excite a point location within the 3D volume of the given sample.
Since the excitation light needs to pass through the sample in order to focus onto a single point
within the sample, a relatively large volume of the sample in the form of a narrow 3D cone is
excited in addition to the desired point location (Figure 1.3 left). By utilizing a confocal pinhole
placed in front of the detector, the microscope can block the out of focus light within the sample
excited by the 3D cone, thereby allowing for the selective collection of light from an exact point
in 3D space. Then, by scanning the laser along the x- and y-axis and moving either the focal point
of the laser or the sample along the z-axis, it is possible to obtain an image stack, a sequence of
highly resolved 2D optical sections of the sample where each optical section represents a slice
taken along the z axis (z-plane) [7].
Building on the photo-physics of 1P microscopy, two-photon
(2P) microscopy provides some improvements for multidimensional imaging. In contrast to 1P
microscopy, two-photon microscopy utilizes the requirement for two longer wavelength (lower
energy) photons to excite a specific fluorophore within 1 femtosecond of each other [6]. This action
of two photons simultaneously arriving at a single point will mainly occur at the focal point of the
laser, decreasing the probability that any photons outside of the focal point will cause excitation.
Since the only emission light will be coming from the focal point, there is no longer any need for
a pinhole to reject out-of-focus light (Figure 1.3 right). Since multiple lower energy photons are
needed in order to excite a fluorophore, the amount of energy applied (energy load) throughout
the entire sample is decreased compared to the point of excitation. This has two major benefits.
First, since Rayleigh scattering is proportional to 1/λ⁴, where λ is the wavelength of the incident
light, using two-photon excitation, which has much longer wavelengths, results in much weaker
scattering effects [6,13]. This improves penetration of the excitation laser, enabling deeper
imaging. The second benefit is the prevention of high energy effects (photo-bleaching, photo-
toxicity/damage discussed in Section 1.1.5) on the sample. With either 1P or 2P LSM,
multidimensional fluorescence imaging of biological samples can be performed at high resolution
along the three spatial dimensions (x,y,z) within a time scale of seconds to minutes. This imaging
by LSM can then repeatedly scan the same 3D volume with a time resolution on the order of
seconds to minutes in order to more precisely capture changes within a biological sample.
Therefore, LSM can perform high resolution four-dimensional (x,y,z,t) imaging in live biological
samples.
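The 1/λ⁴ dependence of Rayleigh scattering mentioned earlier in this section can be checked with one line of arithmetic; the 488 nm / 976 nm pair below is an arbitrary illustrative choice for a 1P versus 2P excitation wavelength:

```python
def rayleigh_relative(wavelength_nm: float) -> float:
    """Relative Rayleigh scattering intensity, proportional to 1/lambda^4."""
    return 1.0 / wavelength_nm ** 4

# Doubling the excitation wavelength (e.g., 488 nm 1P vs 976 nm 2P)
# reduces scattering by a factor of 2^4 = 16.
ratio = rayleigh_relative(488.0) / rayleigh_relative(976.0)
print(ratio)  # 16.0
```

This sixteen-fold reduction in scattering is a large part of why two-photon excitation penetrates deeper into tissue.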
Figure 1.3: Single photon excitation volumes are much greater than for two-photon.
A comparison of excitation volume sizes between single-photon and two-photon excitation demonstrates
the greater suitability of two-photon excitation for long-term, live imaging. A smaller excitation volume
means a lower probability of negatively impacting an imaged sample. (Left) Single photon excitation of a
point destination within a sample requires the creation of an excitation volume using the full light path of
the laser (cyan) which results in the excitation of a relatively large volume within the sample (left green)
surrounding the desired point destination. (Right) Two-photon excitation of a point destination within a
sample also requires the creation of an excitation using the full light path of the laser (red), but due to the
added requirement of two photons for excitation, the resulting excitation volume within the sample (right
green) is much smaller. (source: Sigma Aldrich [22])
1.1.4 Hyperspectral imaging provides high resolution 5D imaging
While laser scanning microscopy enables high resolution imaging in up to four dimensions
(x,y,z,t), imaging multiple fluorophores makes it possible to further extend this capability in
order to understand how multiple parts interact with each other within that four-dimensional
context. Standard image acquisition of multiple fluorophore populations currently consists of
imaging a limited number of fluorophore types, usually 3 or 4. There are two typical methods for
imaging multiple fluorophore types within a single sample. The first method is to individually
excite each type of fluorophore and acquire the signals in sequence. This involves choosing the
best laser wavelength for exciting one type of fluorophore, exciting that fluorophore population
and acquiring its corresponding image or set of images, then repeating those steps for the
remaining fluorophore populations. This cycle of fluorophore excitation may be performed after
the acquisition of either a single 2D image or a 3D volume, depending on the specific microscopy
system. The second method is the simultaneous excitation and signal collection of fluorophore
populations which are spectrally well separated with the use of proper optical filters for each
fluorophore type. Since one optical filter allows for the collection of one well defined range of
wavelengths, multiple filters, each centered around the peak emission wavelengths of each
fluorophore population, may be used to perform the simultaneous signal collection [7]. With the
collection of multiple fluorophore populations within a sample, the dimensionality of the imaging
dataset is then expanded to a fifth dimension (channel).
While standard multi-fluorophore imaging provides a new channel dimension, there are
several limitations and problems that arise with this standard channel imaging. For sequential
signal acquisition, one issue is the amount of time it takes to complete the full collection. Let’s
say that there are n fluorophore types within a sample. If each fluorophore population must be
excited and imaged in sequence, the amount of time it takes to collect the full channel image or
image stack would be n multiplied by the time it takes for a single acquisition. This would hinder
the monitoring of colocalized signals within live samples. In addition, sequential excitation would
multiply the energy load (section 1.1.3) by n, leading to highly deleterious effects on both the
sample and the signal as explained in section 1.1.5. For simultaneous signal acquisition, the main
issue that occurs is the signal bleed-through that occurs between multiple fluorophore types. As
explained in section 1.1.2 and shown in Figure 1.1, the emission spectra of fluorophores are
spread across a wide range of wavelengths. Even when choosing fluorophores with the least
amount of overlap, once more than 3 or 4 fluorophores are selected, there is inevitable overlap
between the emission spectra of the fluorophores. The overlap in emission spectra means that
when collecting signal for one fluorophore population, the signal from another fluorophore
population will also be collected, resulting in the previously well-defined spatial locations of the
individual fluorophore populations blending in with one another (Figure 1.4). Because of all of
these disadvantages, standard channel imaging is highly limited in the number of fluorophores
that can be imaged in a complex environment.
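The bleed-through problem just described can be sketched numerically. The two Gaussian emission spectra below are hedged stand-ins for real fluorophores (all peaks, widths, and the filter band are illustrative choices, not measured values):

```python
import numpy as np

wl = np.arange(450, 701)  # wavelength grid in nm

def emission(peak_nm, width_nm):
    """Hypothetical Gaussian emission spectrum normalized to unit area."""
    s = np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)
    return s / s.sum()

fluor_a = emission(520, 20)  # stand-in for a green-emitting label
fluor_b = emission(560, 20)  # stand-in for an orange-emitting label

# Bandpass filter intended to isolate fluorophore B: 550-580 nm.
in_band = (wl >= 550) & (wl <= 580)
signal_b = fluor_b[in_band].sum()  # intended signal from B
bleed_a = fluor_a[in_band].sum()   # unintended bleed-through from A

print(f"fraction of B collected: {signal_b:.2f}")
print(f"fraction of A bleeding through: {bleed_a:.2f}")
```

Even with a filter centered on fluorophore B, a non-trivial fraction of fluorophore A's photons lands in the same band and is misattributed, which is exactly the blending illustrated in Figure 1.4.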
A better method is needed to capture more fluorophores while preserving their individual
information. One microscopy technique to accomplish this is hyperspectral imaging. In standard
fluorescence imaging, the use of an optical filter allows collection of all emitted photons within a
designated range of wavelengths, which are all recorded and outputted as a single intensity
signal. This means that there is no information about the wavelengths of the collected photons.
In contrast, for hyperspectral imaging, the entire emission spectrum of a fluorophore is sampled
along the wavelength dimension. This is done by exciting the sample and utilizing a method to
segregate the emitted photons along the entire output wavelength range (e.g., 410–690 nm) by
small fixed-width wavelength ranges (e.g., 8.9 nm widths). The signal from each wavelength range
is acquired using either a single detector with many scans or simultaneously with multiple
detectors [15,23]. As compared to the limited information acquired from standard optical filters
which collapse the wavelength dimension, acquisition using a hyperspectral detector also
provides the wavelength information, resulting in a well characterized spectral profile for each
spatiotemporal voxel (Figure 1.5). The well characterized spectrum can then be used to deduce
individual spectral components which make up the total emission spectrum through
computational unmixing methods [17]. Hyperspectral imaging opens up a path to imaging a greater
number of fluorophore types, which in turn signifies that a greater number of components within
a system can be simultaneously imaged. Combining hyperspectral imaging with 4D LSM enables
high resolution imaging of biological systems and processes across all five dimensions of space
(x,y,z), time (t), and channel (c). As described in section 1.1.1, understanding biological systems
requires the holistic collection of multidimensional information, and with hyperspectral LSM
(HLSM), such imaging is now achievable.
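A minimal sketch of the computational unmixing step mentioned above can be written as an ordinary least-squares solve against known reference spectra. This is a toy illustration with synthetic endmember spectra, not the specific algorithms developed later in this dissertation:

```python
import numpy as np

channels = np.arange(32)  # 32 spectral detector channels

def endmember(peak, width):
    """Synthetic unit-sum reference spectrum for one fluorophore."""
    s = np.exp(-0.5 * ((channels - peak) / width) ** 2)
    return s / s.sum()

# Reference spectra for two hypothetical fluorophores, stacked as columns.
A = np.column_stack([endmember(10, 3), endmember(16, 3)])

# A mixed voxel containing 70% of fluorophore 1 and 30% of fluorophore 2.
true_abundances = np.array([0.7, 0.3])
mixed_spectrum = A @ true_abundances

# Linear unmixing: solve the least-squares problem for the abundances.
abundances, *_ = np.linalg.lstsq(A, mixed_spectrum, rcond=None)
print(abundances)  # recovers [0.7, 0.3] in this noise-free case
```

With measurement noise the recovered abundances are only approximate, which is one reason practical unmixing pipelines add constraints such as non-negativity and denoising steps.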
Figure 1.4. Simultaneous signal acquisition of multiple fluorophores results in spectral overlap
An example of spectral overlap is provided using the emission spectra of four fluorophores (Citrine, mKO,
tdTomato, mRFP). Spectral overlap occurs as a consequence of multiple fluorophores having emission
spectra within overlapping wavelength ranges and results in obfuscation of the desired signal. If all four
fluorophores are excited at the same time, detection of tdTomato at its maximum emission wavelength
range (between the dotted lines) will include overlap from the three other fluorophores. The arrows point
to the sections of the spectra for Citrine, mKO, and mRFP which overlap with the spectrum of tdTomato
within the detection range and would result in additional, incorrectly attributed tdTomato signal (Source:
modified from Chroma [24]).
Figure 1.5. Hyperspectral acquisition records the wavelength dimension
Hyperspectral acquisition enables acquisition of the spectral profile of a target spectrum. When acquiring
the profile of the emission spectrum of a fluorophore (top) using hyperspectral imaging, an array of fixed-
width wavelength ranges (e.g., a 30 nm width spanning 490–520 nm) across the entire wavelength range is utilized
to capture the emission spectrum (bottom left). Each element of the array (bottom left, space between
dotted lines) collects all of the signal within its wavelength range and is individually captured by a detector,
resulting in an array of values representing the profile of the emission spectra. In contrast, standard
optical filter imaging utilizes a single fixed width bandpass filter to record a single binned signal for the
desired fluorophore and emission spectra (bottom right).
1.1.5 Quality of data with respect to multidimensionality
One important concept that must be introduced when discussing multidimensional LSM data
is the idea of the photon budget and its effects on the quality of the acquired data. While
fluorescence microscopy has shown great utility in the field of biological imaging as previously
stated, one problem that occurs when acquiring these images is the interaction between the
excitation light that enables fluorescence emission and the sample being excited. As
explained in section 1.1.3, excitation light from a laser source must be applied on the sample in
order to accomplish fluorescence emission; this excitation light corresponds to a certain amount
of energy added to the sample which is defined by the power of the laser source. With higher
laser power, more excitation light is applied to the sample, which corresponds to a larger energy
load inserted into the sample. One useful aspect of a larger energy load is the larger amount of
emission photons produced by the excited fluorophores, thereby increasing the acquirable signal.
However, there are two major drawbacks in inserting too much energy into a sample and the
fluorophores within the sample. First, if too much energy is supplied to the fluorophores, the
fluorophores can change their molecular configurations and become unable to emit photons
even when excitation light is applied. This is an effect called photo-bleaching [25,26]. As
fluorophores become photobleached, fewer fluorophores emit photons, leading to decreased
signal and an inability to properly record the spatial locations of the desired populations.
Separate from the effects on the fluorophore, when imaging is being performed on live samples,
an increased amount of excitation light (higher energy) leads to an effect called photo-toxicity,
where excess energy can cause morphological damage or even death to live samples through a
variety of secondary effects [27]. The application of high energy light on a sample can generate
highly reactive photochemicals, increase the heat of the sample to destructive levels, and cause
damage to DNA. Each of these effects negatively impact the live sample, which may result in a
failed experiment or faulty data. These two major drawbacks restrict the total amount of
excitation light which can be applied on the sample when performing live, multidimensional
biological imaging, motivating the use of the term “photon budget”. Let us imagine a single
“bucket” worth of photons (energy) which can be used whenever any kind of imaging is
performed (our photon budget). For a single standard 2D (x,y) fluorescence image, we can use
the entire bucket without causing negative effects on the sample. However, with each added
dimension, 3D (x,y,z or x,y,t), 4D (x,y,z,t or x,y,z,c or x,y,c,t), and 5D (x,y,z,t,c), we must portion
our bucket for each added dimension, leading to less and less energy for each 2D slice. Since
each excitation session (a single 2D slice) must use a fraction of our total photon budget, live,
multidimensional biological imaging is limited to using a low power excitation laser source.
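To make the bucket analogy concrete, consider the following back-of-the-envelope sketch; the function name and acquisition sizes are hypothetical, chosen purely for illustration:

```python
def per_slice_budget(total_budget, n_z=1, n_t=1, n_c=1):
    """Evenly portion a fixed photon budget across every 2D slice of a
    multidimensional acquisition (z-planes x time points x channels)."""
    return total_budget / (n_z * n_t * n_c)

# A single standard 2D (x,y) image can spend the whole bucket:
full = per_slice_budget(1.0)                            # 1.0
# A 5D (x,y,z,t,c) acquisition must split the same bucket:
sliver = per_slice_budget(1.0, n_z=50, n_t=120, n_c=4)  # 1/24000 per slice
```

Each added dimension divides the per-slice share further, which is why the excitation power available for any single 2D slice must drop as dimensionality grows.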
The need for low power excitation light in multidimensional imaging leads to degradation in
the quality of the acquired fluorescence data. As described in section 1.1.2, recording the
fluorescent signal for a single dimension (single point) involves the collection of photons by a
detector, which converts the count to a single intensity value. During this process, there are
three sources of fluorescence microscopy noise which appear and may significantly modify the
recorded fluorescence signal: added-signal noise, statistical noise, and instrumental noise. We
define added-signal noise as the total signal recorded by the detector from any photons that exist
within the specified region of excitation but do not correspond to photons emitted from the
desired fluorophore. One example of this occurs when imaging a biological sample; these
samples always contain additional bio-molecules which can be excited by the incident laser and
produce photons [28]. This results in what is commonly referred to as background
autofluorescence. We define statistical noise as the effect that occurs to the detection of the
signal due to the inherent uncertainty in the resulting wavelength (energy) of an emitted photon.
Since photon emission is a stochastic process defined by the emission profile of a fluorophore
(section 1.1.2), any emitted photon can have a variable wavelength within the range defined by
the profile. This means that unless there are enough photons to properly express the probability
distribution, the fluorescence signals have a chance of being incorrectly recorded [29]. This effect
is expressed as photons being outside of the range designated by the optical filter for standard
imaging or an ill-defined spectral profile for hyperspectral imaging (Chapter 5). The final noise
type is instrumental noise which is composed of the various effects which modify a recorded
signal during the detection and conversion of photons into a digital signal; this includes shot or
Poisson noise, dark current noise, and salt-and-pepper noise [30–33]. Each of these noise
sources is usually managed or ignored as long as there is a large number
of emitted photons from the desired fluorophore and the signal-to-noise ratio (SNR) is high.
Unfortunately, acquiring high SNR data usually requires a relatively high amount of energy on a
sample from the excitation laser source, which, as we've explained previously, is not possible when
performing live multidimensional imaging. This results in the fluorescence microscopy noise
causing the wide variety of signal modifications we have described that affect the quality of the
acquired data.
1.2 Multifaceted data extraction and analysis tools
1.2.1 Segmentation drives analysis and understanding of biological images
While advancements in microscopy have greatly improved the ability to acquire
multidimensional datasets, it is only one side of the equation to understanding complex biological
systems. When understanding a complex system, analysis of the collected data must be
performed; this requires first distilling the data into the key information which accurately
describes the system. For example, let us say we had a video of cars traveling on a section of
freeway and were investigating how driving behaviors affect the flow of traffic. Since we are
exploring the relationship of cars to traffic flow, we would first need to accurately extract the
information about all of the cars seen throughout the video. This extraction would
require us to distinguish each and every car in the video. Distinguishing a car would involve
identifying the group of pixels which constitute a single car in a frame and then identifying the
groups of pixels in all of the following frames for that same car. We would then need to repeat
this process for all of the cars in all of the frames of the video. By distinguishing all the cars within
the video, we can perform the desired analysis, such as determining the various properties of the
cars, including total number, individual and average speeds, and individual and aggregate routes,
which can all be combined into a hypothesis for determining how the interactions between cars
affect traffic flow. Therefore, once data is collected, a method to extract the key information
within the acquired data for proper analysis is the next crucial step.
Within the context of the typical biological image analysis workflow, this distillation
corresponds to several key steps which include segmentation, object identification, and object
quantification [34–36]. Of these steps, segmentation plays the most important
role, since it serves to distinguish specific elements within an imaging dataset, while object
identification and object quantification are only possible once segmentation takes place. The
basic idea for applying segmentation to images is to determine a common property for
discriminating designated elements and separating them from the background. This
discrimination would correspond to our identification of the group of pixels for a single car within
our example in the previous paragraph. A large variety of segmentation techniques have been
formulated to put that idea into practice, the most common of which are usually classified as
region-based techniques such as intensity thresholding, clustering, or region-growing methods,
or edge-based techniques such as gradient operator methods, graph searching, and contour
fitting algorithms. Other, more advanced segmentation modalities may include techniques such
as statistical modeling, classification algorithms, and neural networks [34,37–44].
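As a concrete instance of the region-based family, a global intensity threshold can be computed directly from the image histogram. The sketch below implements Otsu's between-class-variance criterion in numpy; this is a generic textbook method written for illustration, not code from this dissertation, and the toy image is hypothetical:

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the intensity threshold maximizing between-class variance,
    separating foreground objects from background."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()                       # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                           # background class weight
    w1 = 1.0 - w0                               # foreground class weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 == 0, 1, w0)   # background mean
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
    return centers[np.argmax(between)]

# Toy "image" with two well-separated intensity populations:
img = np.concatenate([np.full(500, 20.0), np.full(500, 200.0)])
t = otsu_threshold(img)
mask = img > t                                  # binary segmentation mask
```

With a high-SNR image such as this toy example, the histogram is bimodal and a single threshold cleanly separates the two pixel populations; as discussed below, low photon budgets blur this separation and break such simple approaches.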
1.2.2 Fluorescence microscopy data are represented as grayscale images
Before we can continue our discussion of segmentation and analysis methods, we must first
briefly describe the composition and structure of biological microscopy images on which these
methods are applied. As described previously, for fluorescence microscopy images, signals
corresponding to a range of emitted illuminations are collected from spectrally distinct
fluorophores, with each fluorophore compound linked to a specific type of protein; this
results in one grayscale image per fluorophore [34–36,45]. Each of these images is composed of pixels
defined by single values indicating the level of intensity, and so, any objects within an image are
defined by the proper organization of clustered black, gray, and white pixels. For 2D and 3D
spatial (x,y; x,y,z) images, objects within an image are recognized through the spatial organization
of similar intensity pixel clusters. For determining the dynamics of objects across time in 3D and
4D (x,y,t; x,y,z,t) data, similar 2D or 3D spatial pixel clusters are correlated between different time
frames. Finally, for distinguishing between different types of objects in 3D, 4D, and 5D (x,y,c;
x,y,z,c; x,y,z,t,c), each type of object is usually assigned to a separate channel during the
acquisition stage. The focus of any segmentation method is to determine which of those clusters
of pixels represent characterized objects and should be demarcated from the background.
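In code, such data are simply nested grayscale arrays. The sketch below (numpy, with hypothetical dimension sizes and a t,c,z,y,x axis order chosen for illustration) shows how 2D slices, single-plane timelapses, and the maximum intensity projection used later for visualization all fall out of array indexing:

```python
import numpy as np

# Hypothetical 5D acquisition: 4 time points, 2 fluorophore channels,
# 10 z-slices, each a 64x64-pixel 12-bit grayscale image.
rng = np.random.default_rng(1)
stack = rng.integers(0, 4096, size=(4, 2, 10, 64, 64), dtype=np.uint16)

single_slice = stack[0, 0, 5]           # one 2D grayscale image (y, x)
timelapse    = stack[:, 0, 5]           # 3D (t, y, x): one plane over time
mip          = stack[0, 0].max(axis=0)  # maximum intensity Z-projection
```

Every segmentation method discussed below ultimately operates on arrays of this form, deciding which clusters of intensity values belong to objects and which to background.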
1.2.3 Increasing complexities with increasing dimensions hinders segmentation
Segmentation of biological images involves the demarcation of intensity pixels which
comprise a designated feature or features within an image of a biological process or sample
(sections 1.2.1, 1.2.2). The most commonly recurring biological segmentation problems are the
segmentation of nuclei or membranes in images of cells. Several automated workflows have
been developed which easily and consistently extract the nuclei or membranes from a range of
n-dimensional image types, including 3D (x,y,t; x,y,z) or even 4D (x,y,z,t) data [36,41,45,46].
The relative ease with which these workflows extract
nuclei or membranes is typically due to the high signal-to-noise ratio accompanying
these types of image acquisitions (section 1.1.5). In these common cases where researchers
image groups of cells, imaging experiments usually take place within an environment which
allows for a large photon budget, leading to higher image quality. Since these images have pixel
intensities which do not fluctuate wildly and can thus be clustered into well-defined features
(section 1.2.2), these workflows enable high-throughput identification of the desired objects.
Furthermore, even if there are reductions in image quality, algorithms for segmenting common
biological structures such as nuclei can still be implemented due to the simple shapes that
are usually associated with these structures (e.g., ellipses/ellipsoids). These algorithms
usually utilize more advanced techniques such as incorporating the structural parameters or
utilizing the probability models of well-defined shapes to provide decent segmentation even with
degradation of image quality [40,42–44,47,48]. Unfortunately, when working with more complex
biological processes, which can only be captured using high resolution 5D data, these
high-throughput workflows are not as applicable. With high resolution 5D imaging, extraction of
usable information must consider interactivity between multiple components, navigation of
intricate structures, and limitations in the signal to noise ratio.
5D data present several challenges that must be addressed. One challenge which hinders
analysis of 5D datasets is the increased complexity of representing and understanding the various
components and structures within the multidimensional dataset. Any method of analyzing these
large multidimensional datasets must consider the interactions between all five dimensions (x,y
vs z vs t vs c). Furthermore, data with such high dimensionality leads to a limited photon budget
per image slice and increased noise during image acquisition, causing fluctuations in the
pixel intensities that disturb the clusters of pixels defining an object. Finally, when
characterizing the acquired data, the amount of information that must be evaluated grows
multiplicatively with each additional dimension, with dataset sizes increasing by several orders of
magnitude and entering the range of hundreds of gigabytes to terabytes. With the dramatic
increase in complexity with 5D data, methods to segment and extract the desired elements
within a biological sample need to be greatly customized according to the specific class of
problem at hand, usually by incorporating additional information, in order to separate the
multiple components within the data.
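The multiplicative growth in data volume is easy to make concrete with a quick sizing sketch; the acquisition parameters below are hypothetical but representative, assuming 16-bit pixels:

```python
def dataset_gib(x, y, z=1, t=1, c=1, bytes_per_pixel=2):
    """Storage footprint of an n-dimensional grayscale acquisition in GiB."""
    return x * y * z * t * c * bytes_per_pixel / 1024 ** 3

size_2d = dataset_gib(2048, 2048)                     # 0.0078 GiB (8 MiB)
size_5d = dataset_gib(2048, 2048, z=100, t=120, c=4)  # 375 GiB
```

Adding z, t, and c multiplies a single 8 MiB image into hundreds of gigabytes; a longer timelapse or more spectral channels pushes the total into terabytes.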
1.3 Virtual and optical segmentation approaches for 5D data
extraction
For this dissertation, we develop workflows which fall under two complementary approaches
for 5D segmentation: dynamic, structurally guided segmentation and analysis of scalable
fluorescence multiplexing. For dynamic segmentation, we seek to flexibly probe
spatiotemporal changes that occur during development within a model organism by untangling
multiple biological populations within a single live sample when it is not possible to discriminate
the desired components using spectrally distinct fluorescent labels. In Chapter 2, we investigate
the cellular dynamics that drive the formation of the embryonic heart tube from progenitor cells
in the anterior lateral plate mesoderm (LPM). Since the LPM is a 3-dimensional bilayer tissue,
visualization and analysis of these dynamic changes in 4D (x,y,z,t) with standard dimensionality
reduction methods is difficult and initially led to inconclusive results. We propose to determine
effective routines to properly analyze this complex non-orthogonal biological structure by
extracting these individual layers and clearly revealing the actin dynamics driving morphogenesis of
the LPM. After Chapter 2, we shift the focus to analysis of scalable multiplexing, where we aim
to visualize and separate multispectral (5D) fluorescence microscopy data, thereby allowing us
to increase our ability to multiplex and follow the interactions between highly dependent
components. Multiplexed imaging has long been hampered by technical limitations in both signal
collection and analysis. One method to overcome these limitations is hyperspectral imaging,
which has gained popularity for its ability to multiplex a number of signals in samples; however,
the scalability and adoptability of the technique has been burdened by both complexity and long
computational times of the analysis. In Chapters 3 and 4, we build on a Fourier transform based
clustering algorithm, coined Hyperspectral Phasors (HySP), developed in our lab, and establish
custom algorithms in order to visualize and analyze this type of data in a scalable fashion. In
Chapter 5, we partially shift gears as we introduce our fully developed simulation framework as
an in silico microscopy method. This framework allows us to properly assess our algorithms for
scalable multiplexing and further opens the way to improve all of the previously introduced
segmentation and analysis methods. Finally, we conclude the thesis with an outlook into the
implications of these projects and how all of them may be further advanced in order to benefit
the scientific community at large. In each of the following chapters, we seek to create specific
techniques and workflows to achieve our goals for each approach in order to investigate and
interpret the network of processes and structures within biological systems.
Chapter 2
4D Bilayer tissue segmentation reveals new spatially
synchronized patterns of actin dynamics during cardiac
organogenesis
2.1 Summary
In this chapter, we introduce a segmentation workflow to flexibly probe spatiotemporal
changes within complex environments that arise during biological development. Of particular
interest is the formation of the embryonic heart tube from progenitor cells in the anterior lateral
plate mesoderm (LPM). The formation of the embryonic heart tube during the morphogenesis
of the lateral plate mesoderm is a highly complex process which requires an intricate interplay
between the chemical-molecular and mechanical-physical sides of tissue morphogenesis. In this
project, we focus on the mechanical-physical aspects of the structural reformation, which guide
a bilayer primordial (precursor) pair of bilateral cell sheets to become a hollow cylindrical-like
tube. We reveal through genetic manipulation, dynamic imaging, and biologically inspired image
segmentation, the differences in actin dynamics which occur during the development of the LPM
and contribute to the restructuring. Since the LPM is a bilayer tissue, visualization and analysis
of these dynamic changes in 4D (x,y,z,t) with standard dimensionality reduction methods is either
highly difficult or leads to inconclusive results. To determine these dynamic changes, a new
custom segmentation workflow is needed to isolate each layer of the tissue. Due to the complex
non-orthogonal 3D structure of the two layers, standard automatic segmentation fails to reliably
partition these two layers. Therefore, we develop new effective routines to swiftly and robustly
extract these individual layers. With these routines, we are able to segment the two layers of
the LPM and untangle the individual layer dynamics of the LPM, allowing us to explore the
fundamental physical interactions which guide the morphogenesis of the embryonic heart tube.
We have discovered that the bilayer LPM is not a monolithic tissue; it exhibits highly varied
motion and actin dynamics across each individual layer as well as across regions of cells within
the layers. This provides a foundation for understanding the mechanical framework which
supports embryonic development.
2.2 Introduction
2.2.1 Context for the LPM and heart embryogenesis
The lateral plate mesoderm (LPM) is an early embryonic tissue from which a wide range of
organs, including the heart and cardiovascular system, can trace their origins during embryonic
development. One of the pivotal processes that occurs during the morphogenesis of the LPM is
the formation of the embryonic heart tube, the foundational three-dimensional structure of the
heart. The formation of the asymmetric linear heart tube from simple sheets of cells is one of
the first major structural reconfigurations of a primordial tissue and is crucial for proper
downstream development of the adult heart [49,50]. Most studies investigating the formation of
the embryonic heart tube within the zebrafish embryo have involved the chemical-molecular side
of tissue morphogenesis. These studies have included the determination of several key genes
such as nkx2.5, hand2, gata5, and key pathways such as the BMP and Wnt pathways, which drive
cardiac cell differentiation [51–56]. Additionally, genes affecting left-right symmetry, such as
casanova, southpaw, and other Nodal signaling components have been well studied in heart tube
formation [57–60]. However, cardiac cell differentiation alone cannot explain how such a complex
structure as the linear heart tube can be given form from a mass of primordial cells.
2.2.2 Structural imaging of the LPM in literature has been limited
Several previous studies have delved into the formation of the LPM in zebrafish through live
imaging and visualization; still, these studies, while sometimes touching on the
mechanical behaviors of the LPM, have usually focused on characterizing
molecular determinants from signaling pathways [51,53,61,62]. These studies have mainly looked at
static fluorescence expression patterns in the developing heart, used LPM-specific fluorescent
fusion proteins, or analyzed 2D maximum intensity profiles of those expression patterns over
time. Few of these studies have taken into account the three-dimensional bilayer topology of
the LPM and the dynamic mechanics of this tissue across its layers. From our preliminary
observations of the development of the linear heart tube, its emerging structure involves
interactions between cellular and subcellular components
throughout the entire 3-dimensional shape of the LPM across time. Therefore, a proper
understanding of how this tube forms within the LPM would benefit from a more in-depth look
at the cellular and sub-cellular dynamics that occur throughout this multidimensional tissue.
2.2.3 A fresh perspective on the dynamics of the developing LPM
In this work, we present a workflow and foundational analysis of the four-dimensional cellular
and intracellular dynamics of the LPM during early embryonic heart tube formation. We do this
by first establishing a semi-automatic method for 4D tissue segmentation which allows us to
properly separate, visualize, and quantify sections of the LPM. Then we provide visual and
quantitative analysis of actin dynamics and tissue tectonics for segmented datasets of transgenic
zebrafish embryos to establish a baseline behavior of the LPM. Finally, we explore differences
between individual layers of the LPM for both wildtype and morphant zebrafish embryos to
discuss and conceptualize a hypothetical model for the possible physical forces and biomolecular
interactions required to guide the formation of the linear heart tube.
2.3 Results
2.3.1 Actin organization demarcates developmental features in the LPM during
heart tube morphogenesis
To capture the dynamic cellular organization and visualize its changes during LPM
morphogenesis, we imaged a transgenic line, TgBAC(gata5:LifeAct-GFP), that labels actin in the LPM
during the stages of embryonic heart formation (14hpf – 22hpf) (Methods). Two-photon (2P)
microscopy enabled long-term live imaging and increased depth penetration, making it possible
to capture fluorescence signals at the midline of the LPM, the deepest part of the tissue, 200 µm
beneath the neural tube. Maximum intensity projections (MIP) of the Lifeact-GFP timelapse reveal
that actin within the LPM forms distinct structural features that demarcate key processes and
subregions of the LPM (Figure 2.1 A). One of the first processes that appears is the intercalation
of cells and the formation of actin-rich purse strings at the midline during the merging of the two bilateral
populations of the LPM from 16 to 18hpf (Figure 2.1 A, B). The supracellular actin purse string is
reminiscent of actin contractile filaments present during epithelial wound healing and Drosophila
dorsal closure (Sup Fig 2.1). Upon merging of the bilateral populations of the LPM, cortical actin
in cells anterior to the midline becomes prominent as the cells organize into concentric rings
during the formation of the heart cone at 20 hpf (Figure 2.1 A, B). The concentric rings of actin
break symmetry and shift as a whole towards the anterior left-axis of the embryo at 21 hpf (Figure
2.1 A, B). Finally, the cylindrical boundaries of the linear heart tube become observable and
clearly defined at 22 hpf as the tube extends and the cells and actin boundaries lengthen and
coalesce (Figure 2.1 A, B). Double transgenic embryos that label the myocardial progenitors with
Tropomyosin4-mCherry and actin within the LPM with GFP
(Gt(tpm4-mCherry); TgBAC(gata5:LifeAct-GFP)) (Figure 2.1 C, Methods) show that the cells forming the
concentric rings of cortical actin correspond to myocardial progenitors that extend into the linear
heart tube. The accumulation of cortical actin in the myocardial progenitors indicates that these
cells have acquired a distinct morphological identity from the rest of the LPM. Furthermore, the
timelapse of actin dynamics (Figure 2.1 A) reveals that while the cells have distinct actin
organization within the LPM, they form supracellular actin filaments indicative of a continuous
tissue that exhibits organized tissue-level movements.
Figure 2.1. Actin dynamics reveal key morphological features during embryonic heart tube formation
Embryonic heart tube formation involves complex morphological changes that transform a
simple primordial tissue into a functioning pump in less than a day, followed here using the transgenic
zebrafish TgBAC(gata5:Lifeact-GFP) that labels the cells as the Lateral Plate Mesoderm (LPM) moves to the
midline and undergoes the first morphogenetic transformations to form the heart tube. (A) Frames from a
six-hour time lapse recording that highlights the cell motions that build the heart. Within the transgenic
zebrafish, Lifeact-GFP binds to actin, a key structural component within cells (Lifeact-GFP rendered in
gray), specifically within the LPM through the use of a BAC construct that contains the gata5 locus with
its transcriptional regulatory elements. Gata5 is a transcription factor expressed in the LPM during cardiac
development. The structural dynamics within the LPM are captured in this maximum intensity
Z-projection (MIP) of 3D timelapse images of Lifeact-GFP expressed in the cells of the LPM (gata5) at
two-hour intervals. This includes merging of the two cell populations at the midline starting at 16 hpf, the
initial transformation of the anterior cells into the heart cone at 18 hpf, the further restructuring of the
cone at 20 hpf, the initial extension of the cone towards the leftward direction, and finally the full
extension of the heart tube at 22 hpf. Interestingly, the actin shows features that are longer than single
cells, and thus reveals supracellular organization of actin during key developmental stages during heart
tube formation. (B) Schematic representation of the LPM (light green) and the cardiac progenitors (dark
green) within the LPM, as these morphogenetic events take place at 2-hour intervals, as in A. (C) MIP
images of a double transgenic zebrafish (Gt(tpm4-mCherry); TgBAC(gata5:Lifeact-GFP)) that labels the
cardiac progenitor cells (Tpm4-mCherry: pseudo-color green) along with the aforementioned actin
(Lifeact-GFP: gray) provide further support in demonstrating the higher signal and supracellular
organization of the actin colocalized with the cardiac progenitor cells.
2.3.2 Bilayer nature of the LPM requires a three-dimensional analysis of the
dynamics
Although actin dynamics on a tissue scale reveals supracellular actin structures and
processes, tracking actin organization across multiple time points within cells of the LPM reveals
that the actin filaments appear discontinuous and unorganized when viewed in a maximum intensity
projection (MIP). Three-dimensional image visualization suggests that the apparent lack of
organization within the LPM can be accounted for by the topology of the LPM. The LPM is a
bilayer structure (Figure 2.2 A). Virtual XZ cross-sections of the LPM images from an oblique
perspective reveal that the two layers are separated by acellular space with the two layers being
most distinguishable at the periphery of the two halves of the LPM (Figure 2.2 B). XZ
cross-sections further show that the intensity distributions from each of the two layers are not aligned
(Figure 2.2 B); hence, in a MIP, the differing intensity patterns from each of the two layers would
overlay on top of each other (Figure 2.2 C, D). These two layers correspond to the dorsal (top)
layer and ventral (bottom) layer with respect to the dorsal-ventral axis of the imaged embryo.
Since the MIP combines the intensity distributions from both layers, what would originally be
distinctly arranged patterns now appear to be unorganized (Figure 2.2 E), accounting for the
disorganized nature of the actin signal when viewed in a MIP. As such, proper analysis of the
actin dynamics within the LPM requires individual visualization of the image voxels corresponding
to either the dorsal or ventral layer. However, due to the curved nature of the LPM along all
three spatial axes (XY, XZ, YZ), image voxels corresponding to either of the DV layers may be
adjacent to one another within the same XY image slice, preventing proper visualization by simply
grouping different XY slices (Sup Fig 2.2). Thus, object extraction of the individual LPM layers
must be performed, necessitating an image segmentation algorithm and workflow.
Figure 2.2. Bilayer characteristics of the LPM presents challenges in visualization using the maximum
intensity Z-projection
(A) A schematic of the developing Lateral Plate Mesoderm (LPM) demonstrates the bilayer nature of the
LPM and its deep position within the zebrafish embryo, underneath the neural tube. (B) 3D cross sectional
view of the LPM (oblique angle off of XZ view) in a transgenic embryo specifically labeling the actin within
the LPM as well as ubiquitously labelling the nuclei further demonstrates the bilayer nature of the LPM
and its depth within the spatial context of the embryo. (C) Grayscale, zoomed-in view of the 3D cross
section in B better outlines the differences in signal between the dorsal and ventral layers. (D) Maximum
grayscale intensity Z-projection (XY view) of the block shown in C shows the overlayed actin signal from
the dorsal (magenta) and ventral (cyan) layers, highlighting the difficulty in determining which actin signal
comes from each layer. (E) Simplified schematic of the LPM demonstrates the concept of separating the
3D z-stack image into two separate z-stack images in order to properly visualize the actin patterns of each
individual layer.
2.3.3 Layer segmentation reveals distinct cell arrangements in dorsal and
ventral LPM layers
Analysis of the LPM requires virtually partitioning the thin, 3D region of voxels that
correspond to each layer across all time points for the 4D timelapse datasets. Standard
visualization tools for multidimensional fluorescence microscopy data such as FIJI or Imaris exist,
which enable exploration of complex multidimensional data with multiple layers. However, no
standardized tools exist for this particular problem of performing complex multidimensional layer
segmentation. Current segmentation algorithms and standardized toolkits require a highly
involved and convoluted custom process in order to perform this specific type of
multidimensional segmentation (Sup Note 2.1). For example, Imaris enables the manual drawing
of 2D contour lines within the 3D virtual space of the dataset in order to create virtual volumes.
While this volume can be used to mask specific objects within the dataset, the interface is both
difficult to use and time consuming for segregating layers, since that is not its intended
purpose. Therefore, we envisioned and coded a custom semi-automated software tool with a
graphical user interface (GUI) to enable virtual dissection of the two LPM layers of the 4D (x,y,z,t)
datasets.
The software provides the means to create and apply a well-defined 4D segmentation
hypersurface through a combination of automatic and manual steps. It begins by automatically
generating an imperfect segmentation surface between the pixels for the layers by utilizing a 1D
center of mass for the pixel intensity across the z-axis (Methods, Sup Note 2.2). The GUI for the
software provides an orthogonal view (XZ) of the 3D (x,y,z) image array at a specific time point,
t, in which a corrected segmentation line between the two layers of the LPM can then be
manually adjusted (Figure 2.3 A). After correcting the segmentation lines for a subset of all XZ
slices in the 3D image array (~2% of the total slices), those lines are then interpolated between
the subset of slices across the y axis to create a 3D surface which will segment the two layers in
a single time point (Figure 2.3 A). This process is then repeated for a subset of the 3D image
blocks (~ 8% of the total blocks) and again interpolated along the time axis in order to create a
4D segmentation hypersurface (Figure 2.3 B). The two designated voxel regions of the 4D image
array are then individually masked and placed into different visualization channels, creating a 5D
(x,y,z,t,c) image array with two channels, one for the dorsal layer and the other for the ventral
layer (Figure 2.3 C). The vast reduction in the number of manually drawn segmentation lines (~
0.2 %) enables segmentation of the two LPM layers, a previously unfeasible task (Sup. Note 2.1).
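The two core ideas of the workflow — an intensity-weighted center of mass along z as the automatic initial boundary, and interpolation of a sparse set of corrected lines across the remaining axis — can be sketched in numpy as follows (array names, shapes, and the toy volume are illustrative, not the actual implementation):

```python
import numpy as np

def initial_surface(volume):
    """Automatic first-pass boundary: the 1D intensity center of mass
    along z for each (y, x) column of a 3D (z, y, x) image."""
    z = np.arange(volume.shape[0]).reshape(-1, 1, 1)
    total = volume.sum(axis=0)
    return (volume * z).sum(axis=0) / np.where(total == 0, 1, total)

def interpolate_lines(y_subset, lines, n_y):
    """Spread manually corrected XZ boundary lines, drawn at a sparse
    subset of y positions, across the full y axis by interpolation."""
    full = np.empty((n_y, lines.shape[1]))
    for x in range(lines.shape[1]):
        full[:, x] = np.interp(np.arange(n_y), y_subset, lines[:, x])
    return full

# Toy volume: two bright layers at z=2 and z=8, so the boundary sits at z=5.
vol = np.zeros((11, 16, 16))
vol[2] = vol[8] = 100.0
surface = initial_surface(vol)                            # ~5.0 everywhere
lines = interpolate_lines(np.array([0, 15]), surface[[0, 15], :], 16)
dorsal_mask = np.arange(11).reshape(-1, 1, 1) < surface   # voxels above boundary
```

Masking the voxels on either side of the interpolated surface yields the two layer volumes; repeating the same interpolation along t extends the surface into the 4D hypersurface described above.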
Individually rendering maximum intensity z-projections for the two virtually dissected layers
across time now provides unobscured visualization of the actin signal dynamics for each
individual layer as compared to the unsegmented image stacks (Figure 2.3 D). As before, MIP
reduces the difficulty in observing and analyzing spatiotemporal datasets, but now the
segmentation enables clearer observation of the previously hidden details even with MIP. The
MIP for the two separated layers reveals highly distinct actin patterns and movements within the
DV layers over time, with each layer undergoing different motions and actin dynamics. One
prominent example for this is in the difference between the specific tissue-wide arrangements
of cortical actin formed throughout the dorsal and ventral layers. The actin patterns within the
dorsal layer appear to consist of smaller, less well-formed, and less organized components compared to the patterns within the ventral layer, which shows large, well-defined,
structurally organized, cobblestone-like patterns of actin (Sup Fig 2.3). These actin patterns and
movements exist across multiple scales, including subcellular, cellular, and tissue-wide (discussed
in detail below).
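Given a segmentation surface, the virtual dissection and per-layer MIP rendering amount to masking voxels on either side of the surface. A sketch under an assumed (z, y, x) axis ordering; which side corresponds to dorsal versus ventral depends on the acquisition orientation:

```python
import numpy as np

def split_layers_mip(volume, surface):
    """Mask a (z, y, x) volume above/below a (y, x) boundary surface and
    return maximum intensity z-projections for the two layers."""
    z = np.arange(volume.shape[0])[:, None, None]
    upper = np.where(z < surface[None, :, :], volume, 0)   # z below boundary
    lower = np.where(z >= surface[None, :, :], volume, 0)  # z at/above boundary
    return upper.max(axis=0), lower.max(axis=0)
```

Storing the two masked volumes as separate channels gives the 5D (x,y,z,t,c) array described above; the per-layer MIPs are simply the z-projections of each channel.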
Figure 2.3. Layer segmentation workflow reveals distinct actin patterns for each layer of the LPM
(A) Workflow enables splitting of a multi-layer 3D image into individual 3D images for each layer by
segmenting a subset of XZ slices before interpolating to create a segmentation surface. (B) The creation
of a segmentation surface is repeated across a subset of time points before interpolating in order to create
two segmented layer 4D datasets. (C) Separation of two single channel XZ image slices in the anterior and
posterior regions of the LPM results in two-channel images for the dorsal (magenta) and ventral (cyan)
layers. (D) Maximum intensity Z-projections of the virtually split dorsal and ventral layers of the LPM at
two different time points (16hpf, 22 hpf) demonstrate the distinct signal patterns of the two layers.
2.3.4 Distinct actin cable orientations emerge between LPM layers across
multiple time points
The segmented layer time-lapse datasets exhibit consistently recurring, aligned supracellular
actin filaments in each individual layer that suggest the two layers experience distinct
morphogenetic forces. Several cables with stable spatial localizations emerge for each layer
during the formation of the embryonic heart tube and undergo distinct movement paths over
time. These cables appear to align across multiple cells, with the movements of the cables either
following or being followed by a wavefront of a group of cells which demonstrate high
interconnectivity. Of these cables, we have identified three sets which arise with the most
consistency across the two layers for all samples: dorsal-lateral, dorsal-midline, and ventral-
midline cables. Maximum intensity projection (MIP) allows for dimensional reduction, enabling
us to more easily track cables across time through simple visual inspection; recognition of actin
cable features after MIP is only made possible due to the segmentation performed using our
layer segmentation workflow.
The dorsal-lateral set of actin cables emerge as two separate arcs in the lateral portions of
the dorsal layer and are easily detected in MIP images between 17 and 20 hpf (Figure 2.4 A). The
two dorsal-lateral cables follow lateral movement paths in opposite directions, with the left arc
moving to the left and the right arc moving right (Figure 2.4 D). These cables mirror each other
across the midline in placement; however, the bilateral cables differ in size and curvature, indicating left-right asymmetry within the LPM prior to the previously described symmetry-breaking events in the leftward placement of the linear heart tube. The left dorsal-
lateral cable has a shorter arc length and a narrower curvature than the right dorsal-lateral cable.
These differences in supracellular actin cables suggest that long before the leftward shift of the
developing heart cone, there are cell-to-cell interactions within the dorsal layer of the LPM which
exhibit left and right differences. The shorter arc length, paired with the narrower curvature of
the left dorsal-lateral cable suggests that the left side of the dorsal layer may be undergoing a
higher leftward tension leading to an anterior-posterior compression of the cells compared to
the rightward tension of the cells for the right side of the dorsal layer.
Unlike the dorsal-lateral cables, the dorsal-midline cable forms a single cable within the dorsal
layer at approximately 21 hpf and extends across the midline (Figure 2.4 B). This cable remains
constant in position and length throughout the time span where the cable is distinguishable as
both sides of the tissue expand. The minimal changes in position and length of the actin cables
indicates that much of the dorsal layer, especially the midline section, is stable during heart tube
formation. This suggests that the dorsal layer works as a whole to support the rest of the LPM,
providing a form of anchoring or perhaps a counter balance to the rest of the morphological
reconstruction of the tissue during development.
The last supracellular cable to appear, the ventral-midline cable, is visible in the ventral layer
at around 22 hpf and extends laterally across the midline, with lengths greater than 100 µm,
spanning multiple cells within the ventral layer. This cable has a greater curvature than the
dorsal-midline cable and is located posterior to the extending heart cone. Unlike the dorsal-
midline cable which extends horizontally across the midline from left to right, the ventral-midline
cable is composed of three sections. It first starts at the left as a straight line which extends
diagonally towards the posterior right, halfway to the midline. Next, the cable extends into an
arc, which curves and borders the posterior boundary of the heart cone. The final section then
extends diagonally straight to the anterior right. The movement path of the ventral-midline cable
over time, as illustrated by the time map (Figure 2.4 E, F), follows that of the extension of the linear heart tube sweeping in an anteriorly left direction, suggesting that the dynamics which cause the reconfiguration of the heart cone into the asymmetric linear heart tube come from the actin and
cellular motions and interactions within the ventral layer. This motion may also arise from a type
of purse string motion reminiscent of wound healing which “draws” the formation of the linear
heart tube towards the anterior left through left and right tension along the ends of the cable.
Overall, separating the two layers of the LPM has provided a more coherent picture of the
actin dynamics that occur during LPM morphogenesis compared to the standard MIP renderings.
Through the visualization by time map, we are able to classify and identify clear differences that
occur for both actin cable shape and movement profiles depending on the specific
spatiotemporal localizations of the cables with respect to the anterior-posterior axis, the left-
right axis, and the dorsal or ventral layer. This analysis of the distinct actin signals and dynamics
in each layer thus points to spatially organized roles for each layer of the LPM, with each layer
having its own distinct tissue dynamics.
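The time maps used above color-code where a cable sits at each time point (blue earliest, red latest). One simple encoding, assuming binarized cable masks per time point (the dissertation's exact rendering may differ), is a first-occupancy map:

```python
import numpy as np

def time_map(masks):
    """Collapse a (t, y, x) stack of binary cable masks into a 2D map
    whose value at each pixel is the first time point at which the cable
    occupies that pixel (NaN where it never does). Passing the result
    through a blue-to-red colormap yields a time map."""
    occupied = masks.any(axis=0)
    first = np.argmax(masks, axis=0).astype(float)  # index of first True along t
    first[~occupied] = np.nan
    return first
```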
Figure 2.4. Actin boundary formations denote differential morphogenetic forces between dorsal and
ventral layers of LPM.
The two layers of the LPM contain at least three consistent sets of actin cables. Each set is different in
terms of location and orientation, which denote differential actin dynamics between layers. (A) Maximum
intensity z-projection of the dorsal layer of the LPM at 17 hpf displays the first set, composed of two actin
cables marked laterally in magenta. (B) Maximum intensity z-projection of the dorsal layer of the LPM at
21 hpf shows another set with one actin cable stretched across the midline marked in magenta. (C) Maximum
intensity z-projection of the ventral layer of the LPM at 22 hpf depicts the final set with a continuous actin
cable extending from left to right across the midline. (D-F) Time maps of the actin cables denoted in A-C
demonstrate the lateral motions of the dorsal cables in A, the small anterior movements of the dorsal
cable in B, and the larger anterior movements of the ventral cable in C. These motions demonstrate the
overall differences in dynamics and structures between the two LPM layers. Color order is blue, green,
yellow, red for earliest time to latest time.
2.3.5 Endodermless mutants reveal that midline fusion does not inherently
affect dorsal-lateral actin cable formation and movement
The stability and minimal motion of the dorsal cables over time as the heart tube forms and
undergoes leftward extension suggest a potential role as anchoring points for LPM tissue
morphogenesis. The endoderm, a tissue that interacts with the dorsal LPM, is required for LPM
morphogenesis, as mutations in genes affecting endoderm formation disrupt midline fusion.52,57,63,64 Additionally, the formation of the midline actin cables within both individual
layers either starts with or occurs after the midline fusion, suggesting that disruption to endoderm
development would affect their formation. Therefore, we tested whether the interaction with
the endoderm is required for the formation of the dorsal cables and if fusion of the bilateral
populations is required for overall cable formation. To address these questions, we used a
genetic knock-down of the transcription factor Sox32 and performed timelapse microscopy in
TgBAC(gata5:Lifeact-GFP) embryos. In sox32 morphants, the endoderm does not form, leading
to a lack of LPM fusion at the midline, resulting in the embryo having cardia bifida.57 Comparing the dynamics in morphants with those in wildtype embryos provides a starting point for possible
sources of the mechanical cues which direct morphogenesis of the LPM.
The segmented dorsal layer in sox32 morphant (sox32MO) embryos reveals bilateral dorsal-
lateral cables; these cables appear to be very similar to those observed in wildtype embryos
(Figure 2.5 A, D right). Interestingly, the changes in curvatures of the dorsal-lateral actin cable
arcs to the left and right in the dorsal layer also appear to be similar between the wildtype and
sox32MO embryos (Figure 2.5 A, D left). To assess the overall similarity between the dorsal-
lateral actin cables in wildtype and sox32MO embryos, we performed quantitative analysis for
these sets of cables. Since the two dorsal-lateral cables form dynamic arcs of differing lengths
and curvatures, we needed a relative quantification for comparing the cables across multiple
time points. We utilized the Fréchet speed, a normalized form of the Fréchet distance which
measures shape similarity between paths (Methods). A greater difference in the shape of the
cable should result in a larger Fréchet distance; therefore, a higher Fréchet speed points to a
greater shape change in a cable. In wildtype embryos, the Fréchet speed of the left cable was
consistently higher than that of the right by a factor of 1.4 ± 0.2 (Figure 2.5 E). In sox32
morphants, a similar ratio, 1.25 ± 0.15, was determined for the Fréchet speed of the left and right
cables (Figure 2.5 E). In addition to measuring length and curvature changes over time by the
Fréchet speed, we quantified and compared the directions of the left and right dorsal-lateral cables
by tracking the center of mass of the actin cable. In wildtype embryos, the dorsal arcs move left
and right at a mean of 175.9° ± 2.2° for the left arc and a mean of -12.2° ± 9.5° for the right arc
(Figure 2.5 F). In sox32 morphants the dorsal left-right arcs move left and right in a highly similar
manner at a mean of 177.4° ± 0.6° for the left arc and a mean of -11.3° ± 0.3° for the right arc
(Figure 2.5 F), showing very small differences from wildtype embryos. Therefore, even without the
presence of endoderm and fusion at the midline, the formation and behavior of the actin cables
in the dorsal layer in sox32 morphants do not appear to be affected with respect to those in
wildtype embryos, with the tissue maintaining left-right differences. Overall, in the absence of
the endoderm and without its interaction with the dorsal layer, the actin cables within the dorsal
layer for the sox32 morphants form similar spatiotemporal organizations with those of the
wildtype, suggesting that the dorsal-lateral cables are most likely not acting as anchoring sites to
the endoderm.
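The discrete Fréchet distance underlying the Fréchet speed can be computed with the standard Eiter-Mannila dynamic program. The normalization into a "speed" shown here (distance between a cable's traces at consecutive time points, divided by the time interval) is one plausible reading of the Methods, not a verified reproduction:

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines p (n, d) and
    q (m, d), via the standard dynamic program."""
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

def frechet_speed(curve_t0, curve_t1, dt):
    """Assumed normalization: Fréchet distance between the same cable
    traced at consecutive time points, per unit time."""
    return discrete_frechet(curve_t0, curve_t1) / dt
```

A larger Fréchet speed then corresponds to a faster shape change of the cable, matching the interpretation used in the comparison of left and right arcs.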
The missing midline fusion in sox32MO embryos creates a major disruption in the ventral
layer as compared to the wildtype, leading to an expectation of disorganized structural elements within the layer. However, areas within the ventral layer in sox32MO embryos still
form half cones in locations corresponding to where the cardiac cone forms within wildtype
embryos. Furthermore, examination of the actin dynamics within the ventral layer of sox32MO
embryos reveals the existence of actin cables which are reminiscent of the single ventral-midline
cable in the wildtype sample, albeit separated into two parts. Due to the lack of midline signal
for sox32 morphants, we traced the actin cables for the ventral layers by marking two split left
and right cable sections instead of a single continuous cable as had previously been performed
for the wildtype analysis (Supplementary Note 2.3). These two ventral-lateral cables in sox32MO
embryos demonstrate similarity in shape and movement direction with the lateral sections of the
ventral-midline actin cable in wildtype embryos; the sox32MO ventral-lateral actin cables flank
the cardiac cone in the ventral layer during its development (Figure 2.5 B, E). In terms of shape,
the ventral-lateral cables in sox32MO embryos and the lateral parts of the ventral-midline cable
in wildtype embryos are both straight with no discernable curvature. To assess the degree of
similarity, we quantified and compared the directions of the left and right ventral cables or
sections between sox32 morphants and wildtype embryos. In wildtype embryos, the left ventral
cables move anteriorly left at a mean of 114.1° ± 0.2° while in sox32MO embryos, the ventral arcs
move anteriorly left at a mean of 118.5° ± 0.6° (Figure 2.5 G), showing very little difference in
directional movements. However, for the right arcs, the wildtype cables move with a mean of
126.5° ± 0.4° while the sox32MO cables move with a mean of 63.9° ± 0.1°. This directional
difference suggests that while the right arc in wildtype embryos is moving anteriorly leftwards in
a similar direction as the left arc, the right arc in sox32MO embryos is moving more in an
anteriorly rightward direction. Thus, while the fusion at the midline does not affect whether
actin cables form in either layer, it does appear to contribute to the anteriorly leftward motion
of the lateral right section of the ventral-midline actin cable.
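The angular directions reported above come from tracking each cable's center of mass between time points. A minimal sketch; the 0°-right / 90°-anterior axis convention used here is an assumption, not taken from the Methods:

```python
import numpy as np

def com_direction(points_t0, points_t1):
    """Angle (degrees) of the displacement of a cable's center of mass
    between two time points. Assumed convention: 0° points right (+x),
    90° anterior (+y), counterclockwise positive."""
    disp = points_t1.mean(axis=0) - points_t0.mean(axis=0)
    return float(np.degrees(np.arctan2(disp[1], disp[0])))
```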
Figure 2.5. Shape changes and movement of supracellular actin cables in the endodermless mutant
bilayers are unaffected by the lack of midline fusion
Comparisons between wildtype and sox32 morphants (sox32MO) provide a first look into the mechanical
consequences to the LPM by removing the endoderm. Left and right actin cable delineations across time
provide representative cell arrangements and motions for (A) the wildtype dorsal layer, (B) the wildtype
ventral layer, (C) the sox32MO dorsal layer, and (D) the sox32MO ventral layer. (E) Barplot denotes the
average left to right Fréchet speed ratios for wildtype (cyan) and sox32MO (magenta) dorsal-lateral actin
cables and demonstrates the high degree of similarity between both. (F) Center of mass angular directions
for the dorsal and (G) ventral layers between wildtype (cyan) and sox32MO (magenta) further
demonstrate similarities in direction of motion, except for the case of the ventral right cables.
2.3.6 Defects in left-right asymmetry disrupt the shape and movement of
supracellular actin cables in LPM layers
Both our analysis in wildtype embryos and our examination of the sox32 morphants provide ample
indication of the differences between the left and right that occur within the LPM even before
symmetry breaking occurs at the tissue scale. For all dorsal-lateral cables, there are quantifiable
differences in the shape changes between the left and right actin cables before leftward shifting
of the heart cone into the asymmetric linear heart tube. Overall, the sox32MO data points to
clear left-right differences in actin cable formation which are inherent to the LPM and do not
require midline fusion to occur. However, this provides a contrasting viewpoint to what is shown in the literature, where the lack of forerunner cells in sox32 mutants leads to cascading defects in left-right asymmetry, including absence of Kupffer’s vesicle as well as further disruptions in downstream left/right signaling.57,65 Therefore, this raised further questions as
to how left-right signaling relates to the mechanical interactions relayed through the actin
dynamics. To better probe how actin dynamics are affected by or may contribute to the left/right
balancing of tissue mechanics during development with proper adjacent structures (endoderm),
we investigated the actin dynamics in southpaw (spaw) morphants. In spaw morphants
(spawMO), expression of the nodal-related gene spaw is knocked down. spaw is one of the
earliest molecular markers of left/right asymmetry with spaw expression in the left side of the
LPM driving the usual leftward formation of the linear heart tube. Its disruption enables targeted
modifications in left/right placements of structure without causing embryo-wide tissue
deformations.58,60 In spaw knockdown embryos, two classes of phenotypes arise in the LPM as
detected by timelapse microscopy of TgBAC(gata5:LaGFP);spawMO embryos. For the first
phenotype, spaw morphants have situs inversus where the heart tube forms to the right (spaw
right). For the second phenotype, the morphants develop with the heart tube forming at the
midline (spaw middle). In this chapter, we focus on the first phenotype, spawMOr.
The dorsal layer over time for the spaw right morphants (spawMOr) revealed the
formation of two dorsal-lateral actin cables corresponding to those previously identified in both
the wildtype and sox32MO embryos. We determined that the dorsal-lateral cables appear to be
flipped across the left-right axis in spawMOr embryos, with the right dorsal-lateral cable
appearing similar to the left dorsal-lateral cables of wildtype and sox32MO and vice-versa (Figure
2.6 A, C). Interestingly, the forms of the actin cables as arcs also appear to be less smooth and
continuous for the spaw morphant as compared to those in sox32 morphants and wildtype
embryos. We quantified the shape changes and motions of the actin cables in spawMOr embryos. Fréchet speed analysis of the shape changes of the dorsal cables in spawMOr embryos yields left-to-right ratios less than one (0.8 ± 0.15), denoting that the shape changes
of the right arc are greater than that of the left, confirming our qualitative observation that the
actin cable formations appear to be reversed across the left-right axis (Figure 2.6 E). However,
our quantification of the movements of the spawMOr dorsal-lateral cables show a leftward
direction at a mean of 77° ± 55° for the left cable and a rightward direction at a mean of -29° ±
34° for the right cable (Figure 2.6 F), demonstrating motions which do not directly correspond to
the motions of the wildtype reversed across the left-right axis. These results demonstrate largely
unordered and non-mirrored directions for the motion of the dorsal-lateral actin cables. The
mean direction angles for these left and right cables are not directly opposing one another (left
angle – right angle = ±180°) compared to those measured for the wildtype. Standard deviations
of these motions are also very large (>30°). Taken together with our qualitative observation of
the spawMOr dorsal-lateral cables being less smooth and continuous, it appears that disruption
of spaw leads to randomness in actin cable formation and movement, with the rightward formation of the heart tube occurring by chance instead of being directed.
For the ventral layer, a ventral-midline cable is also present in the spaw right morphants. This
cable still has similar shape characteristics with those in wildtype and sox32MO embryos and is
located posterior of the developing heart cone. However, this cable is again more nebulous in
structure (Figure 2.6 B, D). The lateral sections of the ventral-midline cable are similar to the
dorsal-lateral cables in demonstrating “non-opposite” behavior for the direction of motion.
Quantifications show the lateral sections of the ventral-midline cable moving anteriorly right at
a mean of 16° ± 0° for the left section and a mean of 47° ± 13° for the right (Figure 2.6 G). These
results provide further support for the less organized formation and more randomness in motion
of the actin cables for the spaw right morphants. Overall, there are at least two possible
explanations for these results. One possibility is that the standard left-sided expression of
southpaw inherently influences the actin dynamics within the individual layers of the LPM and
drives the leftward formation of the embryonic heart tube. The second is that the organized
structure and directed dynamics of the actin cable formation within the LPM serve to act as a
readout for the morphogenesis of the developing embryo. Therefore, the characterized
phenotype of the spaw right morphants may either follow from disorganization of the cables, or
the disorganization may follow from improper morphogenesis of cells within the spawMOr LPM.
Both possibilities indicate a very strong link between the actin cable dynamics and left-right
asymmetry during LPM development.
Figure 2.6. Disruption to left-right signaling in southpaw right morphants appear to cause
disorganization of actin cable formation and movements with respect to the left-right axis
Comparisons between wildtype and spaw right morphants (spawMOr) provide an insight into the effects
of left-right signaling on the actin dynamics during LPM morphogenesis. Left and right actin cable
delineations across time provide representative cell arrangements and motions for (A) the wildtype dorsal
layer, (B) the wildtype ventral layer, (C) the spaw right morphant (spawMOr) dorsal layer, and (D) the
spawMOr ventral layer. (E) Barplot denotes the average left to right Fréchet speed ratios for wildtype
(cyan) and spawMOr (magenta) dorsal-lateral actin cables and demonstrates the reversal in shape change
ratio (greater than 1 vs less than 1) between wildtype and spawMOr. (F) Center of mass angular directions
for the dorsal and (G) ventral layers between wildtype (cyan) and spawMOr (magenta) demonstrate much
higher variations in direction of motion as well, with the dorsal left motions of spawMOr cables having a
variance of over 180° and both ventral spawMOr cables moving right instead of left.
2.4 Discussion
2.4.1 Complex multidimensional LPM topography necessitates a custom
segmentation workflow
Investigating the 3D structure of the LPM across time during morphogenesis necessitates the
creation of a custom segmentation workflow, since standard segmentation techniques and tools
do not exist to perform complex multidimensional layer segmentation with fluorescence images.
Layer segmentation is necessary for a variety of reasons. Visualizing 3D imaging data is commonly
achieved using MIP which necessitate viewing a 3D scene two-dimensionally; however, correct
visualization and analysis of the 3D LPM is not possible unless the two layers are viewed
separately (section 2.3.2). Still, several challenges exist which make the visualization of each
individual layer highly difficult. Current labeling methods do not allow for the separate
acquisition of the two LPM layers, requiring their separation using the pixels in image space.
Since computational segmentation methods are needed, acquisition of high SNR imaging data is
preferred, especially if automatic segmentation methods are to be utilized. However, high SNR
imaging data is not an option with high resolution, live, volumetric timelapse imaging of the LPM,
resulting in an inability to easily perform the desired image processing (Supplementary Note 2.1,
2.2). By creating custom layer segmentation, we have been able to begin a detailed analysis of
the biomechanical dynamics of the LPM and provide a foundation for developing better layer
segmentation algorithms.
One prominent challenge in segmenting the LPM data is the four-dimensionality of the data;
the added dimension requires that the layer segmentation must be performed for every single
time-point of the complex 3D LPM structure. This has a much higher complexity as compared to
the usual segmentation that is performed either on multiple separate 3D structures in a single
time-point (separate organs in a whole organism) or a simple 3D structure for multiple time-
points (extraction of isolated cells or nuclei). As an example, one wildtype LPM dataset may have
a size of (x = 512, y = 512, z = 70, t = 190) pixels. Therefore, segmenting the single 3D (x,y,z) volume, which may already be a costly process (Supplementary Note 2.1), would need to be repeated 190 times, resulting in an unworkable amount of time to perform
any quantitative analysis. In utilizing this semi-automated method of correcting the
segmentation lines for a subset of XZ slices across a subset of timepoints, we drastically reduce
the amount of time that is needed to perform the segmentation. If we look at the hypothetical
case where someone would segment the LPM layers by creating segmentation lines for every
single XZ slice and timepoint, we can see for our previous example that this would require about
100,000 segmentation lines. On the other hand, segmenting the LPM using our segmentation workflow requires only a subset of the XZ slices (ex. 10 slices) and a subset of the time points (ex. 12), resulting in only approximately 120 segmentation lines, a speedup of over 800-fold. In short, this custom software has enabled us to segment
multiple large multidimensional datasets in a practical time frame, making it possible to begin to
understand the interactions that direct LPM morphogenesis.
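The line counts above can be checked directly:

```python
# Back-of-the-envelope line counts for the example wildtype dataset
# (x = 512, y = 512, z = 70, t = 190). A fully manual approach needs one
# boundary line per XZ slice (one per y index) per time point.
manual_lines = 512 * 190      # every y slice at every time point
workflow_lines = 10 * 12      # example subsets from the text
speedup = manual_lines / workflow_lines
print(manual_lines, workflow_lines, round(speedup))  # 97280 120 811
```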
2.4.2 Hypothetical model of the physical mechanics of heart tube formation
From our observations of the actin cable organizations and motions within the wildtype
embryo, we hypothesize that the dorsal and ventral LPM layers serve different functions during
the structural reconfiguration of the LPM. From the lateral motions of the dorsal actin cables in
wildtype embryos (Fig. 2.4 A,B,D,E) and the horizontally extended cellular shapes and layout of
the dorsal layer during later time points (Sup Fig 2.3 A), we infer that the dorsal layer does not
directly contribute to the regional motions that follow the extension of the heart tube towards
the anterior left. In contrast to the indirect contributions of the dorsal layer, the anterior-
leftward motion (Fig 2.4 C, F) and anteriorly leftward biased cellular shape and layout of the
ventral layer (Sup Fig 2.3 B) in wildtype embryos are much more discernable. These motions in
conjunction with overall growth of the LPM during morphogenesis suggest that the extension of
the linear heart tube is mainly directed by cellular motions and physical interactions in the ventral
layer with the dorsal layer providing a stabilizing or countering motion.
Genetic perturbations using the sox32 and southpaw morphants in comparison to the
wildtype embryos suggest that the cellular changes and movements leading to cardiac cone
generation and heart tube extension appear to be inherently controlled by the mechanical cues
of cells within the LPM, which are also affected by the nodal signaling pathway. In the sox32
morphant, the lack of endoderm and midline fusion did not appear to affect the actin cable
formation and movements on the lateral regions of the LPM. The only difference in cable shape
and motion appears to come from the ventral right cable in sox32 morphants. Since the
formation of actin cables in both layers of the LPM appear to be unaffected in sox32 morphants
despite the lack of endoderm, the highly directed leftward motion of the ventral layer appears
to be coordinated more by the left region, with the right region mainly contributing to the
anterior motion of the LPM rather than the leftward motion. In spaw right morphants, the actin
cables still form, similar to those seen in the wildtype. However, the organization of these cables has a reversed orientation, with the shape change ratios of the spawMOr dorsal cables less than 1 and large differences in the angles of motion of cables in both dorsal and ventral layers. The
modification in actin cable organization and behavior when disrupting spaw suggests that either
spaw directs the proper formation and configurations of actin cables in both the dorsal and
ventral layers of the LPM or the actin cable dynamics may serve as readouts for defective
morphogenesis caused by improper spaw expression. Either way, this does present a possible
contradiction with regards to the results in sox32 morphants, which demonstrate minimal change
in left/right behavior, when, in the literature, sox32 disruption has also led to left/right signaling
defects. However, this inconsistency may be reconciled with the consideration that only a limited
number of sox32 morphants were observed for this analysis and spaw expression was not
profiled. There is a high chance that these particular sox32 morphants retained left-sided spaw
expression, leading to the aforementioned results. Regardless, observation of the cables for the spaw morphants shows less aligned filaments for both the dorsal and ventral cables; together with the high variance in the angles of motion, this implies a strong correlation between the dynamics of actin cable formation and movement and the morphological changes, including the occasional situs inversus, caused by knockdown of spaw. Further experiments will be required to determine whether this link is more due to a
randomization of actin cable alignment and motion causing the reversal or the structural changes
from knockdown of spaw being conveyed through the actin cables. In total, we have determined
through live imaging and analysis that actin cable formation and motion is highly intertwined with, and may contribute greatly to, the morphological restructuring of the LPM from simple cell sheets to
a much more complex tube. While further research is necessary, it may also be the main medium
through which these changes are accomplished.
2.5 Methods
2.5.1 Zebrafish lines
Adult fish were raised and maintained as described66 in strict accordance with the
recommendations in the Guide for the Care and Use of Laboratory Animals by the University of
Southern California, where the protocol was approved by the Institutional Animal Care and Use
Committee (IACUC) (Permit Number: 12007 USC). Upon crossing appropriate adult lines, the
embryos obtained were raised in Egg Water (60 μg/ml of Instant Ocean and 75 μg/ml of CaSO4
in Milli-Q water) at 28.5°C.
The transgenic FlipTrap Gt(tpm4-mCherry)ct31aR line is the result of the previously reported screen.133 The TgBAC(gata5:Lifeact-GFP) transgenic line was obtained from Michel Bagnat (Department of Cell Biology, Duke University, Durham, NC).
2.5.2 Sample preparation
Transgenic zebrafish lines were intercrossed over multiple generations to obtain embryos
with either single or double expressing transgenes. All lines were maintained as heterozygous
for each transgene. Embryos were screened using a fluorescence stereo microscope (Axio Zoom,
Carl Zeiss) for expression patterns of individual fluorescence proteins before imaging
experiments.
For in vivo imaging, 5–6 zebrafish embryos at 14 to 22 hpf were immobilized and placed into
1% UltraPure low-melting-point agarose (catalog no. 16520-050, Invitrogen) solution prepared in
30% Danieau (17.4 mM NaCl, 210 µM KCl, 120 µM MgSO4·7H2O, 180 µM Ca(NO3)2, 1.5 mM HEPES
buffer in water, pH 7.6) with 0.01% tricaine (MS-222) in an imaging dish with no. 1.5 coverglass
bottom (catalog no. D5040P, WillCo Wells). Imaging was started following solidification of
agarose at room temperature (1–2 min).
2.5.3 Image Acquisition
Images were acquired on a Zeiss LSM 780 inverted laser scanning confocal microscope using a
40x/1.1 W LD C-Apochromat Korr UV-VIS-IR objective at 28°C. Imaging on the inverted confocal
microscope was performed by positioning the imaging dish, described in Sample Preparation, on
the microscope stage. Samples of TgBAC(gata5:Lifeact-GFP) and TgBAC(gata5:Lifeact-
GFP)+H2B-tdTomato mRNA were imaged with either 488 nm and 561 nm 1-photon or 940 nm 2-
photon laser excitation, for GFP alone or GFP and tdTomato, respectively. A narrow 488 nm or
488/561 dichroic mirror was used to separate excitation from fluorescence emission for 1-photon
excitation, and a 690 nm lowpass filter was used to separate excitation from fluorescence for
2-photon excitation.
When imaging with 2P excitation, a non-descanned detector (NDD) unit was used as an
attachment to the Zeiss LSM 780 in order to improve signal collection. For the NDD, filter cubes
with ranges 500 nm – 510 nm and 570 nm – 610 nm were used to collect the GFP and tdTomato
signals, respectively.
Separately, samples of Gt(tpm4-mCherry); TgBAC(gata5:Lifeact-GFP) double transgenics were
also imaged using the same 1-photon imaging conditions.
2.5.4 3D Image Registration
LSM files containing the multidimensional image stacks were converted, combined, then
equally split along the time dimension into Multi-TIFF image files in FIJI (https://imagej.net/Fiji)⁶⁷.
All time points for a single sample were registered using the “Correct 3D Drift” plugin
(https://imagej.net/Correct_3D_Drift) in FIJI⁶⁸.
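Registration of this kind is commonly implemented via phase correlation between consecutive volumes. The sketch below illustrates that underlying principle with plain numpy (whole-volume, integer-voxel shifts only); it is an illustrative reconstruction, not the plugin's actual code, which also supports subpixel shifts and channel selection.

```python
import numpy as np

def estimate_drift(ref, mov):
    """Estimate the integer-voxel translation of `mov` relative to `ref`
    via phase correlation: normalize the cross-power spectrum, inverse
    transform, and read the shift off the correlation peak."""
    f_ref = np.fft.fftn(ref)
    f_mov = np.fft.fftn(mov)
    cross = np.conj(f_ref) * f_mov
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Correcting a time point then amounts to translating it by the negated estimate before analysis.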
2.5.5 Actin Cable Marking and Visualization
Bitplane Imaris (Oxford Instruments, Abingdon, UK) 3D visualization and analysis software
was used to trace and track sections of actin cables across multiple dimensions. The Point tool
was used to draw connected line segments in virtual 3D space.
2.5.6 Layer Segmentation
Custom semi-automated segmentation software
Custom software with a graphical user interface (GUI) was created to enable and accelerate
segmentation of the multiple layers of the LPM. This software was built in Python 3 using the Qt
toolkit⁶⁹. The software utilizes an automatic algorithm to calculate a segmentation surface for
the entire 4D dataset, which can then be corrected using the GUI. The automatic segmentation
algorithm is based on calculating the z location of the 1D center of mass for each group of voxels
along the Z-axis (Supplementary Figure 2.4). The source code for the software is available at
https://github.com/danielk605/Layer_Segmentation.
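The automatic first pass can be sketched in a few lines of numpy; the function below is an illustrative reconstruction of the center-of-mass step described above, not the repository code.

```python
import numpy as np

def com_surface(volume):
    """Return a (y, x) surface of z center-of-mass positions.

    For every (y, x) column of a (z, y, x) volume, the surface height is
    the intensity-weighted mean z index: sum(z * I) / sum(I). This is
    the automatic first pass; the GUI then lets the user correct the
    surface where signal is noisy or discontinuous.
    """
    volume = np.asarray(volume, dtype=float)
    z = np.arange(volume.shape[0], dtype=float)[:, None, None]
    total = volume.sum(axis=0)
    # Empty columns produce 0/0; mark them NaN for later manual correction.
    with np.errstate(invalid="ignore", divide="ignore"):
        surface = (z * volume).sum(axis=0) / total
    surface[total == 0] = np.nan
    return surface
```

Applying this per time point yields one candidate surface for each frame of the 4D dataset.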
2.5.7 Quantitative Measures
Fréchet Distance
The Fréchet distance is used as a measure of similarity between two curves⁷⁰. It is intuitively
described as the minimum length of a leash that allows a point traveling along one curve to
remain joined to a point traveling along the other curve over the entire lengths of both curves.
This measure was used to compare the changes in a single actin cable across multiple time
points. When a curve is defined as a continuous mapping $f : [a, b] \to V$, with $a, b \in \mathbb{R}$, $a \le b$,
and $(V, d)$ a metric space, and we have two curves, $f : [a, b] \to V$ and $g : [a', b'] \to V$, the
Fréchet distance is defined as:

$$\mathrm{Fr\acute{e}chet\ distance}(f, g) = \inf_{\alpha, \beta} \; \max_{t \in [0, 1]} \; d\big(f(\alpha(t)),\, g(\beta(t))\big)$$

where $\alpha$ and $\beta$ are arbitrary continuous nondecreasing parametric functions from $t \in [0, 1]$
onto $[a, b]$ and $[a', b']$, respectively.
Fréchet Speed
Since the actin dynamics needed to be synchronized across multiple samples, with time-lapses
of different samples sometimes acquired across different time ranges, a normalized version of
the Fréchet distance was used. The Fréchet speed is defined as the Fréchet distance between a
curve taken at two different time points, divided by the difference in time:

$$\mathrm{Fr\acute{e}chet\ speed} = \frac{\mathrm{Fr\acute{e}chet\ distance}(t_1, t_2)}{t_2 - t_1}$$
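Because the marked cables are polylines, the continuous definition is in practice approximated by the discrete Fréchet distance. The sketch below uses the standard Eiter–Mannila dynamic program and is an illustrative reconstruction, not the analysis code used in this chapter.

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines p and q,
    given as (n, dim) and (m, dim) arrays of vertex coordinates."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)          # ca[i, j]: coupling up to (i, j)
    ca[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(ca[x, y]
                       for x, y in ((i - 1, j), (i, j - 1), (i - 1, j - 1))
                       if x >= 0 and y >= 0)
            ca[i, j] = max(prev, d[i, j])
    return ca[-1, -1]

def frechet_speed(curve_t1, curve_t2, t1, t2):
    """Fréchet distance between the same cable at two time points,
    normalized by the elapsed time."""
    return discrete_frechet(curve_t1, curve_t2) / (t2 - t1)
```

For two parallel segments one unit apart, the distance is 1; sampled at times 14 and 16 hpf, the corresponding speed would be 0.5 units per hour.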
2.6 Supplementary Material
2.6.1 Supplementary Figures
Supplementary Figure 2.1. Actin cables form across multiple cells during midline fusion
Live 3D fluorescence imaging of a TgBAC(gata5:Lifeact-GFP) zebrafish reveals the organization and
formation of supracellular actin cables within the LPM at the midline along the anterior-posterior axis at
(A) 16 hpf and (B) 18 hpf, using maximum intensity z-projections. These cables demonstrate that cell
movements are ordered along the boundaries of the two cell populations as the midline fusion occurs.
(C, D) Zoomed-in views of the actin signal from A and B provide a clearer display of the supracellular actin
cables (dotted lines) denoting the wavefronts of the actin within the LPM during the midline fusion.
Supplementary Figure 2.2. 3D LPM imaging data is composed of small amounts of LPM signal on 2D
virtual slices.
(A) Visualization using the Imaris software enables an oblique 3D view of the entire bilayer LPM. Since
the layers of the LPM are approximately a single cell thick, this holistic 3D visualization only occurs by
viewing the entire dataset at once. The data is constructed from multiple virtual 2D (x,y) slices of the LPM
along the z-axis with only fractional portions of the thin LPM layers on each slice. XY slices at (B) 49 µm,
(C) 67 µm, and (D) 75 µm from the top of the volume demonstrate how only very small sections of the
LPM layers are present on each slice.
Supplementary Figure 2.3. Layer segmented timepoints of the LPM reveal cellular organization and
signal distribution differences between the dorsal and ventral layers over time
The segmented dorsal and ventral layers of the LPM provide a striking visual difference in the actin
organization across time. (A) The actin signal within the dorsal layer of the LPM at four time points,
organized from left to right (17 hpf, 18 hpf, 19.5 hpf, 23 hpf), shows highly dissimilar signal distributions
compared with (B) the corresponding time points of the ventral layer. For the first time point (17 hpf), the dorsal
layer has actin signal organized into arcs flanking the hole in the middle at the anterior section (top half).
For the later time points (18 hpf – 23 hpf), the cells within the dorsal layer appear to be organized in a
more horizontal (stretched left to right) fashion compared to the more vertical distribution of the anterior.
Scale: 30 µm
Supplementary Figure 2.4. Schematic detailing the creation of the automatic segmentation surface
This schematic provides a visual representation for how the Z-positions of the automatic 3D segmentation
surface are calculated from the intensity values of the 4D dataset. In the ideal case, the automatic
segmentation surface would provide a perfect virtual dissection of the LPM layers, but due to high noise
and unpredictable discontinuity of signal throughout the LPM, the segmentation surfaces require
correction.
2.6.2 Supplementary Notes
Supplementary Note 2.1: Complexities with visualization and segmentation of actin in the
LPM
There are several difficulties with regard to the structure and data composition that must be
taken into account when determining the best way to visualize and analyze the actin signal within
the LPM. The 3D structure of the lateral plate mesoderm is not one that is commonly imaged
with multidimensional fluorescence microscopy. The LPM is composed of two layers whose
curvatures are not monotonic, with each layer only one or two cells thick. This means that
when performing multidimensional laser scanning microscopy to acquire the full volume, each
individual XY image slice only contains a contour made from sections of the cells composing the
LPM. Furthermore, the curvature of the ventral layer of the LPM may also place portions of the
ventral layer on the same XY image slice as those from the dorsal layer (Supplementary Figure
2.4). Since the dorsal and ventral layers can share the same Z-planes, taking just a subset of the
data along the Z-axis is not sufficient to provide a clear separation between the two layers.
Another aspect of the LPM layers that hinders automatic segmentation techniques is the
unsmooth nature of certain areas of the LPM. The LPM has endothelial fringes at the anterior-
lateral dorsal regions which obscure some portions of the dorsal layer. The LPM is also directly
adjacent to the yolk on its ventral side; the yolk is highly autofluorescent and may further contain
actin molecules along its surface, which are acquired as signal of high enough intensity that it can
be difficult to distinguish from that of the ventral layer. In summary, all of these structural and
labeling complexities that occur due to the biology and imaging modality greatly hinder the
development of any automatic layer segmentation algorithm and prevent the straightforward
use of standard tools such as FIJI or Imaris for visualization and analysis.
Supplementary Note 2.2: Limitations in acquisition and analysis for live imaging of deep
complex tissues
Since the LPM is a deep, non-orthogonal 3D tissue which undergoes a variety of shape
changes, fluorescence time-lapse imaging of the tissue involves several considerations. Unlike
simpler model systems, the LPM’s depth of 150–300 µm means that standard 1-photon (1P)
confocal laser scanning microscopy requires a high laser excitation power in order to receive
proper signal due to scattering effects⁶,⁷,¹⁴. However, this high power is unsuitable
for live imaging of the LPM, as it causes both phototoxicity, leading to death of the organism,
and bleaching of the fluorophores, leading to loss of signal. Therefore, we require 2-photon (2P)
laser scanning microscopy to obtain sufficient dynamic range for the fluorescence signal across
the entire developmental window of linear heart tube morphogenesis³². Even with
2P microscopy and non-descanned detectors (NDD), we are sometimes unable to capture high
enough signal from the midline for proper image processing or analysis since the midline sits
directly beneath the neural tube, leading to even greater scattering effects. As such, our actin
dynamics analysis mainly focuses on the lateral sections of the DV layers. Still, even when
focusing on the lateral sections, delineation of the actin cables in our samples requires careful
judgement of which sections of signal correspond to continuous cables due to the varying SNR of
the time-lapse datasets. Other than the inherent noisiness of the data due to photon starvation,
these variations in quality are further exacerbated due to differences in local contrast throughout
the volumetric dataset caused by shadowing from the neural tube (Supplementary Figure 2.5) as
well as shifts in the orientation of the sample which are an inherent consequence of the inexact
nature of the mounting process. The variability caused by the imaging process as well as the
inherent biological variability of the organism greatly hinders automated and/or objective
quantification of the dynamics in question. Future analyses of this kind will require further
advances in long-term time-lapse deep-tissue imaging.
Supplementary Note 2.3: Actin cable demarcation choices for quantitative analysis
Due to the greater mixing of the actin signal and increased noise arising from both the effects
of the point spread function across very close layers and greater tissue depth of the midline, we
decided to quantify sets of two cables, one each on the left and right sides, for each tissue layer
(dorsal and ventral), regardless of whether the actual cable appears to be joined across the
midline, between the two types of samples. We manually marked each cable in 3D using Imaris
as described (Methods 2.5.5) for the time span of the cables’ appearance. For the dorsal layer,
we marked the same set of cables qualitatively utilized in the visual analysis of the previous
section. For the ventral layer, we also marked cables that resemble those previously described,
but for the quantitative analysis, we marked individual cables split across the midline of the LPM
on the left and right sides, keeping what may be a single cable separated into two parts. This was
done to keep cable quantifications consistent across all wildtype and morphant samples. All
samples were manually synchronized in time, to provide similar developmental periods for the
actin dynamics. For proper comparison across all of the conditions, a total of four cables, two in
each layer, were marked and tracked over synchronized time frames as described in the following
paragraph.
2.7 Chapter Conclusion
Previous imaging of the cardiac cone has examined the individual layers of the LPM as a single
entity, focusing on signaling pathways that drive the leftward extension of the heart tube with
little consideration for mechanisms that drive the structural reconfigurations. In order to
investigate these physical mechanisms, we need to be able to visualize and analyze the
morphological changes that occur within the 3D structure of the LPM. We have developed a
custom-built software with a graphical user interface (GUI), which provides the ability to
automatically generate and correct segmentation lines along different 3D axes. This allows us to
perform a virtual dissection of the dynamic bilayer tissue of the LPM in a semi-automated fashion,
a feat that has previously not been practically achievable. Initial visualization of these segmented
layers reveals a wealth of information regarding cellular motions between the dorsal (top) versus
the ventral (bottom) layer for a large variety of conditions. This segmentation workflow provides
a foundation for tracking and analyzing the architecture and differential motions of the LPM that
occur during embryonic tissue development, and preliminary results depict a unique dynamic
model for asymmetric placement of the linear heart tube.
Chapter 3
Scalable, real-time, multidimensional visualization with SEER
3.1 Author Contribution
Wen Shi¹,²,⁸, Daniel E.S. Koo¹,²,⁸, Masahiro Kitano¹,³, Hsiao J. Chiang¹,², Le A. Trinh¹,³,⁴, Gianluca
Turcatel⁵,⁶, Benjamin Steventon⁷, Cosimo Arnesano¹,³, David Warburton⁵,⁶, Scott E. Fraser¹,²,³ &
Francesco Cutrale¹,²,³
1 Translational Imaging Center, University of Southern California, 1002 West Childs Way, Los
Angeles, CA 90089, USA.
2 Biomedical Engineering, University of Southern California, 1002 West Childs Way, Los
Angeles, CA 90089, USA.
3 Molecular and Computational Biology, University of Southern California, 1002 West Childs
Way, Los Angeles, CA 90089, USA.
4 Department of Biological Sciences, University of Southern California, 1050 Childs Way, Los
Angeles, CA 90089, USA.
5 Developmental Biology and Regenerative Medicine Program, Saban Research Institute,
Children’s Hospital, 4661 Sunset Blvd, Los Angeles, CA 90089, USA.
6 Keck School of Medicine and Ostrow School of Dentistry, University of Southern California,
Los Angeles, CA, USA.
7 Department of Genetics, University of Cambridge, Downing Street, Cambridge CB2 3EH, UK.
8 These authors contributed equally: Wen Shi, Daniel E.S. Koo
3.2 Chapter Summary
In this chapter, I introduce a method to efficiently visualize large, spectrally dense
multispectral images in a single snapshot. As explained previously, hyperspectral fluorescence
imaging has enabled multiplexing of spatiotemporal dynamics across scales for molecules, cells
and tissues with multiple fluorescent labels by adding the dimension of wavelength. This has
resulted in datasets with very high information density which often require lengthy analyses to
separate the overlapping fluorescent spectra. Understanding and visualizing these large multi-
dimensional datasets during acquisition and pre-processing is a necessary task but has become
extremely challenging. The method presented in this chapter is called Spectrally Encoded
Enhanced Representations (SEER), an approach for improved and computationally efficient
simultaneous color visualization of multiple spectral components of hyperspectral fluorescence
images. This method exploits the mathematical properties of the phasor method to encode the
multispectral components of multiplexed fluorescence markers into information rich RGB images
for easy visualization. We present multiple biological fluorescent samples and highlight SEER’s
enhancement of specific and subtle spectral differences, providing a fast, intuitive and
mathematical way to interpret hyperspectral images during collection, pre-processing and
analysis.
3.3 Introduction
Fluorescence hyperspectral imaging (fHSI) has become increasingly popular in recent years
for the simultaneous imaging of multiple endogenous and exogenous labels in biological
samples²³,⁷¹⁻⁷⁴. Among the advantages of using multiple fluorophores is the capability to
simultaneously follow differently labeled molecules, cells or tissues spatiotemporally. This is
especially important in the field of biology, where tissues, proteins and their functions within
organisms are deeply intertwined, and there remain numerous unanswered questions regarding
the relationship between individual components⁷⁵. fHSI empowers scientists with a more
complete insight into biological systems, with multiplexed information deriving from observation
of the full spectrum for each point in the image⁷⁶.
Standard optical multi-channel fluorescence imaging differentiates fluorescent protein
reporters through band-pass emission filters, selectively collecting signals based on wavelength.
Spectral overlap between labels limits the number of fluorescent reporters that can be acquired
and background signals are difficult to separate. fHSI overcomes these limitations, enabling
separation of fluorescent proteins with overlapping spectra from the endogenous fluorescent
contribution, expanding to a fluorescent palette that counts dozens of different labels with
corresponding separate spectra¹⁷,⁷⁶,⁷⁷.
The drawback of acquiring this vast multidimensional spectral information is an increase in
complexity and computational time for the analysis, showing meaningful results only after
lengthy calculations. To optimize experimental time, it is advantageous to perform an informed
visualization of the spectral data during acquisition, especially for lengthy time-lapse recordings,
and prior to performing analysis. Such preprocessing visualization allows scientists to evaluate
image collection parameters within the experimental pipeline as well as to choose the most
appropriate processing method. However, the challenge is to rapidly visualize subtle spectral
differences with a set of three colors, compatible with displays and human eyes, while minimizing
loss of information. As the most common color model for displays is RGB, where red, green, and
blue are combined to reproduce a broad array of colors, hyper- or multispectral datasets are
typically reduced to three channels to be visualized. Thus, spectral information compression
becomes the critical step for proper display of image information.
Dimensional reduction strategies are commonly used to represent multidimensional fHSI
data⁷⁸. One strategy is to construct fixed spectral envelopes from the first three components
produced by principal component analysis (PCA) or independent component analysis (ICA),
converting a hyperspectral image to a three-band visualization⁷⁸⁻⁸². The main advantage of
spectrally weighted envelopes is that they can preserve the human-eye perception of the
hyperspectral images. Each spectrum is displayed with the most similar hue and saturation for
tri-stimulus displays in order for the human eye to easily recognize details in the image⁸³.
Another popular visualization technique is pixel-based image fusion, which preserves the spectral
pairwise distances for the fused image in comparison to the input data⁸⁴. It selects the weights
by evaluating the saliency of the measured pixel with respect to its relative spatial neighborhood
distance. These weights can be further optimized by implementing widely applied mathematical
techniques, such as Bayesian inference⁸⁵, by using a filter bank⁸⁶ for feature extraction⁸⁷, or by
noise smoothing⁸⁸.
A drawback to approaches such as Singular Value Decomposition to compute PCA bases and
coefficients, or generating the best fusion weights, is that they can take numerous iterations for
convergence⁸⁹. Considering that fHSI datasets easily exceed the gigabyte range and many cross
the terabyte threshold⁷⁵, such calculations will be both computationally and time demanding.
Furthermore, most visualization approaches have focused more on interpreting spectra as RGB
colors and not on exploiting the full characterization that can be extracted from the spectral data.
Our approach is based on the belief that preserving most spectral information and enhancing the
distinction of spectral properties between relevant pixels will provide an ideal platform for
understanding biological systems. The challenge is to develop tools that allow efficient
visualization of multidimensional datasets without the need for computationally demanding
dimensionality reduction, such as ICA, prior to analysis.
In this work, we build maps based on Phasors (Phase Vectors). The Phasor approach to
fluorescence microscopy has multiple advantages deriving from its properties for fluorescent
signals⁹⁰⁻⁹³. After transforming the spectrum at each pixel into its Fourier components, the
resulting complex value is represented as a two-dimensional histogram where the axes represent
the real and imaginary components. Such a histogram has the advantage of providing a
representative display of the statistics and distributions of pixels in the image from a spectral
perspective, simplifying identification of independent fluorophores. Pixels in the image with
similar spectra generate a cluster on the phasor plot. While this representation is cumulative
across the entire image, each single point on the phasor plot is easily remapped to the original
fluorescent image⁹⁴⁻⁹⁷.
Exploiting the advantages of the phasor approach, Hyper-Spectral Phasors (HySP) has enabled
analysis of 5D hyperspectral time-lapse data semi-automatically, as similarly colored regions
cluster on the phasor plot. These clusters have been characterized and exploited for simplifying
interpretation and spatially lossless denoising of data, improving both collection and analysis in
low-signal conditions⁹⁷. Phasor analysis generally explores the 2D histogram of spectral
fingerprints by means of geometrical selectors⁹⁰,⁹¹,⁹³,⁹⁵,⁹⁶,⁹⁸, which is an effective strategy but
requires user involvement. While capable of imaging multiple labels and separating different
spectral contributions as clusters, this approach is inherently limited in the number of labels that
can be analyzed and displayed simultaneously. Prior works directly utilize phase and modulation
for quantifying, categorizing, and representing features within Fluorescence Lifetime and Image
Correlation Spectroscopy data⁹⁹⁻¹⁰¹. Our method differs from previous
implementations⁹⁰,⁹¹,⁹³,⁹⁵,⁹⁶,⁹⁸, as it focuses instead on providing a mathematically constructed,
holistic preprocessing visualization of large hyperspectral data.
The solution we propose extracts information from both the whole denoised phasor plot and the
image to reconstruct a one-shot view of the data and its intrinsic spectral information. Spectrally
Encoded Enhanced Representations (SEER) is a dimensionality-reduction-based approach,
achieved by utilizing phasors and automatically creating spectrally representative color maps. The results of
SEER show an enhanced visualization of spectral properties, representing distinct fluorophores
with distinguishable pseudo-colors and mathematically highlighted differences between intrinsic
signals during live imaging. SEER has the potential of optimizing the experimental pipeline, from
data collection during acquisition to data analysis, greatly improving image quality and data size.
3.4 Results
3.4.1 Spectrally Encoded Enhanced Representations (SEER)
The execution of SEER has a simple foundation. Each spectrum is assigned a pseudo-color,
based on its real and imaginary Fourier components, by means of a reference color map. This
concept is illustrated in detail in Fig. 3.1 using an example Zebrabow¹⁰² embryo dataset, where
cells within the sample express different ratios of cyan, yellow, and red fluorescent proteins,
resulting in a wide-ranging palette of discrete spectral differences. The data are acquired as a
hyperspectral volume (x, y, z, λ) (Fig. 3.1a), providing a spectrum for each voxel. The spectra
obtained from multiple regions of interest are complex, showing both significant overlap and the
expected difference in ratios (Fig. 3.1b). Discriminating the very similar spectra within the original
acquisition space is challenging using standard multispectral dataset visualization approaches
(Fig. 3.1c). SEER was designed to create usable spectral contrast within the image by
accomplishing five main steps. First, the Sine and Cosine Fourier transforms of the spectral
dataset at one harmonic (usually the 1st or 2nd, owing to Riemann surfaces) provide the components
for a 2D phasor plot (Fig. 3.1d). The phasor transformation compresses and normalizes the image
information, reducing a multidimensional dataset into a 2D-histogram representation and
normalizing it to the unit circle.
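This first step can be sketched in a few lines, with G and S as the normalized cosine and sine Fourier coefficients at the chosen harmonic (a minimal illustration of the transform, not the SEER implementation):

```python
import numpy as np

def spectral_phasor(stack, harmonic=1):
    """Map a hyperspectral stack (..., n_channels) to phasor coordinates.

    G and S are each pixel's cosine and sine Fourier coefficients at the
    chosen harmonic, normalized by total intensity, so every pixel lands
    inside the unit circle of the 2D phasor plot.
    """
    stack = np.asarray(stack, dtype=float)
    n = stack.shape[-1]
    k = np.arange(n)
    total = stack.sum(axis=-1)
    g = (stack * np.cos(2 * np.pi * harmonic * k / n)).sum(axis=-1) / total
    s = (stack * np.sin(2 * np.pi * harmonic * k / n)).sum(axis=-1) / total
    return g, s
```

A spectrally flat pixel maps to the origin, while a single-channel (delta-like) spectrum maps to the unit circle, illustrating the normalization.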
Second, the histogram representation of the phasor plot provides insight on the spectral
population distribution and improvement of the signal through summation of spectra in the
histogram bins. Pixels with very similar spectral features, for example expressing only a single
fluorophore, will fall within the same bin in the phasor plot histogram. The color of a bin on the
phasor plot is an indicator for the number of pixels within that bin. Also, because of the linear
property of the phasor transform, if an image pixel contains a mixture of two fluorophores, its
position on the phasor plot will lie proportionally along the line connecting the phasor
coordinates of those two components. Altogether, this means that the entire multidimensional
dataset can be represented by the distribution of a handful of 2D locations on the phasor plot.
The relation of the spectral properties of the pixels is therefore intuitively linked to the geometric
properties of the phasor. This step highlights the importance of the geometry and distribution of
bins in the phasor representation.
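The linearity property can be checked numerically. The sketch below uses two hypothetical Gaussian emission spectra (illustrative stand-ins for real fluorophores) and verifies that a mixed pixel's phasor falls on the segment joining the two pure positions, weighted by each fluorophore's share of the total intensity:

```python
import numpy as np

def phasor(spectrum, harmonic=1):
    """Phasor coordinate of one spectrum, packed as a complex G + iS."""
    spectrum = np.asarray(spectrum, dtype=float)
    k = np.arange(spectrum.size)
    w = np.exp(1j * 2 * np.pi * harmonic * k / spectrum.size)
    return (spectrum * w).sum() / spectrum.sum()

# Two hypothetical emission spectra on a 32-channel detector.
chan = np.arange(32)
a = np.exp(-0.5 * ((chan - 8) / 3.0) ** 2)
b = np.exp(-0.5 * ((chan - 20) / 3.0) ** 2)
za, zb = phasor(a), phasor(b)

# A pixel containing both fluorophores lies on the segment joining the
# pure positions, weighted by intensity contribution (linearity).
mix = 0.25 * a + 0.75 * b
zmix = phasor(mix)
w = 0.25 * a.sum() / (0.25 * a.sum() + 0.75 * b.sum())
assert abs(zmix - (w * za + (1 - w) * zb)) < 1e-9
```

Because the relation is exact, the fractional contribution of each fluorophore in a two-component pixel can be read directly from its position along the segment.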
Third, spatially lossless spectral denoising, previously presented⁹⁷, is performed 1–2 times in
phasor space to reduce spectral error. In short, median filters are applied on both the Sine and
phasor space to reduce spectral error. In short, median filters are applied on both the Sine and
Cosine transformed images, reducing the spectral scatter error on the phasor plot, while
maintaining the coordinates of the spectra in the original image (Fig. 3.1e). Filters affect only the
phasor space, producing an improvement of the signal.
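A minimal sketch of this filtering step is given below, using a pure-numpy 3x3 median as a stand-in for a library filter such as scipy.ndimage.median_filter; the published HySP denoising remains the authoritative description.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge padding (a small stand-in for
    scipy.ndimage.median_filter(img, size=3))."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def denoise_phasor(g, s, passes=2):
    """Median-filter the G and S images a few times in phasor space.

    Only the phasor coordinates change; pixel intensities and positions
    in the original image are untouched, so the denoising is spatially
    lossless while tightening the scatter of clusters on the plot.
    """
    for _ in range(passes):
        g, s = median3x3(g), median3x3(s)
    return g, s
```

A single outlier bin in G (spectral scatter from a noisy pixel) is pulled back toward its neighborhood median, while a uniform S image passes through unchanged.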
Fourth, we designed multiple SEER maps exploiting the geometry of phasors. For each bin,
we assign RGB colors based on the phasor position in combination with a reference map (Fig.
3.1f). Subtle spectral variations can be further enhanced with multiple contrast modalities,
focusing the map on the most frequent spectrum, the statistical center of mass of the distribution
or scaling the colors to the extremes of the phasor distribution (Fig. 3.1g).
Finally, colors in the original dataset are remapped based on SEER results (Fig. 3.1h). This
permits a dataset in which spectra are visually indistinguishable (Fig. 3.1a–c) to be rendered so
that even these subtle spectral differences become readily discernible (Fig. 3.1i). SEER rapidly
produces three channel color images (Supplementary Fig. 3.1) that approximate the visualization
resulting from a more complete spectral unmixing analysis (Supplementary Fig. 3.2).
Figure 3.1. Spectrally Encoded Enhanced Representations (SEER) conceptual representation.
The SEER workflow is a fast, computationally efficient process which enables the simultaneous view of
multiple spectral components within hyperspectral fluorescence images by color-coding spatial regions
with similar colors. (a) A multispectral fluorescent dataset is acquired using a confocal instrument in
spectral mode (32 channels). Here we show a Tg(ubi:Zebrabow)¹⁰² dataset where cells contain a stochastic
combination of cyan, yellow and red fluorescent proteins. (b) Average spectra within six regions of interest
(colored boxes in a) show the level of overlap resulting in the sample. (c) Standard multispectral
visualization approaches have limited contrast for spectrally similar fluorescence. (d) Spectra for each
voxel within the dataset are represented as a two-dimensional histogram of their Sine and Cosine Fourier
coefficients S and G, known as the phasor plot. (e) Spatially lossless spectral denoising is performed in
phasor space to improve signal⁹⁷. (f) SEER provides a choice of several color reference maps that encode
positions on the phasor into predetermined color palettes. The reference map used here (magenta
selection) is designed to enhance smaller spectral differences in the dataset. (g) Multiple contrast
modalities allow for improved visualization of data based on the phasor spectra distribution, focusing the
reference map on the most frequent spectrum, on the statistical spectral center of mass of the data
(magenta selection), or scaling the map to the distribution. (h) Color is assigned to the image utilizing the
chosen SEER reference map and contrast modality. (i) Nearly indistinguishable spectra are depicted with
improved contrast, while more separated spectra are still rendered distinctly.
3.4.2 Standard reference maps
Biological samples can include a multitude of fluorescent spectral components, deriving from
fluorescent labels as well as intrinsic signals, each with different characteristics and properties.
Identifying and rendering these subtle spectral differences is the challenge. We found that no
one rendering is sufficient for all cases, and thus created four specialized color map references
to enhance color contrast in samples with different spectral characteristics. To simplify the
testing of the color map references, we designed a Simulated Hyperspectral Test Chart (SHTC), in
which defined areas contain the spectra we obtained from CFP, YFP, and RFP zebrafish embryos.
Each section of the test chart offers different image contrast, obtained by shifting the CFP and
RFP spectra maxima position with respect to the YFP spectrum (Supplementary Fig. 3.3, see
Methods). We render the SHTC as a grayscale image and with SEER for comparison (Fig. 3.2a).
These representations can each be rapidly deployed in order for users to determine which has
the highest information content for their particular dataset.
We label our four specialized color map references as “standard reference maps” where a
standard reference map is defined as an organization of a palette of colors where each spectrum
is associated with a color based on only its phasor position. The color distribution in each of the
reference maps is a function of the coordinates of the phasor plot. In the angular map (Fig. 3.2b),
hue is calculated as a function of angle, enhancing diversity in colors when spectra have different
center wavelengths (phases) on the phasor plot. For the radial map (Fig. 3.2c), we assign colors
with respect to different radii, highlighting spectral amplitude and magnitude. The radial position
is, in general, related to the intensity integral of the spectrum, which in turn can depend on the
shape of the spectrum, with the null-intensity localizing at the origin of the plot (Supplementary
Fig. 3.4). In our simulation (Fig. 3.2c), the colors obtained with this map mainly represent
differences in shape, however, in a scenario with large dynamic range of intensities, colors will
mainly reflect changes in intensity, becoming affected, at low signal-to-noise, by the uncorrelated
background (Supplementary Fig. 3.5). In the gradient ascent and descent models (Fig. 3.2d, e),
the color groups differ according to angle as seen in the angular map with an added variation of
the color intensity strength in association with changes in the radius. Gradient maps enhance
similar properties as the angular map. However, the gradient ascent (Fig. 3.2d) map puts a greater
focus on distinguishing the higher intensity spectra while de-emphasizing low intensity spectra;
whereas, the gradient descent (Fig. 3.2e) map does the opposite, highlighting the spectral
differences in signals with low intensity. The complementary attributes of these four maps permit renderings that distinguish a wide range of spectral properties in relation to the phasor positions. It is important to note that the ideas of the Angular and Radial maps have been previously utilized in a variety of applications and approaches [100,101] and are usually introduced as Phase and Modulation, respectively. Here, we have recreated and provided these maps for our hyperspectral fluorescence data as simpler alternatives to our more adaptable maps.
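The mapping from spectrum to phasor coordinates to display color can be sketched as follows. This is a minimal illustration under the definitions above, not the SEER implementation; the function names, the HSV assignment, and the harmonic handling are our own assumptions:

```python
import numpy as np
import colorsys

def spectral_phasor(spectrum, harmonic=1):
    """Fourier coordinates (G, S) of a spectrum at the given harmonic,
    normalized by total intensity."""
    spectrum = np.asarray(spectrum, dtype=float)
    n = len(spectrum)
    k = np.arange(n)
    total = spectrum.sum()
    if total == 0:
        return 0.0, 0.0  # null intensity localizes at the phasor origin
    g = (spectrum * np.cos(2 * np.pi * harmonic * k / n)).sum() / total
    s = (spectrum * np.sin(2 * np.pi * harmonic * k / n)).sum() / total
    return g, s

def angular_map_color(g, s):
    """Angular map sketch: hue follows the phasor angle (phase);
    the radius is ignored."""
    hue = (np.arctan2(s, g) / (2 * np.pi)) % 1.0
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

def radial_map_color(g, s):
    """Radial map sketch: color follows the distance from the origin
    (modulation), insensitive to the angle."""
    radius = min(np.hypot(g, s), 1.0)
    return colorsys.hsv_to_rgb(radius, 1.0, 1.0)
```

For example, a spectrum peaked in a different channel rotates the phasor angle and therefore receives a different hue from the angular map, while a change that moves the point toward the origin alters only the radial-map color.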
The standard reference maps simplify comparisons between multiple fluorescently labeled
specimens as the palette representation is unchanged across samples. These references are
centered at the origin of the phasor plot; hence their color distributions remain constant,
associating a predetermined color to each phasor coordinate. Fluorophore positions are constant on the phasor plot [97], unless their spectra are altered by experimental conditions. The
ability of the standard reference maps to capture either different ratios of labels or changes in a
label, such as a calcium indicator, offers the dual advantage of providing a rapid, mathematically
improved overview and simplifying the comparison between multiple samples.
Figure 3.2. Spectrally Encoded Enhanced Representation (SEER) map designs.
A set of Standard Reference Maps and their corresponding result on a Simulated Hyperspectral Test Chart
(SHTC) deliberately designed to contain a gradient of spectral overlaps provides an overview of the
improvements that SEER introduces. (a) The Standard phasor plot with corresponding average grayscale
image provides the positional information of the spectra on the phasor plot. The phasor position is
associated with a color in the rendering according to a set of Standard Reference Maps, each highlighting a
different property of the dataset. (b) The angular map enhances spectral phase differences by linking color
to changes in angle (in this case, with respect to origin). This map enhances changes in maximum emission
wavelength, as phase position in the plot is most sensitive to this feature, and largely agnostic to changes
in intensity. (c) The radial map, instead, focuses mainly on intensity changes, as a decrease in the signal
to noise generally results in shifts towards the origin on the phasor plot. As a result, this map highlights
spectral amplitude and magnitude, and is mostly insensitive to wavelength changes for the same
spectrum. (d) The gradient ascent map enhances spectral differences, especially within the higher
intensity regions in the specimen. This combination is achieved by adding a brightness component to the
color palette. Darker hues are localized in the center of the map, where lower image intensities are
plotted. (e) The gradient descent map improves the rendering of subtle differences in wavelength.
Colorbars for b,c,d,e are provided as simplified indicators of the spectral signatures within the samples;
similar colors denote similar spectral signatures in nm. (f) The tensor map provides insights in statistical
changes of spectral populations in the image. This visualization acts as a spectral edge detection on the
image and can simplify identification of spectrally different and infrequent areas of the sample such as
the center of the SHTC. Colorbar represents how quickly the number of pixels changes across multiple
spectral signatures.
3.4.3 Tensor map
The SEER approach provides a straightforward means to assess statistical observations within
spectral images through the use of a nonstandard reference map called the tensor map. In addition to the four standard reference maps, the tensor map recolors each image pixel
based on the gradient of counts relative to its surrounding spectra (Fig. 3.2f). Considering that
the phasor plot representation is a two-dimensional histogram of real and imaginary Fourier components, the magnitude of each histogram bin is the number of occurrences of a particular spectrum. The tensor map is calculated as a gradient of counts between adjacent bins, and each resulting value is associated with a color based on a color map (here we use a jet color map). The image is recolored according to changes in spectral occurrences, enhancing the spectral statistical fluctuations for each phasor cluster. The speed of spectral change represented by the recoloring can provide insights into the dynamics of the population of spectra inside the dataset. A
visible result is a type of spectral edge detection that works in the wavelength dimension,
facilitating detection of chromatic changes in the sample. Alternatively, the tensor map can aid in identifying regions which contain less frequent spectral signatures relative to the rest of the sample. An example of such a case is shown in the upper left quadrant of the simulation (Fig. 3.2f), where the center part of each quadrant has a different spectrum and appears with lower frequency compared with its surroundings.
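The per-pixel tensor-map value described above can be sketched by differentiating the phasor-plot histogram of counts and sampling it at each pixel's bin. The variable names, bin count, and final color lookup here are illustrative assumptions, not the SEER implementation:

```python
import numpy as np

def tensor_map_values(g_img, s_img, bins=128):
    """Tensor map sketch: for each pixel, return the gradient magnitude of
    the phasor-plot 2D histogram of counts at that pixel's (G, S) bin.
    g_img, s_img: per-pixel phasor coordinates in [-1, 1]."""
    counts, g_edges, s_edges = np.histogram2d(
        g_img.ravel(), s_img.ravel(), bins=bins, range=[[-1, 1], [-1, 1]])
    # Gradient of counts between adjacent bins, combined into a magnitude.
    dg, ds = np.gradient(counts)
    grad_mag = np.hypot(dg, ds)
    # Look up each pixel's bin (clip so values on the top edge stay in range).
    gi = np.clip(np.digitize(g_img, g_edges) - 1, 0, bins - 1)
    si = np.clip(np.digitize(s_img, s_edges) - 1, 0, bins - 1)
    return grad_mag[gi, si]  # map through e.g. a jet colormap for display
```

Pixels whose spectra sit where histogram counts change steeply (cluster edges, rare signatures) receive high values, producing the spectral edge-detection behavior described above.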
3.4.4 Modes (scale and morph)
We have implemented two different methods to improve our ability to enhance spectral
properties: scaled mode and morphed mode.
Scaled mode provides an adaptive map with increased color contrast by normalizing the
standard reference map extreme values to the maximum and minimum phasor coordinates of
the current dataset, effectively creating the smallest bounding unit circle that contains all phasor
points (Fig. 3.3b). This approach maximizes the number of hues represented in the rendering by
resizing the color map based on the spectral range within the image. Scaled mode increases the
difference in hue and the contrast of the final false-color rendering. These characteristics set the
scaled mode apart from the standard reference maps (Fig. 3.3a), which always cover the full phasor plot area and thereby ease comparisons between datasets. Scaled mode sacrifices this
uniformity, but offers spectral contrast stretching that improves contrast depending on the
values represented in individual image datasets. The boundaries of the scaled mode can be set
to a constant value across different samples to facilitate comparison.
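The contrast stretching of scaled mode can be sketched as follows. For simplicity this sketch uses an axis-wise min–max stretch of the phasor coordinates rather than the bounding-circle fit described above; the function names are ours:

```python
import numpy as np

def scale_phasor(g_img, s_img):
    """Scaled-mode sketch: linearly stretch the observed (G, S) ranges onto
    [-1, 1] before color lookup, so the data span the whole reference map.
    NOTE: simplified to a per-axis stretch; the text describes the smallest
    bounding unit circle containing all phasor points."""
    def stretch(x):
        lo, hi = x.min(), x.max()
        if hi == lo:
            return np.zeros_like(x)  # degenerate case: all pixels identical
        return 2.0 * (x - lo) / (hi - lo) - 1.0
    return stretch(np.asarray(g_img, float)), stretch(np.asarray(s_img, float))
```

The stretched coordinates are then fed to the same standard reference map; because the stretch is linear, relative spectral differences are preserved while the palette range in use is maximized.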
Morph mode exploits the dataset population properties captured in the image’s phasor
representation to enhance contrast. From the phasor histogram, either the most frequent
spectral signature or the center of mass (in terms of histogram counts) is used as the new center
reference point of the SEER maps. We call this new calculated center the apex of the SEER. The
result is an adaptive palette that changes depending on the dataset. In this representation mode,
the edges of the reference map are held anchored to the phasor plot circular boundary, while
the center point is shifted, and the interior colors are linearly warped (Fig. 3.3c, d). By shifting the
apex, contrast is enhanced for datasets with off-centered phasor clusters. A full list of the
combination of standard reference maps and modes is reported (Supplementary Figs. 3.6, 3.7)
for different levels of spectral overlap in the simulations and for different harmonics. The
supplement presents results for SHTC with very similar spectra (Supplementary Fig. 3.3), using
second harmonic in the transform (see Methods), and for an image with frequently encountered
level of overlap (Supplementary Figs. 3.8 and 3.9) using first harmonic. In both scenarios, SEER
improves visualization of multispectral datasets (Supplementary Figs. 3.6, 3.7, 3.9, 3.10)
compared with standard approaches (Supplementary Figs. 3.3 and 3.8). A detailed description of
the choice of harmonic for visualization is presented in Supplementary Note 3.1. Implementation
of 1x to 5x spectral denoising filters [97] further enhances visualization (Supplementary Figs. 3.11, 3.12).
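Both morph modes amount to choosing a new apex on the phasor histogram, toward which the interior of the reference map is then warped. A sketch of that apex choice, with illustrative bin counts and our own function names:

```python
import numpy as np

def morph_apex(g_img, s_img, mode="max", bins=128):
    """Apex of the SEER map (sketch): either the most frequent spectrum
    (max morph) or the counts-weighted center of mass (mass morph) of the
    phasor-plot histogram."""
    counts, g_edges, s_edges = np.histogram2d(
        g_img.ravel(), s_img.ravel(), bins=bins, range=[[-1, 1], [-1, 1]])
    g_mid = 0.5 * (g_edges[:-1] + g_edges[1:])  # bin centers
    s_mid = 0.5 * (s_edges[:-1] + s_edges[1:])
    if mode == "max":
        # Bin with the highest count: the most recurring spectrum.
        i, j = np.unravel_index(np.argmax(counts), counts.shape)
        return g_mid[i], s_mid[j]
    # "mass": average of phasor coordinates weighted by histogram counts.
    total = counts.sum()
    return ((counts.sum(axis=1) * g_mid).sum() / total,
            (counts.sum(axis=0) * s_mid).sum() / total)
```

The color for each pixel is then looked up after shifting coordinates so that this apex sits at the map center, with the circular boundary held anchored and the interior warped linearly.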
Figure 3.3. Enhanced contrast modalities.
For each SEER standard reference map design, four different modes can provide improved contrast during
visualization by highlighting different statistical properties of the distribution of spectra within the
dataset. As a reference, we use the gradient descent map applied on a Simulated Hyperspectral Test Chart
(SHTC). (a) Standard mode is the standard map reference. It covers the entire phasor plot circle, centering
on the origin and anchoring on the circumference. The color palette is constant across samples, simplifying
spectral comparisons between datasets. (b) Scaled mode adapts the gradient descent map range to the
values of the dataset, effectively performing a linear contrast stretching. In this process the extremities of
the map are scaled to wrap around the phasor representation of the viewed dataset, resulting in the
largest shift in the color palette for the phase and modulation range in a dataset. (c) Max Morph mode
shifts the map center to the maximum of the phasor histogram. The boundaries of the reference map are
kept anchored to the phasor circle, while the colors inside the plot are warped. The maximum of the
phasor plot represents the most frequent spectrum in the dataset. This visualization modality remaps the
color palette with respect to the most recurring spectrum, allowing insights on the distribution of spectra
inside the sample. (d) Mass Morph mode, instead, uses the histogram counts to calculate a weighted
average of the phasor coordinates and uses this color-frequency center of mass as a new center for the
SEER map. The color palette now maximizes the palette color differences between spectra in the sample.
3.4.5 Color maps enhance different spectral gradients
To demonstrate the utility of SEER and its modes, we present four example visualizations of
images taken from unlabeled mouse tissues and fluorescently tagged zebrafish. In live samples,
a number of intrinsic molecules are known to emit fluorescence, including NADH, riboflavin,
retinoids, and folic acid [103]. The contribution of these generally low signal-to-noise intrinsic signals to the overall fluorescence is generally called autofluorescence [76]. Hyperspectral imaging and HySP [97] can be employed to diminish the contribution of autofluorescence to the image. The
improved sensitivity of the phasor, however, enables autofluorescence to become a signal of
interest and allows for exploration of its multiple endogenous molecules’ contributions. SEER is
applied here for visualizing multispectral autofluorescent data of an explant of freshly isolated
trachea from a wild-type C57Bl mouse. The tracheal epithelium is characterized by a very low
cellular turnover, and therefore the overall metabolic activity is attributable to the cellular
function of the specific cell type. Club and ciliated cells are localized in the apical side of the
epithelium and are the most metabolically active as they secrete cytokines and chemically and
physically remove inhaled toxins and particles from the tracheal lumen. In contrast, basal cells, which represent the adult stem cells in the upper airways, are quiescent and metabolically inactive [104,105]. Because of this dichotomy in activity, the tracheal epithelium at homeostasis
constituted the ideal cellular system for testing SEER and validating with FLIM imaging. The slight
bend on the trachea, caused by the cartilage rings, allowed us to visualize the mesenchymal
collagen layer, the basal and apical epithelial cells and tracheal lumen in a single focal plane.
The explant was imaged with 2-photon laser scanning microscopy in multispectral mode. We
compare the state-of-the-art true-color image (Fig. 3.4a, see Methods) and SEER images (Fig.
3.4b, c). The gradient descent morphed map (Fig. 3.4b) enhances the visualization of metabolic
activities within the tracheal specimen, showing different metabolic states when moving from
the tracheal airway apical surface toward the more basal cells and the underlying collagen fibers
(Fig. 3.4b). The visualization improvement is maintained against different implementations of
RGB visualization (Supplementary Fig. 3.13) and at different depths in volumetric datasets
(Supplementary Fig. 3.28). The tensor map increases the contrast of cell boundaries (Fig. 3.4c).
Changes in autofluorescence inside live samples are associated with variations in the ratio of NAD+/NADH, which in turn is related to the ratio of free to protein-bound NADH [106]. Despite very similar fluorescence emission spectra, these two forms of NADH are characterized by different decay times (0.4 ns free and 1.0–3.4 ns bound) [107–112]. FLIM provides a sensitive measurement
for the redox states of NADH and glycolytic/oxidative phosphorylation. Metabolic imaging by FLIM is well established and has been applied to characterize disease progression in multiple animal models, in single cells, and in humans, as well as to distinguish stem cell differentiation and embryo development [109–116]. Previous work has shown that both hyperspectral imaging and FLIM correlate with metabolic changes in cells from retinal organoids [117]. Here, the dashed squares
highlight cells with distinct spectral representation through SEER, a difference which the FLIM
image (Fig. 3.4d, Supplementary Fig. 3.14) confirms.
Figure 3.4. Autofluorescence visualization comparison for unlabeled freshly isolated mouse tracheal
explant.
SEER provides an improvement to the visualization of spectrally acquired datasets for intrinsic molecules
used as reporters for metabolic activity in tissues. Standard practice for imaging these reporters is to use
fluorescence lifetime imaging, instead of wavelength, due to the closely overlapping emission spectra of
the intrinsic molecules. The sample was imaged using multi-spectral two-photon microscopy (740nm
excitation, 32 wavelength bins, 8.9nm bandwidth, 410-695 nm detection) to collect the fluorescence of
the intrinsic molecules, folic acid, retinoids and NADH in its free and bound states. This overlap increases
the difficulty in distinguishing spectral changes when utilizing a (a) TrueColor image display (Zen Software,
Zeiss, Germany) (b) The gradient descent morphed map shows differences between apical and basal
layers, suggesting different metabolic activities of cells based on the distance from the tracheal airway.
Cells on the apical and basal layer (dashed boxes) are rendered with distinct color groups. Colorbar is
provided as a simplified indicator of the spectral signatures within the samples; similar colors denote
similar spectral signatures in nm. (c) The tensor map image provides insight into the statistics of the spectral dataset, associating image pixels’ colors with the corresponding gradient of phasor counts for pixels with similar spectra. The spectral count gradients in this sample highlight the presence of fibers and edges of
single cells. Colorbar represents how quickly the number of pixels changes across multiple spectral
signatures. (d) Average spectra for the cells in dashed boxes (1 and 2 in panel c) show a blue spectral shift
in the direction of the apical layer. (e) Fluorescence Lifetime Image Microscopy (FLIM) of the sample,
acquired using a frequency domain detector validates the interpretation from panel b, Gradient Descent
Map, where cells in the apical layer exhibit a more Oxidative Phosphorylation phenotype (longer lifetime
in red) compared to cells in the basal layer (shorter lifetime in yellow) with a more Glycolytic phenotype.
The selections correspond to areas selected in phasor FLIM analysis (e, top left inset, red and yellow
selections) based on the relative phasor coordinates of NAD+/NADH lifetimes [107].
Figure 3.5. Visualization of a single fluorescence label against multiple autofluorescences.
Tg(fli1:mKO2) (pan-endothelial fluorescent protein label) zebrafish was imaged with intrinsic signal arising
from the yolk and xanthophores (pigment cells) to demonstrate the existence of multiple spectral
signatures even within an experimental sample traditionally thought to have a single fluorescence
signature. Live imaging was performed using a multi-spectral confocal (32 channels) fluorescence
microscope with 488nm excitation. The endothelial mKO2 signal is difficult to distinguish from intrinsic
signals in a (a) maximum intensity projection TrueColor 32 channels Image display (Bitplane Imaris,
Switzerland). The SEER angular map highlights changes in spectral phase, rendering them with different
colors (reference map, bottom right of each panel). (b) Here we apply the angular map with scaled mode
on the full volume. Previously indistinguishable spectral differences (boxes 1,2,3 in panel a) are now easy
to visually separate. Colorbar is provided as a simplified indicator of the spectral signatures within the
samples; similar colors denote similar spectral signatures in nm. (c-h) Zoomed-in views of regions 1-3
(from a) visualized in TrueColor (c, e, g) and with SEER (d, f, h) highlight the differentiation of the pan-
endothelial label (yellow) distinctly from pigment cells (magenta). The improved sensitivity of SEER further
distinguishes different sources of autofluorescence arising from yolk (blue and cyan).
3.4.6 SEER improvements provide clear visualization of intrinsic signals
Microscopic imaging of fluorophores in the cyan to orange emission range in tissues is
challenging due to intrinsic fluorescence. A common problem is bleed-through of autofluorescent
signals into the emission wavelength of the label of interest. Bleed-through is the result of two
fluorophores overlapping in emission and excitation profiles, so that photons from one
fluorophore fall into the detection range of the other. While bleed-through artifacts can be
partially reduced with a stringent choice of the emission filters, this requires narrow collection
channels, which reject any ambiguous wavelength and greatly decreases collection efficiency.
This strategy generally proves difficult when applied to broad-spectrum autofluorescence.
mKusabira-Orange 2 (mKO2) is a fluorescent protein whose emission spectrum significantly
overlaps with autofluorescence in zebrafish. In a fli1:mKO2 zebrafish, where all of the vascular
and endothelial cells are labeled, the fluorescent protein, mKO2, and autofluorescence signals
due to pigments and yolk are difficult to distinguish (Fig. 3.5a, boxes). Grayscale renderings
(Supplementary Fig. 3.15) provide information on the relative intensity of the multiple
fluorophores in the sample but are not sufficient for specifically detecting the spatial distribution
of the mKO2 signal. True-color representation (Fig. 3.5a, Supplementary Fig. 3.16) is limited in
visualizing these spectral differences. SEER’s angular map (Fig. 3.5b) provides a striking contrast
between the subtly different spectral components inside this 4D (x, y, z, λ) dataset. The angular
reference map enhances changes in phase on the phasor plot which nicely discriminates shifts in
the center wavelength of the spectra inside the sample (Supplementary Movie 3.1).
Autofluorescence from pigment cells is considerably different from the fli1:mKO2 fluorescence
(Fig. 3.5c–h). For example, the dorsal area contains a combination of mKO2 and pigment cells
(Fig. 3.5e–f) not clearly distinct in the standard approaches. The angular map permits SEER to
discriminate subtle spectral differences. Distinct colors represent the autofluorescence from yolk
and from pigment cells (Fig. 3.5g, h), enriching the overall information provided by this single-
fluorescently labeled specimen and enhancing the visualization of mKO2 fluorescently labeled
pan-endothelial cells.
Imaging and visualization of biological samples carrying multiple fluorescent labels are hampered by the overlapping emission spectra of the fluorophores and autofluorescent molecules in the sample. A triple-labeled zebrafish embryo with Gt(desm-Citrine)ct122a/+;Tg(kdrl:eGFP), H2B-Cerulean, labeling, respectively, muscle, vasculature, and nuclei, with contributions from pigment autofluorescence, is rendered with standard
approaches and SEER in 1D and 3D (Fig. 3.6). TrueColor representation (Fig. 3.6a, Supplementary
Fig. 3.17) provides limited information on the inner details of the sample. Vasculature (eGFP) and
nuclei (Cerulean) are highlighted with shades of cyan whereas autofluorescence and muscle
(Citrine) are in shades of green (Fig. 3.6a) making both pairs difficult to distinguish. The intrinsic
richness of colors in the sample is an ideal test for the gradient descent and radial maps. The
angular map separates spectra based mainly on their central (peak) wavelength, which
corresponds to phase differences in the phasor plot. The gradient descent map separates spectra
with a bias on subtle spectral differences closer to the center of the phasor plot. Here we applied
the mass morph and max morph modes to further enhance the distinction of spectra (Fig. 3.6b,
c). With the mass morph mode, the muscle outline and contrast of the nuclei are improved by
increasing the spatial separation of the fluorophores and suppressing the presence of
autofluorescence from skin pigment cells (Fig. 3.6e). With the max morph mode (Fig. 3.6c), pixels
with spectra closer to skin autofluorescence are visibly separated from muscle, nuclei and
vasculature.
The enhancements of SEER are also visible in volumetric visualizations. The angular and
gradient maps are applied to the triple-labeled 4D (x, y, z, λ) dataset and visualized as maximum
intensity projections (Fig. 3.6d–f). The spatial localization of fluorophores is enhanced in the mass
morphed angular map, while the max morphed gradient descent map provides a better
separation of the autofluorescence of skin pigment cells (Supplementary Movie 3.2). These
differences are also maintained in different visualization modalities (Supplementary Fig. 3.18).
SEER helps to discern the difference between fluorophores even with multiple contributions
from bleed-through between labels and from autofluorescence. In particular, morphed maps demonstrate a high sensitivity in the presence of subtle spectral differences. The triple-labeled example (Fig. 3.6) shows the advantage of the morphed map, as it places the apex of the SEER map
at the center of mass of the phasor histogram and compensates for the different excitation
efficiencies of the fluorescent proteins at 458 nm.
Figure 3.6. Triple label fluorescence visualization.
Zebrafish embryo Tg(kdrl:eGFP); Gt(desmin-Citrine);Tg(ubiq:H2B-Cerulean) labelling respectively
vasculature, muscle and nuclei provide a background to demonstrate the utility of SEER in easily
discriminating between regions with different spectral signatures. Live imaging with a multi-spectral
confocal microscope (32-channels) using 458nm excitation. Single plane slices of the tiled volume are
rendered with TrueColor and SEER maps. (a) TrueColor image display (Zen, Zeiss, Germany). (b) Angular
map in center of mass morph mode improves contrast by assigning distinguishable colors. The resulting visualization
enhances the spatial localization of fluorophores in the sample. (c) Gradient Descent map in max morph
mode centers the color palette on the most frequent spectrum in the sample, highlighting the spectral
changes relative to it. In this sample, the presence of skin pigment cells (green) is enhanced. 3D
visualization of SEER maintains these enhancement properties. Here we show (d, e, f) TrueColor 32
channels Maximum Intensity Projections (MIP) of different sections of the specimen rendered in
TrueColor, Angular map center of mass mode and Gradient Descent max mode. The selected views
highlight SEER’s performance in the (d) overview of somites, (e) zoom-in of somite boundary, and (f)
lateral view of vascular system.
3.4.7 Spectral differences visualized in combinatorial approaches
Zebrabow [102] is the result of a powerful genetic cell labeling technique based on stochastic and combinatorial expression of different relative amounts of a few genetically encoded, spectrally distinct fluorescent proteins [102,118,119]. The Zebrabow (Brainbow) strategy combines the three primary colors red, green, and blue, in different ratios, to obtain a large range of colors in the visual palette, similar to modern displays [120]. Unique colors arise from the combination of different ratios of RFP, CFP, and YFP, achieved by stochastic Cre-mediated recombination [118]. This technique has been used in multiple applications, from axon and lineage tracing [119–123] to cell tracking during development [124,125], in which a specific label can be used as a cellular identifier
to track descendants of individual cells over time and space. The challenge is acquiring and
analyzing the subtle differences in hues among these hundreds of colors. Multispectral imaging
provides the added dimension required for an improved acquisition; however, this modality is
hampered by both instrumental limitations and spectral noise. Furthermore, current image
analysis and visualization methods interpret the red, yellow, and cyan fluorescence as an RGB
additive combination and visualize it as a color picture, similar to the human eye perception of
color. This approach is not well suited for distinguishing similar, yet spectrally unique,
recombination ratios due to our difficulty in reliably identifying subtly different colors.
SEER overcomes this limitation by improving the analysis’ sensitivity using our phasor-based
interpretation of colors. Recombinations of labels belong to separate areas of the phasor plot,
simplifying the distinction of subtle differences. The standard reference maps and modes
associate a color easily distinguishable by eye, enhancing the subtle spectral recombination. SEER
simplifies the determination of differences between cells for combinatorial strategies, opening a
novel window of analysis for Brainbow samples.
We imaged a Tg(ubi:Zebrabow) sample and visualized its multiple genetic recombinations
using SEER. The results (Fig. 3.7, Supplementary Fig. 3.19) highlight the difficulty of visualizing
these datasets with standard approaches as well as how the compressive maps simplify the
distinction of both spectrally close and more separated recombinations.
Figure 3.7. Visualization of combinatorial expression on Zebrabow [102] samples.
Maximum Intensity Projection renderings of Tg(ubi:Zebrabow) muscle acquired live in multi-spectral
confocal mode with 458nm excitation provides an example for using SEER to distinguish spectrally distinct
regions even with a large number of spectral combinations. (a) The elicited signal (e.g., white arrows) is
difficult to interpret in the TrueColor Image display (Zen Software, Zeiss, Germany). (b) Discerning spectral
differences becomes simpler with the Gradient Descent map scaled to intensities, while compromising
on the brightness of the image. (c) Gradient Descent and (d) Gradient Ascent RGB Masks in scale mode
show the color values assigned to each pixel and greatly improve the visual separation of recombined CFP,
YFP and RFP labels. Colorbars are provided as simplified indicators of the spectral signatures within the
samples; similar colors denote similar spectral signatures in nm.
3.5 Discussion
Standard approaches for the visualization of hyperspectral datasets trade computational
expense for improved visualization. In this work, we show that the phasor approach can define a
new compromise between computational speed and rendering performance. The wavelength
encoding can be achieved by proper conversion and representation by the spectral phasor plot
of the Fourier transform real and imaginary components. The phasor representation offers
effortless interpretation of spectral information. Originally developed for fluorescence lifetime
analysis [90] and subsequently brought to spectral applications [94,96,97], here the phasor approach has been applied to enhance the visualization of multi- and hyperspectral imaging. Because of the refined spectral discrimination achieved by these phasor-based tools, we call this approach Spectrally Encoded Enhanced Representation (SEER).
SEER offers a computationally efficient and robust method that converts spectral (x, y, λ)
information into a visual representation, enhancing the differences between labels. This
approach makes more complete use of the spectral information. Prior analyses employed the
principal components or specific spectral bands of the wavelength dimension. Similarly, previous
phasor analyses interpreted the phasor using selected regions of interest. Our work explores the
phasor plot as a whole and represents that complete information set as a color image, while
maintaining efficiency and minimizing user interaction. The function can be achieved quickly and
efficiently even with large data sizes, circumventing the typical computational expense of
hyperspectral processing. Our tests show SEER can process a 3.7 GB dataset with 1.26 × 10⁸ spectra in 6.6 s and a 43.88 GB dataset with 1.47 × 10⁹ spectra in 87.3 s, including denoising of
data. Comparing with the python module, scikit-learn’s implementation of fast independent
component analysis (fastICA), SEER provides up to a 67-fold speed increase (Supplementary Fig.
3.1) and lower virtual memory usage.
Processing speed comparison between SEER and fastICA for the multispectral fluorescent
data shown in Figs. 3.4–3.7 is presented in Supplementary Table 3.2. SEER’s computation time
ranged between 0.44 s (for Fig. 3.4) and 6.27 s (for Fig. 3.5), where the corresponding timings for fastICA were 3.45 and 256.86 s, respectively, with a speed-up in the range of 7.9–41-fold
(Supplementary Fig. 3.20), in accordance with the trend shown in Supplementary Fig. 3.1. These
results were obtained using Python, an interpreted language. Implementation of SEER with a
compiled language could potentially increase speed by one order of magnitude. The spectral
maps presented here reduce the dimensionality of these large datasets and assign colors to a
final image, providing an overview of the data prior to a full-scale analysis.
A simulation comparison with other common visualization approaches such as Gaussian
kernel and peak wavelength selection (Supplementary Fig. 3.21, see Methods) shows an
increased spectral separation accuracy (see Methods) for SEER to associate distinct colors to
closely overlapping spectra under different noise conditions (Supplementary Fig. 3.22). The
spectral separation accuracy improvement was 1.4–2.6-fold for highly overlapping spectra (maxima separated by 0–8.9 nm) and 1.5–2.7-fold for overlapping spectra with maxima separated by 17.8–35.6 nm (Supplementary Figs. 3.23, 3.24, see Methods).
Quantification of RGB images by colorfulness, contrast, and sharpness shows that SEER
generally performs better than standard visualization approaches (Supplementary Fig. 3.25, see
Methods). SEER’s average enhancement was 2–19% for colorfulness, 11–27% for sharpness and
2–8% for contrast (Supplementary Table 3.3) for the datasets of Figs. 3.4–3.7. We then performed a measure of Color Quality Enhancement (CQE) [126], a metric of the human visual perception of
color image quality (Supplementary Table 3.4). The CQE score of SEER was higher than the
standard, with improvement of 11–26% for Fig. 3.4, 7–98% for Fig. 3.5, 14–25% for Fig. 3.6, and
12–15% for Fig. 3.7 (Supplementary Fig. 3.25, Supplementary Note 3.3, see Methods).
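As an illustration of such a quantification, the Hasler–Süsstrunk opponent-channel colorfulness metric is one standard choice; we sketch it here without asserting that it matches the exact implementation used in these measurements:

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Susstrunk colorfulness metric for an RGB image
    (H x W x 3 array, channel values in any consistent range)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g              # red-green opponent channel
    yb = 0.5 * (r + g) - b  # yellow-blue opponent channel
    # Combined standard deviation and mean magnitude of the opponent channels.
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return std + 0.3 * mean
```

A pure grayscale image scores zero (both opponent channels vanish), while renderings that spread pixels over many saturated hues, as the SEER maps aim to do, score higher.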
Flexibility is a further advantage of our method. Users can apply several different standard reference maps to determine which is most appropriate for their data and enhance the
most important image features. The modes provide a supplementary enhancement by adapting
the reference maps to each dataset, in terms of the size and distribution of its spectra. Scaling
maximizes contrast by enclosing the phasor distribution while maintaining the linearity of the colormap.
Max and center of mass modes shift the apex of the distribution to a new center, specifically the
most frequent spectrum in the dataset or the weighted color-frequency center of mass of the
entire dataset. These modes adapt and improve the specific visualization properties for each map
to the dataset currently being analyzed. As a result, each map offers increased sensitivity to
specific properties of the data, amplifying, for example, minor spectral differences or focusing on
major wavelength components. The adaptivity of the SEER modes can prove advantageous for
visually correcting the effect of photobleaching in samples, by changing the apex of the map
dynamically with the change of intensities (Supplementary Fig. 3.26, Supplementary Movie 3.3).
SEER can be applied to fluorescence, as performed here, or to standard reflectance hyper-
and multispectral imaging. These phasor remapping tools can be used for applications in
fluorescence lifetime or combined approaches of spectral and lifetime imaging. With
multispectral fluorescence, this approach is promising for real-time imaging of multiple
fluorophores, as it offers a tool for monitoring and segmenting fluorophores during acquisition.
Live-imaging visualization is another application for SEER. The gradient descent map, for
example, in combination with denoising strategies^97, can minimize photobleaching and
phototoxicity by enabling the use of lower excitation power. SEER overcomes the challenges in
visualization and analysis deriving from low signal-to-noise images, such as intrinsic-signal
autofluorescence imaging. Among other complications, such image data can result in a
concentrated cluster proximal to the phasor center coordinates. The gradient descent map
overcomes this limitation and provides bright and distinguishable colors that enhance subtle
details within the dim image.
It is worth noting that the method is generally indifferent to the dimension being
compressed. While in this work we explore the wavelength dimension, SEER can be utilized, in
principle, with any n-dimensional dataset where n is larger than two. For instance, it can be used
to compress and compare the dimension of lifetime, space or time for multiple datasets. Some
limitations should be considered: SEER's pseudo-color representation sacrifices the true color of
the image, creating inconsistencies with the human eye's expectation of the original image, and
it does not distinguish identical signals arising from different biophysical events (Supplementary
Note 3.2). SEER is intended as a preprocessing visualization tool and is not currently utilized for
quantitative analysis. Combining SEER with novel color-image-compatible segmentation
algorithms^38,127 might expand the quantitative capabilities of the method. New
multidimensional, multi-modal instruments will generate much larger datasets ever more quickly.
SEER offers the capability of processing this explosion of data, supporting the growing interest of
the scientific community in multiplexed imaging.
3.6 Methods
3.6.1 Simulated hyperspectral test chart
To account for the Poisson noise and detector noise contributed by optical microscopy, we
generated a simulated hyperspectral test chart starting from real imaging data with a size of x:
300 pixels, y: 300 pixels, and lambda: 32 channels. S1, S2, and S3 spectra were acquired,
respectively, from zebrafish embryos labeled only with CFP, YFP, and RFP, where the spectrum
in Fig. 3.1a is identical to the center cell of the test chart Fig. 3.1d. In each cell three spectra are
represented after shifting the maxima by d1 or d2 nanometers with respect to S2. Each cell has
its corresponding spectra of S1, S2, and S3 (Supplementary Fig. 3.3).
3.6.2 Standard RGB visualizations
The TrueColor RGB image (Supplementary Figure 3.3, 3.8, 3.21) is obtained through
compression of the hyperspectral cube into the 3-channel RGB color space by generating a
Gaussian radial basis function kernel^128 $K$ for each RGB channel. This kernel $K$ acts as a
similarity factor and is defined as:

$$K_i(x_i, x') = e^{-\frac{|x_i - x'|^2}{2\sigma^2}}$$ (eq. 3.1)

where $x'$ is the center wavelength of R or B or G. For example, when $x' = 650$ nm, the
associated RGB color space is (R:1, G:0, B:0). Both $x$ and $K$ are defined as 32 × 1 vectors,
representing, respectively, the 32-channel spectrum of one single pixel and the normalized
weight of each R, G and B channel; $i$ is the channel index of both vectors. $K_i$ represents how
similar channel $i$ is to each R/G/B channel, and $\sigma$ is the deviation parameter.
We compute the RGB color space $c$ by a dot product of the weight vector $K$ and $\lambda$ at the
corresponding channel R/G/B:

$$c = \sum_{i=1}^{32} \lambda_i \times K_i$$ (eq. 3.2)

where $\lambda$ is a vector of the wavelengths captured by the spectral detector in an LSM 780
inverted confocal microscope with lambda module (Zeiss, Jena, Germany) and $\lambda_i$ is the center
wavelength of channel $i$. The Gaussian kernel was set at 650 nm, 510 nm, 470 nm for R, G, B respectively
as Default (Supplementary Figure 3.3s, Supplementary Figure 3.8, Supplementary Figure 3.13e,
Supplementary Figure 3.16e, Supplementary Figure 3.17e, Supplementary Figure 3.19j).
The same Gaussian kernel was also changed adaptively to the dataset to provide a spectral
contrast stretching on the visualization and focus the visualization on the most utilized channels.
The average spectrum for the entire dataset is calculated and normalized. The intersect at 10%
(Supplementary Figure 3.13f, 3.16f, 3.17f, 3.19f), 20% (Supplementary Figure 3.13g, 3.16g, 3.17g,
3.19g) and 30% (Supplementary Figure 3.13h, 3.16h, 3.17h, 3.19h) of the intensity is obtained
and used as a center for the blue and red channels. The green channel is centered halfway
between red and blue. Representations of these adaptations are reported in Supplementary
Figure 3.21g,h,i.
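As a sketch of this kernel-based compression, the Default rendering could be implemented as follows. This assumes, as implied by the definition of $x$ above, that the kernel weights are applied to each pixel's spectrum; the function name and the $\sigma$ value are illustrative, not part of the SEER software.

```python
import numpy as np

def gaussian_kernel_rgb(cube, wavelengths, centers=(650.0, 510.0, 470.0), sigma=50.0):
    """Compress an (H, W, N) hyperspectral cube into RGB with Gaussian kernels.

    cube        : spectral intensities, shape (H, W, N)
    wavelengths : channel center wavelengths in nm, shape (N,)
    centers     : R, G, B kernel centers in nm (Default values from the text)
    sigma       : kernel deviation parameter in nm (assumed, tune per dataset)
    """
    rgb = np.zeros(cube.shape[:2] + (3,))
    for c, mu in enumerate(centers):
        k = np.exp(-(wavelengths - mu) ** 2 / (2 * sigma ** 2))  # eq. 3.1
        k /= k.sum()                        # normalized channel weights
        rgb[..., c] = cube @ k              # per-pixel weighted sum (eq. 3.2)
    return rgb / rgb.max()                  # rescale to [0, 1] for display
```

Passing the 10/20/30% intersects of the average spectrum as `centers` reproduces the adaptive variants described above.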
The TrueColor 32 Channels image (Figure 3.1c, Figure 3.5a,c,e,g, Figure 3.6a,d,e,f,
Supplementary Figure 3.13c, 3.16c, 3.17c, 3.19c) was rendered as a 32 channels Maximum
Intensity Projection using Bitplane Imaris (Oxford Instruments, Abingdon, UK). Each channel has
a known wavelength center (32 bins, from 410.5 nm to 694.9 nm with 8.9 nm bandwidth). Each
wavelength was associated with a color according to classical wavelength-to-RGB conversions^129
as reported in Supplementary Figure 3.21f. The intensity for all channels was contrast-adjusted
(Imaris Display Adjustment Settings) based on the channel with the largest information. A
meaningful range for rendering was identified as the top 90% in intensity of the normalized
average spectrum for the dataset (Supplementary Figure 3.13b, 3.16j, 3.17j, 3.19b). Channels
outside of this range were excluded from rendering. Furthermore, for one-photon excitation, channels
associated with wavelengths lower than the laser excitation (for example, channels 1 to 5 for the
458 nm laser) were excluded from rendering.
Peak Wavelength representation (Supplementary Figures 3.13d, 3.16d, 3.17d, 3.19d, 3.21
and 3.23) reconstructs an RGB image utilizing, for each pixel, the color associated with the
wavelength at which maximum intensity is measured. Wavelength-to-RGB conversion was
performed using a Python function adapted from Dan Bruton's work^129. A graphical
representation is reported in Supplementary Figure 3.21f.
3.6.3 Spectral separation accuracy calculation
We utilize the Simulated Hyperspectral Test Chart to produce different levels of spectral
overlap and signal-to-noise ratio (SNR). We utilize multiple RGB visualization approaches for
producing compressed RGB images (Supplementary Figure 3.21, Supplementary Figure 3.22).
Each panel of the simulation is constructed by three different spectra, organized as three
concentric squares Q1, Q2, Q3 (Supplementary Figure 3.3). The maximal contrast visualization is
expected to have three well-separated colors, in this case red, green and blue. For quantifying
this difference, we consider each (R,G,B) vector, with colors normalized to [0,1], in each pixel as a
set of Euclidean coordinates (x,y,z) and for each pixel calculate the Euclidean distance:
$$l_{12} = \sqrt{\sum_{i=R}^{B} \left(p_{Q_1} - p_{Q_2}\right)_i^2}$$ (eq. 3.3)

where $l_{12}$ is the color distance between squares Q1 and Q2, $p_{Q_1}$ and $p_{Q_2}$ are the (R,G,B) vectors
in the pixels considered, and $i$ is the color coordinate R, G or B. The color distances $l_{13}$ and $l_{23}$, between
squares Q1–Q3 and Q2–Q3 respectively, are calculated similarly. The accuracy of spectral separation
(Supplementary Figure 3.23) is calculated as:
$$S_{sep.acc} = \frac{l_{12} + l_{13} + l_{23}}{l_{red-green} + l_{red-blue} + l_{green-blue}}$$ (eq. 3.4)
where the denominator is the maximum color distance which can be achieved in this
simulation, in which Q1, Q2 and Q3 are respectively pure red, green and blue; therefore:

$$l_{red-green} + l_{red-blue} + l_{green-blue} = 3\sqrt{2}$$ (eq. 3.5)
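The accuracy measure can be sketched in a few lines; the function names are illustrative, not part of the SEER software:

```python
import numpy as np

def color_distance(p, q):
    """Euclidean distance between two (R, G, B) vectors in [0, 1] (eq. 3.3)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(((p - q) ** 2).sum())

def spectral_separation_accuracy(p_q1, p_q2, p_q3):
    """Sum of pairwise color distances over the maximum 3*sqrt(2) (eq. 3.4-3.5)."""
    total = (color_distance(p_q1, p_q2) + color_distance(p_q1, p_q3)
             + color_distance(p_q2, p_q3))
    return total / (3 * np.sqrt(2))
```

Pure red, green and blue squares reach the maximum accuracy of 1, while three identical colors score 0.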
3.6.4 Compressive spectral algorithm and map reference design: phasor
calculations
For each pixel in an image, we acquire the sequence of intensities at different wavelengths
$I(\lambda)$. Each spectrum $I(\lambda)$ is discrete Fourier transformed into a complex number $g_{x,y,z,t} +
i\,s_{x,y,z,t}$. Here $i$ is the imaginary unit, while $(x, y, z, t)$ denotes the spatio-temporal coordinates
of a pixel in a 5D dataset.
The transforms used for real and imaginary components are:

$$g_{x,y,z,t}(k)\Big|_{k=2} = \frac{\sum_{\lambda_0}^{\lambda_N} I(\lambda)\cos\!\left(\frac{2\pi k\lambda}{N}\right)\Delta\lambda}{\sum_{\lambda_0}^{\lambda_N} I(\lambda)\,\Delta\lambda}$$ (eq. 3.5)

$$s_{x,y,z,t}(k)\Big|_{k=2} = \frac{\sum_{\lambda_0}^{\lambda_N} I(\lambda)\sin\!\left(\frac{2\pi k\lambda}{N}\right)\Delta\lambda}{\sum_{\lambda_0}^{\lambda_N} I(\lambda)\,\Delta\lambda}$$ (eq. 3.6)
where $\lambda_0$ and $\lambda_N$ are the initial and final wavelengths respectively, $N$ is the number of
spectral channels, $\Delta\lambda$ is the wavelength bandwidth of a single channel, and $k$ is the harmonic. In this
work, we utilized harmonic $k = 2$. The effects of different harmonic numbers on the SEER
visualization are reported in Supplementary Note 3.1.
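A minimal sketch of the phasor transform follows, using the channel index in place of the wavelength axis (the constant channel width $\Delta\lambda$ cancels between numerator and denominator); the function name is illustrative:

```python
import numpy as np

def spectral_phasor(cube, harmonic=2):
    """(g, s) phasor coordinates of an (..., N) spectral cube (eq. 3.5-3.6)."""
    n = cube.shape[-1]
    phase = 2 * np.pi * harmonic * np.arange(n) / n
    total = cube.sum(axis=-1)
    total = np.where(total == 0, 1.0, total)          # guard empty pixels
    g = (cube * np.cos(phase)).sum(axis=-1) / total
    s = (cube * np.sin(phase)).sum(axis=-1) / total
    return g, s
```

A spectrum concentrated in a single channel lands exactly on the unit circle; broader spectra fall inside it.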
3.6.5 Standard map reference
Association of a color to each phasor coordinate $(g, s)$ is performed in two steps. First, the
reference system is converted from Cartesian to polar coordinates $(r, \theta)$:

$$(r, \theta) = \left(\sqrt{g^2 + s^2},\ \tan^{-1}\frac{s}{g}\right)$$ (eq. 3.7)

These polar coordinate values are then transformed to the Hue, Saturation, Value (HSV) color
model utilizing specific settings for each map, as listed below. Finally, any color generated
outside of the $r = 1$ boundary is set to black.
Gradient Descent:
hue = $\theta$, saturation = 1, value = $1 - 0.85 \cdot r$

Gradient Ascent:
hue = $\theta$, saturation = 1, value = $r$

Radius:
Each $r$ value from 0 to 1 is associated to a level in the jet colormap from the matplotlib
package.

Angle:
hue = $\theta$, saturation = 1, value = 1
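These settings translate directly into code; a sketch of the per-coordinate color assignment (the function name is illustrative, not part of the SEER software):

```python
import colorsys
import numpy as np

def seer_color(g, s, mode="gradient_descent"):
    """RGB color for a phasor coordinate (g, s) under a standard reference map."""
    r = float(np.hypot(g, s))
    if r > 1.0:                                  # outside the r = 1 boundary
        return (0.0, 0.0, 0.0)                   # -> black
    hue = (np.arctan2(s, g) % (2 * np.pi)) / (2 * np.pi)  # theta mapped to [0, 1]
    if mode == "gradient_descent":
        return colorsys.hsv_to_rgb(hue, 1.0, 1.0 - 0.85 * r)
    if mode == "gradient_ascent":
        return colorsys.hsv_to_rgb(hue, 1.0, r)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)    # angle map
```

The radius map is omitted here, since it is a direct lookup into matplotlib's jet colormap rather than an HSV formula.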
3.6.6 Tensor map
Visualization of statistics on the phasor plot is performed by means of the mathematical
gradient. The gradient is obtained in a two-step process.
First, we compute the two-dimensional derivative of the phasor plot histogram counts by
utilizing an approximation of the second order accurate central difference.
Each bin $F(g, s)$ has a central difference $\frac{\partial F}{\partial g}$, $\frac{\partial F}{\partial s}$, with respect to the differences in $g$
(horizontal) and $s$ (vertical) directions with unitary spacing $h$. The approximation becomes:
$$\frac{\partial F}{\partial s} = \frac{F\!\left(s + \tfrac{1}{2}h, g\right) - F\!\left(s - \tfrac{1}{2}h, g\right)}{h} = \frac{\dfrac{F(s+h, g) + F(s, g)}{2} - \dfrac{F(s, g) + F(s-h, g)}{2}}{h} = \frac{F(s+h, g) - F(s-h, g)}{2h}$$ (eq. 3.8)
And similarly:

$$\frac{\partial F}{\partial g} = \frac{F(s, g+h) - F(s, g-h)}{2h}$$ (eq. 3.9)
Second, we calculate the square root of the sum of squared differences $D(g, s)$ as:

$$D(g, s) = \sqrt{\left(\frac{\partial F}{\partial g}\right)^2 + \left(\frac{\partial F}{\partial s}\right)^2}$$ (eq. 3.10)
obtaining the magnitude of the derivative density counts. With this gradient histogram, we
then connect the phasor coordinates with the same $D(g, s)$ gradient value with one contour. All gradients
are then normalized to (0,1). Finally, pixels in the hyperspectral image corresponding to the same
contour in phasor space are rendered with the same color. In the reference map, red represents
highly dense gradients, usually at the center of a phasor cluster. Blue, instead, represents the
sparse gradient that appears at the edge circumference of the phasor distributions.
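The two-step gradient computation maps directly onto NumPy's central-difference gradient; a sketch (the function name is illustrative):

```python
import numpy as np

def gradient_magnitude(hist):
    """Normalized magnitude of the 2D phasor-histogram gradient (eq. 3.8-3.10)."""
    dF_dg, dF_ds = np.gradient(hist.astype(float))  # second-order central differences
    mag = np.hypot(dF_dg, dF_ds)                    # eq. 3.10
    peak = mag.max()
    return mag / peak if peak > 0 else mag          # normalize to (0, 1)
```

Level sets of the returned array give the contours described above, ready for colormap lookup.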
3.6.7 Scale mode
In this mode, the original square Standard Reference Maps are transformed to a new
boundary box adapted to each dataset’s spectral distribution.
The process of transformation follows these steps. We first determine the boundary box
(width $\omega$, height $h$) based on the cluster appearance on the phasor plot. We then determine the
largest ellipsoid that fits in the boundary box. Finally, we warp the unit circle of the original map
to the calculated ellipsoid.
Using polar coordinates, we represent each point $P$ of the standard reference map with
phasor coordinates $(g_i, s_i)$ as:

$$P(g_i, s_i) = P(r_i \cos\theta_i,\ r_i \sin\theta_i)$$ (eq. 3.11)

The ellipsoid has semi-major axis:

$$a = \frac{\omega}{2}$$ (eq. 3.12)

and semi-minor axis:

$$b = \frac{h}{2}$$ (eq. 3.13)

Therefore, the ellipse equation becomes:

$$\left(\frac{g_i}{\omega/2}\right)^2 + \left(\frac{s_i}{h/2}\right)^2 = rad^2$$ (eq. 3.14)

where $rad$ is a ratio used to scale each radius $r_i$ in the reference map to a proportionally
corresponding distance in the boundary box-adapted ellipse, which in polar coordinates
becomes:

$$rad^2 = r_i^2 \cdot \left(\left(\frac{\cos\theta_i}{\omega/2}\right)^2 + \left(\frac{\sin\theta_i}{h/2}\right)^2\right)$$ (eq. 3.15)

Each point $P(g_i, s_i)$ of the standard reference map is geometrically scaled to a new
coordinate $(g_s, s_s)$ inside the ellipsoid using forward mapping, obtaining the equation:

$$(r_s, \theta_s) = \left(\sqrt{g_i^2 + s_i^2}\,/\,rad,\ \tan^{-1}\frac{s_i}{g_i}\right)$$ (eq. 3.16)
This transform is applied to all Standard Reference Maps to generate the respective scaled
versions.
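A sketch of the scaling transform, implementing the geometric intent of eqs. 3.11–3.16 (each point is scaled so that the unit circle maps onto the ellipse inscribed in the boundary box); the function name is illustrative, not part of the SEER software:

```python
import numpy as np

def scale_mode(g, s, width, height):
    """Warp a reference-map point (g, s) so the unit circle maps onto the
    ellipse with semi-axes width/2 and height/2 (Section 3.6.7)."""
    theta = np.arctan2(s, g)
    # radius of the inscribed ellipse along direction theta
    r_ellipse = 1.0 / np.hypot(np.cos(theta) / (width / 2),
                               np.sin(theta) / (height / 2))
    r_new = np.hypot(g, s) * r_ellipse
    return r_new * np.cos(theta), r_new * np.sin(theta)
```

With a boundary box that encloses the full unit circle (width = height = 2) the transform reduces to the identity, as expected.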
3.6.8 Morph mode
We linearly morph each point $P(g_i, s_i)$ to a new point $P'(g_m, s_m)$ by utilizing a shifted-cone
approach. Each standard map reference is first projected onto a 3D conical surface centered on
the phasor plot origin and with unitary height (Supplementary Figure 3.27a-c, Supplementary
Movie 3.4). Each point $P$ on the standard map is given a $z$ value linearly, starting from the edge
of the phasor universal circle. The original standard map (Supplementary Figure 3.27c) can, thus,
be interpreted as a top view of a right cone with $z = 0$ at the phasor unit circle and $z = 1$ at the origin
(Supplementary Figure 3.27a).
We then shift the apex $A$ of the cone to the computed weighted average or maximum of the
original 2D histogram, producing an oblique cone (Supplementary Figure 3.27b) with apex $A'$.
In this oblique cone, any horizontal cutting plane is always a circle with center $O'$. Its
projection $O'_\perp$ is on the line joining the origin $O$ and the projection of the new apex
$A'_\perp$ (Supplementary Figure 3.27b-d). As a result, all of the points in each circle are shifted towards
the new center $A'_\perp$ on the phasor plot. We first transform the coordinates $(g_i, s_i)$ of each point
$P$ to the morphed map coordinates $(g_m, s_m)$, and then obtain the corresponding $(r_m, \theta_m)$ necessary
for calculating Hue, Saturation, and Value.
In particular, a cutting plane with center $O'$ has a radius of $r'$ (Supplementary Figure 3.27).
This cross-section projects on a circle centered in $O'_\perp$ with the same radius. Using geometrical
calculations, we obtain:

$$OO'_\perp = \alpha \cdot OA'_\perp,$$ (eq. 3.17)
where $\alpha$ is a scale parameter. By taking the approximation

$$\triangle O'OO'_\perp \sim \triangle A'OA'_\perp,$$ (eq. 3.18)

we can obtain

$$OO' = \alpha \cdot OA'.$$ (eq. 3.19)

Furthermore, given a point $N'$ on the circumference centered in $O'$, eq. 3.17 also implies that:

$$O'N' = (1 - \alpha) \cdot ON'_\perp,$$ (eq. 3.20)

which is equivalent to

$$r' = (1 - \alpha) \cdot R,$$ (eq. 3.21)
where $R$ is the radius of the phasor plot unit circle.
With this approach, provided a new center $A'_\perp$ with a specific $\alpha$, we obtain a collection of
scaled circles with centers on the line $OA'_\perp$. In the boundary cases, when $\alpha = 0$ the scaled circle is the
origin, while for $\alpha = 1$ it is the unit circle. Given any cutting plane $O'$, the radius of this cross-section
always satisfies this identity:

$$r'^2 = \left(g_i - \alpha \cdot g_{A'_\perp}\right)^2 + \left(s_i - \alpha \cdot s_{A'_\perp}\right)^2 = (1 - \alpha)^2 \cdot R^2$$ (eq. 3.22)

The coordinates of a point $P'(g_m, s_m)$ for a new morphed map centered in $A'_\perp$ are:

$$(g_m, s_m) = \left(g_i - \alpha \cdot g_{A'_\perp},\ s_i - \alpha \cdot s_{A'_\perp}\right)$$ (eq. 3.23)

Finally, we compute

$$(r_m, \theta_m) = \left(\sqrt{g_m^2 + s_m^2},\ \tan^{-1}\frac{s_m}{g_m}\right)$$ (eq. 3.24)

and then assign colors based on the newly calculated Hue, Saturation, and Value to generate
the morph mode references.
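A sketch of the morph transform: for a phasor point $(g, s)$ and apex projection $A'_\perp$, the cone level $\alpha$ follows from eq. 3.22 (a quadratic in $\alpha$), after which eq. 3.23 recenters the point. The function name and the root-selection tolerance are illustrative choices, not part of the SEER software.

```python
import numpy as np

def morph_mode(g, s, apex_g, apex_s):
    """Centered coordinates (g_m, s_m) for the morph-mode map (eq. 3.22-3.23)."""
    # eq. 3.22 rearranged into a quadratic in alpha:
    # alpha^2 (|A|^2 - 1) + 2 alpha (1 - g*g_A - s*s_A) + (g^2 + s^2 - 1) = 0
    a = apex_g ** 2 + apex_s ** 2 - 1.0
    b = 2.0 * (1.0 - g * apex_g - s * apex_s)
    c = g ** 2 + s ** 2 - 1.0
    if abs(a) < 1e-12:                    # apex on the unit circle: linear case
        alpha = -c / b
    else:
        disc = np.sqrt(b ** 2 - 4.0 * a * c)
        alpha = next(r for r in ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
                     if -1e-9 <= r <= 1.0 + 1e-9)   # keep the root in [0, 1]
    return g - alpha * apex_g, s - alpha * apex_s
```

With the apex at the origin the map is unchanged, and points on the unit circle ($\alpha = 0$) stay fixed for any apex, matching the boundary cases above.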
3.6.9 Color image quality calculations: colorfulness
Due to the inherent lack of ground truth in experimental fluorescence microscopy images,
we utilized an established model for calculating the color quality of an image without a
reference^130. The colorfulness is one of three parameters, together with sharpness and contrast,
utilized by Panetta et al.^126 to quantify the overall quality of a color image. Two opponent color
spaces are defined as:

$$\alpha = R - G$$ (eq. 3.25)

$$\beta = 0.5(R + G) - B$$ (eq. 3.26)

where R, G, B are the red, green and blue channels respectively, and $\alpha$ and $\beta$ are the red-green and
yellow-blue spaces. The colorfulness utilized here is defined as:

$$Colorfulness = 0.02 \log\!\left(\frac{\sigma_\alpha^2}{|\mu_\alpha|^{0.2}}\right) \log\!\left(\frac{\sigma_\beta^2}{|\mu_\beta|^{0.2}}\right)$$ (eq. 3.27)

with $\sigma_\alpha^2$, $\sigma_\beta^2$, $\mu_\alpha$, $\mu_\beta$ respectively the variances and mean values of the $\alpha$ and $\beta$ spaces^126.
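A direct sketch of the colorfulness computation (the function name is illustrative; float RGB channels are assumed):

```python
import numpy as np

def colorfulness(rgb):
    """No-reference colorfulness of an (H, W, 3) float RGB image (eq. 3.25-3.27)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    a = R - G                    # red-green opponent space   (eq. 3.25)
    b = 0.5 * (R + G) - B        # yellow-blue opponent space (eq. 3.26)
    return 0.02 * np.log(a.var() / np.abs(a.mean()) ** 0.2) \
                * np.log(b.var() / np.abs(b.mean()) ** 0.2)
```

Note that a perfectly gray image has zero opponent-space variance, so the logarithms diverge; real microscopy renderings avoid this degenerate case.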
3.6.10 Sharpness
We utilize EME^131, a Weber-based measure of enhancement. EME is defined as follows:

$$EME_{sharp} = \frac{2}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} \log\!\left(\frac{I_{max,k,l}}{I_{min,k,l}}\right)$$ (eq. 3.28)

where $k_1$, $k_2$ are the blocks used to divide the image and $I_{max,k,l}$ and $I_{min,k,l}$ are the
maximum and minimum intensities in the blocks. EME has been shown to correlate with a human
observation of sharpness in color images^126 when associated with a weight $\lambda_c$ for each color
component:

$$Sharpness = \sum_{c=1}^{3} \lambda_c\, EME_{sharp,c}$$ (eq. 3.29)

where the weights for the different color components used in this article are $\lambda_R = 0.299$, $\lambda_G =
0.587$, $\lambda_B = 0.114$ in accordance with the NTSC standard and values reported in literature^126.
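A sketch of the block-based sharpness measure; the block grid size and the epsilon guard against flat or empty blocks are assumed parameters, and the function names are illustrative:

```python
import numpy as np

def eme(channel, k1=4, k2=4, eps=1e-6):
    """Weber-based EME over a k1 x k2 block grid (eq. 3.28)."""
    H, W = channel.shape
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = channel[i * H // k1:(i + 1) * H // k1,
                            j * W // k2:(j + 1) * W // k2]
            total += np.log((block.max() + eps) / (block.min() + eps))
    return 2.0 * total / (k1 * k2)

def sharpness(rgb, weights=(0.299, 0.587, 0.114)):
    """NTSC-weighted sum of per-channel EME (eq. 3.29)."""
    return sum(w * eme(rgb[..., c]) for c, w in enumerate(weights))
```

A constant image scores zero, and any within-block intensity variation raises the score.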
3.6.11 Contrast
We utilize the Michelson-Law measure of enhancement AME^132, an effective evaluation tool for
contrast in grayscale images, designed to provide a larger metric value for larger-contrast images.
AME is defined as:

$$AME_{contrast} = \frac{1}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} \left[\log\!\left(\frac{I_{max,k,l} + I_{min,k,l}}{I_{max,k,l} - I_{min,k,l}}\right)\right]^{-0.5}$$ (eq. 3.30)

where $k_1$, $k_2$ are the blocks used to divide the image and $I_{max,k,l}$ and $I_{min,k,l}$ are the
maximum and minimum intensities in the blocks. The value of contrast for color images was then
calculated as:

$$Contrast = \sum_{c=1}^{3} \lambda_c\, AME_{contrast,c}$$ (eq. 3.31)

with the same weights $\lambda_c$ utilized for sharpness.
3.6.12 Color Quality Enhancement
We utilize Color Quality Enhancement (CQE), a polling method to combine colorfulness,
sharpness and contrast into a value that has both strong correlation and linear correspondence
with human visual perception of quality in color images^126. CQE is calculated as:

$$CQE = c_1 \cdot colorfulness + c_2 \cdot sharpness + c_3 \cdot contrast$$ (eq. 3.31)

where the linear combination coefficients for the CQE measure were set to evaluate contrast
change according to values reported in literature^126: $c_1 = 0.4358$, $c_2 = 0.1722$ and $c_3 =
0.3920$.
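The final score is a simple weighted sum; note that the published coefficients sum to 1, so equal component scores pass through unchanged (the function name is illustrative):

```python
def cqe(colorfulness, sharpness, contrast,
        c1=0.4358, c2=0.1722, c3=0.3920):
    """Color Quality Enhancement: linear combination of the three measures."""
    return c1 * colorfulness + c2 * sharpness + c3 * contrast
```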
3.6.13 Mouse lines
Mouse imaging was approved by the Institutional Animal Care and Use Committee (IACUC) of
the Children's Hospital of Los Angeles (permit number: 38616) and of the University of Southern
California (permit number: 20685). Experimental research on vertebrates complied with
institutional, national and international ethical guidelines. Animals were kept on a 13:11 h
light:dark cycle. Animals were breathing double-filtered air, the temperature in the room was kept at
68–73 °F, and cage bedding was changed weekly. All these factors contributed to minimizing intra-
and inter-experiment variability. Adult 8-week-old C57Bl mice were euthanized with Euthasol.
Tracheas were quickly harvested from the mouse, washed in PBS, and cut longitudinally alongside
the muscularis mucosae in order to expose the lumen. A 3 mm × 3 mm piece of the trachea was
excised and arranged onto a microscope slide for imaging.
3.6.14 Zebrafish lines
Lines were raised and maintained following standard literature practice^66 and in accordance
with the Guide for the Care and Use of Laboratory Animals provided by the University of Southern
California. Fish samples were part of a protocol approved by the IACUC (permit number: 12007
USC).
The transgenic FlipTrap Gt(desm-Citrine)^ct122a/+ line is the result of a previously reported screen^133,
and the Tg(kdrl:eGFP)^s843 line^134 was provided by the Stainier lab (Max Planck Institute for Heart and Lung
Research). The Tg(ubi:Zebrabow) line was a kind gift from Alex Schier^102. Controllable
recombination of fluorophores was obtained by crossing homozygous Tg(ubi:Zebrabow) adults
with a Tg(hsp70l:Cerulean-P2A-CreER^T2) line. Embryos were raised in Egg Water (60 μg ml−1 of
Instant Ocean and 75 μg ml−1 of CaSO4 in Milli-Q water) at 28.5 °C with the addition of 0.003% (w
v−1) 1-phenyl-2-thiourea (PTU) around 18 hpf to reduce pigment formation^135.
Zebrafish samples with triple fluorescence were obtained by crossing Gt(desm-Citrine)^ct122a/+
with Tg(kdrl:eGFP) fish, followed by injection of 100 pg per embryo of mRNA encoding H2B-
Cerulean at the one-cell stage^97. Samples of Gt(desm-Citrine)^ct122a/+;Tg(kdrl:eGFP);H2B-Cerulean
were imaged with a 458 nm laser to excite Cerulean, Citrine and eGFP and a narrow 458–561 nm
dichroic for separating excitation and fluorescence emission.
3.6.15 Plasmid constructions
pDestTol2pA2-hsp70l:Cerulean-P2A-CreERT2 (for generating the Tg(hsp70l:Cerulean-P2A-
CreERT2) line)
The coding sequences for Cerulean, CreERT2, and the woodchuck hepatitis virus
posttranscriptional regulatory element (WPRE) were amplified from the vector for
Tg(bactin2:cerulean-cre)^133 using primers #1 and #2 (the complete list of primers is reported in
Supplementary Table 3.5), from pCAG-ERT2CreERT2 (Addgene #13777) using primers #3 and #4, and from the
vector for Tg(PGK1:H2B-chFP)^136 using primers #5 and #6, respectively. Then the Cerulean and
CreERT2 sequences were fused using a synthetic linker encoding the P2A peptide^137. The resultant
Cerulean-P2A-CreERT2 and WPRE sequences were cloned into pDONR221 and pDONR P2R-P3
(Thermo Fisher Scientific), respectively. A subsequent MultiSite Gateway reaction was performed
using Tol2kit vectors according to the developer's manuals^138. p5E-hsp70l (Tol2kit #222), pDONR221-
Cerulean-P2A-CreER, and pDONR P2R-P3-WPRE were assembled into pDestTol2pA2 (Tol2kit
#394)^139,140.
pDestTol2pA2-fli1:mKO2 (for generating Tg(fli1:mKO2) line)
The coding sequence for mKO2 was amplified from mKO2-N1 (Addgene #54625) using
primers #7 and #8, and cloned into pDONR221. Then p5E-fli1ep (Addgene #31160), pDONR221-
mKO2, and pDONR P2R-P3-WPRE were assembled into pDestTol2pA2 as described above.
3.6.16 Microinjection and screening of transgenic zebrafish lines
2.3 nL of a solution containing 20 pg nL−1 plasmid DNA and 20 pg nL−1 tol2 mRNA was injected
into one-cell stage embryos obtained through crossing AB with casper zebrafish^141. The
injected F0 embryos were raised and crossed to casper zebrafish for screening. The F1 embryos
for the prospective Tg(hsp70l:Cerulean-P2A-CreER^T2) and Tg(fli1:mKO2) lines were screened for
ubiquitous Cerulean expression after heat shock for 30 min at 37 °C, and for mKO2 expression
restricted to the vasculature, respectively. Positive individual F1 adults were subsequently
outcrossed to casper zebrafish, and their offspring with the casper phenotype were then used for
experiments when 50% transgene transmission was observed in the subsequent generation,
indicating single transgene insertions.
3.6.17 Sample preparation and multispectral image acquisition and
instrumentation
Images were acquired on a Zeiss LSM780 inverted confocal microscope equipped with
QUASAR detector (Carl Zeiss, Jena, Germany). A typical dataset comprised 32 spectral channels,
covering the wavelengths from 410.5 nm to 694.9 nm with 8.9 nm bandwidth, generating an x,y,λ
image cube. Detailed acquisition parameters are reported in Supplementary Table 3.1.
Zebrafish samples for in vivo imaging were prepared by placing 5–6 embryos at 24–72 hpf
into 1% agarose (cat. 16500-100, Invitrogen) molds created in an imaging dish with a no. 1.5
coverglass bottom (cat. D5040P, WillCo Wells) using a custom-designed negative plastic mold^113.
The stability of the embryos was ensured by adding ~2 ml of 1% UltraPure low-melting-point agarose
(cat. 16520-050, Invitrogen) solution prepared in 30% Danieau (17.4 mM NaCl, 210 μM KCl, 120
μM MgSO4·7H2O, 180 μM Ca(NO3)2, 1.5 mM HEPES buffer in water, pH 7.6) with 0.003% PTU
and 0.01% tricaine. This solution was subsequently added on top of the mounted embryos. Upon
agarose solidification at room temperature (1–2 min), the imaging dish was topped with 30%
Danieau solution and 0.01% tricaine at 28.5 °C. Imaging on the inverted confocal microscope was
performed by positioning the imaging dish on the microscope stage. For Tg(ubi:Zebrabow)
samples, to initiate expression of CreER^T2, embryos were heat-shocked at 15 h post fertilization
at 37 °C in 50 ml falcon tubes within a water bath before being returned to a 28.6 °C incubator.
To initiate recombination of the Zebrabow transgene, 5 μM 4-OHT (Sigma; H7904) was added to
the culture media 24 h post fertilization. Samples of Tg(ubi:Zebrabow) were imaged using a 458 nm
laser to excite CFP, YFP, and RFP in combination with a narrow 458 nm dichroic.
Mouse tracheal samples were collected from wild-type C57Bl mice and mounted on a
coverslip with sufficient Phosphate Buffered Saline to avoid dehydration of the sample. Imaging
was performed in 2-photon mode exciting at 740 nm with a 690+ nm dichroic.
3.6.18 Non-de-scanned (NDD) multiphoton fluorescence lifetime imaging (FLIM)
and analysis.
Fluorescence lifetime imaging microscopy (FLIM) data were acquired with a two-photon
microscope (Zeiss LSM-780 inverted, Zeiss, Jena, Germany) equipped with a Ti:Sapphire laser
system (Coherent Chameleon Ultra II, Coherent, Santa Clara, California) and an ISS A320
FastFLIM^142 (ISS, Urbana-Champaign, Illinois). The objective used was a two-photon-optimized
40×/1.1 NA water-immersion objective (Korr C-Apochromat, Zeiss, Jena, Germany). Images with a size of
256 × 256 pixels were collected with a pixel dwell time of 12.6 μs pixel−1. A dichroic filter (690+ nm)
was used to separate the excitation light from fluorescence emission. Detection of fluorescence
comprised a combination of a hybrid photomultiplier (R10467U-40, Hamamatsu, Hamamatsu
City, Japan) and a 460/80 nm band-pass filter. Acquisition was performed using VistaVision
software (ISS, Urbana-Champaign, Illinois). The excitation wavelength used was 740 nm with an
average power of about 7 mW on the sample. Calibration of lifetimes for the frequency-domain
system was performed by measuring the known lifetime of Coumarin 6, a single
exponential of 2.55 ns. FLIM data were collected until 100 counts in the brightest pixel of the
image were acquired. Data were processed using the SimFCS software developed at the Gratton
Lab (Laboratory of Fluorescence Dynamics (LFD), University of California Irvine, www.lfd.uci.edu).
FLIM analysis of intrinsic fluorophores was performed as previously described and reported in
detail^91,107,111. Phasor coordinates (g,s) were obtained through Fourier transformations. Cluster
identification was utilized to associate specific regions in the phasor to pixels in the FLIM dataset
according to published protocols^107.
3.7 Supplementary Material
3.7.1 Supplementary Tables
| Dataset | Scaling X-Y [µm] | Scaling Z [µm] | Imaged Volume (x, y, z, λ, t) [pixels] | Pixel Dwell [µs] | Gain [au] | Pinhole [µm] | Beam Splitters | Laser power [%] | Size [GB] |
|---|---|---|---|---|---|---|---|---|---|
| Fig 3.1 | 1.661 | 15.837 | x:2048, y:768, z:20, channels:33 | 3.15 | 825 | 73 | 458 | 20% @458 nm | 1.93 |
| Fig 3.4, SupFig 3.13, 3.14 | 0.208 | – | x:1024, y:1024, channels:33 | 12.6 | 820 | 601 | 690+ | 1.8% @740 nm | 0.06 |
| Fig 3.5, SupFig 3.15, 3.16 | 0.923 | 6.000 | x:3840, y:768, z:19, channels:33 | 3.15 | 805 | 229 | 488 | 5.5% @488 nm | 3.44 |
| Fig 3.6, SupFig 3.2, 3.17, 3.18 | 0.415 | 5.000 | x:2048, y:512, z:7, channels:33 | 6.30 | 800 | 186 | 458 | 1.0% @458 nm | 0.45 |
| Fig 3.7, SupFig 3.19 | 0.104 | – | x:2048, y:1024, channels:33 | 1.58 | 827 | 40 | 458 | 47.7% @458 nm | 0.13 |
| SupFig 3.24 | 0.069 | 0.069 | x:1024, y:1024, channels:33, time:100 | 1.59 | 800 | 70 | 488/561 | 1.2% @561 nm, 6.5% @488 nm | 6.93 |
| SupFig 3.1 | 0.923 | 6 | x:365, y:196, z:4, channels:33 | 6.30 | 796 | 147 | 458 | 5.8% @458 nm | 0.02 |
| SupFig 3.1 | 0.052 | 1 | x:451, y:825, z:6, channels:33 | 6.30 | 827 | 40 | 458 / 690+ | 60% @458 nm | 0.14 |
| SupFig 3.1 | 0.052 | 1 | x:1024, y:1024, z:6, channels:33 | 6.30 | 827 | 40 | 458 / 690+ | 60% @458 nm | 0.39 |
| SupFig 3.1 | 0.865 | 2 | x:512, y:512, z:70, channels:32 | 1.58 | 827 | 601 | 690+ | 12% @900 nm | 1.10 |
| SupFig 3.1 | 0.346 | 2 | x:1024, y:1024, z:60, channels:32 | 1.58 | 727 | 75 | 405 | 0.3% @405 nm | 3.75 |
| SupFig 3.1 | 0.371 | 2 | x:1024, y:1024, z:62, channels:32 | 3.15 | 820 | 41 | 458/561 | 2% @561 nm | 3.88 |
| SupFig 3.1 | 0.865 | 2 | x:512, y:512, z:70, channels:32, time:10 | 1.58 | 800 | 75 | 488/561 | 3% @561 nm, 2.6% @488 nm | 10.97 |
| SupFig 3.1 | 0.865 | 2 | x:512, y:512, z:70, channels:32, time:10 | 1.58 | 800 | 75 | 488/561 | 3% @561 nm, 2.6% @488 nm | 21.94 |
| SupFig 3.1 | 0.865 | 2 | x:512, y:512, z:70, channels:32, time:10 | 1.58 | 800 | 75 | 488/561 | 3% @561 nm, 2.6% @488 nm | 43.88 |

Supplementary Table 3.1: Parameters for in vivo imaging. All data points are 16-bit depth, acquired
using an LD C-Apochromat 40x/1.1 W lens.
111
Processing Time

| | SEER [sec] | ICA (3c) [sec] | Speed Up [fold] |
|---|---|---|---|
| Figure 3.4 | 0.44 | 3.45 | 7.9 |
| Figure 3.5 | 6.27 | 256.86 | 41.0 |
| Figure 3.5 subset | 2.89 | 77.82 | 26.9 |
| Figure 3.6 | 1.49 | 33.58 | 22.6 |
| Figure 3.7 | 0.52 | 10.19 | 19.5 |

Supplementary Table 3.2: Processing time comparison of SEER vs Independent Component Analysis
(scikit-learn implementation) for Figures 3.4–3.7.
Average Score

| | Colorfulness | Contrast | Sharpness |
|---|---|---|---|
| Gauss. Def. | 2.11 | 53.84 | 10.83 |
| Gauss r=.1 | 2.06 | 52.19 | 10.60 |
| Gauss r=.2 | 1.97 | 59.57 | 11.17 |
| Gauss r=.3 | 1.96 | 60.00 | 11.20 |
| Peak Wav. | 2.29 | 58.61 | 11.14 |
| SEER h=1 | 2.34 | 66.32 | 11.40 |
| SEER h=2 | 2.34 | 65.15 | 11.40 |

Supplementary Table 3.3: Average colorfulness, contrast and sharpness scores across Figures 3.4–3.7
for different visualization methods.
Color Quality Enhancement

| | Figure 3.4 | Figure 3.5 | Figure 3.6 | Figure 3.7 |
|---|---|---|---|---|
| Gauss. Def. | 39.47 | 22.72 | 58.37 | 46.64 |
| Gauss r=.1 | 38.26 | 16.39 | 61.29 | 46.34 |
| Gauss r=.2 | 43.55 | 30.11 | 62.53 | 46.71 |
| Gauss r=.3 | 43.07 | 30.26 | 63.89 | 46.89 |
| Peak Wav. | 43.15 | 29.09 | 61.70 | 47.27 |
| SEER h=1 | 45.72 | 31.08 | 72.87 | 53.18 |
| SEER h=2 | 48.38 | 32.37 | 69.33 | 49.58 |

Supplementary Table 3.4: Color Quality Enhancement scores for the datasets in Figures 3.4–3.7.
Parameter calculations are reported in the Methods section.
| # | Name | Sequence |
|---|---|---|
| 1 | attB1-Cerulean-P2A1-F | ggggacaagtttgtacaaaaaagcaggctaccatggtgagcaagggcgaggagctg |
| 2 | attB1-Cerulean-P2A1-R | ggttctcctccacgtctccagcctgcttcagcaggctgaagttagtagctccgcttcccttgtacagctcgtccatgccg |
| 3 | P2A1-CreERT2-attB2-F | caggctggagacgtggaggagaaccctggacctaatttactgaccgtacaccaaaatttg |
| 4 | P2A1-ERT2CreERT2-attB2-R | ggggaccactttgtacaagaaagctgggtaggagtgcggccgctatcaagc |
| 5 | attB2r-WPRE-attB3-F | ggggacagctttcttgtacaaagtggggtcaacctctggattacaaaatttgtg |
| 6 | attB2r-WPRE-attB3-R | ggggacaactttgtataataaagttggtgcggggaggcggcccaaagg |
| 7 | attB1-mKO2-F1 | ggggacaagtttgtacaaaaaagcaggcttcaccatggtgagtgtgattaaaccagag |
| 8 | mKO2-attB2-R1 | ggggaccactttgtacaagaaagctgggttttaatgagctactgcatcttctacctgc |

Supplementary Table 3.5: Primer list for plasmid constructions.
3.7.2 Supplementary Figures
Supplementary Figure 3.1. Computational time comparison of SEER and ICA for different file sizes.
(a) HySP and ICA run times (plot in log scale) were measured on an HP workstation with two 12-core CPUs,
128 GB RAM, and 1TB SSD. SEER run times were measured within a modified version of the software. ICA
run times were measured using a custom script and the FastICA submodule of the python module, scikit-
learn. Timers using the perf_counter function within the python module, time, were placed around
specific functions corresponding to the calculations required for the creation of SEER maps in HySP and
extracting individual component outputs from the custom ICA script. Data size varies from 0.02-10.97 GB,
with a constant number of bands (32 bands, 410.5 nm to 694.9 nm with 8.9 nm bandwidth), corresponding
to a range of 2.86·10^5 to 1.83·10^8 spectra. ICA testing was limited to a maximum of 10.97 GB because,
for larger sizes, the RAM requirements exceeded the 128 GB available on our workstation. (b) For the custom ICA script,
timers were placed to measure the time to reshape the hyperspectral data for ICA input, to run the ICA
algorithm, and to convert values of the ICA components into image intensity values, reaching minutes of
computation at just 1.1GB (plot in log scale). (c) For HySP, timers were placed to measure the generation
of the phasor values from hyperspectral data, including initial calculations of the real and imaginary
components (g and s) and creation of the phasor plot histogram. A timer was also placed around all
preparatory functions required for on-the-fly creation of SEER maps. The more memory-efficient phasor
process allowed us to compute datasets of size 0.02-43.9 GB, corresponding to a range of 2.86·10^5 to
7.34·10^8 spectra (plot in log scale).
Supplementary Figure 3.2. Comparison of SEER with visualized HySP^97 results.
Here we show a zebrafish embryo Tg(kdrl:eGFP); Gt(desmin-Citrine);Tg(ubiq:H2B-Cerulean) labelling
respectively vasculature, muscle, and nuclei. Live imaging with a multi-spectral confocal microscope (32-
channels) using 458nm excitation. Single plane slices of the tiled volume are rendered with SEER maps (3
channel, RGB) and compared to a rendering of the same dataset analyzed with HySP (here 5 channels)^97. (a)
Rendering of the 5-channel HySP-analyzed dataset; the dashed box is expanded in the zoomed-in portion of
panel a, with its (b) line profile to the right, taken along the solid line, showing all 5 separate channels: eGFP,
Citrine, Cerulean, pigments, and autofluorescence at 458nm. (c) Visualization of the 5-channel dataset as a
blended RGB, similar to how it appears on a screen. The (d) morphed mode center-of-mass visualization
shows patterns in accordance with HySP, with a differently color-coded (e) line profile along the solid line
in panel d, which shows intensities in the 3 R,G,B channels of the image. The profiles of the single R,G,B
channels do not match the unmixed HySP profiles in panel b. However, (f) color visualization of the same line
plot (as R,G,B vectors) shows patterns in accordance with the on-screen visualization of HySP unmixed data.
Similarly, (g) morphed mode max visualization shows an image in accordance with the rendered HySP-analyzed
data in panel a, with its (h) line profile along the solid line of the zoomed-in portion of panel g
being comparable to both the 5 separate HySP channels and the R,G,B profiles of the morphed
center-of-mass map in panel e. (i) The on-display color visualization of the RGB intensities in g reveals
color features different from those of the HySP unmixed channels (panel b).
Supplementary Figure 3.3. Simulated Hyperspectral Test Chart I rendered in TrueColor shows nearly
indistinguishable spectra.
The simulation is represented here in “TrueColor RGB” (Methods). S1, S2, and S3 spectra, acquired
respectively from CFP, YFP, and RFP zebrafish embryos, are used to generate a (a-i) 3-by-3 Simulated
Hyperspectral Test Chart. In each panel (a-i) of the chart, the three spectra (S1 to S3) are represented as
concentric squares (see panel a: outer: S1 - blue, intermediate: S2 - yellow, inner: S3 - red spectra
respectively). The spectrum S2 (intermediate square in each panel) is kept unchanged in all panels. The
maximum of spectrum S1 is shifted by d1 (-2 wavelength bins, -17.8 nm steps) with respect to the fixed
spectrum S2 maximum. The S3 maximum is shifted by d2 (2 wavelength bins, 17.8 nm steps) with respect to
the S2 maximum. The changes are applied for 2 steps along the vertical (d1) and horizontal (d2) axes of the
center panel assembly (a-i), starting from d1=d2=0 (panel a). The spectra utilized in each panel (a-i) are
represented in panels j-r. Each plot (j-r) represents the averaged normalized S1-S3 spectra as 32
wavelength bins, 8.9 nm bandwidth, 410-695 nm detection. Each panel has different visual contrast but is
generally difficult to distinguish by eye due to significant overlap in the spectra. (s) R,G,B channels used in the
Gaussian kernel for TrueColor representation (red, green, blue lines) and average spectrum for panels (a-i)
(yellow line) for reference.
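The construction of the chart's shifted spectra can be sketched as follows; here the three spectra are idealized Gaussians over the 32 bins rather than the measured CFP/YFP/RFP spectra, so the peak positions and widths are illustrative assumptions only.

```python
import numpy as np

N_BINS = 32  # 8.9 nm bandwidth, 410-695 nm detection

def gaussian_spectrum(peak_bin, sigma_bins=4.0, n_bins=N_BINS):
    """Idealized normalized emission spectrum peaking at peak_bin."""
    k = np.arange(n_bins)
    spec = np.exp(-0.5 * ((k - peak_bin) / sigma_bins) ** 2)
    return spec / spec.max()

# S2 fixed at bin 16; S1 shifted by d1 = -2 bins (-17.8 nm) and
# S3 by d2 = +2 bins (+17.8 nm), as in the maximally shifted panel.
d1, d2 = -2, 2
s2 = gaussian_spectrum(16)
s1 = gaussian_spectrum(16 + d1)
s3 = gaussian_spectrum(16 + d2)
```

With an 8.9 nm bin width, each one-bin shift of d1 or d2 moves a peak by 8.9 nm, reproducing the 17.8 nm offsets used in the chart.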
Supplementary Figure 3.4. Effect of spectral shape with constant intensities on radial map in absence
of background.
This simulation shows spectra of Gaussian shape with different standard deviations, using 32
wavelength bins, 8.9 nm bandwidth, and a 410-695 nm range, in the absence of background. All spectra
are centered at 543 nm (channel 16) and the integral of the intensities is kept constant. (a-l) For each value
of the standard deviation, a grayscale image and SEER visualization are presented. The map used is the
Radial map centered on the origin and extended to the border of the phasor plot. A color reference is
added in the phasor plot (m). Clusters on the phasor plot are distributed along the radius, with distance
from the origin inversely proportional to the standard deviation.
Supplementary Figure 3.5. Effect of spectrum intensity in presence of background on radial map.
In this simulation, the first panel (top-left) of the Simulated Hyperspectral Test Chart (Supplementary
Figure 3.3) is reduced in intensity by a factor of 10^1 to 10^4 (panels 1-4, respectively) in the presence of a
constant background. Background with an average intensity of 5 digital levels was generated in MATLAB;
Poissonian noise was added using the poissrnd() function. Grayscale images (a,d,g,j) are scaled by a (a)
factor of 10, (d) factor of 10^2, (g) factor of 10^3, (j) factor of 10^4. Radial map (original) visualization shows a
shift of panel colors toward blue with the decreasing intensities (b,e,h,k). The phasor plots (c,f,i,l)
(harmonic n=2) show a radial shift of the clusters toward the origin. The Radial map reference is added in (c).
(m) The absolute intensity plot shows the average spectrum for the four panels; maximum peak values are
1780, 182, 23, and 7 digital levels (panels 1-4, respectively). The normalized intensity spectra (n) show an
apparent broadening of the spectral shape with decreasing signal-to-noise.
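The intensity-reduction-plus-background procedure can be mimicked in Python with numpy's Poisson generator (the numpy analogue of MATLAB's poissrnd); the function below is an illustrative sketch, not the original MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def attenuate_with_background(cube, factor, bg_mean=5.0):
    """Scale a spectral cube down by `factor` and add a constant
    background of ~bg_mean digital levels with Poissonian noise."""
    background = rng.poisson(bg_mean, size=cube.shape)
    return cube / factor + background
```

Applying factors of 10^1-10^4 against the fixed background reproduces the four decreasing-SNR conditions of panels 1-4.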
Supplementary Figure 3.6. Radial and Angular reference map designs and modes differentiate nearly
indistinguishable spectra (Simulated Hyperspectral Test Chart I) (Supplementary Figure 3.3).
We present 4 different modes that can be applied to each map. Here, the second harmonic is utilized for the
calculations. Angular map (a) and Radial map (b) in Standard mode, Scaled mode, Max Morph mode and
Mass Morph mode. In Standard mode, the reference map is centered at the origin and limited by the
phasor unit circle. In Scaled mode, the reference map adapts to the phasor plot histogram, changing its
coordinates to wrap around the edges of the phasor clusters and enhancing contrast of the chosen map
properties. In Max Morph mode, the map is centered on the spectrum with highest frequency of
appearance in the phasor histogram. This mode improves sensitivity by using statistical frequency bias. In
Mass Morph mode, the map is centered on the weighted center of the phasor, enhancing sensitivity for
multiple small spectra. Visualizations are presented after 1x spectral denoising.
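The apex placement used by Mass Morph mode, centering the map on the weighted center of the phasor histogram, can be sketched as below; the binning, range, and function name are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def phasor_center_of_mass(g, s, bins=128):
    """Intensity-weighted centroid of the 2D (g, s) phasor histogram,
    used as the shifted map apex in Mass Morph mode (sketch)."""
    hist, g_edges, s_edges = np.histogram2d(
        g.ravel(), s.ravel(), bins=bins, range=[[-1, 1], [-1, 1]])
    g_centers = 0.5 * (g_edges[:-1] + g_edges[1:])
    s_centers = 0.5 * (s_edges[:-1] + s_edges[1:])
    weight = hist.sum()
    cg = (hist.sum(axis=1) * g_centers).sum() / weight
    cs = (hist.sum(axis=0) * s_centers).sum() / weight
    return cg, cs
```

Because every occupied histogram bin contributes in proportion to its pixel count, several small clusters pull the apex toward their shared centroid, which is what gives this mode its sensitivity to multiple small spectra.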
Supplementary Figure 3.7. Gradient Ascent and Descent reference map designs and modes
differentiate nearly indistinguishable spectra (Supplementary Figure 3.3).
Here, second harmonic is utilized for SEER. Gradient Ascent map (a) and Gradient Descent map (b) in
Standard mode, Scaled mode, Max Morph mode and Mass Morph mode. The two maps place a focus on
very different (Ascent) and similar (Descent) spectra by fading the reference map to dark at the center
and edges of the phasor plot unit circle respectively. Visualizations are presented after 1x spectral
denoising.
Supplementary Figure 3.8. Simulated Hyperspectral Test Chart II and its standard overlapping spectra.
Simulated SHTC II was generated from the same zebrafish embryo datasets and same design used in
SHTC I (Supplementary Figure 3.3), utilizing CFP, YFP, and RFP labeled samples and a 3-by-3 block chart, with
each block subdivided into 3 regions corresponding to spectra S1, S2, and S3. The aim is to test scenarios
with less overlapping spectra. We change the shifting distances in this simulation to d1 (-3 wavelength
bins, -26.7 nm steps) and d2 (3 wavelength bins, 26.7 nm steps). The channels used in the Gaussian
kernel for TrueColor RGB representation here were 650nm, 510nm, and 470nm, which respectively represent
R, G, B. The concentric squares in the lower right side of the simulation are separated by a peak-to-peak
distance of 53.6nm, with the outer and inner concentric squares well separated by 106.8nm. This distance is
similar to the emission gap between CFP (475nm EM) and tdTomato (581nm). Under these spectral
conditions most methods are expected to perform well.
Supplementary Figure 3.9. Radial and Angular reference map designs and modes rendering standard
overlapping spectra (Simulated Hyperspectral Test Chart II, Supplementary Figure 3.8).
Here, the first harmonic is utilized for SEER. The Angular map (a) and Radial map (b) in Standard mode, Scaled
mode, Max Morph mode, and Mass Morph mode are applied to the standard overlapping spectra
simulation. The reference maps show improved contrast consistently among the different modalities.
Visualizations are presented after 1x spectral denoising.
Supplementary Figure 3.10. Gradient ascent and descent reference map designs and modes
differentiation of standard overlapping spectra (Simulated Hyperspectral Test Chart II).
Here, the first harmonic is utilized for SEER. The Gradient Ascent map (a) and Gradient Descent map (b) are
shown in Standard mode, Scaled mode, Max Morph mode, and Mass Morph mode. The reference maps provide
enhanced visualization even in the scenario of spectra overlapping at a level similar to that of commonly used
fluorescent proteins. Visualizations are presented after 1x spectral denoising.
Supplementary Figure 3.11. Spectral denoising effect on Angular and Radial maps visualization of
standard overlapping spectra (Simulated Hyperspectral Test Chart II, Supplementary Figure 3.8).
Phasor spectral denoising affects the quality of data along the spectral dimension, without changing
intensities. Here second harmonic is utilized for calculations. Noisy data appears as a spread cluster on
the phasor, here shown overlaid with the (a) Angular map and (b) Radial map, with the overlaid
visualization exhibiting salt and pepper noise. (c, d) When denoising is applied on the phasor, the cluster
spread is reduced, providing greater smoothing and less noise in the simulated chart. (e, f) Increasing the
number of denoising filters results in a clearer distinction between the three spectrally different areas in
each block of the simulation. (a, c, e) In Max Morph mode, each denoising filter introduces a shift of the
apex of the map, changing the reference center of the color palette. (b, d, f) In Scaled mode, the less
scattered phasor cluster makes maximum use of the reference maps, enhancing the contrast (d, f) of the
rendered SHTC.
Supplementary Figure 3.12. Spectral denoising effect on Gradient Ascent and Descent maps
visualization of standard overlapping spectra (Simulated Hyperspectral Test Chart II, Supplementary
Figure 3.8).
The phasor spectral denoising principle described in (Supplementary Figure 3.11) applies to different
reference maps. In this case, the (a) Gradient Ascent map in Scaled mode and (b) Gradient Descent map in Mass
Morph mode are overlaid on the scattered phasor representation of a standard overlapping spectra
SHTC. The denoising filter removes outliers along the spectral dimension while preserving intensities. (c,
d) The phasor cluster spread is reduced after filtering, resulting in spectral smoothing of the images
affected by noise. Due to the changes in phasor cluster spread after filtering, the map reference for the
Gradient Ascent map has an increased brightness in comparison to its non-filtered representation (chart
panel in a and b). (e, f) The rendered SHTC after multiple denoising passes has higher intensity, which
simplifies distinction of subtle differences in spectra. (b, d, f) The denoising filter does not change the
clusters’ center of mass, therefore the apex of the reference map remains unchanged after filtering.
However, the filters play a role in reducing Poisson noise in the dataset, converging to a stable value after
5x filtering. The representation shows more uniformity in the concentric square areas within each
block, which are simulated using the same spectrum. The edges of these squares are now sharper and
easier to detect, suggesting the combination of SEER and phasor denoising can play an important role in
simplifying image segmentation.
Supplementary Figure 3.13. Visualization comparison for autofluorescence with other RGB standard
visualizations.
The visualization of an unlabeled, freshly isolated mouse tracheal explant (Figure 3.4) is shown here with
different standard approaches. Details for these visualizations are reported in the Methods section. (a)
SEER RGB mask obtained using gradient descent morphed map; this mask shows the colors associated by
SEER to each pixel, without considering intensity. (b) Average spectrum for the entire dataset. (c)
TrueColor 32 channels maximum intensity projection. (d) Peak wavelength RGB mask. (e) Gaussian Default
Kernel with RGB centered respectively at 650nm, 510nm and 470nm. (f) Gaussian Kernel at 10% threshold,
RGB values centered at 659nm, 534nm and 410nm. (g) Gaussian Kernel at 20% threshold, RGB values
centered at 570nm, 490nm and 410nm. (h) Gaussian kernel at 30% threshold, RGB values centered at
543nm, 472nm and 410nm. (i) wavelength-to-RGB color representation for Peak Wavelength mask in
panel d. A representation of the RGB visualization parameters is reported in the following subplots: (j)
kernel used for panel e, average spectrum of the dataset (yellow plot), (k) kernel used for panel f, average
spectrum of the dataset (yellow plot), (l) kernel used for panel g, average spectrum of the dataset (yellow
plot), (m) kernel used for panel h, average spectrum of the dataset (yellow plot).
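The Gaussian-kernel renderings compared in panels e-h can be approximated with a simple spectral projection onto three Gaussian weighting functions; the kernel width and the display normalization below are illustrative assumptions rather than the exact parameters of the compared method.

```python
import numpy as np

WAVELENGTHS = 410.5 + 8.9 * np.arange(32)  # 32 bands, 8.9 nm spacing

def gaussian_kernel_rgb(cube, centers=(650.0, 510.0, 470.0), sigma=30.0):
    """Project a spectral cube (spectral axis last) to RGB using Gaussian
    kernels centered on the default R, G, B wavelengths."""
    rgb = np.empty(cube.shape[:-1] + (3,))
    for i, c in enumerate(centers):
        kernel = np.exp(-0.5 * ((WAVELENGTHS - c) / sigma) ** 2)
        rgb[..., i] = (cube * kernel).sum(axis=-1)
    return rgb / rgb.max()  # normalize for display
```

Shifting the `centers` tuple reproduces the 10%, 20%, and 30% threshold variants, which differ only in where the three kernels are placed along the average spectrum.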
Supplementary Figure 3.14. Phasor Fluorescence Lifetime Imaging Microscopy (FLIM) of unlabeled
freshly isolated mouse tracheal explant.
(a) Phasor FLIM representation of fluorescence lifetime data for an unlabeled, freshly isolated mouse tracheal
explant, acquired in the frequency domain utilizing a 2-photon fluorescence microscope (LSM 780, Zeiss, Jena)
tuned at 740nm, coupled with an acquisition unit with Hybrid Detectors (FLIM Box, ISS, Urbana-Champaign).
The selected regions correspond to a more Oxidative Phosphorylation phenotype (red circle)
and a more Glycolytic phenotype (yellow circle). (b) FLIM segmented image corresponding to the selection
performed on the phasor (a), where cells in the apical layer exhibit an Oxidative Phosphorylation phenotype
compared to cells in the basal layer with a Glycolytic phenotype. (c) The line joining free and bound NADH in
the phasor plot is known as the “metabolic trajectory”; a shift in the free NADH direction is
representative of a more reducing condition and a glycolytic metabolism, while a shift toward more
bound NADH is indicative of more oxidizing conditions and more oxidative phosphorylation, as described
in previous studies^107,110,111,117.
The extremes of the metabolic trajectory are the lifetimes for free and bound NADH. The parameters for
lifetime (τ phase and modulation) are in line with those reported in the literature (0.4ns free and 1.0-3.4 ns
bound)^107-112.
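The endpoints of the metabolic trajectory can be located on the phasor with the standard frequency-domain relations for a single-exponential lifetime; the 80 MHz modulation frequency and the exact lifetime values below are assumptions for illustration, not the acquisition parameters of this experiment.

```python
import numpy as np

def lifetime_phasor(tau_ns, f_mhz=80.0):
    """Phasor coordinates (g, s) of a single-exponential decay with
    lifetime tau at modulation frequency f:
    g = 1/(1 + (wt)^2), s = wt/(1 + (wt)^2), with wt = 2*pi*f*tau."""
    wt = 2.0 * np.pi * (f_mhz * 1e6) * (tau_ns * 1e-9)
    g = 1.0 / (1.0 + wt ** 2)
    s = wt / (1.0 + wt ** 2)
    return g, s

# Approximate trajectory endpoints: free NADH ~0.4 ns, bound ~3.4 ns.
free_nadh = lifetime_phasor(0.4)
bound_nadh = lifetime_phasor(3.4)
```

Single-exponential species fall on the universal semicircle (g^2 + s^2 = g); shorter lifetimes lie closer to (1, 0), so a shift toward free NADH moves pixels along the trajectory toward the right side of the semicircle.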
Supplementary Figure 3.15. Gray scale visualization of a single fluorescence label against multiple
autofluorescences.
Monochrome representation of the average spectral intensity for a single optical section of Tg(fli1:mKO2)
(pan-endothelial fluorescent protein label) zebrafish presenting intrinsic signal arising from the yolk and
xanthophores (pigment cells). The dataset was acquired using a confocal microscope in multi-spectral mode
(LSM 780, Zeiss, Jena) with 488nm excitation. Average intensity was calculated along the spectral
dimension and then represented in grayscale.
Supplementary Figure 3.16. Visualization comparison for single fluorescent label with other RGB
standard visualizations in presence of autofluorescence.
Visualization of Tg(fli1:mKO2) (pan-endothelial fluorescent protein label) zebrafish with intrinsic signal
arising from the yolk and xanthophores (pigment cells) (Figure 3.5) is here shown with different standard
approaches. Details for these visualizations are reported in the Methods section. (a) SEER RGB mask for a
single z-plane, obtained using the gradient angular map in scaled mode; this mask shows the colors associated
by SEER to each pixel, without considering intensity. (b) SEER maximum intensity projection (MIP) for the
entire volume. (c) TrueColor 32 channels volume MIP. (d) Peak wavelength volume MIP. (e) Gaussian Default
Kernel with RGB centered respectively at 650nm, 510nm and 470nm. (f) Gaussian Kernel at 10% threshold,
RGB values centered at 686nm, 588nm and 499nm. (g) Gaussian Kernel at 20% threshold, RGB values
centered at 668nm, 579nm and 499nm. (h) Gaussian kernel at 30% threshold, RGB values centered at
641nm, 570nm and 499nm. (i) wavelength-to-RGB color representation for Peak Wavelength mask in
panel d. A representation of the RGB visualization parameters is reported in (j) Average spectrum (blue
plot) for the entire dataset with boundaries used for TrueColor 32ch MIP in panel c. (k) Kernel used for
panel e, average spectrum of the dataset (yellow plot), (l) kernel used for panel f, average spectrum of
the dataset (yellow plot), (m) kernel used for panel g, average spectrum of the dataset (yellow plot), (n)
kernel used for panel h, average spectrum of the dataset (yellow plot).
Supplementary Figure 3.17. Visualization comparison for triple label fluorescence with other RGB
standard approaches.
Visualization of Tg(kdrl:eGFP); Gt(desmin-Citrine); Tg(ubiq:H2B-Cerulean) labelling respectively
vasculature, muscle, and nuclei (Figure 3.6) is shown here with different standard approaches. Details for
these visualizations are reported in the Methods section. The same slice (here z=3) is shown as a maximum
intensity projection (MIP) using: (a) SEER gradient descent map in max morph mode, (b) SEER MIP angular
map in mass morph mode, (c) TrueColor 32 channels, (d) Peak wavelength, (e) Gaussian Default Kernel with
RGB centered respectively at 650nm, 510nm and 470nm, (f) Gaussian Kernel at 10% threshold, RGB values
centered at 597nm, 526nm and 463nm, (g) Gaussian Kernel at 20% threshold, RGB values centered at
579nm, 517nm and 463nm, and (h) Gaussian kernel at 30% threshold, RGB values centered at 561nm, 526nm
and 490nm. A representation of the RGB visualization parameters is reported in (i) wavelength-to-RGB color
representation for Peak Wavelength mask in panel d, (j) Average spectrum (blue plot) for the entire
dataset with boundaries used for TrueColor 32ch MIP in panel c. (k) kernel used for panel e, average
spectrum of the dataset (yellow plot), (l) kernel used for panel f, average spectrum of the dataset (yellow
plot), (m) kernel used for panel g, average spectrum of the dataset (yellow plot), (n) kernel used for panel
h, average spectrum of the dataset (yellow plot).
Supplementary Figure 3.18. SEER of zebrafish volumes in Maximum Intensity Projection (MIP) and
Shadow Projection.
The capability of SEER to improve visualization of spectral datasets is translatable to 3D visualizations with
different visualization modalities. Here we show a zebrafish embryo Tg(kdrl:eGFP); Gt(desmin-
Citrine);Tg(ubiq:H2B-Cerulean) labelling respectively vasculature, muscle, and nuclei. (a) MIP of an
Angular map volume with Mass Morph mode. (b) The same combination of map and mode is shown using
shadow projection. While the volume rendering approaches are different, the spatial distinction between
fluorescent labels is maintained. The Gradient Descent map in Max Morph mode is applied here to the
same dataset using (c) MIP and (d) shadow projection. With the Gradient Descent map, (c) MIP improves
contrast for determining spatial distinction between fluorophores. (d) Shadow projection further
enhances the location of skin pigments (green).
Supplementary Figure 3.19. Visualization comparison for combinatorial expression with other RGB
standard approaches.
Visualization of ubi:Zebrabow muscle (Figure 3.7) with different standard approaches. Details for these
visualizations are reported in the Methods section. The same slice is shown as an RGB mask which
represents the color associated to each pixel, independent from the intensity, or as a maximum intensity
projection (MIP) using: (a) SEER gradient descent map mask in scaled mode, (b) Average spectrum (blue
plot) for the entire dataset with boundaries used for TrueColor 32ch MIP in panel c. (c) TrueColor 32
channels, (d) Peak wavelength mask, (e) Gaussian Default Kernel with RGB centered respectively at 650nm,
510nm and 470nm. (f) Gaussian Kernel at 10% threshold, RGB values centered at 659nm, 561nm and
463nm. (g) Gaussian Kernel at 20% threshold, RGB values centered at 641nm, 552nm and 463nm. (h)
Gaussian kernel at 30% threshold, RGB values centered at 632nm, 552nm and 472nm. (i) wavelength-to-
RGB color representation for Peak Wavelength mask in panel d. A representation of the RGB visualization
parameters is reported in (j) kernel used for panel e, average spectrum of the dataset (yellow plot), (k)
kernel used for panel f, average spectrum of the dataset (yellow plot), (l) kernel used for panel g, average
spectrum of the dataset (yellow plot), (m) kernel used for panel h, average spectrum of the dataset (yellow
plot).
Supplementary Figure 3.20. Processing speed comparison of SEER vs. Independent Component Analysis
for the datasets of Figures 3.4-3.7.
Here we compare the processing time between SEER and the FastICA submodule of the python module,
scikit-learn. With the same measurement strategy used in Supplementary Figure 3.1, timers using the
perf_counter function within the python module, time, were placed around specific functions
corresponding to the calculations required for the creation of SEER maps in HySP and with FastICA. (a)
Run time for SEER (magenta) was considerably lower than ICA (3 components) (cyan) in all Figures and
their subsets. (b) The speed up was higher for larger z-stack spectral datasets (Figure 3.5, 41-fold
improvement) and reduced for smaller, single spectral images (Figure 3.4, 7.9-fold improvement).
Numerical values for these plots are reported in Supplementary Table 3.2.
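The ICA baseline timed here can be reproduced in outline with scikit-learn's FastICA; the cube size and random data below are placeholders for the actual spectral images, and this is a sketch rather than the exact benchmarking script.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(seed=0)
# Placeholder (y, x, channels) spectral image, flattened to (pixels, channels).
cube = rng.poisson(20, size=(64, 64, 32)).astype(float)
X = cube.reshape(-1, cube.shape[-1])

# 3 independent components used as R, G, B, with the LogCosh contrast
# function (the ICA-L variant of the later comparison figures).
ica = FastICA(n_components=3, fun='logcosh', random_state=0, max_iter=1000)
components = ica.fit_transform(X)               # shape: (pixels, 3)
rgb = components.reshape(cube.shape[0], cube.shape[1], 3)
# Rescale each component to [0, 1] to use it as a color channel.
rgb = (rgb - rgb.min(axis=(0, 1))) / np.ptp(rgb, axis=(0, 1))
```

The reshape-run-rescale sequence corresponds to the three timed stages of the custom ICA script described in Supplementary Figure 3.1b.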
Supplementary Figure 3.21. RGB Visualization with multiple modalities under different spectral
overlap and SNR conditions.
In this simulation, the first panel (top-left) of the Simulated Hyperspectral Test Chart (SHTC,
Supplementary Figure 3.3) is reduced in intensity by a factor of 5·10^0 to 5·10^4 (panels 1-5, respectively) in
the presence of a constant background. Background with an average intensity of 5 was generated in MATLAB;
Poissonian noise was added using the poissrnd() function, obtaining 5 different levels of SNR. (a,b,c,d,e)
Peak-to-peak distance for the spectra in the middle and outer concentric squares in the SHTC is shifted by
units of 8.9nm with respect to the peak of the average spectrum in the intermediate square, which is kept
constant in this simulation (similarly to Supplementary Figure 3.3) starting from distance 0 (a) to 35.6nm
(e). For each level of spectral overlap (a-e), seven different RGB visualization modalities are presented
here for comparison at five different levels of SNR. In order from the top row: SEER at harmonic 2 (SEER
h=2) and harmonic 1 (SEER h=1), peak wavelength selected (Peak Wav.), Gaussian kernel set at 30% of the
spectrum (Gauss r=.3), set at 20% (Gauss r=.2) and at 10% (Gauss r=.1), finally Gaussian kernel set at
650nm, 510nm, 470nm for RGB respectively (Gauss. Def.). (f) the wavelength-to-RGB conversion map
used for the peak wavelength visualization. (g) center wavelength for the R=579nm, G=534nm, B=499nm
channels of Gauss r=.3. Average spectrum (yellow) (h) center wavelength for the R=597nm, G=543nm,
B=490nm channels of Gauss r=.2. Average spectrum (yellow). (i) center wavelength for the R=614nm,
G=543nm, B=481nm channels of Gauss r=.1. Average spectrum (yellow). (j) center wavelength for the
R=650nm, G=510nm, B=470nm channels of Gauss. Def. Average spectrum (yellow). The maps utilized here
for SEER were gradient descent in scale mode (a, b, c, d), and center of mass mode (e). Visualization with
SEER shows a reasonably constant contrast and color for the different spectra in the simulation at different
SNR.
Supplementary Figure 3.22. Spectra of extreme conditions in SNR-Overlap simulation.
The extremes of the simulation utilized in Supplementary Figure 3.21 are reported here as spectra for
comparison. For high signal-to-noise ratio (a) average spectrum for spectra with peak-maxima distance
set to zero and (b) example single spectra from each concentric square region of the simulation (digital
levels, DL). (c) Average and (d) single spectra at high SNR for simulation with spectra separated with a
peak-to-peak distance of 35.6nm. (e) Reference Simulated Hyperspectral Test Chart with color coded
concentric squares. The low SNR simulation spectra are reported here for a peak distance of zero as (f)
average and (g) single and for a peak distance of 35.6nm as (h) average and (i) single.
Supplementary Figure 3.23. Spectral separation accuracy of SEER under different spectral overlap and
SNR conditions.
Spectral separation accuracy was calculated for different signal-to-noise ratios and spectral maxima
separation (a) aligned, (b) 8.9nm, (c) 17.8nm, (d) 26.7nm, (e) 35.6nm, starting from the visualizations in
Supplementary Figure 3.21 and corresponding spectra in Supplementary Figure 3.22. The spectral
separation accuracy is calculated here as the sum of the Euclidean distance of the RGB vectors between
pairs of the concentric squares of the simulation, in ratio to the largest color separation (red to green, red
to blue, blue to green). A thorough description of spectral separation accuracy calculation is reported in
the Methods section. Each value in the plots represents the average distance over 200^2 pixels; error bars are
the standard deviation of normalized spectral separation accuracy value across all pixels. The average
spectral separation accuracies over multiple SNR conditions for each spectral maxima separation are: (a) with
highly overlapping spectra, SEER provides on average 38.0% for harmonic 1 and 50.6% for harmonic 2, while
the best-performing alternative here is Gauss r=.3 with an average of 26.7%; (b) with an 8.9nm
peak-to-peak separation, SEER h=1 averages 57.0% and SEER h=2 49.6%, with the best alternative being Peak
Wavelength at 22.2%; (c) with 17.8nm separation, SEER h=1 averages 57.2% and SEER h=2 60.0±2.3%, with the
best alternative Gauss r=.3 at 26.2%; (d) with 26.7nm separation, SEER h=1 averages 59.9% and SEER h=2
60.4%, with the best alternative Gauss r=.3 at 32.1%; (e) with well-separated spectra 35.6nm apart, SEER h=1
averages 66.3% and SEER h=2 66.7%, with the best alternative Gauss r=.3 scoring 43.5% on average.
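The score can be sketched as follows: sum the pairwise Euclidean distances between the mean RGB vectors of the three regions, then normalize by the largest possible separation of three colors (pure red, green, and blue). This is a simplified illustration; the per-pixel computation in the Methods section may differ in detail.

```python
import numpy as np

def spectral_separation_accuracy(rgb_a, rgb_b, rgb_c):
    """Pairwise RGB distance between three regions' mean colors, as a
    fraction of the maximal three-color separation (pure R, G, B)."""
    total = (np.linalg.norm(rgb_a - rgb_b)
             + np.linalg.norm(rgb_a - rgb_c)
             + np.linalg.norm(rgb_b - rgb_c))
    # Pure red, green, and blue are each sqrt(2) apart in unit RGB space.
    return total / (3.0 * np.sqrt(2.0))
```

Three regions rendered as pure red, green, and blue score 1, while three identically colored regions score 0, so the percentage reflects how well a visualization keeps the concentric squares apart in color space.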
Supplementary Figure 3.24. Comparison of SEER and ICA spectral image visualization (RGB) under
different spectral overlap and SNR conditions.
The same simulation used in Supplementary Figure 3.21, which changes parameters for the Simulated
Hyperspectral Test Chart obtaining different values of peak-to-peak spectral overlap and signal to noise,
is used here to compute the spectral separation accuracy (Methods) for Independent Component Analysis
using ENVI, with 3 independent components (ICs) with optimization for this specific dataset. (a,b,c,d,e).
The three ICs are utilized as R, G, B channels for creating a color image for each simulation parameter (ICA
= 3 line) and are shown here next to SEER harmonic 1 and 2 (SEER h=1 and SEER h=2 respectively). Error
bars are the standard deviation. (f,g,h,i,j) The parameters of spectral separation accuracy described in the
Methods section are applied here to the SEER and ICA results. Each value in the plots represents the
average distance over 200^2 pixels, and error bars are the standard deviation of the normalized spectral
separation accuracy value across all pixels. The overall spectral separation accuracy of ICA, as calculated here,
averages 24.8% for the Kurtosis function (ICA-K) and 24.7% for LogCosh (ICA-L), while SEER averages 55.7%
for h=1 and 57.5% for h=2. For the different levels of overlap, the average spectral separation accuracies are:
(f) ICA-K 15.8%, ICA-L 15.9%, SEER 38.0% for harmonic 1, 50.6% for harmonic 2; (g) ICA-K 29.4%, ICA-L 29.2%,
SEER h=1 57.0%, SEER h=2 49.6%; (h) ICA-K 20.1%, ICA-L 19.2%, SEER h=1 57.2%, SEER h=2 60.0%; (i) ICA-K
30.3%, ICA-L 30.1%, SEER h=1 59.9%, SEER h=2 60.4%; (j) ICA-K 28.6%, ICA-L 28.8%, SEER h=1 66.3%, SEER
h=2 66.7%.
Supplementary Figure 3.25. Quantification of enhancement for Figures 3.4-3.7.
The scores of (a) colorfulness, (b) contrast, (c) sharpness and (d) Color Quality Enhancement (CQE) are
calculated according to the Methods section for multiple visualization strategies. Average values are reported
in Supplementary Tables 3.3 and 3.4. (a) Colorfulness values for SEER were generally higher than for other
approaches, with the exception of the Figure 3.7 Peak Wavelength visualization (reported in Supplementary
Figure 3.19d), owing to a very low average intensity in the red channel (840 digital levels) and an almost
double average green-to-blue intensity ratio (1.7), which makes the β parameter used in colorfulness small
on average and the denominator of the second logarithm in the colorfulness equation (Methods)
approximately equal to 1, producing a ratio of variance of β to average β a factor of 10 larger than usual. This
combination of intensities results in a colorfulness 1.03-fold higher than SEER h=2; however, in this
case the value of colorfulness does not correspond to human observation (Supplementary Figure 3.19d),
suggesting this score could be an outlier due to a special combination of intensities. The values of (b)
contrast and (c) sharpness show higher performance for SEER. (d) The CQE score of SEER was higher than
the standards, with improvements of 11%-26% for Figure 3.4, 7%-98% for Figure 3.5, 14%-25% for Figure
3.6, and 12%-15% for Figure 3.7.
Supplementary Figure 3.26. Visualization of photobleaching with SEER.
Photo-bleaching experiments were performed on a 24 hpf zebrafish embryo Gt(cltca-citrine);
Tg(fli1:mKO2); Tg(ubiq:memTdTomato), labeling clathrin, pan-endothelial cells and membrane, respectively.
The experiments were performed utilizing the “bleaching” modality in the Zeiss Zen 780 inverted confocal,
where single z positions were acquired in lambda mode. Frames are acquired every 13.7 sec, with 5
intermediate bleaching frames (not acquired) at high laser power until image intensity reached 90%
bleaching. The SEER RGB mask represents the values of colors associated to each pixel, independent from
the intensity values. The map used here is Radial map in Center of Mass mode. In this modality the map
will adjust its position on the shifting center of mass of the phasor clusters, visually compensating for the
decrease in intensity. (a) In the initial frame the cltca-citrine is associated to a magenta color, membrane
to cerulean, pan-endothelial is not in frame and background to yellow. (b) Frame 10 shows consistent
colors with the initial bleaching; the colors are maintained (c) at frame 40 and (d) frame 70 where most
of the signal has bleached and most colors have switched to yellow (here, background). I Final frame
shows the 90% bleached sample. The Alpha Color rendering adds the information of intensity to the image
visualization. Here we show for comparison (f) frame 1, (g) frame 10, (h) frame 40 and (i) frame 70. Scale
bar 10 𝜇𝜇 m. (j) Average total intensity plot as a function of frame, calculated from the sum of 32 channels,
shows evident bleaching in the sample.
Supplementary Figure 3.27. Morph mode algorithm pictorial abstraction.
(a) A Radial map in standard mode centered at the origin O can be abstracted as (b) a 3D conic shape with height h and apex A. (c) Upon shifting the apex of the cone from A to A′, the map reference center translates from the origin O to the projection A′⊥. During this shift, the edges of the cone base remain anchored on the phasor unit circle. (c-d) If we consider a plane cutting the oblique cone horizontally, the resulting section is a circle with center O′ and radius F′. The projection of this circle is centered on O′⊥, which lies on the line OA′⊥ adjoining the fixed center O and the new apex projection A′⊥, and has the same radius F′. As a result, (d) all of the points in each of these projected circles are shifted along the vector OO′⊥ on the phasor plot.
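One way to read this geometry, assuming heights interpolate linearly between the anchored base (the unit circle, which stays fixed) and the shifted apex, is that a point at radius r from the origin moves by (1 - r) times the apex displacement. A hypothetical sketch of that reading (the function name and linear interpolation are assumptions, not the HySP implementation):

```python
import numpy as np

def morph_shift(points, apex_xy):
    """Shift phasor points as the cone apex moves from the origin to A'.

    Assumption: intermediate circles translate linearly with height while
    the base stays anchored on the unit circle, so a point at radius r
    from the origin shifts by (1 - r) times the apex displacement. Points
    on the unit circle are fixed; the map center moves fully to A'-perp.
    `points` is an (N, 2) array of (g, s) coordinates inside the circle.
    """
    points = np.asarray(points, dtype=float)
    apex = np.asarray(apex_xy, dtype=float)
    r = np.linalg.norm(points, axis=1, keepdims=True)  # distance from origin
    return points + (1.0 - r) * apex                   # linear morph toward A'
```

Under this reading, the origin is mapped exactly to the new apex projection while the unit-circle boundary is left untouched, matching the anchored-base picture above.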
Supplementary Figure 3.28. Autofluorescence visualization in volumetric data of unlabeled freshly
isolated mouse tracheal explant.
A tiled z-stack (x,y,z) imaged with multispectral two-photon microscopy (740 nm excitation, 32 wavelength bins, 8.9 nm bandwidth, 410-695 nm detection) is visualized here as a single (x,y) z-slice SEER RGB Gradient Descent Max Morph mask at (a) 43 µm, (b) 59 µm, (c) 65 µm depth. Color differences
between basal and apical layer cells are maintained at different depths, with consistent hue for each of
the cell layers. Colorbar is provided as a simplified indicator of the spectral signatures within the samples;
similar colors denote similar spectral signatures in nm. Volume renderings presented as SEER Alpha Color
renderings for (d) top-down (x,y) view, (e) Lateral (y,z) view and (f) zoomed-in lateral (y,z) view show the
shape and the 98 µm thickness of the unlabeled tissue sample.
141
3.7.3 Supplementary Notes
Supplementary Note 3.1: Choice of harmonic for visualization
The distribution of spectral wavelengths on the phasor plot is highly dependent on the
harmonic number used. Typically, the first and second harmonics are utilized for obtaining the
hyperspectral phasor values due to visualization limitations imposed by branching within the Riemann surfaces in complex space [97].
The first harmonic results in a spectral distribution which approximately covers 3/2 π radians, along a counterclockwise path within the universal circle, for spectra in the visible range (400 nm – 700 nm). As a result, spectra separated by any peak-to-peak distance will appear in different positions on the phasor plot. However, the first harmonic provides a less efficient use of the phasor space, leaving 1/2 π radians unutilized and leading to a lower dynamic range of separation, as can be seen in Supplementary Figures 3.9 and 3.10.
Similarly, the second harmonic approximately spans 3 π radians (twice the span of the first harmonic) on the phasor plot for spectra within the visible range (400 nm – 700 nm), distributing spectra in a more expansive fashion within the universal circle, simplifying the distinction of spectra which may be closely overlapping and providing a higher dynamic range of separation, as demonstrated in Supplementary Figures 3.6 and 3.7. The downside of this harmonic is the presence of an overlap region from orange to deep red fluorescence. Within this region, spectra separated by 140 nm (in our system with 32 bands, 410.5 nm to 694.9 nm, with 8.9 nm bandwidth) may end up overlapping on the phasor plot. In this scenario, it would not be possible to differentiate those well-separated spectra using the second harmonic, requiring the use of the first. Thanks to SEER,
the choice of which harmonic to use for visualization can be quickly verified and changed within
the HySP software (http://bioimaging.usc.edu/software.html).
In the common scenario of imaging with a single laser line, the range of the majority of the signal emitted from multiple common fluorophores is likely to be much smaller than 150 nm, owing to the Stokes shift, which is usually in the 20-25 nm range. Excitation spectra of fluorescent proteins separated by 140 nm generally do not overlap, requiring a second excitation wavelength to obtain the signal.
The SEER method presented here has utilized the second harmonic in order to maximize the
dynamic range of the phasor space and separate closely overlapping spectra. However, SEER can
work with the first harmonic seamlessly, maintaining swift visualization of multiple fluorophores
that may be far in peak spectral wavelength.
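The harmonic-dependent coordinates discussed in this note follow the standard spectral phasor transform from the cited literature; a minimal sketch (the function name is illustrative, and the normalization by total intensity is the usual convention):

```python
import numpy as np

def spectral_phasor(spectrum, harmonic=2):
    """Compute spectral phasor coordinates (g, s) for one emission spectrum.

    `spectrum` is a 1D array of intensities over N wavelength bins (e.g.
    32 channels spanning 410.5-694.9 nm). The coordinates are the real
    and imaginary Fourier components at the chosen harmonic, normalized
    by total intensity, so every spectrum maps inside the unit circle.
    """
    spectrum = np.asarray(spectrum, dtype=float)
    n = spectrum.size
    k = np.arange(n)
    total = spectrum.sum()
    g = np.sum(spectrum * np.cos(2 * np.pi * harmonic * k / n)) / total
    s = np.sum(spectrum * np.sin(2 * np.pi * harmonic * k / n)) / total
    return g, s
```

Switching `harmonic` from 1 to 2 doubles the angular frequency of the transform, which is what spreads visible spectra over a wider arc and creates the wrap-around overlap region described above.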
Supplementary Note 3.2: Color visualization limitations for SEER
The SEER maps are built based on the frequency domain values generated by applying the
phasor method to hyperspectral and multispectral fluorescent data. RGB colors are used to
directly represent these values. As such, the quality of color separation has a maximum
resolution limited by the spectral separation provided by the phasor method itself. Therefore, as long as the phasor method can differentiate between spectra dominated by fluorescent signal (high signal-to-noise) and spectra dominated by noise (low signal-to-noise), the SEER maps will assign different colors to them. If signal is indistinguishable from noise, SEER maps will assign the same color to both. In the scenario where spectra derived from two different effects are exactly the same,
for example, a case where low protein expression is on an outer layer and a high-level expression
is attenuated at a deeper level, the phasor method and the SEER maps, in their current
implementation, will not be able to differentiate between the two effects. The separation of
these two effects is a different and complex problem which depends on the optical microscopy components, the sample, the labels, the multispectral imaging approach, and other factors in the experimental design; we believe this separation falls outside the scope of this paper and constitutes a project of its own.
Supplementary Note 3.3: Measuring color contrast in fluorescent images
There is an inherent difficulty in determining an objective method to measure the color image quality of fluorescent images. The main challenge is that for the majority of fluorescence microscopy experiments, a reference image does not exist, because there is an inherent uncertainty related to the image acquisition. Therefore,
any kind of color image quality assessment will need to be based solely on the distribution of
colors within an image.
This type of assessment has its own further challenges. Although there have been a variety
of quantitative methods formulated to determine the quality of intensity distributions in
grayscale images, such methods for color images are still being debated and tested [126,130–132]. This
lack of suitable methods for color images mainly comes from the divide between the
mathematical representation of the composition of different colors and human perception of
those same colors. This divide occurs because human color perception varies widely and is
nonlinear for different colors; whereas the quantitative representation of any color is usually a
linear combination of base colors such as Red, Green, and Blue. This nonlinear human perception
of color is closely related to the concept of hue. Loosely speaking, hue is the dominant wavelength of the reflected light: hues perceived as blue reflect light toward the short-wavelength end of the spectrum, and hues perceived as red toward the long-wavelength end. Generally, each individual color has a
unique holistic trait which is determined by its distinctive spectrum. Discretization of the
spectrum into multiple components cannot fully describe the original richness in color.
The current methods for determining the quality of an RGB image usually adapt grayscale
methods in two different ways. The first method converts the three-channel color image into a single-channel grayscale image before measuring the quality. The second
method measures the quality of each channel individually and then combines those
measurements with different weights. However, both methods face limitations in providing a
value of quality that correlates well with human perception. The first method loses information
when converting the color image to grayscale. The second method tries to interpret the nonlinear
human perception of the quality of a color image by separating it into three channels and
measuring them individually. The inherent hue of a color, however, is more than the sum of the
individual component colors, since each channel taken individually is not necessarily as colorful
as the combined color.
A more complete color metric should take hue into account, such as by measuring the
colorfulness loss between the original and processed images [130]. In conclusion, as a consequence
of this limitation in measuring colorfulness for current methods, there is currently no established
“true measure of contrast” within fluorescent color images.
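The two grayscale-adaptation strategies described above can be sketched as follows. Both function names are hypothetical, and the Rec. 601 luma weights plus RMS contrast are one common choice rather than the specific metrics used in this chapter; the sketch only illustrates where each strategy discards hue information.

```python
import numpy as np

def contrast_via_grayscale(rgb):
    """Strategy 1: collapse RGB to grayscale first, then score.

    Uses Rec. 601 luma weights as one common conversion; any chromatic
    information lost here never reaches the contrast score.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return gray.std() / (gray.mean() + 1e-12)  # RMS contrast of the luma image

def contrast_per_channel(rgb, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Strategy 2: score each channel independently, then combine.

    The weighted sum treats hue as separable into R, G, B contributions,
    which is exactly the limitation discussed above: a channel taken alone
    is not necessarily as colorful as the combined color.
    """
    scores = [rgb[..., c].std() / (rgb[..., c].mean() + 1e-12) for c in range(3)]
    return float(np.dot(weights, scores))
```

Two images with identical per-channel statistics but different channel correlations score identically under both strategies, even though a human observer may perceive very different colorfulness.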
3.8 Chapter Conclusion
In this project, we have introduced and validated SEER, a near real-time visualization method
of large hyperspectral datasets; this task has traditionally been very challenging using standard
multispectral dataset visualization approaches. Through the use of the Phasor method and
rapidly deployable color maps for differentiating spectral properties, SEER provides a fast,
replicable, and objective way to distinguish between different spectral regions of a sample. This
can be done within a single dataset or among multiple datasets, across a wide range of dataset
sizes. With these characteristics, SEER is best situated as a preprocessing tool for providing an
improved view during acquisition and before analysis. With its real-time nature, SEER can
potentially allow researchers to rapidly determine the best possible imaging parameters during
image acquisition of hyperspectral datasets, which are usually very large and unwieldy to acquire.
This then allows researchers to efficiently make more educated choices in optimizing the data
for any subsequent processing and analysis of the datasets. Overall, SEER has the potential to greatly improve image quality and optimize dataset size through refinement of the experimental pipeline, and fulfills a great need for handling ever-increasing multidimensional datasets.
Chapter 4
Quantitative data extraction through Hybrid Unmixing (HyU)
4.1 Author Contribution
Authors: Hsiao Ju Chiang 1,2,†, Daniel E.S. Koo 1,2,†, Masahiro Kitano 1,3, Jay Unruh 4, Le A. Trinh 1,2,3, Scott E. Fraser 1,2,3, Francesco Cutrale 1,2,3
1 Translational Imaging Center, University of Southern California, Los Angeles, CA
2 Department of Biomedical Engineering, University of Southern California, Los Angeles, CA
3 Molecular and Computational Biology, University of Southern California, Los Angeles, CA
4 Stowers Institute for Medical Research, Kansas City, MO
† Equal contribution
4.2 Chapter Summary
In this chapter, we proceed to establish a workflow for analyzing the multitude of
components available to us at widely varying time- and length-scales within high resolution 5D
datasets through hyperspectral laser scanning microscopy. The major challenge in such complex
imaging experiments is to cleanly separate multiple fluorescent labels with overlapping spectra
from one another and background autofluorescence, without perturbing the sample with high
levels of light. Thus, there is a requirement for efficient and robust analysis tools capable of
quantitatively separating these signals.
In response, we have combined multispectral fluorescence microscopy with hyperspectral
phasors and linear unmixing to create Hybrid Unmixing (HyU). Here, we demonstrate its
capabilities in the dynamic imaging of multiple fluorescent labels in live, developing zebrafish
embryos. HyU is more sensitive to low light levels of fluorescence compared to conventional
linear unmixing approaches, permitting better multiplexed volumetric imaging over time, with
less bleaching. HyU can also simultaneously image both bright exogenous and dim endogenous
labels because of its high dynamic range. This allows studies of cellular behaviors, tagged
components, and cell metabolism within the same specimen, offering a powerful window into
the orchestrated complexity of biological systems.
4.3 Introduction
In recent years, high-content imaging approaches have been refined for decoding the complex and dynamic orchestration of biological processes [75,143,144]. Fluorescence, with its high contrast, high specificity and multiple parameters, has become the reference technique for imaging [145,146]. Continuous improvements in fluorescence microscopes [23,71,147,148] and the ever-expanding palette of genetically-encoded and synthesized fluorophores have enabled the labeling and observation of a large number of molecular species [149,150]. This offers the potential
of using multiplexed imaging to follow multiple labels simultaneously in the same specimen, but
the technologies for this have fallen short of their fully imagined capabilities. Standard
fluorescence microscopes collect multiple images sequentially, employing different excitation
and detection bandpass filters for each label. Recently developed techniques allow for massive
multiplexing by utilizing sequential labeling of fixed samples but are not suitable for in vivo
imaging [151,152]. Unfortunately, these approaches are ill-suited to separating overlapping fluorescence emission signals, and the narrow bandpass optical filters used to increase selectivity decrease the photon efficiency of the imaging (Figs. S4.1, S4.2). These limitations have restricted the number of imaged fluorophores per sample (usually 3-4) and risk exposing the specimen to damaging levels of excitation light. This has been a significant obstacle for dynamic imaging and has prevented in vivo imaging from reaching its full potential.
Hyperspectral Fluorescent Imaging (HFI) potentially overcomes the limitations of overlapping
emissions by expanding signal detection into the spectral domain [153]. HFI captures a spectral profile from each pixel, resulting in a hyperspectral cube (x, y, wavelength) of data that can be processed to deduce the labels present in that pixel. Linear unmixing (LU) has been widely utilized to analyze HFI data, and has performed well with bright samples emitting strong signals from fully-characterized, extrinsic fluorophores such as fluorescent proteins and dyes [17,73,154].
However, in vivo fluorescence microscopy is almost always limited in the number of photons
collected per pixel (due to the expression levels, the bio-physical fluorescent properties, and the
sensitivity of the detection system), which reduces the quality of the spectra acquired.
A further challenge which affects quality of spectra is the presence of multiple forms of noise
in the imaging of the sample. Two examples of instrumental noise are photon noise and read
noise [30–33]. Photon noise, also known as Poisson noise, is an inherent property related to the statistical variation of photon emission from a source and of its detection. Poisson noise is inevitable when imaging fluorescent dyes and is more pronounced in the low-photon regime. It poses challenges especially in live and time-lapse imaging, where the power of the exciting laser is reduced to avoid photo-damage to the sample, decreasing the amount of fluorescent signal. Read noise arises from voltage fluctuations in microscopes operating in analog mode, during the conversion from photons to digital intensity levels, and commonly affects fluorescence imaging
acquisition. Most biological samples used for in vivo microscopy are labelled using extrinsic
signals from fluorescent proteins or probes but often include intrinsic signals (autofluorescence).
Autofluorescence contributes photons that are undesired, difficult to identify and to account for
in LU. The cumulative presence of noise inevitably leads to a degradation of acquired spectra
during imaging. As a result, the spectral separation by LU is often compromised, and the Signal
to Noise ratio (SNR) of the final unmixing is often reduced by the weakest of the signals
detected [154]. Increasing the amount of laser excitation can partially overcome these challenges,
but the higher energy deposition in the sample causes photo-bleaching and -damage, affecting
both the integrity of the live sample and the duration of the observation. Traditional unmixing
strategies such as LU are computationally demanding, requiring long analyses and often slowing
the experiment. Combined, these compromises have reduced both the overall multiplexing
capability and the adoption of HFI multiplexing technologies.
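The low-photon regime described above can be simulated directly. The sketch below draws a Poisson-noise realization of a known emission spectrum at a chosen photon budget; the function name is illustrative, and read noise plus autofluorescence are deliberately omitted.

```python
import numpy as np

def simulate_photon_noise(spectrum, photons_per_spectrum, rng=None):
    """Draw a Poisson-noised realization of an emission spectrum.

    `spectrum` gives the true emission shape; it is rescaled so its
    expected total equals `photons_per_spectrum`, mimicking the gentle
    excitation used in live imaging. Each wavelength bin then receives an
    independent Poisson draw, so at low budgets most bins are empty and
    the acquired spectral shape is badly corrupted.
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = np.asarray(spectrum, dtype=float)
    expected = shape / shape.sum() * photons_per_spectrum
    return rng.poisson(expected)
```

At a budget of ~5 photons per spectrum, most of the 32 wavelength bins contain zero counts, which is why per-pixel unmixing degrades and the bin-level averaging described in the next section helps.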
We have developed Hybrid Unmixing (HyU) as an answer to the challenges that have limited
the wider acceptance of HFI for in vivo imaging. HyU employs the phasor approach [91] merged with traditional unmixing algorithms to untangle the fluorescent signals more rapidly and more accurately from multiple exogenous and endogenous labels. The phasor approach [91], a popular dimensionality-reduction approach for the analysis of both fluorescence lifetime and spectral images [93,107,155–159], provides key advantages to HyU, including spectral compression,
denoising, and computational reduction. HyU utilizes phasor processing as an encoder to
aggregate similar spectra and applies unmixing algorithms, such as LU, on them to provide
unsupervised analysis of the HFI data, removing user subjectivity. Our results show that HyU
offers three key advantages: (1) improved unmixing over conventional LU, especially for low
intensity images, down to 5 photons per spectrum; (2) simplified identification of independent
spectral components; (3) dramatically faster processing of large datasets, overcoming the typical
unmixing bottleneck for in vivo fluorescence microscopy.
4.4 Results
4.4.1 Architecture of HyU in comparison to traditional linear unmixing
HyU combines the best features of hyperspectral phasor analysis and linear unmixing (LU),
resulting in faster computation speeds and more reliable results, especially at low light levels.
Phasor approaches reduce the computational load because they are compressive, reducing the 32-channel spectrum of each HFI pixel to a position on a 2D histogram representing the real and imaginary Fourier components of the spectrum (Fig. 4.1A,B). Different 32-channel spectra are represented as different positions on the 2D phasor plot, and mixtures of two spectra will be rendered at positions along the line connecting the pure spectra. Because the spectral content of
an entire 2D or 3D image set is rendered on a single phasor plot, there is a dramatic data
compression - from a spectrum for each voxel in an image set (up to or even beyond Gigavoxels)
to a histogram value on the phasor plot (Megapixels). In addition, because each “bin” on the
phasor plot histogram corresponds to multiple voxels with highly similar spectral profiles, the
binning itself represents spectral averaging, which reduces the Poisson and instrumental noise
(Fig. 4.1C-E). Poisson noise in the collected light is unavoidable in HFI unless the excitation is turned up so high that the statistics of collected fluorescence yield hundreds or thousands of photons per spectral bin. The clear separation between the spectral phasor plot and its referenced imaging data permits denoising algorithms to be applied to the phasor plot with minimal degradation of the image resolution. LU or other unmixing approaches can be applied to the
spectra on the phasor plot, offering a dramatic reduction in computational burden for large
image data sets (Fig. 4.1D). To understand this saving, consider the conventional approach of LU applied to image data at the voxel level (Fig. 4.1A,F). A timelapse volumetric dataset of 512x768x17 (x, y, z) voxels over 6 timepoints (Sup. Table 4.1) would require ~40 million operations; HyU requires only ~18 thousand operations to unmix the bins on the phasor plot, representing a more than thousand-fold saving (Fig. 4.1F,G).
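The encoder idea behind this saving can be sketched as follows. This is a simplified illustration rather than the HySP/HyU implementation: the function name, binning resolution, and intensity rescaling are assumptions, and the full method also applies median filtering and thresholding on the phasor histogram.

```python
import numpy as np

def hyu_unmix(pixels, endmembers, harmonic=2, nbins=128):
    """Minimal sketch of the HyU idea: unmix phasor bins, not pixels.

    `pixels`: (N, C) per-pixel spectra over C wavelength channels.
    `endmembers`: (M, C) reference emission spectra.
    Returns (N, M) per-pixel abundances. The phasor histogram acts as an
    encoder: pixels sharing a bin get that bin's averaged spectrum
    (denoising) and a single least-squares solve (speed), so the number
    of unmixing operations scales with occupied bins, not pixels.
    """
    pixels = np.asarray(pixels, dtype=float)
    n, c = pixels.shape
    k = np.arange(c)
    tot = pixels.sum(axis=1, keepdims=True) + 1e-12
    g = (pixels * np.cos(2 * np.pi * harmonic * k / c)).sum(1, keepdims=True) / tot
    s = (pixels * np.sin(2 * np.pi * harmonic * k / c)).sum(1, keepdims=True) / tot
    # Quantize (g, s) in [-1, 1] to a 2D histogram bin index per pixel
    gi = np.clip(((g + 1) / 2 * nbins).astype(int), 0, nbins - 1)
    si = np.clip(((s + 1) / 2 * nbins).astype(int), 0, nbins - 1)
    bin_id = (gi * nbins + si).ravel()

    abundances = np.zeros((n, endmembers.shape[0]))
    A = np.asarray(endmembers, dtype=float).T           # (C, M) mixing matrix
    for b in np.unique(bin_id):                         # one solve per occupied bin
        members = bin_id == b
        mean_spectrum = pixels[members].mean(axis=0)    # spectral averaging = denoising
        coeffs, *_ = np.linalg.lstsq(A, mean_spectrum, rcond=None)
        # Scale the shared bin solution by each pixel's total intensity
        abundances[members] = coeffs * tot[members] / mean_spectrum.sum()
    return abundances
```

Because the loop runs once per occupied bin, the number of least-squares solves scales with the histogram occupancy (~10^4) instead of the voxel count (~10^7), which is the thousand-fold saving described above.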
Figure 4.1. Schematic illustrating how Hybrid Unmixing (HyU) enhances analysis of multiplexed
hyperspectral fluorescent signals in vivo.
The HyU workflow is a new computationally efficient process for untangling fluorescent signals more rapidly and accurately from multiple exogenous and endogenous labels within a biological sample, through a combination of standard linear unmixing and the Fourier transform. (A) A multicolor fluorescent biological sample (here a zebrafish embryo) is imaged in hyperspectral mode, collecting the fluorescence spectrum of each voxel in the specimen. (B) HyU represents spectral data as a phasor plot, a 2D histogram of the real and imaginary Fourier components (at a single harmonic). (C) Spectral denoising filters reduce the Poisson and instrumental noise on the phasor histogram, providing the first signal improvement. (D) The phasor acts as an encoder, where each histogram bin corresponds to a number n of pixels, each with a relatively similar spectrum (E). Summing these spectra effectively averages the spectra for that phasor position. This denoising results in a cleaner average spectrum for this set of pixels, which is ideally suited for analytical decomposition through unmixing algorithms (F). (G) Unmixing results in images that are separated into spectral components. Here, linear unmixing (LU) is used for unmixing, but HyU is compatible with any unmixing algorithm.
Note that HyU offers a major reduction in data size and complexity of the LU (or any other unmixing) computation, because the calculation is applied to the ~10^4 histogram bins (D) rather than the ~10^7 voxels in the specimen (A). This reduces the number of calculations required for LU dramatically.
4.4.2 Quantitative assessment of HyU compared to LU
To quantitatively assess the relative performance of LU and HyU, we analyzed them on
synthetic hyperspectral fluorescent datasets, created by computationally modelling the
biophysics of fluorescence spectral emission and microscope performance (Fig. 4.2 A, B, Figs.
S4.3-S4.5). We used this synthetic dataset to evaluate LU and HyU algorithm performance
quantitatively by using metrics such as Mean Square Error (MSE) and unmixing residual (see Fig.
S4.6, Methods; for both metrics, a lower value indicates better performance). In addition to the
computational efficiency mentioned above, HyU analysis shows better ability to capture spatial
features over a wide dynamic range of intensities, when compared with standard LU, in large part
due to the denoising created by processing in phasor space (Fig. 4.2 A, B). The improved accuracy
is demonstrated by a lower MSE, in comparing the results of LU and HyU to the image ground
truth. The absolute MSE for HyU is consistently up to 2x lower than that of LU, especially at low
and ultra-low fluorescence levels (Fig. 4.2C). MSE can be further decreased by the use of
denoising filters on the phasor plot, resulting in superiority of HyU relative to LU for HFI at low
(5-20 photons/spectrum) and ultralow (2-5 photons/spectrum) levels (Fig. 4.2D). To better
characterize the performance in the experimental data without ground truth, we also define the
unmixing residual as the difference between the original multichannel hyperspectral images and
their unmixed results. Residuals provide a measure of how closely the unmixed results
reconstruct the original signal (Fig. S4.3, Methods). Unmixing residuals are inversely proportional
to the performance of the algorithm, with low residuals indicating high similarity between the
unmixed and the original signals. Analysis of unmixing residuals in the synthetic data highlights
an improved interpretation of the spectral information in HyU with an average unmixing residual
reduction of 21% compared to the standard (Fig. S4.5C). The reduction in both MSE and average
unmixing residual for synthetic data demonstrates the superior performance of HyU, and
provides a baseline comparison when demonstrating performance improvements for
experimental data.
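The two metrics used above can be sketched as follows; the exact normalization of the unmixing residual is defined in the Methods, so treat the per-pixel fraction below as an illustrative variant, and both function names are assumptions.

```python
import numpy as np

def mse(unmixed, ground_truth):
    """Mean squared error against a known ground truth (synthetic data only)."""
    a = np.asarray(unmixed, dtype=float)
    b = np.asarray(ground_truth, dtype=float)
    return float(np.mean((a - b) ** 2))

def relative_unmixing_residual(original, endmembers, abundances):
    """Residual usable without ground truth.

    Re-mixes the unmixed result (abundances @ endmembers) and compares it
    to the original multichannel data: the unassigned signal per pixel,
    expressed as a fraction of that pixel's total intensity. Lower values
    mean the unmixed result reconstructs the original signal more closely.
    """
    original = np.asarray(original, dtype=float)                      # (N, C)
    reconstructed = np.asarray(abundances) @ np.asarray(endmembers)   # (N, C)
    num = np.abs(original - reconstructed).sum(axis=1)
    den = original.sum(axis=1) + 1e-12
    return num / den
```

A perfect reconstruction yields a residual of zero for every pixel, while unassigned or misassigned signal inflates the fraction, which is why the residual serves as the quality proxy for experimental data lacking ground truth.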
We support the enhanced performance of HyU with analysis of experimental data, which
reveals comparatively lower unmixing residuals and a higher dynamic range as compared to LU.
Data was acquired from a quadra-transgenic zebrafish embryo Tg(ubiq:Lifeact-mRuby);Gt(cltca-
citrine);Tg(ubiq:lyn-tdTomato);Tg(fli1:mKO2), labelling actin, clathrin, plasma membrane, and
pan-endothelial cells, respectively (Figs. 4.2E-L, 4.3, S4.7-S4.9, Supplementary Movie 4.1). HyU
unmixing of the data shows minimal signal cross-talk between channels while LU presents
noticeable bleed-through (Fig. 4.2M-P). Consistent with the synthetic data, we utilize the unmixing
residual as the main indicator for quality of the analysis in experimental data, owing to the
absence of a ground truth. The residual images (Fig. 4.2F, G) depict a striking difference in
performance between HyU and LU. The average relative residual of HyU denotes a 7-fold
improvement compared to LU (Fig. 4.2H) in disentangling the fluorescent spectra. We visualize
the unmixed channels independently (Fig. 4.2, I to L), zooming in on details (Fig. 4.2 I to P) to
highlight areas affected by bleed-through and which are difficult to unmix. HyU, with contrast 2-
fold higher than standard LU, reduces bleed-through effects and produces images with sharper
spatial features, leading to better interpretation of the experimental data (Fig. 4.2 K, L, fig. S4.7,
Methods).
Applying HyU to another HFI dataset further highlights HyU’s improvements in noise
reduction and reconstitution of spatial features for low-photon unmixing. (Figs. 4.3, S4.8). In the
zoomed-in image of a single slice of the embryo skin surface, acquired in the trunk region, the
HyU image correctly does not display pan-endothelial (magenta) signal in the periderm, an area
which should be devoid of endothelial cells and mKO2 signal (Fig. 4.3C). In contrast, the result
from LU shows visually distinctive pan-endothelial signal throughout the tissue plane (Fig. 4.3D).
This incorrect estimation of the relative contribution of mKO2 fluorescence for LU is possibly due
to the presence of noise, corrupting the spectral profiles. This is further delineated in the
intensity profiles of the mKO2 signal between HyU and LU with much higher individual peaks
from noise demonstrated for LU (Fig. 4.3G, lower left). Intensity profiles for both magnified cross-
sections of the volume (Fig. 4.3C-F) provide a striking visualization of the improvements of HyU.
The line intensity profiles in HyU present reduced noise and more closely represent the expected distribution of signals (Fig. 4.3G,H). The visible micro-patterns of actin on the membrane of the periderm suggest that the improvements quantified with synthetic data carry over to live samples' signals and to the geometrical patterns of microridges [160]. By contrast, the results from LU are characterized by noise corruption and misplaced signals, with high-frequency intensity variations that mismatch both the labeling and the biological patterns.
Figure 4.2: Hybrid Unmixing outperforms standard Linear Unmixing in both synthetic and live spectral
fluorescence imaging.
Improvements in the quality of unmixing results of Hybrid Unmixing (HyU) over Linear Unmixing (LU) are demonstrated using both synthetic and experimental data. (A) HyU and (B) LU unmixing results are shown
for a hyperspectral fluorescence simulation that was generated from four fluorescent signatures
(emission spectra, Fig S4.5E). (C) Absolute Mean Squared Error (MSE) shows that HyU offers a consistent
reduction in error across a broad range of photons per spectra (#photons/independent spectral
components, here resulting from 4 reference spectra combined). (D) The performance differences in the
MSE of HyU relative to LU persist when applying multiple phasor denoising filters (0 to 5 median filters).
The analysis of this synthetic data shows the consistent improvement of HyU at low photon counts with
over a 2-fold improvement when 5 denoising filters are applied at a signal level of 16 photons per
spectrum. Shaded regions for line plots denote the 95% confidence interval around the mean. (E)
Unmixing of experimental data from a 4-color zebrafish shows increased contrast for HyU (left) compared
to LU (right). Scale bar = 50 µm. (F, G) The increased accuracy is revealed by residual images of HyU and
LU, showing the spatial distribution of unassigned signals after the analysis of data in E. The results show
consistently lower residual values for HyU (F) compared to LU (G). (H) Box plots of the residuals in F and G present values of 11% for HyU compared to 77% for LU, *(p < 10^-10), with n=1.05e6 pixels. Box plot elements are defined in Methods. (I-L) Enlarged rendering of HyU results (E, white box) clearly shows low levels of bleed-through between labels. (M-P) Similar enlargements of LU results show noticeably worse performance. Note that regions with bright signals (membrane, J and N, white arrow) bleed through into other channels (M and O). Scale bar: 20 µm. The tetra-labeled specimen used here was Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato;ubiq:Lifeact-mRuby;fli1:mKO2).
Figure 4.3: Hybrid Unmixing enhances unmixing for low-signal in vivo multiplexing and achieves
deeper volumetric imaging.
(A) Hybrid Unmixing (HyU) volumetric renderings compared to those of (B) Linear Unmixing (LU) for the
trunk portion in a 4-color zebrafish demonstrate an increased contrast and reduced residual in HyU
results, especially over deeper parts of the sample. The 4 labels in the fish are Gt(cltca-citrine);Tg(ubiq:lyn-
tdTomato;ubiq:Lifeact-mRuby;fli1:mKO2), respectively labeling clathrin-coated pits (green), membrane
(yellow), actin (cyan) and endothelial (magenta). (C,E) HyU results have increased spatial resolution and
less bleed-through compared to those of (D,F) LU. Scale bar: 20 µm. When observing the zoomed-in
visualization of the surface region of the sample, the yellow signal distinctly marks the membrane and the
cyan signal clearly labels the actin in (C) HyU. The same signals are not distinct in (D) LU because of multiple
incorrectly assigned magenta pixels that bleed through, compromising the true signal in other channels.
Similarly, for the zoomed-in visualization of the perivascular region of the embryo, in (E) HyU, the yellow
and magenta signals clearly distinguish the membrane and vasculature while in (F) LU, the results are
corrupted by greater noise. (G,H) Intensity line plots of each of the four resulting signals for HyU (solid) and
LU (dashed) demonstrate the improved profiles with greatly reduced noise peaks in HyU as compared to
LU. Intensities are scaled by the maximum of each unmixed channel. DL: digital level. (I) Box plots of the
relative residual values as a function of z depth for HyU and LU highlight the improvements in the unmixing
results. HyU has an unmixing residual of 6.6% ± 5.3% compared to LU’s 58% ± 17%. The average amount
of residual is 9-fold lower in HyU with narrower variance of residual. n = 5.2e5 pixels for each z slice. Box
plot elements are defined in Methods.
4.4.3 Increased accuracy and sensitivity
HyU is more accurate and produces more reliable unmixing results across the depth of the sample,
with greatly reduced unmixing residuals. The average residual for HyU is 9-fold lower than that
of LU, with a 3-fold narrower variance (Figs. 4.3I, S4.8). This reduction in the residual is consistent
with increasing z-depth where HyU unmixing results stably maintain both lower residuals and
variance on average. These reduced residuals correspond both to a mathematically more precise
and more uniform decomposition of signals as illustrated by the distribution of residuals versus
photons (Figs. S4.8E,F, S4.14).
We utilized HyU’s increased sensitivity to overcome common challenges of multiplexed
imaging, such as poor photon yield and spectral cross-talk, and were able to visualize dynamics in
a developing zebrafish embryo. We used a triple-transgenic zebrafish embryo with labeled pan-
endothelial cells, vasculature, and clathrin-coated pits (Tg(fli1:mKO2); Tg(kdrl:mCherry); Gt(cltca-
Citrine)). Multiplexing these spectrally close fluorescent proteins is enabled by HyU’s increased
sensitivity at lower photon counts. The increased performance at lower SNR allowed us to
maintain high quality results (Fig. 4.4, Supplementary Movie 4.2) while performing faster
acquisitions and reducing photon-damage through lower excitation laser power and pixel dwell
time. Decreased experimental requirements allow for tiling of larger volumes, extending the
field-of-view while still providing enough time resolution for developmental events, even with a
high number of multiplexed fluorescent signals. The time-lapses visualize the formation of
ventral vasculo-endothelial protrusions acquired in parallel to the development of clathrin and
kdrl. HyU enables comparative quantifications of spatiotemporal features, allowing for the
determination of volumetric changes over lengthy timelapses, in this case, over the course of 300
minutes (Fig. 4.4B)161,162.
Figure 4.4: HyU reveals the dynamics of developing vasculature by enabling multiplexed volumetric
time-lapse.
Hybrid Unmixing (HyU) enables multiplexed volumetric time-lapse in vivo imaging of a developing embryo
by presenting consistent, usable unmixing results from multidimensional, low signal-to-noise ratio data.
Here we present this (A) HyU rendering for the trunk portion of a 3-color zebrafish Gt(cltca-
citrine);Tg(kdrl:mCherry;fli1:mKO2) at timepoint 0. (B) An example representing the time evolution of the
segmented volumes of mCherry (vasculature, magenta), mKO2 (endothelial-lymphatics, yellow), and citrine
(clathrin-coated pits, cyan) demonstrates the suitability of HyU unmixed results for performing
quantitative analysis and segmentation. Box and line plots were generated using ImarisVantage as
described in Methods. (C1-4) Time lapse imaging of the formation of the vasculature over 300 mins
(zoomed-in rendering of the box in A) at 0, 100, 200, 300 minutes shows consistency in unmixed signals
across multiple time points. This supports the quality of HyU’s unmixing results at low light levels, which
then permits multiplexing to be used in the observation of development of a live embryo.
4.4.4 Increased sensitivity further extends analysis to intrinsic signals
HyU provides the ability to combine the information from intrinsic and extrinsic signals during
live imaging of samples, at both single (Fig. 4.5) and multiple time points (Fig. 4.6). The graphical
representation of phasors allows identification of unexpected intrinsic fluorescence signatures in
a quadra-transgenic zebrafish embryo Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato;ubiq:Lifeact-
mRuby;fli1:mKO2), imaged with single-photon excitation at 488 and 561 nm (Fig. 4.5A-D). The
elongated distribution on the phasor (Fig. 4.5C) highlights the presence of an additional,
unexpected spectral signature, related to strong sample autofluorescence (Fig. 4.5D blue). HyU
analysis of the sample, inclusive of this additional signal, provides separation of the contributions
of 5 different fluorescent spectra with a residual of 3.9% ± 0.3%. HyU allows for reduced energy load and
tiled imaging of the entire embryo without perturbing its development or depleting its
fluorescence signal (Fig. 4.5A). The higher speed, lower power imaging allows for subsequent re-
imaging of the same sample, as we report in the zoomed high-resolution acquisitions of the head
section (Fig. 4.5B,E).
With the ability to unmix low photon signals, HyU enables imaging and decoding of intrinsic
signals, which are inherently low light. Two photon lasers are ideal for exciting and imaging blue-
shifted intrinsic fluorescence from samples13,32,103,163–165. Here, the same quadra-transgenic
sample is imaged using 740 nm excitation to access both intrinsic and extrinsic signals (Fig. 4.5E-G,
Supplementary Note 4.2). HyU enables unmixing of at least 9 intrinsic and transgenic fluorescent signals
(Fig. 4.5), recovering fluorescent intensities from labels illuminated at a sub-optimal excitation
wavelength (Fig. 4.5E). The spectra for intrinsic fluorescence were obtained from in vitro
measurements and values reported in the literature (Methods). For this sample the intrinsic signals
arise from events mainly related to metabolic activity (NADH and retinoids)106,108–111, tissue
structure (elastin)166, and illumination (laser reflection) (Fig. 4.5E). These results confirm our
conclusion that HyU is a powerful tool for allowing the imaging and analysis of endogenous labels.
Finally, we exploited the HyU capabilities to multiplex volumetric timelapse of extrinsic and
intrinsic signals by imaging the tail region of the same quadra-transgenic zebrafish embryo. We
excited extrinsic labels at 488/561 nm and the intrinsic signals with two-photon excitation at 740 nm, collecting
6 tiled volumes over 125 mins (Figs. 4.6, S4.9-S4.11). HyU unmixing in this sample allows for
distinction of 9 signals, separating their contributions with sufficiently low requirements to allow
repeated imaging of notoriously low SNR intrinsic fluorescence.
Figure 4.5: HyU enables identification and unmixing of low photon intrinsic signals in conjunction
with extrinsic signals.
Application of HyU to zebrafish embryos containing a large number of fluorescent signals provides a frame
of reference not only for the improved unmixing of extrinsic signals, but also for its increased sensitivity,
which enables identification and unmixing of intrinsic signals that inherently exist in a low-photon
environment. (A) HyU results of a whole zebrafish embryo provide an overview of the distributions of
unmixed intrinsic and extrinsic signals. (B) HyU results of the head region (box in A) reveal the simplicity
of identifying an unknown autofluorescent signal among multiple extrinsic signals using the phasor
method for a quadra-transgenic zebrafish Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato;ubiq:Lifeact-
mRuby;fli1:mKO2) imaged over multiple tiles. Scale bar: 80 µm. (C) The input spectra required to perform
the unmixing are easily identified on (D) the phasor plot when visualizing each spectrum as a spatial
location. Phasors offer a simplified identification and selection of independent and unexpected spectral
components in the encoded HyU approach. Intrinsic signals are notoriously low in emitted photons,
leading to an inability to unmix using traditional unmixing algorithms. (E) The zoomed-in acquisition of
the head region of the embryo (box in A) displays HyU’s unmixing results of many intrinsic and extrinsic
signals when in an environment of very low photon output, a previously highly difficult experimental
condition to unmix. Scale bar: 70 µm. (F) The phasor plot representation provides easily identifiable
locations for the eight independent fluorescent fingerprints. (G) The spectra corresponding to each of the eight
independent spectral components are also provided as a reference. Colors in F match renderings in E and
G: NADH bound (red), NADH free (yellow), retinoid (magenta), retinoic acid (cyan), reflection (green),
elastin (purple) and extrinsic signals: mKO2 (blue), and mRuby (orange). All signals were excited with a
(A-D) single photon laser at both 488 nm and 561 nm or a (E-G) two photon laser at 740 nm.
Figure 4.6: HyU pushes the upper limits of live multiplexed volumetric timelapse imaging of intrinsic
and extrinsic signals.
An example demonstrates HyU’s increased sensitivity and ability to provide a simple solution for the
challenging task of imaging timelapse data at 6 time points (125 mins) for both intrinsic signals and
extrinsic signals of a quadra-transgenic zebrafish Gt(cltca-Citrine);Tg(ubiq:lyn-tdTomato;ubiq:Lifeact-
mRuby;fli1:mKO2). (A) – (F) Volumetric renderings of HyU results for time points acquired at 25 min
intervals reveal the high-contrast and highly-multiplexed labels of NADH bound (red), NADH free (yellow),
retinoid (magenta), retinoic acid (cyan), mKO2 (green), and autofluorescence from blood cells (blue) when
excited at 740 nm. Further extrinsic signals for mKO2 (yellow), tdTomato (magenta), mRuby (cyan), Citrine
(green), and blood cell autofluorescence (blue) are also readily unmixed using HyU when exciting the
sample at 488/561 nm. HyU provides the capacity to simultaneously multiplex 9 signals in a live sample
over long periods of time, a previously unexplored task. Scale bar: 50 µm.
4.5 Discussion
4.5.1 Advantages of HyU over LU
Our results reveal the advantages of Hybrid Unmixing (HyU) over more conventional Linear
Unmixing (LU) in performing complex multiplexing experiments. HyU overcomes the significant
challenges of separating multiple fluorescent and autofluorescent labels with overlapping
spectra while minimally perturbing the sample with excitation light.
The chief advantage of HyU is its multiplexing capability when imaging in the presence of
biological and instrumental noise, especially at low signal levels. HyU's increased sensitivity
improves multiplexing in photon limited applications (Fig. 4.2F-L), in deeper volumetric
acquisitions (Fig. 4.3I) and in signal starved imaging of autofluorescence (Fig. 4.5E, Fig. 4.6). Our
simulation results (Fig. 4.2) demonstrate that HyU improves unmixing of spatially and spectrally
overlapping fluorophores excited simultaneously. The increased robustness at low photon
imaging conditions reduces the imaging requirements for excitation levels and detector
integration time, allowing for imaging with reduced photo-toxicity. Live imaging on multi-color
samples performed at high sampling frequency enables improved tiling to increase the field-of-
view (Fig. 4.3, 4.4) while maximizing the usage of the finite fluorescent signals over time. Two-
photon imaging of intrinsic and extrinsic signals suggests the ability of HyU to multiplex signals
with large dynamic range differences (Fig. 4.5) extending multiplexed volumetric imaging into the
time dimension (Fig. 4.6). Although improved, images with particularly low signal still present
corruption (Fig. S4.4), setting a reasonable range of utilization above 8 photons/spectrum.
Simplicity of use and versatility are other key advantages of HyU, inherited from both the
phasor approach97 and traditional unmixing algorithms. Phasors here operate as a spectral
encoder, reducing computational load and integrating similar spectral signatures in histogram
bins of the phasor plot. This representation simplifies identification of independent spectral
signatures (Fig. 4.5, Supplementary Note 4.1) through both phasor plot selection and phasor
residual mapping (Fig. S4.11), accounting for unexpected intrinsic signals (Figs. 4.5, 4.6, S4.12,
Supplementary Note 4.2) in a semi-automated manner, while still allowing fully-automated
analysis by means of spectral libraries.
The simplicity of this approach is especially helpful in live imaging where identifying
independent spectral components remains an open challenge, owing to the presence of intrinsic
signals (Fig. S4.12, Supplementary Note 4.1). High-SNR reference spectra can be derived from
other experimental data or identified directly on the phasor. Selection of portions on the phasor
plot allows for visualization of the corresponding spectra in the wavelength domain (Fig
4.5C,D,F,G). This intuitive versatility allows for identification of both the number of unexpected
signatures and their spectra, a task previously difficult to perform due to noise and lack of global
visualization tools. In single photon imaging (Fig. 4.5A-D), HyU phasor allowed identification of a
fifth distinct spectral component arising from general autofluorescent background, thereby
improving the unmixed results. In two photon imaging, HyU enabled identification and
multiplexing of 8 highly overlapping signals possessing a wide dynamic range of intensities,
between intrinsic and extrinsic markers (Fig. 4.5F,G). Combination of single and two photon
imaging increased the number of multiplexed fluorophores to 9 (Fig. 4.6), considering that some of
the extrinsic labels are also excited under two-photon illumination. Multiplexing of signals may be further
improved by implementing HyU on fluorescent dyes.
HyU performs better than standard algorithms both in the presence and absence of phasor
noise reduction filters97. Compared with LU, the unmixing enhancement when such filters97 are
applied is demonstrated by a decrease of the MSE of up to 21% (Fig. 4.2C), with a reduction of the
average amount of residuals by 7-fold. Even in the absence of phasor denoising filters, HyU
performs up to 7.3% better than the standard (Fig. 4.2D) based on Mean Squared Error of
synthetic data unmixing. This base improvement is due to the averaging of similarly shaped
spectra in each phasor histogram bin, which reduces the statistical variability within the spectra
used for the unmixing calculations (Fig. 4.1E). This averaging strategy works well for general
fluorescence spectra owing to their broad and mostly unique spectral shape.
In the absence of noise, for example in the ground truth simulations, LU produces an MSE 6-
fold lower than HyU (Figs. S4.5B,C, S4.6G). In these noiseless conditions, the binning and
averaging of spectra in the phasor histogram, without denoising, provides statistically
indistinguishable values of error with respect to LU, suggesting results of similar quality.
4.5.2 Effects of applying the phasor encoder for HyU
HyU can interface with different unmixing algorithms, adapting to existing experimental
pipelines. We successfully tested hybridization with iterative approaches such as non-negative
matrix factorization167 and fully constrained and non-negative least-squares168 (Methods). Speed
tests with iterative fitting unmixing algorithms demonstrate a speed increase of up to 500-fold
when the HyU compressive strategy is applied (Fig. S4.13, Supplementary Note 4.3). Due to the
initial computational overhead for encoding spectra in phasors, there is a 2-fold speed reduction
for HyU in comparison to standard LU. However, this may be improved with further
optimizations of the HyU implementation.
One restriction of HyU derives from the mathematics of linear unmixing, where linear
equations representing the unmixed channels need to be solved for the unknown contributions
of each analyzed fluorophore. To obtain a unique solution from these equations and to avoid an
underdetermined equation system, the maximum number of spectra for unmixing may not
exceed the number of channels acquired169, generally 32 for commercial microscopes. This
number could be increased; however, due to the broad and photon-starved nature of
fluorescence spectra, acquisition of a larger number of channels could negatively affect the
sample, imaging time and intensities. Depending on the number of labels in the specimen of
interest, extending the number of labels to simultaneously unmix beyond 32 will likely require
spectral resolution upsampling strategies.
HyU's improvement is related to the presence of various types of noise in microscopy images,
such as Gaussian, Poisson, and digitization noise, as well as unidentified sources of spectral signatures (Fig.
S4.5B,C, S4.6G). In the multiplexing of fluorescent signals, HyU offers improved performance,
quality- and speed-wise in the low-signal regime. HyU is poised to be used in the context of in
vivo imaging, harvesting information from samples labeled at endogenous levels.
4.6 Methods
4.6.1 Zebrafish lines
Adult fish were raised and maintained as described66 in strict accordance with the
recommendations in the Guide for the Care and Use of Laboratory Animals by the University of
Southern California, where the protocol was approved by the Institutional Animal Care and Use
Committee (IACUC) (Permit Number: 12007 USC). Upon crossing appropriate adult lines, the
embryos obtained were raised in Egg Water (60 μg/ml of Instant Ocean and 75 μg/ml of CaSO4
in Milli-Q water) at 28.5 °C with the addition of 0.003% (w/v) 1-phenyl-2-thiourea (PTU) around 18 hpf
to reduce pigment formation.
The transgenic Gt(cltca-Citrine)ct116a line is a gene trap of clathrin, heavy polypeptide a, labeling
transport vesicles with heightened expression in the vasculature.133 Tg(kdrl:mCherry) labels the
vasculature and was a kind gift from Ching-Ling Lien (Children’s Hospital Los Angeles).
Tg(fli1:mKO2)ct641ca labels pan-endothelial cells in both blood vessels and lymphatics, as previously
reported.170 Tg(ubiq:lyn-tdTomato) labels all cell membranes by expression of lyn-tdTomato from
the ubiquitin promoter, while Tg(ubiq:Lifeact-mRuby) labels actin by expression of LifeAct-
mRuby fusion from the ubiquitin promoter.
The mpv17a9/a9;mitfaw2/w2 (casper) line was purchased from the Zebrafish International Resource
Center (ZIRC), and the csf1rj4e1/j4e1 (panther) line171 was a kind gift from David Parichy (Univ.
Virginia). We crossed casper with panther to produce triple-heterozygote
mpv17a9/+;mitfaw2/+;csf1rj4e1/+ F1 generation fish, which were subsequently in-crossed to
produce an F2 generation with 27 combinations of mutational states of these genes. Since the csf1rj4e1
phenotype was not clear in F2 adults with the casper phenotype, we outcrossed these fish with
panther fish to determine the zygosity of the csf1rj4e1 mutation based on the frequency of larvae with
xanthophores (heterozygotes and homozygotes produced 50%- and 0%-fractions of xanthophore-
positive larvae, respectively) by fluorescence microscopy. The casper;csf1rj4e1/j4e1 line is viable
and fertile; we outcrossed either the casper;csf1rj4e1/j4e1 line or the casper;csf1rj4e1/+ line with
other fluorescent transgenic lines over several generations to obtain fish harboring multiple
transgenes on the casper background, either in the presence or absence of xanthophores.
4.6.2 Sample preparation
Transgenic zebrafish lines were intercrossed over multiple generations to obtain embryos
with multiple combinations of the transgenes. All lines were maintained as heterozygous for
each transgene. Embryos were screened using a fluorescence stereo microscope (Axio Zoom, Carl
Zeiss) for expression patterns of individual fluorescence proteins before imaging experiments. A
confocal microscope (LSM 780, Carl Zeiss) was used to isolate Tg(ubiq:Lifeact-mRuby) lines from
Tg(ubiq:lyn-tdTomato) lines by distinguishing spatially- and spectrally-overlapping signals.
For in vivo imaging, 5–6 zebrafish embryos at 18 to 72 hpf were immobilized and placed into
1% UltraPure low-melting-point agarose (catalog no. 16520-050, Invitrogen) solution prepared in
30% Danieau (17.4 mM NaCl, 210 µM KCl, 120 µM MgSO4·7H2O, 180 µM Ca(NO3)2, 1.5 mM HEPES
buffer in water, pH 7.6) with 0.003% PTU and 0.01% tricaine in an imaging dish with a no. 1.5
coverglass bottom (catalog no. D5040P, WillCo Wells). Following solidification of the agarose at
room temperature (1–2 min), the imaging dish was filled with 30% Danieau solution and 0.01%
tricaine at 28.5 °C.
4.6.3 Image acquisition
Images were acquired on a Zeiss LSM 780 laser confocal scanning microscope equipped with
a 32-channel detector, using a 40x/1.1 W LD C-Apochromat Korr UV-VIS-IR objective at 28 °C.
Samples of Gt(cltca-Citrine), Tg(ubiq:lyn-tdTomato), Tg(fli1:mKO2), and Tg(ubiq:Lifeact-
mRuby) were simultaneously imaged with 488 nm and 561 nm laser excitation for Citrine,
tdTomato, mKO2, and mRuby. A narrow 488 nm/561 nm dichroic mirror was used to separate
excitation and fluorescence emission. Samples were imaged with a 2-photon laser at 740 nm to
excite autofluorescence, using a 690 nm lowpass filter to separate excitation and fluorescence.
For all samples, detection was performed over the full available range (410.5-694.9 nm) with 8.9 nm
spectral binning.
Supplementary Table 4.1 provides the detailed description of the imaging parameters used
for all images presented in this work.
4.6.4 Hyperspectral Fluorescence Image Simulation
The model simulates spectral fluorescent emission by generating a stochastic distribution of
photons with a profile equivalent to the pure reference spectra (as described in Sup. Note 4.1).
The effect of photon starvation, commonly observed on microscopes, is synthetically obtained
by manually reducing the number of photons in this stochastic distribution. Detection, Poisson
and signal transfer noises are then added to produce 32-channel fluorescence emission spectra
that closely resemble those acquired on microscopes. The simulations include accurate
integration of dichroic mirrors and imaging settings.
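The simulation strategy above can be sketched in a few lines. This is a minimal illustration, not the actual simulation code: the Gaussian-shaped reference spectrum, the read-noise level, and the function names are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 32-channel reference: a Gaussian-shaped emission profile,
# normalized so it can serve as a photon probability distribution.
channels = np.arange(32)
reference = np.exp(-0.5 * ((channels - 14) / 4.0) ** 2)
reference /= reference.sum()

def simulate_spectrum(profile, n_photons, read_noise_sd=0.3):
    """Distribute n_photons stochastically along the reference profile,
    then add Gaussian detection noise and clip to non-negative counts."""
    photons = rng.multinomial(n_photons, profile).astype(float)
    noisy = photons + rng.normal(0.0, read_noise_sd, size=photons.shape)
    return np.clip(noisy, 0.0, None)

bright = simulate_spectrum(reference, n_photons=1000)
starved = simulate_spectrum(reference, n_photons=8)  # photon-starved regime
```

Lowering `n_photons` reproduces the photon-starvation effect described above: the starved spectrum barely resembles the reference profile, which is exactly the regime where unmixing becomes difficult.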
4.6.5 Independent Spectral Signatures
Independent spectral fingerprints can be obtained from samples, solutions, the literature, or
spectral viewer websites (Thermo Fisher, BD Spectral Viewer, Spectra Analyzer). Fluorescent
signals used in this work were obtained by imaging single-labelled samples in areas
morphologically and physiologically known to express the specific fluorescence (see
Supplementary Figure 4.16). For each dataset a phasor plot was computed. The 32-channel
spectral fingerprint was extracted from the phasor-bin at the counts-weighted average position
of the phasor cluster. Those fingerprints were compared with literature fingerprints and manually
corrected to reduce noise. Further descriptions for how to identify new components can be found
in Supplementary Note 4.1 and Supplementary Figure 4.26.
For autofluorescent signals, the spectrum for elastin was obtained experimentally and compared
with the literature.172 Spectra for Nicotinamide Adenine Dinucleotide (NADH) free, NADH bound,
Retinoic acid, Retinol, and Flavin Adenine Dinucleotide (FAD) were acquired from in vitro solutions
using the microscope: NADH free from β-Nicotinamide Adenine Dinucleotide (Sigma-Aldrich, St.
Louis, MO, #43420) in Phosphate Buffered Saline (PBS) solution; NADH bound from β-
Nicotinamide Adenine Dinucleotide and L-Lactic Dehydrogenase (Sigma-Aldrich, #43420, #L3916)
in PBS; Retinoic acid from a solution of Retinoic Acid (Sigma-Aldrich, #R2625) in
Dimethylsulfoxide (DMSO); Retinol from a solution of synthetic Retinol (Sigma-Aldrich, #R7632)
in DMSO; and FAD from Flavin Adenine Dinucleotide Disodium Salt Hydrate (Sigma-Aldrich, #F6625)
in PBS.
4.6.6 Phasor analysis
For each pixel in a dataset, the Fourier coefficients of its normalized spectrum define the
coordinates (G(n), S(n)) in the phasor plane, where:

G(n) = \frac{\sum_{\lambda_s}^{\lambda_f} I(\lambda)\cos(n\omega\lambda)\,\Delta\lambda}{\sum_{\lambda_s}^{\lambda_f} I(\lambda)\,\Delta\lambda}    (4.1)

S(n) = \frac{\sum_{\lambda_s}^{\lambda_f} I(\lambda)\sin(n\omega\lambda)\,\Delta\lambda}{\sum_{\lambda_s}^{\lambda_f} I(\lambda)\,\Delta\lambda}    (4.2)

\omega = \frac{2\pi}{c}    (4.3)

where \lambda_s and \lambda_f are the starting and ending wavelengths, respectively; I is the measured
intensity; c is the number of spectral channels (32 in our case); and n is the harmonic number173.
In this work, we utilized the first harmonic (n = 1) for the autofluorescent signals and the second
harmonic (n = 2) for the fluorescent signals, based on the sparsity of the independent spectral
components. A two-dimensional histogram with dimensions (S, G) is applied to the phasor
coordinates in order to group pixels with similar spectra within a single square bin. We define
this process as phasor encoding.
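Equations 4.1-4.3 and the encoding step can be sketched as follows. The channel index stands in for wavelength (an assumption for illustration), and the uniform mapping of the [-1, 1] phasor range onto histogram bins is a simplification:

```python
import numpy as np

def phasor_coordinates(spectra, harmonic=1):
    """Eqs. 4.1-4.2 with the channel index standing in for wavelength.
    spectra: array with the spectral channel axis last, shape (..., c)."""
    c = spectra.shape[-1]
    k = np.arange(c)
    omega = 2 * np.pi / c                       # Eq. 4.3
    total = spectra.sum(axis=-1)
    safe = np.where(total == 0, 1, total)       # guard against empty pixels
    g = (spectra * np.cos(harmonic * omega * k)).sum(axis=-1) / safe
    s = (spectra * np.sin(harmonic * omega * k)).sum(axis=-1) / safe
    return g, s

# Phasor encoding: map each pixel's (G, S) pair to a 2D histogram bin.
rng = np.random.default_rng(1)
pixels = rng.random((100, 32))                  # 100 toy spectra, 32 channels
G, S = phasor_coordinates(pixels, harmonic=2)
bins = 64
gi = np.clip(((G + 1) / 2 * bins).astype(int), 0, bins - 1)
si = np.clip(((S + 1) / 2 * bins).astype(int), 0, bins - 1)
```

Because the intensities are non-negative and the cosine and sine terms are bounded by 1, G and S always fall in [-1, 1], so every pixel lands inside the histogram.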
4.6.7 Linear Unmixing
The hypothesis for linear unmixing in this work is that, given i independent spectral
fingerprints (fp), each collected spectrum I(\lambda) is a linear combination of the fp, and the sum of
the fp contributions (R) is 1.

I(\lambda) = W_1 R_1 fp_1 + W_2 R_2 fp_2 + \cdots + W_i R_i fp_i + N    (4.4)

\sum_i R_i = 1    (4.5)
where R_i is the ratio, W_i the weight, and N the noise. The acquired spectra are collected in
the original spectral cube with shape (t,z,c,y,x), with t as time, c as channel, and x,y,z as the
spatial dimensions.

The i spectral vectors fp_i need to be provided to the unmixing function. It is assumed that the
weights are identical for all fp, and that the noise N is low. Under these conditions, we obtain
R_i by applying a Jacobian matrix inversion174:
\begin{bmatrix}
\sum_x w(x)\,\frac{\partial f_0}{\partial \alpha_1}\frac{\partial f_0}{\partial \alpha_1} & \sum_x w(x)\,\frac{\partial f_0}{\partial \alpha_1}\frac{\partial f_0}{\partial \alpha_2} & \cdots \\
\sum_x w(x)\,\frac{\partial f_0}{\partial \alpha_2}\frac{\partial f_0}{\partial \alpha_1} & \sum_x w(x)\,\frac{\partial f_0}{\partial \alpha_2}\frac{\partial f_0}{\partial \alpha_2} & \cdots \\
\vdots & \vdots & \ddots
\end{bmatrix}
\begin{bmatrix}
\alpha_1 - \alpha_1^0 \\
\alpha_2 - \alpha_2^0 \\
\vdots
\end{bmatrix}
=
\begin{bmatrix}
\sum_x w(x)\,[y(x) - f_0(x)]\,\frac{\partial f_0}{\partial \alpha_1} \\
\sum_x w(x)\,[y(x) - f_0(x)]\,\frac{\partial f_0}{\partial \alpha_2} \\
\vdots
\end{bmatrix}    (4.6)
In the pixel-by-pixel linear unmixing implementation in this work, the Jacobian Matrix
inversion is applied on the acquired spectrum in each pixel with dimensions (t,z,c,y,x). Resulting
ratios for each spectral vector are assembled in the form of a ratio cube with shape (t,z,i,y,x)
where x,y,z,t are the original image spatial and time dimensions, respectively, and i is the number
of input spectral vectors. The ratio cube (t,z,i,y,x) is multiplied with the integral of intensity over
channel dimension of the original spectral cube, with shape (t,z,y,x), to obtain the final resulting
dataset with shape (t,z,i,y,x).
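A per-spectrum sketch of this linear unmixing step might look like the following. The plain least-squares solve is a stand-in for the weighted Jacobian inversion, and the fingerprints are toy values, not measured spectra:

```python
import numpy as np

def linear_unmix(spectrum, fingerprints):
    """Estimate fingerprint ratios for one spectrum (Eqs. 4.4-4.5).
    fingerprints: (i, c) reference spectra; spectrum: (c,). A plain
    least-squares solve stands in for the Jacobian matrix inversion."""
    ratios, *_ = np.linalg.lstsq(fingerprints.T, spectrum, rcond=None)
    ratios = np.clip(ratios, 0, None)        # keep contributions non-negative
    total = ratios.sum()
    return ratios / total if total > 0 else ratios

# Two toy fingerprints and a pixel mixing them 70/30.
fp = np.array([[1.0, 2.0, 4.0, 2.0, 1.0],
               [0.5, 1.0, 2.0, 4.0, 2.0]])
fp = fp / fp.sum(axis=1, keepdims=True)      # unit-area reference spectra
pixel = 0.7 * fp[0] + 0.3 * fp[1]
r = linear_unmix(pixel, fp)
intensity = pixel.sum()                      # integral over the channel axis
unmixed = r * intensity                      # per-label unmixed intensities
```

The final two lines mirror the description above: the recovered ratios are multiplied by the integral of the spectrum over the channel dimension to produce per-label intensities.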
4.6.8 Hybrid Unmixing - Linear Unmixing
In the Hybrid Unmixing implementation, Jacobian Matrix Inversion is applied on the average
spectrum of each phasor bin with dimensions (c,s,g) where g and s are the phasor histogram sizes
and c is the number of spectral channels acquired. The average spectrum in each bin is calculated
by using the phasor as an encoding, to reference each original pixel spectra to a bin. Resulting
ratios for each component channel are assembled in the form of a phasor bin-ratio cube with
shape (i,s,g) where i is the number of input independent spectra fp (Linear Unmixing section).
This phasor bin-ratio cube is then referenced to the original image shape, forming a ratio cube
with shape (t,z,i,y,x) where x, y, z, t are the original image dimensions. We multiply the ratio cube
with the integral of intensity over channel dimension of the original spectral cube, with shape
(t,z,y,x), obtaining a final result dataset with shape (t,z,i,y,x).
4.6.9 HyU Algorithm
The pseudo-code utilized for the HyU algorithm is as follows:
Input: I(x,y,c,z,t) [5D hyperspectral image]
       U(i,c) [Reference spectra (i spectra)]
Output: I_U(x,y,i,z,t) [Multi-channel unmixed image]
Procedure:
HYU(I(x,y,c,z,t), U(i,c))
    // Single-harmonic Fourier transform
    G(x,y,z,t), S(x,y,z,t) = phasor_transform(I(x,y,c,z,t))
    // 2D histogram of G and S values
    H(g,s) = histogram2d(G(x,y,z,t), S(x,y,z,t))
    // Averaging of the hyperspectral image over phasor histogram bins
    I_H(g,s,c) = phasor_average(I(x,y,c,z,t), H(g,s))
    // Linear unmixing of the averaged spectra
    I_U(g,s,i) = LU(I_H(g,s,c), U(i,c))
    // Reference the unmixed phasor image back to the original image dimensions
    I_U(x,y,i,z,t) = reverse_phasor_reference(I_U(g,s,i))
    return I_U(x,y,i,z,t)
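A runnable condensation of this pseudo-code on a flattened pixel array is given below. The bin count, the uniform [-1, 1] phasor range, and the plain least-squares solver are simplifying assumptions of this sketch, not the published implementation:

```python
import numpy as np

def hyu(image, fingerprints, bins=32, harmonic=1):
    """Minimal HyU sketch: image (n_pixels, c), fingerprints (i, c).
    Returns per-pixel ratios (n_pixels, i), unmixing once per phasor bin."""
    n, c = image.shape
    k = np.arange(c)
    omega = 2 * np.pi / c
    total = image.sum(axis=1)
    safe = np.where(total == 0, 1, total)
    G = (image * np.cos(harmonic * omega * k)).sum(axis=1) / safe
    S = (image * np.sin(harmonic * omega * k)).sum(axis=1) / safe
    # Phasor encoding: a flat histogram-bin index for every pixel.
    gi = np.clip(((G + 1) / 2 * bins).astype(int), 0, bins - 1)
    si = np.clip(((S + 1) / 2 * bins).astype(int), 0, bins - 1)
    flat = gi * bins + si
    ratios = np.zeros((n, fingerprints.shape[0]))
    for b in np.unique(flat):
        members = flat == b
        avg = image[members].mean(axis=0)      # average spectrum of the bin
        r, *_ = np.linalg.lstsq(fingerprints.T, avg, rcond=None)
        r = np.clip(r, 0, None)
        if r.sum() > 0:
            r /= r.sum()
        ratios[members] = r                    # every pixel in the bin shares r
    return ratios

fp = np.array([[4.0, 2.0, 1.0, 0.5],
               [0.5, 1.0, 2.0, 4.0]])
fp = fp / fp.sum(axis=1, keepdims=True)
img = np.vstack([0.9 * fp[0] + 0.1 * fp[1]] * 5 +
                [0.2 * fp[0] + 0.8 * fp[1]] * 5)
R = hyu(img, fp)
```

The compression described in the text is visible here: the expensive solve runs once per occupied phasor bin rather than once per pixel, and the result is broadcast back to all member pixels.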
4.6.10 Other unmixing algorithms
Unmixing algorithms utilized for speed comparisons with the HyU algorithm (Supplementary
Figure 4.13) were plugged into the unmixing step of the analysis pipeline and sourced as follows:
Non-negative Constrained Least Squares and Fully Constrained Least Squares from
pysptools.abundance_maps (https://pysptools.sourceforge.io/abundance_maps.html); the Robust
Non-Negative Matrix Factorization167 Python implementation was obtained from
https://github.com/neel-dey/robust-nmf.
4.6.11 Data visualization
Rendering of the final result datasets was performed using Imaris 9.5-9.7. In Figures 4.2 and 4.3,
contrast settings (minimum, maximum, gamma) for each channel were set to be equal to provide
reasonable comparison between HyU and LU results. Gamma was set to 1, no minimum
threshold was applied, and the maximum for each channel was set to 1/3 of the maximum
intensity. The images were rendered using Maximum Intensity Projection (MIP), and for
improving display, they were digitally resampled in the z-direction, maintaining a fixed xy ratio to
attenuate the gap generated from sparse sampling z-wise on the microscope.
4.6.12 Box Plot Generation
All box plots were generated using standard plotting methods. The center line corresponds
to the median, the lower box border corresponds to the first quartile, and the upper box border
corresponds to the third quartile. The lower- and upper- line extensions correspond to one and
a half times the interquartile range below and above the first and third quartiles, respectively.
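The box plot elements described above can be computed with a short sketch (the percentile-based quartiles and the example data are illustrative; plotting libraries may additionally clip whiskers to the data range):

```python
import numpy as np

def box_plot_elements(values):
    """Box plot elements as defined above: median, first and third quartiles,
    and whisker ends at 1.5x the interquartile range beyond the box."""
    q1, median, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1
    return {"median": median, "q1": q1, "q3": q3,
            "lower": q1 - 1.5 * iqr, "upper": q3 + 1.5 * iqr}

elements = box_plot_elements(np.arange(1, 101))  # e.g. residual percentages
```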
4.6.13 Timelapse registration
A customized Python script (Supplementary Code) was first utilized to pad the number of z
slices across multiple time points, obtaining equally sized volumes. The “Correct 3D drift”
plugin68 (https://imagej.net/Correct_3D_Drift) in FIJI67 (https://imagej.net/Fiji) was used to
register the data.
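The padding step might look like the following minimal sketch; the function name `pad_z` and the zero-padding choice are assumptions, and the actual Supplementary Code may differ:

```python
import numpy as np

def pad_z(volumes):
    """Zero-pad each (z, y, x) volume along z so that all time points share
    the largest z size, mirroring the padding step described above."""
    max_z = max(v.shape[0] for v in volumes)
    return [np.pad(v, ((0, max_z - v.shape[0]), (0, 0), (0, 0)))
            for v in volumes]

t0 = np.ones((5, 4, 4))   # a time point with fewer z slices
t1 = np.ones((8, 4, 4))
padded = pad_z([t0, t1])
```

Equal z sizes are what allow the subsequent drift-correction plugin to treat the time points as one registered 4D stack.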
4.6.14 Timelapse statistics
Box plots and line plots for timelapses were generated using ImarisVantage in Imaris 9.5-9.7.
Box plot elements follow the same guidelines as described above. Line plots are connected box
plots for each time point with the solid line denoting the median values, and the shaded region
denoting the first and third quartiles.
4.6.15 Quantification with Mean Square Error
For synthetic data, a ground truth is available for comparison of unmixing fidelity between
HyU and LU. fp contributions, or ratios, were used for quantification, owing to the arbitrary
nature of intensity values in microscopy data. We utilize the Mean Square Error (MSE) for
determining the quality of the ratios in synthetic data. We define the MSE as the squared difference
between the ratio recovered by an unmixing algorithm (r_unmixed) and the ground truth ratio (r),
divided by the total number of pixels (n).

MSE = \frac{1}{n} \sum \left| r_{unmixed} - r \right|^2    (4.7)
To simplify comparison between different unmixing algorithms, we define Relative Mean
Square Error (RMSE) as:
RMSE = \left( \frac{MSE_{LU}}{MSE_{HyU}} - 1 \right) \times 100\%    (4.8)
RMSE measures the improvement in MSE when using HyU as compared to LU.
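The two metrics can be computed directly from Eqs. 4.7 and 4.8; the ratio vectors below are invented for illustration, not measured results:

```python
import numpy as np

def mse(r_unmixed, r):
    """Eq. 4.7: mean squared error between recovered and ground truth ratios."""
    return np.sum(np.abs(r_unmixed - r) ** 2) / r.size

def rmse(mse_lu, mse_hyu):
    """Eq. 4.8: relative improvement of HyU over LU, in percent."""
    return (mse_lu / mse_hyu - 1) * 100

# Invented ratio vectors for one pixel, purely for illustration.
truth = np.array([0.7, 0.3, 0.0, 0.0])
lu_est = np.array([0.5, 0.4, 0.1, 0.0])
hyu_est = np.array([0.65, 0.32, 0.03, 0.0])
improvement = rmse(mse(lu_est, truth), mse(hyu_est, truth))
```

A positive RMSE means HyU's error is smaller than LU's; zero means the two algorithms perform identically on the synthetic ground truth.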
4.6.16 Residuals
For experimental data, in the absence of ground truth, we quantify the performance of the
results returned by the unmixing algorithms with the following measurements: Average Relative
Residual, Residual Image Map, Residual Phasor Map, and finally, Residual Intensity Histogram.
Residual (R) is calculated as:

For image: R(x,y,c,z,t) = I_{RawImage}(x,y,c,z,t) - I_{UnmixedImage}(x,y,c,z,t)    (4.9)

For phasor: R(g,s,c) = I_{RawImage}(g,s,c) - I_{UnmixedImage}(g,s,c)    (4.10)

The spectral intensity difference between the unmixed image and the original image for each
pixel or phasor bin depends on the following descriptions of the intensity image (I), where:

I_{RawImage} = \sum_i r_i \cdot fp_i + N    (4.11)

I_{UnmixedImage} = \sum_i r_{unmixed,i} \cdot fp_i    (4.12)

The original spectrum (I_{RawImage}) is the combination of each independent spectral
component (fp) with its ratio (r), plus noise (N). The recovered spectrum is obtained by
multiplying the recovered ratios (r_{unmixed}) with each corresponding individual component.
Relative Residual (RR) is calculated as the sum of the residual values over C channels and
normalized to the sum of the original intensity values over C channels (with C = 32 in our
instrument).
𝑅𝑅𝑅𝑅 ( 𝑥𝑥 , 𝑦𝑦 , 𝑧𝑧 , 𝑒𝑒 ) =
∑ 𝑅𝑅 ( 𝑥𝑥 , 𝑦𝑦 , 𝑐𝑐 , 𝑧𝑧 , 𝑡𝑡 )
𝐶𝐶 𝑐𝑐 = 1
∑ 𝐼𝐼 𝑅𝑅 𝑚𝑚 𝑅𝑅 𝐼𝐼 𝑚𝑚𝑚𝑚𝑔𝑔 𝐼𝐼 ( 𝑥𝑥 , 𝑦𝑦 , 𝑐𝑐 , 𝑧𝑧 , 𝑡𝑡 )
𝐶𝐶 𝑐𝑐 = 1
(4.13)
𝑅𝑅𝑅𝑅 ( 𝑔𝑔 , 𝑑𝑑 ) =
∑ 𝑅𝑅 ( 𝑔𝑔 , 𝑐𝑐 , 𝑐𝑐 )
𝐶𝐶 𝑐𝑐 = 1
∑ 𝐼𝐼 𝑅𝑅 𝑚𝑚 𝑅𝑅 𝐼𝐼 𝑚𝑚𝑚𝑚𝑔𝑔 𝐼𝐼 ( 𝑔𝑔 . 𝑐𝑐 . 𝑐𝑐 )
𝐶𝐶 𝑐𝑐 = 1
(4.14)
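As a sketch, the relative residual can be computed in numpy by summing over the spectral channel axis; names are illustrative, and the arrays here are simplified relative to the full 5-D (x, y, c, z, t) data:

```python
import numpy as np

def relative_residual(raw, unmixed, channel_axis=-1):
    """Relative residual: the residual summed over the C spectral channels,
    normalized by the summed raw intensity over the same channels."""
    residual = raw - unmixed                 # per-channel residual
    num = residual.sum(axis=channel_axis)    # sum over C
    den = raw.sum(axis=channel_axis)
    return num / den
```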
The Average Relative Residual (Supplementary Figure 4.5) provides a single comparison value for evaluating the performance of different processing methods on the same data, such as the application of multiple denoising filters, various threshold values, or variations in the number of components estimated. The Average Relative Residual (RR_avg) is defined as the average of the relative residual over every pixel in the image, or every phasor bin in the phasor histogram:

For image: \mathrm{RR}_{\mathrm{avg}} = \frac{\sum_{x} \sum_{y} \sum_{z} \sum_{t} \mathrm{RR}}{x \, y \, z \, t}    (4.15)

For phasor: \mathrm{RR}_{\mathrm{avg}} = \frac{\sum_{g} \sum_{s} \mathrm{RR}}{g \, s}    (4.16)
The Residual Image Map visualizes the residual values for each pixel of the image (Supplementary Figure 4.4). Regions with higher residual values characterize portions of the dataset with an increased amount of noise, or where an unexpected spectral signature is present. Residual Image Maps (R_imgmap(x, y)) project the Relative Residual (RR) cube onto the 2D image shape, providing an estimated visualization of an algorithm's ratio-recovery performance in the spatial context of the original image:

R_{\mathrm{imgmap}}(x, y) = \sum_{z, t} \mathrm{RR}(x, y, z, t) \times 100    (4.17)
The Residual Phasor Map visualizes the residuals for each bin of the phasor histogram (Supplementary Figure 4.4). These maps give insight into where HyU unmixing has reduced performance in the phasor domain, and indicate phasor locations of unexpected additional spectral components (Supplementary Figure 4.5):

R_{\mathrm{phmap}}(g, s) = \mathrm{RR}(g, s) \times 100    (4.18)
The Residual Intensity Histogram R_{IntHist}(p, rr) (Figure 4.3 G,H; Supplementary Figure 4.4D) calculates the distribution of the relative residual in relation to intensity over all pixels or all phasor bins. Higher residuals tend to be present in regions with lower signal intensity and SNR, where performance is degraded.

For image: R_{\mathrm{IntHist}}(p, rr) = \mathrm{count}_{p, rr}\big( P(x, y, z, t), \mathrm{RR}(x, y, z, t) \big)    (4.19)

For phasor: R_{\mathrm{IntHist}}(p, rr) = \mathrm{count}_{p, rr}\big( P(g, s), \mathrm{RR}(g, s) \big)    (4.20)

P = \frac{\sum_{c=1}^{C} I_{RawImage}}{4 \cdot sf}    (4.21)

where p is a bin of the photon histogram P, rr is a bin of RR, and sf is the factor which converts the number of photons to digital intensity levels.
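A hedged numpy sketch of the Residual Intensity Histogram, using np.histogram2d as the 2D counting operation of Eqs. 4.19-4.20; function and argument names are ours:

```python
import numpy as np

def residual_intensity_histogram(raw, rr, sf, bins=50):
    """2D count histogram of estimated photons (Eq. 4.21) versus relative
    residual. `raw` carries the spectral channels on its last axis, `rr` is
    the matching relative-residual array, and `sf` converts photon counts
    to digital intensity levels."""
    photons = raw.sum(axis=-1) / (4 * sf)   # P, Eq. 4.21
    counts, p_edges, rr_edges = np.histogram2d(
        photons.ravel(), rr.ravel(), bins=bins)
    return counts, p_edges, rr_edges
```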
4.6.17 Image Contrast
Image contrast measures the distinguishability of a detail against the background. Here we use percent contrast to refer to the relationship between the highest and lowest intensities in the image:

\mathrm{Contrast} = \frac{I_S - I_B}{I_B}

where the average signal intensity (I_S) is the mean of the top 20% of intensities in the image, and the average background intensity (I_B) is the mean of the bottom 20% of image intensities.
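The percent contrast above can be sketched as follows (an illustrative implementation, assuming the top and bottom 20% are taken over all pixels):

```python
import numpy as np

def percent_contrast(image):
    """Percent contrast (I_S - I_B) / I_B, where I_S averages the top 20%
    of intensities and I_B averages the bottom 20%."""
    flat = np.sort(image.ravel())
    k = max(1, int(0.2 * flat.size))
    i_b = flat[:k].mean()     # background average
    i_s = flat[-k:].mean()    # signal average
    return (i_s - i_b) / i_b * 100.0
```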
4.7 Supplementary Material
4.7.1 Supplementary Tables
| Figure(s) | Zebrafish | Stage | Imaged volume (x × y × z) [pixels] | Lateral pixel (x, y res.) [µm] | Axial section (z res.) [µm] | Pixel dwell time [µs] | Laser power |
|---|---|---|---|---|---|---|---|
| Fig 4.2E-P, S4.7 | Gt(cltca-Citrine); Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2) | 10 dpf | 1024 × 1024 × 17 | 0.346 | 3.00 | 3.1 | 561 nm: 0.18%; 488 nm: 5% |
| Fig 4.3A-F, S4.1, S4.8 | Gt(cltca-Citrine); Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2) | 2 dpf | 1024 × 512 × 31 | 0.415 | 5.00 | 3.1 | 561 nm: 0.6%; 488 nm: 5.5% |
| Fig 4.4 | Gt(cltca-Citrine); Tg(kdrl:mCherry; fli1:mKO2) | 5 dpf | 512 × 1024 × 26 | 0.461 | 3.00 | 2.5 | 561 nm: 2%; 488 nm: 4% |
| Fig 4.5A, S4.2 | Gt(cltca-Citrine); Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby) | 3 dpf | 2304 × 512 × 27 | 1.38 | 5.00 | 5.0 | 561 nm: 0.4%; 488 nm: 2.8% |
| Fig 4.5B-D | Gt(cltca-Citrine); Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2), plus strong autofl. | 3 dpf | 2560 × 2048 × 27 | 0.259 | 4.00 | 3.1 | 561 nm: 0.18%; 488 nm: 5% |
| Fig 4.5E-G | | | | | | | 740 nm: 4% |
| Fig 4.6, S4.9, S4.10, S4.11 | Gt(cltca-Citrine); Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2), plus strong autofl. | 3 dpf | 768 × 512 × 17 | 0.923 | 4.00 | 6.3 | 561 nm: 0.4%; 488 nm: 5% |
| S4.12 | casper | 36 hpf | 6144 × 1536 × 25 | 0.461 | 6.00 | 3.1 | 740 nm: 3% |

*hpf = hours post fertilization
*dpf = days post fertilization
Supplementary Table 4.1: Imaging parameters for all datasets
4.7.2 Supplementary Figures
Supplementary Figure 4.1. HyU unmixing reduces noise and signal bleed-through compared to
traditional bandpass filter imaging.
Imaging of a quadra-transgene zebrafish Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby;
fli1:mKO2) (same data as Fig. 4.3) performed with (A) 4-channel optical filter imaging and (B) multispectral
imaging and Hybrid Unmixing (HyU) analysis. Bleed-through from the fluorophores’ overlapping emission
spectra is present in A. This artifact is the result of the sharp spectral discretization imposed on the
fluorescent signals by the optical filters, which fail to produce a clean distinction between Citrine (480nm-
690nm), mKO2 (525nm-690nm), tdTomato (530nm-690nm) and mRuby (560nm-690nm). Fluorophores
are well separated in B. Colors in both A and B represent Citrine (green), mKO2 (yellow), tdTomato
(magenta) and mRuby (cyan).
Supplementary Figure 4.2. HyU algorithm outperforms current methods.
(A) True-color rendering of a 32-channel tetra-label Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby) zebrafish shows the indistinguishability of the multiple labels in the absence of analysis. (B) Optical filter imaging presents strong bleed-through across the 4 channels. (C) Traditional unmixing increases contrast across labels while still suffering from incorrect re-assignment of signals. (D) Hybrid Unmixing enhances separation of spectrally and spatially overlapping signals. (E-H) are zoomed-in views of the white boxes in A, B, C, D, respectively. Scale bar is 100 µm.
Supplementary Figure 4.3. Comparison of unmixing results for synthetic data at different SNR
demonstrate improved HyU performance.
Ground truth photon mask of the four independent fluorescent signals, (A) mKO2, (B) Citrine, (C) mRuby,
and (D) tdTomato for synthetic data. (E) The maximum intensity projection (MIP) of the simulated 32
channel hyperspectral image generated from the four ground truth masks at low signal-to-noise ratio
(SNR). In this case, a maximum of 10 photons are simulated for each fluorescent component. (F-I)
Grayscale representation of the maximum emission channel of each component, based on the respective
spectra. Unmixing results of (J) LU and (K) HyU for simulations with a maximum of 10 photons report decreased performance. In the ultra-low SNR simulation (at most 5 photons for each component), both the LU (L) and HyU (M) results deteriorate; however, HyU maintains a 1.5x lower average MSE compared to LU.
Supplementary Figure 4.4. Quantification of HyU vs LU unmixing results for synthetic data highlight
increased HyU performance.
HyU performance is evaluated under several algorithmic parameters and experimental conditions. (A)
Relative MSE between HyU and LU was calculated as a function of max input photons/spectrum over 5
denoising filters for HyU. The improvement increases both with the number of photons and the number
of denoising filters, showing significant differences above 7 photons/spectra with peak at 124%. Shaded
regions denote the 95% confidence interval around the mean. (B) Absolute MSE from LU and HyU
algorithms for the same synthetic dataset with and without beam splitters. The addition of optical filters
causes the MSE of LU to increase on average by 8%, compared to an average increase of 5% for HyU. (C)
Average relative residual of synthetic data with and without beam splitters with increasing level of
denoising. The average relative residual without beam splitters with denoising (HyU-filt1x – HyU-filt5x) is
83%, compared to 109% for LU. In the absence of denoising filters (filt0x) the average relative residual is
92.9%. Beam splitters were applied in this simulation and both Mean Squared Error (MSE) and residual
values were calculated with and without beam splitters. Simulated spectra with (D) and without (E) a beam splitter are shown. All box plot elements are defined as described in
Methods.
Supplementary Figure 4.5. Residual analysis for synthetic data identifies locations with reduced
algorithm performance.
Simulated data in Figure 4.2 for four fluorescent labels (Citrine, mKO2, tdTomato, mRuby) is analyzed with
LU and HyU. Unmixing results for (A) HyU and (B) LU. Residual Image Map for (C) HyU and (D) LU results
present regions with higher residual (red) along the boundary between the sample’s labelled features and
the background, where signal-to-noise drops. The average residual values for LU (118%) are higher than
HyU (94%). (E) Residual Phasor Map shows higher residual for the background region (arrow), consistent with the results in C, D. The Residual colorbar scale corresponds to C, D and E. (F) Phasor Residual Intensity
Histogram maps the average photon counts in each histogram bin in E between 0 and 50 photons and
presents a trend of decreasing relative residuals with photon number. The (G) Average Relative Residual
plot shows higher values for LU compared to HyU with different denoising filters applied. Ground truth
values are also included for comparison. Box plot elements are defined as described in Methods. An (H)
original phasor plot with 0 threshold and 5 denoising filters applied is presented. The ROI (yellow circle)
highlights the background pixels in yellow in the (I) average spectral intensity image. The noise from
background and residual can be decreased considerably with an intensity threshold.
Supplementary Figure 4.6. Schematic overview of residual calculation.
Image residual is the residual for the image (x, y). (A) Raw hyperspectral data cube with dimensions (x, y, λ), where x, y are the spatial dimensions and λ is the wavelength range from the spectral channels on the detector. (D) The recovered model, with dimensions (x, y, λ), comes from (B) the product of the recovered ratios (x, y, ch) and the independent spectra (ch, λ), where ch is the number of independent spectra or unmixing components. (C) The residual is the difference of the recovered model and the raw data. (E-H) The same logic applies for the phasor residual, but instead of (x, y) the phasor dimensions are composed of the real and imaginary Fourier components (G, S).
Supplementary Figure 4.7. Unmixing of a quadra-transgenic zebrafish with HyU and LU highlights
improvements in contrast and spatial features.
Volumetric zoom-in view of the somites within the trunk region of a 10-dpf Gt(cltca-citrine);Tg(ubiq:lyn-
tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2) zebrafish, merging all channels in (A) HyU and (B) LU. (A-E) HyU presents a wide dynamic range of intensities, with an average contrast 1.11-fold higher than (F-J) LU. In LU, bleed-through from the (H) membrane label (arrow) is observed in the (G) lymphatic
vasculature channel (arrow) and the (I) actin channel (arrow). This incorrect re-assignment of intensities
is not present in the corresponding HyU channels for (B) vasculature and (D) actin, where fibers (arrow)
are cleanly unmixed. (K) Phasor Residual Distribution shows the distribution of relative residual (%) and
photon counts in phasor histogram bins. Residual distribution shows the distribution of relative residual
(%) and photon counts in histogram pixels for both (L) HyU and (M) LU.
Supplementary Figure 4.8. Residual analysis of experimental data supports performance improvement
of HyU.
Residual analysis for multispectral fluorescent data of a 5 dpf quadra-transgenic zebrafish Gt(cltca-citrine);Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2) in Figure 4.3. Unmixing results for (A) HyU
and (B) LU, respectively. Residual Image map of the z-averaged dataset for (C) HyU and (D) LU show lower
residual values for HyU, suggesting improved quality of unmixing. (E) Residual distribution relative to the
original intensity in each pixel as a function of estimated photon counts per spectrum for LU and (F) HyU.
(G) Residual Phasor Map presents increased residual values in the background region (arrow). Jet
colormap scale refers to C, D and G. (H) Residual Phasor Histogram for HyU shows distribution of residuals
in the broad dynamic range of photons for experimental data. (I) Raw phasor with 0 threshold and 5 denoising filters applied; the ROI (yellow circle) highlights the background pixels in the (J) average spectral image (showing the first z slice).
Supplementary Figure 4.9. Application of denoising filters reveals improved results with lower
residuals.
(A) Residual Image Map of HyU unmixing of a quadra-transgenic zebrafish Gt(cltca-citrine);Tg(ubiq:lyn-
tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2) inclusive of one strong autofluorescent signal. Residual values
are calculated with different numbers of denoising filters [173]. The average relative residual visibly decreases
with the increase of denoising filter numbers. (B) Residual Phasor Map shows a major decrease in values
between 0 and 1 denoising filters, maintaining statistically similar values at higher denoising filter
applications. (C) Phasor plots for different denoising filters. Phasor of raw data prior to denoising or
thresholding, presents noise connected to each of the detectors, as well as a high count area
corresponding to the background region. The phasor plot distribution highlights areas with higher pixel
counts coming from the background noise, which correspond with lower Residual Phasor Map values in
B. (D) Average residual values for Residual Image Map A and Residual Phasor Map B highlight that the
improvement on residuals mostly focuses on the first application of the denoising filter. In this initial
denoising, the average relative residual decreased from 69.8% to 46.8%, further decreasing to 42.6% after
5 denoising. Average relative residual for phasor decreased from 33.9% to 7.1% after 1 denoising filter
was applied, further decreasing to 2.1% after 5 denoising applied. With standard processing threshold of
250 digital levels applied (bottom 0.38% intensities of 16-bit format), the average relative residual
decreased from 7.2% to 4.6%, further decreasing to 4.1% after 5 denoising filters. Average relative
residual for phasor decreases from 10.1% to 2.6%, further decreasing to 1.1% after 5 denoising filters.
Bars denote the variance of the relative residual values.
Supplementary Figure 4.10. Comparison of residual images for LU and HyU highlights improved HyU
performance.
Residual Image projection for LU and HyU of a 3D dataset of 3 dpf quadra-transgenic zebrafish Gt(cltca-
citrine);Tg(ubiq:lyn-tdTomato; ubiq:Lifeact-mRuby; fli1:mKO2) with an intensity threshold of 250. (A) LU
Residual Image Map for a single slice (z=8 of 17 in a z-stack) provides average relative residual of 35.9%,
while the (B) corresponding map for HyU averages at 7.1%. (C) Residual Phasor Map for the z-stack
presents average relative residual of 1.1%. The reduction in residuals for HyU is maintained across the z-
stack, as shown in (D) the average LU and (E) HyU Residual Image Maps built from the average of residuals
across all z-slices. The average residual improvement for HyU at 4% compared to LU at 21% is 5.3-fold.
Supplementary Figure 4.11. Residual maps facilitate identification of independent spectral
components.
Experimental fluorescence microscopy data often includes unexpected autofluorescence signals. Residual
Maps (Methods) provide additional information to account for these signals and properly adjust HyU
analysis. (A) Average intensity image with pixels pseudo-colored in cyan (autofluorescence) and magenta
(background) according to the ROIs selections on the (B) Residual Phasor Map, computed from the HyU
of 4 input spectra with a threshold of zero. The pseudo-colored areas of the image match those presenting
high residual values in the (C) Residual Image Map. Changing the unmixing input to include the
unexpected autofluorescent spectrum (cyan) and performing the Residual Phasor map selections
produces the (D) background pseudo-colored (magenta) image. The inclusion of autofluorescence as an
independent spectral component in the unmixing decreases the number of pixels corresponding to the
autofluorescent signal (cyan ROI) in the (E) Residual Phasor Map, thereby matching with the (F) Residual
Image Map, which no longer presents high residuals in the center portion of the image. Increasing the
threshold to 250 removes the pixels with high residuals corresponding to the background, removing them
from the (G) average intensity image, (H) Residual Phasor Map, and (I) Residual Image Map.
Supplementary Figure 4.12. HyU analysis of 36 hpf Casper zebrafish demonstrates feasibility of
unmixing only intrinsic signals.
Casper is a transgenic zebrafish line characterized by the absence of pigments. The dataset was acquired in two-photon spectral mode at 740 nm excitation. HyU unmixing was performed utilizing 5 pure intrinsic signals measured in solution (Methods): (A) merged overview of all signals, (B) NADH bound, (C) NADH free (yellow), (D) retinoid (magenta), (E) retinoic acid (cyan), which appears mainly in the yolk sac, a known location where carotenoids are stored, transferred, and then metabolized to retinoic acid [175], (F) elastin (green), which shows a similar distribution within the zebrafish floorplate at this developmental stage, (G) phasor, and (H) average spectra from the selections in G.
Supplementary Figure 4.13. Speed comparison and improvement plots of multiple unmixing
algorithms in their Original form vs HyU encoded.
(A) Computational times of multiple unmixing algorithms, for both the original (pixel-by-pixel) and HyU versions, over a range of hyperspectral imaging dataset sizes. (B) The improvement in speed, computed as the ratio of the HyU to the original versions, demonstrates a vast increase in speed for all algorithms other than LU across all input data sizes. (C, D) Computational times and speed improvement for the original and HyU
versions of LU show that the original version of LU provides higher computational speeds at ~2x. Plots A-
C use logarithmic scales while plot D uses a linear scale for the y-axis.
Supplementary Figure 4.14. Residuals in synthetic data and experimental data.
(A) We simulated data to cover wide ranges of noise and to allow thorough testing of the algorithm's performance. In this example, a ground truth spectrum with 14 photons (red dashed line) is simulated accounting for multiple types of noise (dark green line). The simulated spectrum presents a disrupted shape with a substantial presence of noise. During HyU analysis, in the encoding of spectra within a phasor bin, the simulated spectrum (dark green line) is averaged with similar spectra from multiple other pixels, producing the binned spectrum (light green line). The spectrum recovered with Hybrid Unmixing (orange line) is similar to the ground truth. In the calculation of residuals, however, due to the disrupted signal of the simulation (dark green line), the absence of noise in the clean data is counted as residual.
from experimental data (from Figure 4.3) at a similar photon-range (15 to 20 photons) for comparison.
The same color code is utilized for the spectra lines, without ground truth owing to the nature of
experimental data.
4.7.3 Supplementary Notes
Supplementary Note 4.1: Identification of spectra and new components with HyU
Identification of independent spectral components has long been a challenge for unmixing hyperspectral data. First, the collected spectra may be distorted by reduced SNR. Second, excitation of intrinsic signals introduces uncertainty into the signals from the biological sample. HyU simplifies this process by adopting the phasor approach, enabling semi- or fully automated spectra identification and selection. In HyU, spectra can be loaded from an existing library, virtually automating the analysis process. Pre-identified cursors are generated for common fluorophores such as mKO2, tdTomato, mRuby, and Citrine. In the presence of unexpected fluorescent signals, spectra can also be selected and visualized directly from the phasor; the "tails" of the phasor distributions correspond to independent components. In our HyU graphical interface, clicking on the phasor visualizes the spectra within a small area (9x9 bins) of the phasor histogram (Figure 4.1 D). In the example in Sup. Figures 4.9-4.11 we identify 5 distinct endmembers on the phasor (Sup. Figure 4.10, C) and visualize their spectra, identifying NADH bound, NADH free, retinoid, retinoic acid, and elastin. The Residual Phasor Map (Sup. Figure 4.11 B) allows for identification of areas in the phasor with high amounts of residual, likely corresponding to an endmember missing from the unmixing. Residual Image Maps (Sup. Figure 4.11 C) provide a rapid overview of residuals in the image data, locating the missing endmember within the dataset.
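The 9x9-bin spectral lookup described above might be sketched as follows, assuming a precomputed (G, S, C) array of per-bin summed spectra and a (G, S) phasor histogram; the data layout and names are our assumptions, not the actual HyU interface:

```python
import numpy as np

def spectra_at_phasor_bin(spectra_sum, counts, g_idx, s_idx, half=4):
    """Average spectrum of all pixels whose phasor coordinates fall in the
    9x9 bin neighborhood centered on (g_idx, s_idx). `spectra_sum` holds,
    for each phasor bin, the summed spectra of the pixels mapped to it;
    `counts` is the phasor histogram."""
    g0, g1 = max(0, g_idx - half), g_idx + half + 1
    s0, s1 = max(0, s_idx - half), s_idx + half + 1
    total = spectra_sum[g0:g1, s0:s1].sum(axis=(0, 1))
    n = counts[g0:g1, s0:s1].sum()
    return total / n if n else total
```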
Supplementary Note 4.2: HyU in autofluorescent data
Cellular metabolism is a key regulator of cell function and plays an essential role in the development of numerous diseases. Understanding cellular metabolic pathways is critical for the development and assessment of novel therapies and diagnostics. However, metabolic pathways are complex and highly dynamic: changes can occur on the order of seconds or minutes, or unfold over days, weeks, months, and years, and these metabolic changes are highly heterogeneous. A number of metabolites have been reported in the literature to be fluorescent and to change their spectra according to their biochemical configuration. For example, the measurement of NADH in its free and bound states is possible thanks to a shift in the emission spectrum when NADH is bound to enzymes such as Lactate Dehydrogenase (LDH). Likewise, retinol and retinoic acid are known to have different autofluorescent spectra. HyU is well suited for the analysis of intrinsically weak autofluorescence owing to its ability to operate at low SNR. In Sup. Figure 4.12 we visualize the unmixing of multiple autofluorescent signals based on spectra acquired from in vitro solutions.
Supplementary Note 4.3: Reduced computational costs during unmixing
A further advantage of HyU is speed. HyU provides substantial speed boosts compared with other pixel-based unmixing algorithms; the exception is standard LU, owing to the highly optimized computational implementation of the functions it uses. The speed boost occurs because unmixing is performed at the phasor-histogram level, where a single bin corresponds to a multitude of image pixels. For algorithms other than standard LU, HyU provides up to ~500-fold improvement in speed with comparable coding language and computing hardware, processing 2 GB in less than 100 seconds (Supplementary Figure 4.13). This improvement addresses two open image-analysis challenges in multiplexed fluorescence. First, the ever-increasing size of HFI data, driven by higher-throughput and higher-resolution microscopes and scaled by the number of spectral channels. Second, the growing number of datasets required for experimental reproducibility and to capture biological variability.
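The source of the speed-up can be illustrated with a toy sketch: the unmixing function runs once per occupied phasor bin, and the results are broadcast back to the pixels by indexing (names and structure are illustrative, not the HyU implementation):

```python
import numpy as np

def hyu_style_unmix(pixel_bin_index, bin_spectra, unmix_fn):
    """Run `unmix_fn` once per occupied phasor bin, then broadcast each
    per-bin result back to every pixel mapped to that bin. With millions
    of pixels but only thousands of occupied bins, the expensive unmixing
    step runs orders of magnitude fewer times than pixel-by-pixel."""
    unmixed_bins = np.array([unmix_fn(s) for s in bin_spectra])
    return unmixed_bins[pixel_bin_index]  # one cheap lookup per pixel
```

For example, 10^6 pixels falling into ~10^4 occupied bins would invoke the unmixing routine roughly 100x fewer times than a pixel-by-pixel approach.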
4.8 Chapter Conclusion
In conclusion, the results presented in this chapter quantitatively show that HyU, a phasor-based computational unmixing framework, is well suited for tackling the many challenges present in live imaging of multiple fluorescent labels, where determining the relative contributions of colocalized signals is needed. HyU demonstrates improved unmixing capabilities compared to standard unmixing methods such as LU, especially on lower-SNR data. With HyU's improved capabilities, we are able to work with images that require less acquired fluorescent signal, permitting a reduction of the laser excitation load. Reducing the power of the excitation laser enables longer live-imaging durations or higher time resolution, while lowering photo-toxicity (Chapter 1). By combining HyU with lower-power multidimensional imaging, we are able to better multiplex the imaging of biological events and provide access to information-rich imaging across different spatiotemporal scales. The reduced requirements of HyU make it
rich imaging across different spatiotemporal scales. The reduced requirements of HyU make it
fully compatible with any commercial and common microscopes capable of spectral detection,
facilitating access to the technology. Our analysis demonstrates HyU’s robustness, simplicity and
improvement in identifying both new and known spectral signatures, and vastly improved
unmixing outputs, providing a much-needed tool for delving into the many questions still
surrounding studies with live imaging.
Chapter 5
Fluorescence Microscopy Data Simulation Framework
5.1 Author Contribution
Authors: Daniel E.S. Koo [1,2,†], Hsiao Ju Chiang [1,2,†], Masahiro Kitano [1,3], Le A. Trinh [1,2,3], Scott E. Fraser [1,2,3], Francesco Cutrale [1,2,3]
1 Translational Imaging Center, University of Southern California, Los Angeles, CA
2 Department of Biomedical Engineering, University of Southern California, Los Angeles, CA
3 Molecular and Computational Biology, University of Southern California, Los Angeles, CA
† Equal contribution
5.2 Chapter Summary
With the dynamic segmentation and multiplexed signal extraction toolsets, I have covered a
variety of methods which greatly improve our ability to process multidimensional data. In this
chapter, I discuss a final project to assist and improve both of these toolsets. Fluorescence
microscopy data is highly specific to the fluorescent labels used to tag the components of interest within biological samples. However, the data is also subject to elevated noise and obscured signal due to the low-photon effects fundamental to fluorescence. These effects are especially compounded for multidimensional data, where the available photons are further divided across the multiple dimensions. The inherent uncertainty in distinguishing the true signal in a multidimensional fluorescence microscopy image means that a ground truth for the dataset is usually unavailable or difficult to acquire. This is a problem when seeking to objectively determine the efficacy of, and apply metrics to, an image processing method or algorithm such as the Hybrid Unmixing method introduced in the previous chapter. It also hinders the development of highly effective machine learning algorithms, which require large amounts of training data in conjunction with a ground truth. To that end, we have developed a
framework to generate simulated hyperspectral fluorescence microscopy data; each simulated
hyperspectral image is generated from a reference image which serves as a ground truth when
applying image processing methods. We demonstrate that our synthetic hyperspectral images
match the spectral and intensity distributions of experimentally acquired images. Thus, we
establish an extremely useful platform on which future image processing algorithms dealing with
fluorescence microscopy data can be thoroughly assessed and aided in development.
5.3 Introduction
5.3.1 Importance of source data for developing image processing algorithms
In the development of new image processing algorithms, a proper reference image is a crucial element in determining the utility of the algorithm. Image processing algorithms are usually a series of operations which identify, modify, or extract attributes from within digital images [34]. Such algorithms are necessary since acquired images are usually approximate visual recordings of a particular scene, and extraction of the desired information from that scene requires parsing the inaccurate elements of the approximation. Hence, source data (also referred to as a reference image or ground truth) can be defined as a digital image (or images) whose pixels contain values which can be considered the true values of the recorded scene. Source data is therefore necessary to determine how close an algorithm's output is to the truth. Determining fidelity matters for many different types of image processing algorithms, including denoising filters, segmentation workflows, and unmixing methods. Most notably, for machine learning algorithms, which have demonstrated huge advances in processing and analysis, not only the assessment but the initial development itself is not possible without source data. However, the acquisition of source data is quite often a challenging task, especially in the case of multidimensional fluorescence microscopy data.
5.3.2 The challenges of source data collection in fluorescent microscopy
In the application of fluorescence microscopy to biological samples, the acquisition of images which can be considered source data is generally challenging on multiple levels. First, acquiring large volumes of fluorescence microscopy data, with corresponding correct labels, is a highly demanding task both time- and cost-wise. Biological fluorescence imaging inherently includes noise and can be greatly affected by a wide variety of experimental factors such as biological sample variability, temperature, fluorescent signal levels, photon starvation, and instrumental noise [28–33]. The presence of multiple types of biophysical and instrumental noise, combined with the presence of multiple fluorescent signals, increases the complexity of collecting source data that covers these widely different experimental scenarios. Utilizing low excitation power mitigates detrimental perturbations of the biological sample, but produces fewer fluorescent photons, reducing detection rates in what is known as photon starvation. The presence of intrinsic fluorescent signals, also known as autofluorescence, further complicates the acquisition of valuable training datasets. These experimental hurdles limit the capability of acquiring or estimating a real ground truth in the microscopy images, as source data from the instrument includes these undesired signals and noise. Furthermore, trying to apply a label or ground truth to this data adds a further layer of complexity: in this case, the ground truth is the real number of photons emitted from the fluorophore labels of the cells of interest. Therefore, a way to accurately simulate fluorescence images would greatly aid in developing and objectively evaluating any image processing algorithm involving fluorescence microscopy.
5.3.3 A new framework to facilitate extraction and analysis
In this work, we propose a versatile framework for building simulations that generate
synthetic fluorescence images from source images and spectra, which serve as the ground truth.
The framework is packaged as a Python module so that it can be used out of the box or adapted,
if necessary, to different microscope configurations or environments. The simulation considers
several parameters, including: input photon levels, fluorophore (spectrum) type, number of
fluorophores in the sample, number of detectors utilized, and other microscope settings as
needed. The simulation is therefore versatile in adjusting the acquisition of a variety of
signals and noise sources while maintaining a consistent image reference. Our results
demonstrate the high similarity of the images generated through our simulation to
experimentally acquired hyperspectral datasets. The realism of our simulated hyperspectral
fluorescence images and the flexibility in generating them pave the way for accelerated, more
comprehensive algorithm development and machine learning applications.
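To make the parameter list concrete, a hypothetical configuration object might group these settings as follows; the class and field names are illustrative inventions for this sketch, not the actual fluoroSim-HyU interface.

```python
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    """Hypothetical grouping of the simulation parameters listed above;
    class and field names are illustrative, not the fluoroSim-HyU API."""
    total_photons: int = 24         # input photon level
    spectrum: str = "Citrine"       # fluorophore (spectrum) type
    n_fluorophores: int = 1         # fluorophores present in the sample
    n_channels: int = 32            # detector channels utilized
    conversion_rate: float = 480.0  # instrument photon-to-digital factor

cfg = SimulationConfig(total_photons=48, n_fluorophores=2)
assert cfg.n_channels == 32
```

A run of the simulator would then read its behavior from such an object, making large parameter sweeps a matter of iterating over configurations.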
5.4 Results
5.4.1 Generation of simulated fluorescence spectra
We have created a simulation framework for fluorescence spectral data by developing a
series of functions and a programmatic interface (Methods) which enable the computational
recreation of fluorescence spectra determined through the detection of photons emitted from
fluorophores when using a laser scanning microscope. This simulation follows several steps
which replicate the path that photons emitted from a fluorophore take through the light-path
of a laser scanning microscope (Figure 5.1 A) in order to generate fluorescence spectra
highly similar to those acquired experimentally (Figure 5.1 B). The steps for generating these
realistic fluorescence spectra include (1) the spectral profile of a fluorophore corresponding to
excitation, (2) the stochasticity of photons during emission, (3) signal filtering by the dichroic
mirror, and (4) the intensity conversion and noise inclusion due to detection; each step is detailed
further in the Methods section. Since the emission of photons is stochastic, the simulated output
of an ideal spectrum corresponding to a given photon input is randomized, leading to a wide range
of possible output spectra (Sup Fig 5.1). This means that for the same set of parameters, there
are many different possible outputs from the simulation, reflecting the nondeterministic
nature of optical microscopy data. Finally, while up to this point we have been considering the
simulation of individual spectra (Figure 5.2 A), the simulation can also pair the reference
emission spectral profiles with source images (Methods, Sup Note 5.1) to generate spatially
organized spectral signals as hyperspectral images (Figure 5.2 B).
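This nondeterminism can be illustrated with a short sketch, assuming the ideal spectrum is represented as a normalized per-channel probability vector (the 6-channel profile below is a toy stand-in for a real fluorophore):

```python
import numpy as np

# Toy emission profile: per-channel emission probabilities (sums to 1),
# standing in for a real fluorophore's ideal spectrum.
pdf = np.array([0.05, 0.15, 0.35, 0.25, 0.15, 0.05])
rng = np.random.default_rng()

# The same parameters (30 photons, same profile) yield a different
# stochastic spectrum on every run: each draw is one multinomial sample.
runs = [rng.multinomial(30, pdf) for _ in range(4)]
assert all(r.sum() == 30 for r in runs)  # photon number is conserved
```

Each element of `runs` is one possible acquisition of the same fluorophore, mirroring the many variations shown in Sup Fig 5.1.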
Figure 5.1: Computational model for spectral detection of fluorescence
Spectral detection of fluorescent signals using a laser scanning microscope is introduced using a simplified
physical schematic of fluorescent light collection paired with a schematic of the signal changes in the
fluorescence emission spectra during the collection. (A) A schematic for fluorescence signal collection in
a laser scanning microscope is broken down into four parts. (Box 1) The excitation of a fluorophore results
in (Box 2) the stochastic emission of photons, (Box 3) after which the wavelengths corresponding to
the excitation light are removed with a dichroic mirror. (Box 4) The remaining emitted photons are then
collected by the detector. (B) Simplified spectral signal simulation corresponding to the schematic in A.
(Box 1) A fluorophore has an ideal spectral profile which is a probability distribution function for the
emitted photons. (Box 2) Stochastic emission of photons results in photons being counted into
wavelength bins, resulting in an n-channel spectrum. (Box 3) The optical filter zeros out any signal within
the bin corresponding to the excitation wavelength. (Box 4) During detection, photon values are
converted into intensity values with the addition of detection noises such as Poisson noise, dark current
noise, and salt and pepper noise.
Figure 5.2: Simulated spectra can be generated individually or arranged into spatial images
Simulation of 1D spectra can be extended spatially in order to produce 2D spectral images. (A) Individual
1D spectra are generated from a spectral profile paired with a single photon value by computationally
recreating stochastic emission and all relevant noises as explained in the Methods. (B) Simulation of 2D
spectral images extends the simulation to account for the spatial organization of multiple
fluorophores by pairing the reference spectrum with multiple photon values, where the spatial
information is represented using a photon mask (Sup Note 5.1).
5.4.2 Quantitative comparison of experimental and simulated spectral signals
With the construction of our simulation complete, we first demonstrated that our simulation
reproduces the same fluorescent signals as the experimental data by simulating data
acquired from a Chroma slide (Methods) and comparing their intensity distributions. A Chroma
slide should have nearly uniform signal and a single spectrum; this provides a reference on which
a baseline comparison can be established for our simulation framework. First, the signal
distribution of the experimentally acquired image is visually similar to that of the computationally
generated synthetic image (Figure 5.3 A, B top rows). This similarity is further corroborated by
the intensity histograms of each image, with mean intensities of 2401±672 and 2226±445 for
channel 7 and 4824±726 and 5012±644 for channel 16 for the experimental and synthetic images,
respectively (Figure 5.3 A, B bottom rows). The average spectrum of a hyperspectral dataset can be
generated by taking the mean intensity over all of the pixels for each of the 32 channels in order
to determine the approximate ideal emission spectrum of the dataset, so long as the dataset
contains a single spectral signature (Sup Note 5.2). Taking the average spectra of both the
experimentally acquired and simulated data also validates the concurrence of the simulated
signals (Figure 5.3 C, D). One final measure that we used to confirm the high degree of similarity
between the experimental and synthetic images was to plot the ratio of the signal variance versus
the signal mean intensity (VMR) for each channel (Methods). We originally utilized this ratio
to calculate the photon-to-digital-level conversion factor. In comparing the VMRs (Figure 5.4), we
verified that the simulated images have VMR values close to those of the experimental
data, with values in the range of 460–510 across the channels. This replication of the conversion
rate from the synthetic data further corroborates the correctness of our simulation. Our
simulation provides a high degree of similarity both in the spatial distribution of signals
and in the spectral distributions with regard to intensity.
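The two summary statistics used here can be sketched as follows; the toy Poisson cube stands in for a real 32-channel acquisition, and the function names are our own, not part of any published package:

```python
import numpy as np

def average_spectrum(cube):
    """Mean intensity over all pixels for each channel of an
    (x, y, channel) hyperspectral cube."""
    return cube.mean(axis=(0, 1))

def variance_mean_ratio(cube):
    """Per-channel signal variance divided by signal mean (VMR), the
    quantity compared in Figure 5.4."""
    flat = cube.reshape(-1, cube.shape[-1])
    return flat.var(axis=0) / flat.mean(axis=0)

# Toy 4x4 cube with 3 channels standing in for a 32-channel dataset.
rng = np.random.default_rng(1)
cube = rng.poisson(lam=[100.0, 200.0, 400.0], size=(4, 4, 3)).astype(float)

spec = average_spectrum(cube)   # approximate ideal emission spectrum
vmr = variance_mean_ratio(cube)
assert spec.shape == (3,) and vmr.shape == (3,)
```

For a dataset with a single spectral signature, `spec` approximates the ideal emission profile, and `vmr` is the per-channel curve whose agreement between experimental and synthetic data is assessed above.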
Figure 5.3: Simulated spectra are highly similar to experimentally acquired spectra
Comparison between synthetic and experimental spectral images containing a single fluorescent
signature supports the fidelity of the simulation in recreating experimental emission spectra. (A)
Representative intensity images (top row) for channels 7 and 16 and their respective histograms (bottom
row) are shown for the experimental data of the autofluorescent slides from Chroma. (B) The
corresponding images (top row) and histograms (bottom row) are shown for the simulated data, matching
those shown in A. The average across all pixels for each channel provides the average spectra of the (C)
experimental and (D) simulated datasets, which are also very similar to each other.
Figure 5.4: High similarity of mean vs variance plots between experimental and simulated images
Comparison of mean/variance relations for each spectral channel between experimental and simulated
images provides another measure of the spectral fidelity of the simulation. Mean vs Variance plots are
provided for experimental (blue) and simulated data (orange) in analog mode at low photon emissions for
a constant fluorescent signal. Linearity in mean/variance is expected within the gain’s linear response
range. For these plots, experimental data was acquired sequentially at increasing laser power, while
simulated data was created by increasing the average statistically emitted fluorescent photons, effectively
simulating an “increasing laser power”. Results show a linear mean/variance relation for experimental
data and a similar linear relation for simulated data. Slope values are reported in each plot’s legend.
Channels 7, 11, 13, 17, 21, and 23 are shown here from the full 32 spectral channels as representative
examples.
5.4.3 Comparison of experimental and simulated biologically relevant images
Having verified our ability to replicate uniform, single-spectrum datasets, our next
step was to determine our ability to reconstruct a biologically relevant hyperspectral image. We
accomplished this by first generating a photo-realistic photon mask (Methods) from
experimental data acquired from a biological sample with a single spectrum and then generating a
simulated counterpart. A comparison between the individual channel images of the
experimental hyperspectral dataset and its synthetic counterpart demonstrates the realism of
the computationally generated hyperspectral image. A comparative look at the hyperspectral
datasets across multiple channels shows how closely the computationally generated images on
the bottom resemble the experimental ones on the top (Figure 5.5 A, B). The simulated data are
generated by pairing a photon mask with a representative spectral profile and using our
framework to computationally form a realistic hyperspectral dataset as a whole (Methods, Sup
Note 5.1). This is in contrast to taking each channel of the original experimental hyperspectral
dataset and attempting to recreate the intensity distributions within each channel individually.
While the spectral distribution of the synthetic data does not coincide with that of the
experimental data, due to the complex collection of autofluorescent signals within a biological
sample (Sup Note 5.2), we demonstrate through the use of line profiles (Figure 5.5 C) that the
spatial organization of photons is correctly conveyed for all of the channels through the use of
photon masks.
Figure 5.5: Simulations recapitulate spatial intensity distributions across all channels
Further evidence for the simulation’s ability to provide spectral images with high spectral and spatial
fidelity to experimental images is shown using intensity profiles. (A) 2D intensity images of
experimental data for channels 18 (566 nm), 25 (629 nm), and 30 (673 nm) provide a baseline comparison
for the simulations. (B) Corresponding images of the simulated data for channels 18 (566 nm), 25 (629
nm), and 30 (673 nm) show visually similar images for each channel. (C) Line profiles for experimental
(blue) and simulated (orange) data corresponding to the yellow lines in A and B provide a comparison for
the spatial distribution of intensity values and present a high degree of similarity for the distributions
across multiple channels.
5.5 Discussion
The process of acquiring source data for fluorescence microscopy can be highly difficult
or, depending on the experimental parameters, practically impossible. In developing this
simulation framework, we have created a method of rapidly generating synthetic data across a
large variety of experimental parameters. In the presented results, we have demonstrated the
ability to create a realistic hyperspectral image for a single spectral profile. This is not the extent
of the simulation's capabilities; it is possible to further extend the complexity of our simulated data
by combining any number of single-fluorophore simulations together. By combining multiple
photon masks with realistic biological patterns, each with a specific spectrum, the simulation can
create a simulated hyperspectral dataset which mimics the colocalization of multiple
fluorophores within a biological sample (Sup Fig 5.2). The complexities in determining the exact
spectral composition within a real biological sample (Sup Note 5.2) make it difficult to
definitively prove that our simulation computationally reconstructs the makeup of a multi-
fluorophore sample. However, we believe that our results in recapitulating a uniform, single-
spectrum, experimentally acquired dataset and the spatial distributions of a single-spectrum
biological sample through our simulation framework provide strong support for the viability of
our simulation.
We also believe that the simulation framework, with its ability to create complex simulations,
enables the objective evaluation of image processing algorithms for hyperspectral datasets.
As an example, we utilized the simulations to demonstrate the many improvements in unmixing
capability that the Hybrid Unmixing (HyU) algorithm (chapter 4) provides when compared to the
standard unmixing algorithm, Linear Unmixing (LU). We do this by applying either the HyU or LU
unmixing methods to the simulated hyperspectral dataset and calculating the error from each
method by using the ground truth for each simulation as a reference. In this way, the simulations
serve as an objective standard that enables quantitative analysis and comparative statistics. In
addition to increasing spectral complexity, the simulation framework makes it feasible to test
algorithms across a multitude of experimental conditions. By generating a multitude of
simulations with a range of parameters, we are able to thoroughly understand the particular
conditions under which HyU provides improvement in unmixing (Sup Fig 5.3). I would like to
stress this point, since it remains impractical to acquire hyperspectral datasets across the many
different experimental conditions needed to thoroughly investigate the limits of an image
processing algorithm for fluorescence microscopy. Referring again to the analysis of our HyU
algorithm, we were able to comprehensively determine the conditions under which HyU provided
improved unmixing results compared to LU by testing the algorithms on 1200 different
simulation sets covering a wide variety of biological and instrumental parameters, including the
number of fluorescent labels, number of detection channels, number of total photons per pixel,
and percentage of spatial overlap (Sup Fig 5.4). Acquiring 1200 datasets experimentally is
completely unfeasible; this is made even worse when taking into account the need to acquire a
matching ground truth dataset for each of the experimentally acquired datasets. Overall, there
are a large number of applicable situations for utilizing synthetic data with regard to scaling
spectral complexity and data size, reinforcing the value of the simulation framework in
empowering proper characterization of image processing algorithms and assisting algorithm
development.
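As a minimal sketch of this evaluation scheme, the function below scores a hypothetical unmixing result against the simulation's known ground-truth ratios (the arrays and values are illustrative, not actual HyU or LU outputs):

```python
import numpy as np

def unmixing_mse(estimated, truth):
    """Mean squared error between estimated per-pixel fluorophore ratios
    and the simulation's known ground-truth ratios."""
    estimated = np.asarray(estimated, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean((estimated - truth) ** 2))

# Two hypothetical unmixing results for a 2-pixel, 2-fluorophore toy case.
truth = np.array([[0.7, 0.3], [0.2, 0.8]])
method_a = np.array([[0.6, 0.4], [0.3, 0.7]])  # off by 0.1 everywhere
method_b = truth.copy()                        # perfect recovery
assert unmixing_mse(method_b, truth) == 0.0
assert unmixing_mse(method_a, truth) > unmixing_mse(method_b, truth)
```

Because the truth is known by construction, such scores can be compared across algorithms and across sweeps of simulation parameters.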
The simulation framework additionally opens the path to several other applications with its
ability to generate as much synthetic data as needed while selectively modifying
experimental parameters. One specific avenue mentioned in the introduction is the creation of
training data for machine learning algorithms. Since we have validated the high degree of
similarity between the computationally generated data and experimentally acquired data, we
can precisely control the specific experimental conditions of our simulations in order to generate
synthetic data suitable as input for a machine learning algorithm targeting a specific problem.
In addition, since our simulations are non-deterministic and we can generate vast amounts of
data, we are able to create the training data needed to jump-start the development of any
machine learning algorithm. Apart from machine learning applications, the simulation's inherent
control of multiple experimental parameters removes the dependency on the microscope for
procuring data. Acquisition of data through laser scanning microscopes can be a strenuous task,
with the need for proper training, lengthy imaging sessions, high costs, and the possibility of
hardware failure. This difficulty may be further exacerbated by the many constraints placed on
the researcher when dealing with biological samples. Overall, the simulation framework provides
a way to accelerate the development of machine learning algorithms and avoid the often heavy
burden of handling biological samples in conjunction with laser scanning microscopes.
The simulation framework is a powerful tool for image processing development; still, there
are several limitations that must be considered and overcome. One major limiting factor,
touched upon previously, is the complexity of determining the true spectral composition of a
live biological sample. As discussed in Supplementary Note 5.2, due to the partially random
nature of the instrumental noise and autofluorescent background signals that appear in live
samples, we have not yet been able to create photon masks which account for both the
dominant fluorescent signals and the secondary autofluorescent signals. Further analysis will
most likely be needed to provide true recreations. However, we believe that our results already
provide a great deal of support for establishing the consistency of our simulations. Another
limiting factor of our simulations is the construction of the photon masks which serve as our
source images and ground truth. We have demonstrated the creation of source images from
experimentally acquired data, but this still requires performing experiments, which may create
a bottleneck. We believe a method to computationally create biologically relevant, spatially
organized photon masks will push the simulation framework to greater capabilities and truly
create an all-encompassing, computationally driven fluorescence microscopy data simulation.
Overall, the simulation provides a strong foundation for a testbed for the development of image
processing algorithms.
5.6 Methods
5.6.1 Fluorescence spectrum and hyperspectral dataset simulation
Introduction to the key steps for creating a simulation
To generate a simulation with realistic biological patterns, we replicated the biophysics of how
fluorophores are excited in a microscope through three steps. First, we generated a realistic
photon-distribution image and ground truth ratios from multiple single-label experimental
datasets. Next, we simulated the signal in each pixel based on the type and number of
fluorophores assigned in step one. Lastly, we simulated the process by which signals are collected
in a microscope, from photons to digital levels, with the corresponding noise added along each
path. Overall, the simulation can be broken down into three aspects: the simulation of
fluorescence emission, the simulation of spectra as acquired by the microscope, and an image
with a realistic photon distribution.
Excitation and emission of a fluorophore
Creating an ideal simulation for spectral emission datasets requires proper modeling of
how fluorophore signals are generated and collected within a microscope. A fluorophore is
excited by a laser at a specific wavelength and emits a photon in relation to the energy imparted
onto the fluorophore. This photon emission is a stochastic process, with the many photons
emitted from the same excitation energy having differing levels of energy and therefore different
emission wavelengths. This emission profile is called the emission spectrum of the fluorophore,
and it is a probability distribution function (PDF) where photons are distributed across different
energy levels, corresponding to different wavelengths (Figure 5.1 B1).6,7 The ideal spectrum of a
fluorophore corresponds to the case where an infinite number of photons are emitted. However,
energy input is limited during live imaging, where laser power must be kept low in order to
protect samples from photo-toxicity and prevent photo-bleaching of fluorophores (chapter 1).
This low input energy translates to a limited number of emitted photons. Since the spectral
emission profile is a probability distribution arising from stochastic emission, a low number of
photons causes a large deviation from the ideal emission profile of the fluorophore. As a result,
the measured spectrum of the fluorophore at any one point (experiment, sample) will be a
distribution drawn from the PDF, limited by the finite number of photons emitted for the
specified laser power.
Simulating fluorescence emission
In a microscope, photons emitted from a sample travel through the objective and the
dichroic mirror before finally reaching the detectors (Figure 5.1 A). Each channel or detector
collects a certain number of photons within a given timeframe, known as the pixel dwell
time. Finally, the counted photons are converted to digital levels as the intensity of the image.
Hence, the simulation begins with an input value representing the total number of photons (TP)
emitted by a virtual fluorophore. The TP is fed into a random number generator constrained by
the corresponding PDF of the virtual fluorophore. The random number generator produces a
distribution of numbers ranging from one to the total number of detector channels (NDC). This
distribution is then organized into a 1D histogram with the number of photon counts recorded
for each detector channel (Figure 5.1 B2 left). This process simulates the stochastic emission of
a limited number of photons for a fluorophore, so we call the result the stochastic spectrum (SS),
an array of integer values of size NDC (Figure 5.1 B2 right).
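A minimal sketch of this draw-then-histogram procedure, assuming the emission PDF is available as a normalized per-channel probability vector (the 5-channel profile and photon count below are toy values):

```python
import numpy as np

def stochastic_spectrum(total_photons, pdf, rng):
    """Draw a detector-channel index for each of the TP photons according
    to the fluorophore's emission PDF, then histogram the draws into NDC
    channels, producing the stochastic spectrum (SS)."""
    ndc = len(pdf)
    channels = rng.choice(ndc, size=total_photons, p=pdf)  # one draw per photon
    return np.bincount(channels, minlength=ndc)

rng = np.random.default_rng(42)
pdf = np.array([0.05, 0.20, 0.40, 0.25, 0.10])  # toy 5-channel emission PDF
ss = stochastic_spectrum(60, pdf, rng)
assert ss.sum() == 60 and ss.shape == (5,)      # every photon binned once
```

The returned SS is the integer-valued array of size NDC that the subsequent detection steps operate on.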
Signal modification and addition of instrumental noise
Unlike the emission profile of a fluorophore collected through spectroscopy, the factors
mentioned above for the microscope affect the acquired signal. In other words, any output
collected through a microscope suffers from various amounts of noise and additional signal which
obscure the original signal input. This obscuration occurs at several steps. First, the final signal
output collected in the microscope is converted from photon counts to digital levels in order to
be recorded; this conversion is affected by several degrading influences. Various forms of noise,
such as Poisson noise from the dark current, sensor noise, and salt and pepper noise, accompany
the conversion of photon detections to digital levels when photons are collected at the
microscope detector.30–33 Hence, the aforementioned SS is converted from photon values to
digital levels in the detectors. This process is first accompanied by the addition of Poisson noise
as described in the following section, which occurs when the incidence rate of the photons is
being counted by the detector. This signal is then converted from photon values to digital levels
by multiplying by the conversion rate. Since the conversion from photons to digital levels is also
accompanied by noise, Gaussian noise with zero mean and standard deviation equal to the square
root of the digital level is then added to the converted spectrum, resulting in a digital spectrum
(DS) (Figure 5.1 B4). Next, a dichroic mirror affects the recorded signal by blocking the excitation
laser light in the microscope through the reflection of photons in the specific wavelength range
of the excitation (Figure 5.1 A3).6,7 Hence, a portion of the emission profile is also removed. This
application of the optical filter is simulated by multiplying the DS with a float value for each
channel corresponding to the transmission rate of photons through the optical filter, resulting in
an optically reduced digital spectrum (ODS) (Figure 5.1 B3). Note that this operation takes place
at this point because the filter multiplication acts differently on integer photon counts than on
digital levels. Finally, Gaussian noise with a per-channel mean and standard deviation taken from
calibration measurements of the specific confocal microscope is added to the ODS.
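The detection chain above can be sketched roughly as follows; all numeric parameters are illustrative placeholders, not calibrated values for any instrument:

```python
import numpy as np

def detect_spectrum(ss, conversion_rate, transmission, cal_mean, cal_std, rng):
    """Sketch of the detection chain applied to a stochastic spectrum SS
    (integer photon counts per channel)."""
    # Photon-counting noise: resample each channel's count as Poisson.
    counted = rng.poisson(ss)
    # Photon -> digital-level conversion, then conversion noise with
    # zero mean and standard deviation sqrt(digital level).
    digital = counted * conversion_rate
    digital = digital + rng.normal(0.0, np.sqrt(np.maximum(digital, 0.0)))
    # Optical filter: per-channel transmission factor, zero in the
    # excitation band (yields the optically reduced digital spectrum).
    ods = digital * transmission
    # Calibration noise: per-channel Gaussian offset from instrument
    # calibration measurements.
    return ods + rng.normal(cal_mean, cal_std)

rng = np.random.default_rng(7)
ss = np.array([0, 3, 9, 12, 6, 2])                       # toy 6-channel SS
transmission = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # block channel 0
out = detect_spectrum(ss, conversion_rate=480.0, transmission=transmission,
                      cal_mean=5.0, cal_std=np.full(6, 2.0), rng=rng)
assert out.shape == (6,)
```

The output plays the role of the ODS plus calibration noise; real use would substitute the measured conversion rate, filter transmission curve, and per-channel calibration statistics.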
Generating a biologically relevant photon mask for fluorescence images
To generate biologically relevant fluorescent images, input images containing properly
spatially organized photon values are necessary as a template for each simulated fluorescent
biological marker. The value of each pixel in these input images is the total number of
photons (TP) as described in the previous section. Experimental hyperspectral datasets are used
to generate these source images, which we call photon masks (Sup Note 5.1). To construct
photon masks with greater signal relative to background, experimental datasets can be
zoomed in and cropped around distinct patterns. Starting with a single hyperspectral cube image
(x, y, lambda), intensity values are summed across the lambda dimension to create a realistic
distribution of intensities across x and y, which we refer to as the intensity mask. This intensity
mask is then thresholded using a manually determined threshold value before being max-scaled
to a predetermined photon value. This output is the photon mask, and an individual photon mask
is generated for each fluorophore that will be simulated. With these photon masks, we have the
exact mapping of photon value to pixel for each simulated signal. If we are simulating a
combination of multiple fluorophores within the same synthetic sample, we can easily
calculate the ratio for each independent fluorophore component by taking the ratio of the
number of photons for each label to the total number of photons from all labels.
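A compact sketch of the photon-mask construction, with a toy cube in place of a real hyperspectral acquisition:

```python
import numpy as np

def photon_mask(cube, threshold, max_photons):
    """Build a photon mask from an (x, y, lambda) hyperspectral cube:
    sum over lambda, zero out sub-threshold pixels, then max-scale to a
    predetermined peak photon value and round to integer counts."""
    intensity = cube.sum(axis=-1).astype(float)   # intensity mask
    intensity[intensity < threshold] = 0.0        # manual threshold
    peak = intensity.max()
    if peak == 0:
        return np.zeros(intensity.shape, dtype=int)
    return np.rint(intensity / peak * max_photons).astype(int)

# Toy 3x3x4 cube: one bright region, some dim background.
cube = np.zeros((3, 3, 4))
cube[0, 0] = [50, 60, 70, 20]   # bright pixel, lambda-sum 200
cube[2, 2] = [5, 5, 5, 5]       # dim pixel, lambda-sum 20
mask = photon_mask(cube, threshold=50, max_photons=40)
assert mask[0, 0] == 40         # brightest pixel maps to max_photons
assert mask[2, 2] == 0          # background removed by threshold
```

For multi-fluorophore simulations, one such mask is built per label, and per-pixel ground-truth ratios follow directly from each label's photon count divided by the sum over all labels.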
Creating a photo-realistic synthetic image
With the photon masks and spectral simulation, it is possible to fully generate the
necessary biologically relevant fluorescent simulation images. Each pixel of the photon mask
undergoes the spectral simulation process described above. However, since the inclusion of
background biological signals is necessary, once the stochastic spectrum is generated, a random
combination of background biological signals is first generated and added to the stochastic
spectrum for each pixel. Then, we consider the effect of the point spread function (PSF):
background noise from the sample and photons intruding from other z slices also affect the final
image, so a two-dimensional Gaussian kernel is applied to each of the 32 channel images,
emulating the interaction of photons across neighboring locations (pixels)
(Supplementary Figure 5.5). Finally, salt and pepper noise occurs randomly on the image in the
photon-to-digital conversion process due to inherent fluctuations in the detector. We simulated
this noise with a constant SNR (0.92). The rest of the spectral simulation then proceeds as before,
creating the optically reduced digital spectrum for each of the pixels in the photo-realistic
simulated hyperspectral dataset.
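The PSF blurring and salt-and-pepper steps can be sketched as below; the 3x3 kernel is a simple stand-in for a measured PSF, and corrupting a (1 - SNR) fraction of pixels is our illustrative reading of the constant-SNR setting:

```python
import numpy as np

# 3x3 normalized Gaussian kernel as a simple stand-in for the system PSF.
PSF = np.array([[1., 2., 1.],
                [2., 4., 2.],
                [1., 2., 1.]]) / 16.0

def blur2d(img, kernel):
    """Convolve one channel image with a small symmetric kernel."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0],
                                         j:j + img.shape[1]]
    return out

def psf_and_salt_pepper(cube, snr, rng):
    """Blur each channel with the PSF kernel, then corrupt a (1 - snr)
    fraction of pixels with salt (max) or pepper (zero) values."""
    out = np.stack([blur2d(cube[..., c], PSF)
                    for c in range(cube.shape[-1])], axis=-1)
    n_pix = cube.shape[0] * cube.shape[1]
    n_bad = int(round((1.0 - snr) * n_pix))
    idx = rng.choice(n_pix, size=n_bad, replace=False)
    xs, ys = np.unravel_index(idx, cube.shape[:2])
    salt = rng.random(n_bad) < 0.5
    out[xs[salt], ys[salt], :] = out.max()
    out[xs[~salt], ys[~salt], :] = 0.0
    return out

rng = np.random.default_rng(3)
cube = rng.poisson(50, size=(16, 16, 4)).astype(float)
noisy = psf_and_salt_pepper(cube, snr=0.92, rng=rng)
assert noisy.shape == cube.shape
```

In the real pipeline the blur is applied per channel after the stochastic emission step, as described in Supplementary Figure 5.5.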
5.6.2 Conversion rate measurement
One major challenge when simulating digital spectral images is determining the conversion rate
that is applied when photons are counted and output as digital levels. The conversion rate is
a property inherent to the detector of any microscope and is key to matching the correct digital
levels when simulating the output of the user's specific setup. The method we used to measure
the conversion rate (s_factor) was to determine the ratio of the signal variance versus the signal
mean intensity of images collected by the microscope, as described in the work by Dalal et al.176
This method requires the collection of uniform, unalterable fluorescent spectral signals from
several different Chroma slides covering multiple emission ranges at a specific gain and different
laser powers. By varying the laser power, multiple points can be plotted for the variance versus
average digital levels, leading to a value of the s_factor for each channel of the detector. Finally,
each s_factor was multiplied by the quantum efficiency of each channel to determine an
approximately constant conversion rate. For this work, we determined the conversion rate for a
commercial LSM 880 microscope.176
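The slope fit can be sketched with synthetic data where the true conversion factor is known in advance (the values below are illustrative, not LSM 880 calibration numbers):

```python
import numpy as np

def s_factor(mean_levels, variances):
    """Slope of per-channel variance vs mean digital level across laser
    powers, fit by least squares; the slope estimates the conversion rate."""
    slope, _intercept = np.polyfit(mean_levels, variances, 1)
    return slope

# If digital level = s * photons and photon counts are Poisson(lambda),
# then variance = s^2 * lambda = s * mean, so the fitted slope recovers s.
s_true = 480.0
photons = np.array([5.0, 10.0, 20.0, 40.0])  # "increasing laser power"
means = s_true * photons
variances = s_true * means
assert abs(s_factor(means, variances) - s_true) < 1e-3
```

With real acquisitions, each channel's (mean, variance) pairs across laser powers would be fit the same way, then scaled by the channel's quantum efficiency.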
5.6.3 Reference spectra collection
Reference hyperspectral images were acquired from autofluorescent plastic slides (Chroma
Technology Corp, #92001), also referred to as Chroma slides. These slides are typically used to
verify that the excitation light is illuminating the sample in a consistent fashion. Therefore,
the amount of fluorescence and the true emission profile of each slide should be uniform and
self-consistent. Each slide has a different emission spectrum. Images were acquired on a
Zeiss LSM 880 laser scanning confocal microscope equipped with a 32-channel detector.
5.6.4 Simulation framework accessibility
The Python source code for the simulation may be found at
https://github.com/TranslationalImagingCenter/fluoroSim-HyU.
5.7 Supplementary Material
5.7.1 Supplementary Figures
Supplementary Figure 5.1. Stochastic emission of fluorophore results in randomized outputs
Examples of multiple randomized emission signals (right) are shown for the acquisition of the emission of
a particular fluorophore with a specific emission profile (left). This random output is due to the
stochasticity of photon emission. Since this process is nondeterministic, simulation of hyperspectral
signals should result in nearly infinite variations, enabling the creation of many different synthetic
datasets.
Supplementary Figure 5.2. Example simulation of multiple fluorophores in a single acquisition
(A) Four photon masks are paired with (B) four, 32-channel spectral emission profiles (Citrine, mKO2,
tdTomato, mRuby) and used as input for the simulation framework in order to create a (C) 32-channel 2D
synthetic hyperspectral image. This simulation integrates the spatial and spectral photon distributions of
all four fluorophores into the synthetic dataset. Channels 14, 18, 20, and 24 in C correspond to the
channels of maximum probability for the spectra of Citrine, mKO2, tdTomato, and mRuby,
respectively. The strongest signal patterns within the images for those channels correspond to the photon
masks shown in A for each respective spectrum.
Supplementary Figure 5.3. Simulations enable objective comparisons using the MSE
Simulations enable evaluation of Hybrid Unmixing (HyU) performance under several algorithmic
parameters and experimental conditions. (A) Absolute MSE from Linear Unmixing (LU) and HyU
algorithms for the same synthetic dataset with and without beam splitters. The addition of optical filters
causes the MSE of LU to increase on average by 8%, compared to an average increase of 5% for HyU. N =
1.05e6 pixels. (B) Relative MSE between HyU and LU was calculated as a function of max input
photons/spectrum over 5 denoising filters for HyU. The improvement increases both with the number of
photons and the number of denoising filters, showing significant differences above 7 photons/spectrum,
with a peak at 124%. Shaded regions denote the 95% confidence interval around the mean.
Supplementary Figure 5.4. Heatmap of HyU improvement across a multitude of parameters
Fifteen matrices demonstrate the RMSE improvement of Hybrid Unmixing (HyU) with respect to Linear
Unmixing (LU) when unmixing a collection of synthetic data with 2 to 8 extrinsic labels as a function of the
spatial overlap of these labels in a sample. In the matrix, 0% overlap denotes simulations with spatially
distinct fluorophores, while simulations with 100% overlap have a randomized ratio of the n
fluorophores in every pixel. The values in a matrix are the average of a 1024x1024x32-pixel simulation and show the
RMSE improvement of HyU to LU with 3x denoising filters across an increasingly binned number of
channels (32, 16, 8, 4), applied with a total number of photons per pixel of (A) 16, (B) 32, or (C) 48. These
fifteen matrices each with 80 elements represent 1200 synthetic datasets across a multitude of controlled
computational parameters and demonstrate the utility of the simulations in objectively describing the
effects of an image processing algorithm.
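As a hedged illustration of the comparison metric behind these heatmaps, the RMSE improvement of one unmixing result over another can be computed against the simulation's ground truth. The exact normalization used in the published figures may differ; here the difference is expressed relative to HyU's error, and all arrays are synthetic stand-ins.

```python
import numpy as np

def rmse(estimate, truth):
    """Root-mean-square error between an unmixed ratio map and the ground truth."""
    return np.sqrt(np.mean((estimate - truth) ** 2))

def relative_improvement(est_hyu, est_lu, truth):
    """Percent RMSE improvement of HyU over LU, normalized to HyU's error
    (an assumption; the figures may normalize differently)."""
    return 100.0 * (rmse(est_lu, truth) - rmse(est_hyu, truth)) / rmse(est_hyu, truth)

# Synthetic stand-ins: LU with twice the noise of HyU around a known ratio map.
truth = np.full((64, 64), 0.5)
est_lu = truth + np.random.default_rng(0).normal(0, 0.10, truth.shape)
est_hyu = truth + np.random.default_rng(1).normal(0, 0.05, truth.shape)
improvement = relative_improvement(est_hyu, est_lu, truth)
```

Because the ground truth is known exactly from the photon masks, this comparison is objective: no manual annotation or reference algorithm is needed.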
Supplementary Figure 5.5. Schematic for integrating the effects of the PSF to the simulation
The integration of the effects of the point spread function (PSF) on simulated images is demonstrated.
The effects of the PSF present as an overlapping of nearby signal within a localized area. Therefore, the
simulation introduces a blurring effect on the synthetic data by convolving a 2D Gaussian kernel
with each individual channel of the simulated datasets after the stochastic emission step.
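A minimal sketch of this PSF step, assuming a (channels, y, x) array layout and a Gaussian width chosen purely for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_psf(hyperspectral, sigma=1.5):
    """Blur each wavelength channel independently with a 2D Gaussian,
    approximating the lateral PSF. Expected array shape: (channels, y, x)."""
    blurred = np.empty(hyperspectral.shape, dtype=float)
    for c in range(hyperspectral.shape[0]):
        blurred[c] = gaussian_filter(hyperspectral[c].astype(float), sigma=sigma)
    return blurred

# A single bright pixel spreads over its neighborhood after the PSF step.
img = np.zeros((2, 9, 9))
img[:, 4, 4] = 100.0
out = apply_psf(img, sigma=1.5)
```

Applying the blur after the stochastic emission step, as described above, means the photon noise itself is spatially correlated in the output, matching what a detector sees after the optics.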
5.7.2 Supplementary Notes
Supplementary Note 5.1: Photon masks
Simulating hyperspectral images requires two components: source data for the spatial
distribution of photons and a PDF describing the spectral distribution. For this chapter, we define
a fluorescence hyperspectral signal as a 1D array which contains the values for the intensity
distribution of a given fluorescence source at a single point across the range of detected
wavelengths. A hyperspectral image is then an N-dimensional (ND, N>2) array which contains a
collection of 1D hyperspectral signals organized with a particular spatial or spatiotemporal
distribution. When generating individual hyperspectral signals, only the given photon values and the
corresponding spectrum's PDF are necessary. For creating hyperspectral images, however,
source data containing all of the photon values that describe the spatiotemporal distributions
of the signal within a recorded scene is required.
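The generation of a single noisy hyperspectral signal from these two components can be sketched in a few lines. The 8-channel PDF values below are invented for illustration, and the actual simulation operates on full photon masks rather than single signals:

```python
import numpy as np

def simulate_signal(photon_count, spectral_pdf, rng):
    """Draw one noisy hyperspectral signal: photon_count photons are
    stochastically assigned to wavelength channels following spectral_pdf."""
    spectral_pdf = np.asarray(spectral_pdf, dtype=float)
    spectral_pdf = spectral_pdf / spectral_pdf.sum()  # ensure a valid PDF
    return rng.multinomial(photon_count, spectral_pdf)

# Hypothetical 8-channel emission spectrum peaking mid-range.
pdf = np.array([0.01, 0.05, 0.15, 0.30, 0.25, 0.15, 0.07, 0.02])
rng = np.random.default_rng(42)
signal = simulate_signal(50, pdf, rng)  # one pixel's detected spectrum
```

A multinomial draw captures the key property of photon detection: each photon lands in exactly one channel, so low photon counts yield visibly noisy spectra while high counts converge to the PDF.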
Here, we describe the source data needed for generating the simulated hyperspectral images
and all of its uses. Source data, which we also refer to as source images, reference images, or
photon masks, are N-dimensional (ND) arrays of integer type. The value of each element (voxel)
within a photon mask should correspond to the true, expected number of photons detected
(photon count) at that specific location (x,y,z) in time (t). One photon mask is needed for each
fluorescent signature (spectrum) that will be included in the simulated hyperspectral dataset.
This means that for a given fluorescent signature, the photon mask is paired with its spectrum’s
PDF. For example, the simulation of a hyperspectral image with four fluorescent signatures
would require four photon masks and a PDF paired to each photon mask. For the specific case
of unmixing algorithms, since the photon masks provide the true photon values for each pixel for
each fluorescent signature, the ratio of each signature can be calculated by simply taking the
ratio of each photon mask to the sum across all photon masks.
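This ground-truth ratio calculation can be sketched as follows, using two hypothetical 2x2 photon masks; pixels where no photons are expected are assigned a ratio of 0:

```python
import numpy as np

def ground_truth_ratios(photon_masks):
    """Stack of photon masks (n_fluorophores, y, x) -> per-pixel ratios.
    Pixels with zero total photons are left at 0 to avoid division by zero."""
    masks = np.asarray(photon_masks, dtype=float)
    total = masks.sum(axis=0)
    return np.divide(masks, total, out=np.zeros_like(masks), where=total > 0)

mask_a = np.array([[3, 0],
                   [1, 2]])
mask_b = np.array([[1, 0],
                   [3, 2]])
ratios = ground_truth_ratios([mask_a, mask_b])
```

At pixel (0, 0), for example, the first fluorophore's ratio is 3 / (3 + 1) = 0.75; an unmixing algorithm's output can then be scored pixel by pixel against these known ratios.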
Since a photon mask is only a map denoting the distribution of photon counts within an
array, it can be constructed either artificially or in a biologically relevant way, with
distributions ranging from simple to complex. A simple artificial photon mask can be constructed by
designating a standard shape such as a circle or square and filling the shape with a constant value
for the input photon counts. Increasingly complex photon masks can be created by utilizing
customized shapes and varying the distribution of the photon values. For biologically relevant
photon masks, experimentally acquired hyperspectral images can be used as templates. Creating
a biologically relevant photon mask requires acquisition from a biological sample with a single
fluorophore. The experimentally acquired data will have some combination of spatiotemporal
dimensions (x,y,z,t) and the channel/lambda dimension (c). We generate the biologically
relevant photon mask by adding the intensities of the experimental data across the lambda
dimension and then dividing by the s_factor (Methods). Because we use the experimentally
acquired data to create the biologically relevant photon masks, the photon distributions closely
match what would be expected in real-world settings, distributions that remain difficult to create from
scratch algorithmically.
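As a sketch, the two kinds of photon masks described above might be built like this. The array shapes, the lambda-last axis convention, and the s_factor value are illustrative assumptions; the actual s_factor is defined in the Methods:

```python
import numpy as np

def circular_photon_mask(shape, center, radius, photons):
    """Simple artificial photon mask: a disk filled with a constant photon count."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    disk = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    mask = np.zeros(shape, dtype=int)
    mask[disk] = photons
    return mask

def photon_mask_from_data(hyperspectral, s_factor):
    """Biologically relevant mask: sum intensities over the lambda axis
    (assumed here to be last) and divide by s_factor to convert to photons."""
    return (hyperspectral.sum(axis=-1) / s_factor).astype(int)

mask = circular_photon_mask((32, 32), center=(16, 16), radius=5, photons=20)
```

Either mask, paired with its spectrum's PDF, is enough to drive the stochastic emission step for one fluorescent signature.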
Supplementary Note 5.2: Matching the spectral properties of experimental datasets
Generating simulations which exactly match the spectral signatures within hyperspectral
datasets of complex biological samples such as a live model system is a highly challenging task.
There are two main challenges that affect exact matching of the spectral signatures. The first
is the existence of background biological signals in complex samples such as a live
zebrafish embryo. These background signals have intensities that are not high enough for their
spatial intensity distributions to be easily identified, yet they still affect the spectral
properties of the acquired hyperspectral image. The second is the proper tuning of the
amount of salt-and-pepper noise for the microscope acquisition session of the hyperspectral
dataset being matched. We have noticed that some experimental datasets contain
a high amount of salt-and-pepper noise distributed across all channels of the
hyperspectral detector. Once the computational parameters and factors that allow us to
better integrate these two aspects are determined, one should be able to create an improved
simulation and better generate matching synthetic datasets.
We are currently evaluating a method to better dissect and recreate the spectral signatures
and noise sources within an experimental dataset. Our results already demonstrate the capacity to
create highly realistic hyperspectral data simulations, but we acknowledge that delivering a
simulated hyperspectral image which provides a near exact match for the spectral properties of
an experimental dataset would further cement the reliability of our simulation framework. For
the background signals, the method that we have envisioned is to use our Hybrid Unmixing
method (HyU) to unmix the autofluorescent (background) signals within a hyperspectral dataset
with a single fluorophore. This should provide us with an initial map of the spatial distributions
of the background signatures. We can then integrate these distributions as additional photon
masks, generate synthetic datasets, and check if the resulting datasets have matching spectral
signatures with the experimental datasets. If the datasets do not match exactly, we can
iteratively modify the background signatures until we have a matching simulated dataset. For
the instrumental noise, we believe that we will need to use a brute-force approach, adjusting
the salt-and-pepper noise distributed across all of the wavelength channels until the simulated
hyperspectral dataset demonstrates matching spectral signatures. Once we have implemented
these additional methods, we should be able to generate a simulation which is not only realistic,
but also nearly mimics any experimental dataset.
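A brute-force sketch of the salt-and-pepper adjustment described above; the corruption fraction and salt value are the free parameters one would tune until the simulated spectra match the experimental ones:

```python
import numpy as np

def add_salt_and_pepper(hyperspectral, fraction, salt_value, rng):
    """Set a random fraction of voxels across all wavelength channels to a
    high ('salt') or zero ('pepper') value, mimicking detector noise."""
    noisy = hyperspectral.astype(float).copy()
    idx = rng.choice(noisy.size, size=int(fraction * noisy.size), replace=False)
    flat = noisy.reshape(-1)            # view into noisy
    half = len(idx) // 2
    flat[idx[:half]] = salt_value       # salt
    flat[idx[half:]] = 0.0              # pepper
    return noisy

rng = np.random.default_rng(0)
clean = np.full((8, 16, 16), 10.0)      # 8 channels of constant signal
noisy = add_salt_and_pepper(clean, fraction=0.05, salt_value=200.0, rng=rng)
```

In the envisioned workflow, `fraction` and `salt_value` would be swept until the noisy simulated dataset's spectral signatures match those of the experimental acquisition being mimicked.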
5.8 Chapter Conclusion
In conclusion, we have developed a simulation framework capable of accurately
reproducing fluorescence microscopy hyperspectral images. This simulation framework provides
a way to generate highly realistic noisy fluorescence images along with a ground truth that can
be used for objective comparison between image processing methods. Beyond enabling the
evaluation of image processing algorithms targeted specifically at hyperspectral data, the
generated data can be applied to standard fluorescence image processing methods as well: by
either summing or averaging the channels of the synthetic hyperspectral data corresponding to
optical filters, we can also generate highly realistic standard fluorescence data. Finally, as
mentioned at the beginning of this chapter, this simulation can greatly accelerate the
development of machine learning algorithms. As a whole, further development of this simulation
framework paves the way for true in silico fluorescence microscopy, where it may be possible to
develop image processing algorithms and workflows that are fully functional and highly
effective on experimental data by starting with only fully simulated data.
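The channel-summing step mentioned above can be sketched as follows; the band boundaries are hypothetical, and real filter bands would follow the emission filters being emulated:

```python
import numpy as np

def to_filter_bands(hyperspectral, bands):
    """Collapse a (channels, y, x) hyperspectral stack into standard
    fluorescence images by summing channel ranges that mimic optical filters.
    `bands` is a list of (start, stop) channel index pairs."""
    return np.stack([hyperspectral[start:stop].sum(axis=0)
                     for start, stop in bands])

# Hypothetical 32-channel stack split into two equal-width filter bands.
stack = np.random.default_rng(0).poisson(5, size=(32, 8, 8))
bands = [(0, 16), (16, 32)]
standard = to_filter_bands(stack, bands)
```

Because the binning happens after the full spectral simulation, the resulting standard-fluorescence images inherit the same realistic photon statistics and retain the ground truth of the parent hyperspectral dataset.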
Chapter 6
Thesis Conclusion and Past and Future Perspectives
Each project introduced in this dissertation has been a step in the process of constructing a
toolset for better extraction and analysis of complex multidimensional biological imaging data.
In chapter 2, we developed custom software to enable the virtual dissection of a complex non-
orthogonal structure within a four-dimensional dataset. This software enabled us to properly
visualize and quantitatively analyze dynamic expression patterns of actin within live zebrafish
embryos during early heart morphogenesis, a feat that had never previously been attempted.
The custom software can also be extended beyond the lateral plate mesoderm (LPM), providing a
foundation for extracting signal from any dynamic three-dimensional image with multiple
layers. Furthermore, the many LPM datasets that have already been segmented provide the
groundwork for developing a machine learning algorithm for automatically segmenting the LPM,
which can be built into the software to provide a feedback loop with the manual corrections and
constantly improve segmentation efforts. In chapters 3 and 4, we demonstrated the
capacity to visualize in near real-time and rapidly extract mixed signals using SEER and HyU.
These phasor-based methods have begun to be incorporated into a variety of medical and
industrial applications, with further improvements providing higher scalability and utility in
handling multiplexed data. Finally, in chapter 5, I introduced our simulation framework, which
enables the objective evaluation of image processing algorithms for fluorescence microscopy
data and paves the way for machine-learning-guided signal denoising, extraction, and
segmentation.
Overall, with our two complementary workflows of dynamic, structurally guided
segmentation to untangle data that is not separable on the fluorophore scale and scalable
multiplexing for visualizing and computationally unmixing hyperspectral datasets, we are able
to selectively extract quantifiable information from live biological image data. Furthermore, the
simulation framework supports both of these methods by providing objective benchmarks as well
as the capacity to generate more than enough synthetic data for use in deep learning algorithms.
Although no single approach can provide analysis for the wide array of biological
environments, all of these projects support and strengthen each other to provide a
versatile and scalable platform with which we can dive deeper into the highly complex
processes of biological systems. In conclusion, this dissertation provides multiple methods and
approaches to extend our ability to extract and analyze complex, information-dense,
multidimensional fluorescence imaging data.
As a final note, I would like to impart some of my thoughts about the work I have done during
my PhD and perhaps offer advice and insight to anyone who also wishes to pursue this difficult
but rewarding journey. One major change I would have made, if I were to begin my
PhD again knowing about the twists and turns that I would encounter, is to spend far more
time searching through literature outside of the specific research topic that I was working
on. In other words, I believe that I should have spent much more time studying and familiarizing
myself with works that were not directly in my field. While it is patently obvious that one should
read as much literature as possible on the topics from which one's research will be derived, I have
often discovered that there are methods and points of view from other fields that have been
developed and refined, yet are often not considered or applied when looking at a different
field. As students pursuing a PhD, we become highly specialized in one particular subject or
research topic, which can cause us to become too narrowly focused. As a simple example,
during the SEER project, we were highly focused on the spectral properties of fluorescence
microscopy images when determining image quality, and most of our comparisons were geared
towards that focus. However, since the output SEER images were color images and we were
demonstrating how the colors allowed better discrimination between regions of different
spectral properties, standard photography image measures such as contrast, sharpness, and
colorfulness were simpler ways of making an initial comparison. While this example involves
two fields which are very close, fluorescence microscopy versus standard photography image
comparisons, I believe this situation applies in a general sense across multiple fields and should
always be kept in mind when working on a research project. Another major piece of advice that
I would give is to always consider whether the task or set of tasks being worked on for a project
is truly necessary to create a minimum publishable unit. Too often, I found myself
constantly refactoring and reassessing code and methods to get what I felt would be a
considerable improvement to my results but, in hindsight, was not truly necessary. I believe
that as a PhD student, one very important skill to learn is deciding when a particular result
is "good enough" in order to move on to a conclusion. Where to draw that line can be difficult to
determine, but that is where PhD mentors and advisors come into play. That brings me to
my final piece of advice, which is to seek guidance from PhD mentors and advisors as often as possible,
perhaps even to the point of annoyance. I believe that I did not go to my advisors as often as I
could have. I sometimes felt that my questions were not important enough or were something that I
could and should solve by myself, but the purpose of a PhD advisor is to guide students and keep
them on track. The most important piece of advice that I can give is to contact one's
advisors and discuss all questions, thoughts, and concerns as much as possible in order to draw
on all of their considerable knowledge and experience. I believe a great mentor will welcome
such behavior and attempt to match such enthusiasm. While these pieces of advice are not all-
encompassing, I hope that these few nuggets will help any beginning PhD student to better reach
their goals and aspirations, as I have.
234
References
1. Fang, F. C. & Casadevall, A. Reductionistic and holistic science. Infect. Immun. 79, 1401–
1404 (2011).
2. Regenmortel, M. H. V. Van. Reductionism and complexity in molecular biology. EMBO
Rep. 5, (2004).
3. McMahon, A. P. & Moon, R. T. Ectopic expression of the proto-oncogene int-1 in
Xenopus embryos leads to duplication of the embryonic axis. Cell 58, (1989).
4. Brigandt, I. & Love, A. Reductionism in Biology (Standford Encyclopedia of Philosophy).
https://plato.stanford.edu/entries/reduction-biology/ (2017).
5. Duan, X. et al. Foxc1a regulates zebrafish vascular integrity and brain vascular
development through targeting amotl2a and ctnnb1. Microvasc. Res. 143, 104400 (2022).
6. Lakowicz, J. R. Principles of fluorescence spectroscopy. Principles of Fluorescence
Spectroscopy (2006). doi:10.1007/978-0-387-46312-4.
7. Herman, B. Fluorescence Microscopy. (Garland Science, 2020).
doi:10.1201/9781003077060.
8. Mavrakis, M., Pourquié, O. & Lecuit, T. Lighting up developmental mechanisms: How
fluorescence imaging heralded a new era. Development 137, 373–387 (2010).
9. Miyawaki, A. Proteins on the move: Insights gained from fluorescent protein
technologies. Nature Reviews Molecular Cell Biology vol. 12 (2011).
10. Abcam. Fluorochrome chart with the most popular labels.
https://www.abcam.com/secondary-antibodies/fluorochrome-chart-a-complete-guide.
11. Ghashghaei, O. et al. Multiple Multicomponent Reactions: Unexplored Substrates,
Selective Processes, and Versatile Chemotypes in Biomedicine. Chem. - A Eur. J. 24,
(2018).
12. Cremer, C. & Cremer, T. Considerations on a laser-scanning-microscope with high
resolution and depth of field. Microsc. Acta 81, (1978).
13. Denk, W., Strickler, J. H. & Webb, W. W. Two-photon laser scanning fluorescence
microscopy. Science (80-. ). 248, (1990).
14. Elliott, A. D. Confocal Microscopy: Principles and Modern Practices. Curr. Protoc. Cytom.
92, (2020).
15. Afromowitz, M. A., Callis, J. B., Heimbach, D. M., DeSoto, L. A. & Norton, M. K.
235
Multispectral Imaging of Burn Wounds: A New Clinical Instrument for Evaluating Burn
Depth. IEEE Trans. Biomed. Eng. 35, (1988).
16. Lansford, R., Bearman, G. & Fraser, S. E. Resolution of multiple green fluorescent protein
color variants and dyes using two-photon microscopy and imaging spectroscopy. J.
Biomed. Opt. 6, (2001).
17. Dickinson, M. E., Bearman, G., Tille, S., Lansford, R. & Fraser, S. E. Multi-spectral imaging
and linear unmixing add a whole new dimension to laser scanning fluorescence
microscopy. Biotechniques 31, 1272–1278 (2001).
18. Kino, G. S. & Corle, T. R. Confocal Scanning Optical Microscopy. Phys. Today 42, 55–62
(1989).
19. Toomre, D. & Pawley, J. B. Disk-scanning confocal microscopy. in Handbook of Biological
Confocal Microscopy: Third Edition (2006). doi:10.1007/978-0-387-45524-2_10.
20. Weber, M. & Huisken, J. Light sheet microscopy for real-time developmental biology.
Current Opinion in Genetics and Development vol. 21 (2011).
21. Levoy, M., Ng, R., Adams, A., Footer, M. & Horowitz, M. Light field microscopy. in ACM
Transactions on Graphics vol. 25 (2006).
22. Kasko, A. M. Degradable Poly(ethylene glycol) Hydrogels for 2D and 3D Cell Culture.
https://www.sigmaaldrich.com/US/en/technical-documents/technical-article/materials-
science-and-engineering/tissue-engineering/degradable-polyethylene-glycol-hydrogels.
23. Sinclair, M. B., Haaland, D. M., Timlin, J. A. & Jones, H. D. T. Hyperspectral confocal
microscope. Appl. Opt. 45, (2006).
24. Corp, C. T. Spectra Viewer - mRFP1, mKO, tdTomato, Citrine.
25. Song, L., Hennink, E. J., Young, I. T. & Tanke, H. J. Photobleaching kinetics of fluorescein in
quantitative fluorescence microscopy. Biophys. J. 68, (1995).
26. Greenbaum, L., Rothmann, C., Lavie, R. & Malik, Z. Green fluorescent protein
photobleaching: A model for protein damage by endogenous and exogenous singlet
oxygen. Biol. Chem. 381, 1251–1258 (2000).
27. Laissue, P. P., Alghamdi, R. A., Tomancak, P., Reynaud, E. G. & Shroff, H. Assessing
phototoxicity in live fluorescence imaging. Nat. Methods 14, 657–661 (2017).
28. Benson, R. C., Meyer, R. A., Zaruba, M. E. & McKhann, G. M. Cellular autofluorescence. Is
it due to flavins? J. Histochem. Cytochem. 27, (1979).
29. Berberan-Santos, M. N., Nunes Pereira, E. J. & Martinho, J. M. G. Stochastic theory of
236
molecular radiative transport. J. Chem. Phys. 103, 3022–3028 (1995).
30. Bass, M. Handbook of Optics, vol 3. Geometric Optics, General Principles Spherical
Surfaces, 2nd ed., Optical Society of America, New York (1995).
31. Hamamatsu Photonics K.K. Editorial Committee. PHOTOMULTIPLIER TUBES Basics and
Applications. (Hamamatsu Photonics K.K. Electron Tube Division, 2017).
32. Pawley, J. B. Confocal and two-photon microscopy: Foundations, applications and
advances. Microsc. Res. Tech. 59, (2002).
33. Huang, F. et al. Video-rate nanoscopy using sCMOS camera-specific single-molecule
localization algorithms. Nat. Methods 10, (2013).
34. Gonzalez, R. C. & Woods, R. E. Digital Image Processing (3rd Edition). Prentice-Hall, Inc.
Upper Saddle River, NJ, USA ©2006 (2007).
35. Pertusa, J. F. & Morante-Redolat, J. M. Main steps in image processing and
quantification: The analysis workflow. in Methods in Molecular Biology vol. 2040 (2019).
36. Roeder, A. H. K., Cunha, A., Burl, M. C. & Meyerowitz, E. M. A computational image
analysis glossary for biologists. Dev. 139, 3071–3080 (2012).
37. Cutrale, F., Fraser, S. E. & Trinh, L. A. Imaging, Visualization, and Computation in
Developmental Biology. Annu. Rev. Biomed. Data Sci. 2, 223–251 (2019).
38. Sommer, C., Straehle, C., Kothe, U. & Hamprecht, F. A. Ilastik: Interactive learning and
segmentation toolkit. in Proceedings - International Symposium on Biomedical Imaging
(2011). doi:10.1109/ISBI.2011.5872394.
39. Khan, Z., Wang, Y. C., Wieschaus, E. F. & Kaschube, M. Quantitative 4D analyses of
epithelial folding during Drosophila gastrulation. Dev. 141, (2014).
40. Malpica, N. et al. Applying watershed algorithms to the segmentation of clustered nuclei.
Cytometry 28, (1997).
41. Lerner, B., Clocksin, W. F., Dhanjal, S., Hultén, M. A. & Bishop, C. M. Automatic signal
classification in fluorescence in situ hybridization images. Cytometry 43, (2001).
42. Turetken, E., Wang, X., Becker, C. J., Haubold, C. & Fua, P. Network Flow Integer
Programming to Track Elliptical Cells in Time-Lapse Sequences. IEEE Trans. Med. Imaging
36, (2017).
43. Wählby, C., Sintorn, I. M., Erlandsson, F., Borgefors, G. & Bengtsson, E. Combining
intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue
sections. J. Microsc. 215, (2004).
237
44. Schiegg, M. et al. Graphical model for joint segmentation and tracking of multiple
dividing cells. Bioinformatics 31, (2015).
45. Meijering, E. & Cappellen, G. van. Biological Image Analysis Primer. Rotterdam, the
Netherlands 1–37 (2006).
46. Chen, X., Zhou, X. & Wong, S. T. C. Automated segmentation, classification, and tracking
of cancer cell nuclei in time-lapse microscopy. IEEE Trans. Biomed. Eng. 53, (2006).
47. Dufour, A., Thibeaux, R., Labruyère, E., Guillén, N. & Olivo-Marin, J. C. 3-D active meshes:
Fast discrete deformable models for cell tracking in 3-D time-lapse microscopy. IEEE
Trans. Image Process. 20, (2011).
48. Mosaliganti, K. R., Noche, R. R., Xiong, F., Swinburne, I. A. & Megason, S. G. ACME:
Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell
Membranes. PLoS Comput. Biol. 8, (2012).
49. Gilbert, S. F. & Barresi, M. J. F. Developmental Biology. (Oxford University Press, 2016).
50. Shewale, B. & Dubois, N. Of form and function: Early cardiac morphogenesis across
classical and emerging model systems. Semin. Cell Dev. Biol. 118, 107–118 (2021).
51. Prummel, K. D., Nieuwenhuize, S. & Mosimann, C. The lateral plate mesoderm. Dev. 147,
(2020).
52. Reiter, J. F. et al. Gata5 is required for the development of the heart and endoderm in
zebrafish. Genes Dev. 13, 2983–2995 (1999).
53. Duong, T. B., Holowiecki, A. & Waxman, J. S. Retinoic acid signaling restricts the size of
the first heart field within the anterior lateral plate mesoderm. Dev. Biol. 473, 119–129
(2021).
54. Lints, T. J., Parsons, L. M., Hartley, L., Lyons, I. & Harvey, R. P. Nkx-2.5: A novel murine
homeobox gene expressed in early heart progenitor cells and their myogenic
descendants. Development 119, 419–431 (1993).
55. Matsui, T. et al. Noncanonical Wnt signaling regulates midline convergence of organ
primordia during zebrafisn development. Genes Dev. 19, 164–175 (2005).
56. Hutson, M. R. et al. Arterial pole progenitors interpret opposing FGF/BMP signals to
proliferate or differentiate. Development 137, 3001–3011 (2010).
57. Alexander, J., Rothenberg, M., Henry, G. L. & Stainier, D. Y. R. casanova Plays an early and
essential role in endoderm formation in zebrafish. Dev. Biol. 215, 343–357 (1999).
58. Long, S., Ahmad, N. & Rebagliati, M. The zebrafish nodal-related gene southpaw is
238
required for visceral and diencephalic left-right asymmetry. Development 130, 2303–
2316 (2003).
59. Shen, M. M. Nodal signaling: Development roles and regulation. Development 134, 1023–
1034 (2007).
60. Smith, K. A. & Uribe, V. Getting to the heart of left–right asymmetry: Contributions from
the zebrafish model. J. Cardiovasc. Dev. Dis. 8, (2021).
61. Mao, L. M. F., Boyle Anderson, E. A. T. & Ho, R. K. Anterior lateral plate mesoderm gives
rise to multiple tissues and requires tbx5a function in left-right asymmetry, migration
dynamics, and cell specification of late-addition cardiac cells. Dev. Biol. 472, 52–66
(2021).
62. Ocaña, O. H. et al. A right-handed signalling pathway drives heart looping in vertebrates.
Nature 549, 86–90 (2017).
63. Schier, A. F., Neuhauss, S. C. F., Helde, K. A., Talbot, W. S. & Driever, W. The one-eyed
pinhead gene functions in mesoderm and endoderm formation in zebrafish and interacts
with no tail. Development 124, 327–342 (1997).
64. Kikuchi, Y. et al. The zebrafish bonnie and clyde gene encodes a Mix family
homeodomain protein that regulates the generation of endodermal precursors. Genes
Dev. 14, 1279–1289 (2000).
65. Matsui, T. & Bessho, Y. Left-right asymmetry in zebrafish. Cell. Mol. Life Sci. 69, 3069–
3077 (2012).
66. Westerfield, M. The Zebrafish Book. A Guide for the Laboratory Use of Zebrafish (Danio
rerio). (2000).
67. Schindelin, J. et al. Fiji: An open-source platform for biological-image analysis. Nature
Methods vol. 9 (2012).
68. Parslow, A., Cardona, A. & Bryson-Richardson, R. J. Sample drift correction following 4D
confocal time-lapse Imaging. J. Vis. Exp. (2014) doi:10.3791/51086.
69. Van Rossum, G. et al. Python 3 Reference Manual. Nature vol. 585 (2009).
70. Eiter, T. & Mannila, H. Computing discrete Fr{é}chet distance. Notes 94, 64 (1994).
71. Jahr, W., Schmid, B., Schmied, C., Fahrbach, F. O. & Huisken, J. Hyperspectral light sheet
microscopy. Nat. Commun. 6, (2015).
72. Levenson, R. M. & Mansfield, J. R. Multispectral imaging in biology and medicine: Slices
of life. Cytometry Part A vol. 69 (2006).
239
73. Garini, Y., Young, I. T. & McNamara, G. Spectral imaging: Principles and applications.
Cytometry Part A vol. 69 (2006).
74. Dickinson, M. E., Simbuerger, E., Zimmermann, B., Waters, C. W. & Fraser, S. E.
Multiphoton excitation spectra in biological samples. J. Biomed. Opt. 8, (2003).
75. Valm, A. M. et al. Applying systems-level spectral imaging and analysis to reveal the
organelle interactome. Nature vol. 546 (2017).
76. Cranfill, P. J. et al. Quantitative assessment of fluorescent proteins. Nat. Methods 13,
(2016).
77. Hiraoka, Y., Shimi, T. & Haraguchi, T. Multispectral imaging fluorescence microscopy for
living cells. Cell Structure and Function vol. 27 (2002).
78. Jacobson, N. P. & Gupta, M. R. Design goals and solutions for display of hyperspectral
images. in Proceedings - International Conference on Image Processing, ICIP vol. 2 (2005).
79. Hotelling, H. Analysis of a complex of statistical variables into principal components. J.
Educ. Psychol. 24, (1933).
80. Jolliffe, I. T. Principal Component Analysis. (Springer New York, NY, 2002).
doi:10.1007/b98835.
81. Abdi, H. & Williams, L. J. Principal component analysis. Wiley Interdisciplinary Reviews:
Computational Statistics vol. 2 (2010).
82. Tyo, J. S., Konsolakis, A., Diersen, D. I. & Olsen, R. C. Principal-components-based display
strategy for spectral imagery. IEEE Trans. Geosci. Remote Sens. 41, (2003).
83. Wilson, T. A. Perceptual-based image fusion for hyperspectral data. IEEE Trans. Geosci.
Remote Sens. 35, (1997).
84. Long, Y., Li, H. C., Celik, T., Longbotham, N. & Emery, W. J. Pairwise-distance-analysis-
driven dimensionality reduction model with double mappings for hyperspectral image
visualization. Remote Sens. 7, 7785–7808 (2015).
85. Kotwal, K. & Chaudhuri, S. A Bayesian approach to visualization-oriented hyperspectral
image fusion. Inf. Fusion 14, (2013).
86. Kotwal, K. & Chaudhuri, S. Visualization of hyperspectral images using bilateral filtering.
IEEE Trans. Geosci. Remote Sens. 48, (2010).
87. Zhao, W. & Du, S. Spectral-Spatial Feature Extraction for Hyperspectral Image
Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci.
Remote Sens. 54, (2016).
240
88. Zhang, Y., De Backer, S. & Scheunders, P. Noise-resistant wavelet-based Bayesian fusion
of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens. 47, (2009).
89. A, R. SVD Based Image Processing Applications: State of The Art, Contributions and
Research Challenges. Int. J. Adv. Comput. Sci. Appl. 3, (2012).
90. Redford, G. I. & Clegg, R. M. Polar plot representation for frequency-domain analysis of
fluorescence lifetimes. J. Fluoresc. 15, (2005).
91. Digman, M. A., Caiolfa, V. R., Zamai, M. & Gratton, E. The phasor approach to
fluorescence lifetime imaging analysis. Biophys. J. 94, (2008).
92. Vergeldt, F. J. et al. Multi-component quantitative magnetic resonance imaging by
phasor representation. Sci. Rep. 7, (2017).
93. Lanzanò, L. et al. Encoding and decoding spatio-temporal information for super-
resolution microscopy. Nat. Commun. 6, (2015).
94. Fereidouni, F., Bader, A. N. & Gerritsen, H. C. Spectral phasor analysis allows rapid and
reliable unmixing of fluorescence microscopy spectral images. Opt. Express 20, (2012).
95. Cutrale, F., Salih, A. & Gratton, E. Spectral phasor approach for fingerprinting of photo-
activatable fluorescent proteins Dronpa, Kaede and KikGR. Methods Appl. Fluoresc. 1,
(2013).
96. Andrews, L. M., Jones, M. R., Digman, M. A. & Gratton, E. Spectral phasor analysis of
Pyronin Y labeled RNA microenvironments in living cells. Biomed. Opt. Express 4, (2013).
97. Cutrale, F. et al. Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging.
Nat. Methods 14, 149–152 (2017).
98. Radaelli, F. et al. μmAPPS: A novel phasor approach to second harmonic analysis for in
vitro-in vivo investigation of collagen microstructure. Sci. Rep. 7, (2017).
99. Scipioni, L., Gratton, E., Diaspro, A. & Lanzanò, L. Phasor Analysis of Local ICS Detects
Heterogeneity in Size and Number of Intracellular Vesicles. Biophys. J. 111, (2016).
100. Sarmento, M. J. et al. Exploiting the tunability of stimulated emission depletion
microscopy for super-resolution imaging of nuclear structures. Nat. Commun. 9, (2018).
101. Scipioni, L., Di Bona, M., Vicidomini, G., Diaspro, A. & Lanzanò, L. Local raster image
correlation spectroscopy generates high-resolution intracellular diffusion maps.
Commun. Biol. 1, (2018).
102. Albert Pan, Y. et al. Zebrabow: Multispectral cell labeling for cell tracing and lineage
analysis in zebrafish. Dev. 140, (2013).
241
103. Zipfel, W. R. et al. Live tissue intrinsic emission microscopy using multiphoton-excited
native fluorescence and second harmonic generation. Proc. Natl. Acad. Sci. U. S. A. 100,
(2003).
104. Rock, J. R., Randell, S. H. & Hogan, B. L. M. Airway basal stem cells: A perspective on their
roles in epithelial homeostasis and remodeling. DMM Disease Models and Mechanisms
vol. 3 (2010).
105. Rock, J. R. et al. Basal cells as stem cells of the mouse trachea and human airway
epithelium. Proc. Natl. Acad. Sci. U. S. A. 106, (2009).
106. Bird, D. K. et al. Metabolic mapping of MCF10A human breast cells via multiphoton
fluorescence lifetime imaging of the coenzyme NADH. Cancer Res. 65, (2005).
107. Ranjit, S., Malacrida, L., Jameson, D. M. & Gratton, E. Fit-free analysis of fluorescence
lifetime imaging data using the phasor approach. Nat. Protoc. 13, 1979–2004 (2018).
108. Lakowicz, J. R., Szmacinski, H., Nowaczyk, K. & Johnson, M. L. Fluorescence lifetime
imaging of free and protein-bound NADH. Proc. Natl. Acad. Sci. U. S. A. 89, (1992).
109. Skala, M. C. et al. In vivo multiphoton microscopy of NADH and FAD redox states,
fluorescence lifetimes, and cellular morphology in precancerous epithelia. Proc. Natl.
Acad. Sci. U. S. A. 104, (2007).
110. Sharick, J. T. et al. Protein-bound NAD(P)H Lifetime is Sensitive to Multiple Fates of
Glucose Carbon. Sci. Rep. 8, (2018).
111. Stringari, C. et al. Phasor approach to fluorescence lifetime microscopy distinguishes
different metabolic states of germ cells in a live tissue. Proc. Natl. Acad. Sci. U. S. A. 108,
(2011).
112. Stringari, C. et al. Multicolor two-photon imaging of endogenous fluorophores in living
tissues by wavelength mixing. Sci. Rep. 7, (2017).
113. Sun, Y. et al. Endoscopic fluorescence lifetime imaging for in vivo intraoperative diagnosis
of oral carcinoma. in Microscopy and Microanalysis vol. 19 (2013).
114. Ghukasyan, V. V. & Kao, F. J. Monitoring cellular metabolism with fluorescence lifetime of
reduced nicotinamide adenine dinucleotide. J. Phys. Chem. C 113, (2009).
115. Walsh, A. J. et al. Quantitative optical imaging of primary tumor organoid metabolism
predicts drug response in breast cancer. Cancer Res. 74, (2014).
116. Conklin, M. W., Provenzano, P. P., Eliceiri, K. W., Sullivan, R. & Keely, P. J. Fluorescence
lifetime imaging of endogenous fluorophores in histopathology sections reveals
differences between normal and tumor epithelium in carcinoma in situ of the breast. Cell
Biochem. Biophys. 53, (2009).
117. Browne, A. W. et al. Structural and functional characterization of human stem-cell-
derived retinal organoids by live imaging. Investig. Ophthalmol. Vis. Sci. 58, (2017).
118. Livet, J. et al. Transgenic strategies for combinatorial expression of fluorescent proteins
in the nervous system. Nature 450, (2007).
119. Weissman, T. A. & Pan, Y. A. Brainbow: New resources and emerging biological
applications for multicolor genetic labeling and analysis. Genetics vol. 199 (2014).
120. Pan, Y. A., Livet, J., Sanes, J. R., Lichtman, J. W. & Schier, A. F. Multicolor Brainbow
imaging in zebrafish. Cold Spring Harb. Protoc. 6, (2011).
121. Raj, B. et al. Simultaneous single-cell profiling of lineages and cell types in the vertebrate
brain. Nat. Biotechnol. 36, (2018).
122. Mahou, P. et al. Multicolor two-photon tissue imaging by wavelength mixing. Nat.
Methods 9, 815–818 (2012).
123. Loulier, K. et al. Multiplex Cell and Lineage Tracking with Combinatorial Labels. Neuron
81, (2014).
124. North, T. E. & Goessling, W. Haematopoietic stem cells show their true colours. Nature
Cell Biology vol. 19 (2017).
125. Chen, C. H. et al. Multicolor Cell Barcoding Technology for Long-Term Surveillance of
Epithelial Regeneration in Zebrafish. Dev. Cell 36, (2016).
126. Panetta, K., Gao, C. & Agaian, S. No reference color image contrast and quality measures.
IEEE Trans. Consum. Electron. (2013) doi:10.1109/TCE.2013.6626251.
127. Hall, M. et al. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl.
11, (2009).
128. A Primer on Kernel Methods. in Kernel Methods in Computational Biology (2019).
doi:10.7551/mitpress/4057.003.0004.
129. Bruton, D. RGB Values for visible wavelengths. (1996).
130. Hasler, D. & Suesstrunk, S. E. Measuring colorfulness in natural images. in Human Vision
and Electronic Imaging VIII (2003). doi:10.1117/12.477378.
131. Agaian, S. S., Lentz, K. P. & Grigoryan, A. M. A New Measure of Image Enhancement.
IASTED Int. Conf. Signal Process. & Commun. (2000).
132. Agaian, S. S., Silver, B. & Panetta, K. A. Transform coefficient histogram-based image
enhancement algorithms using contrast entropy. IEEE Trans. Image Process. (2007)
doi:10.1109/TIP.2006.888338.
133. Trinh, L. A. et al. A versatile gene trap to visualize and interrogate the function of the
vertebrate proteome. Genes Dev. 25, 2306–2320 (2011).
134. Jin, S. W., Beis, D., Mitchell, T., Chen, J. N. & Stainier, D. Y. R. Cellular and molecular
analyses of vascular tube and lumen formation in zebrafish. Development 132, (2005).
135. Megason, S. G. In toto imaging of embryogenesis with confocal time-lapse microscopy.
Methods Mol. Biol. 546, (2009).
136. Huss, D. et al. A transgenic quail model that enables dynamic imaging of amniote
embryogenesis. Dev. 142, (2015).
137. Holst, J., Vignali, K. M., Burton, A. R. & Vignali, D. A. A. Rapid analysis of T-cell selection in
vivo using T cell-receptor retrogenic mice. Nat. Methods 3, (2006).
138. Kwan, K. M. et al. The Tol2kit: A multisite gateway-based construction Kit for Tol2
transposon transgenesis constructs. Dev. Dyn. 236, (2007).
139. Kawakami, K. et al. A transposon-mediated gene trap approach identifies
developmentally regulated genes in zebrafish. Dev. Cell 7, (2004).
140. Urasaki, A., Morvan, G. & Kawakami, K. Functional dissection of the Tol2 transposable
element identified the minimal cis-sequence and a highly repetitive sequence in the
subterminal region essential for transposition. Genetics 174, (2006).
141. White, R. M. et al. Transparent Adult Zebrafish as a Tool for In Vivo Transplantation
Analysis. Cell Stem Cell 2, (2008).
142. Arnesano, C., Santoro, Y. & Gratton, E. Digital parallel frequency-domain spectroscopy
for tissue imaging. J. Biomed. Opt. 17, (2012).
143. Tsurui, H. et al. Seven-color fluorescence imaging of tissue samples based on Fourier
spectroscopy and singular value decomposition. J. Histochem. Cytochem. 48, (2000).
144. Amat, F. et al. Fast, accurate reconstruction of cell lineages from large-scale fluorescence
microscopy data. Nat. Methods 11, 951–958 (2014).
145. Ueno, T. & Nagano, T. Fluorescent probes for sensing and imaging. Nature Methods vol. 8
(2011).
146. Lichtman, J. W. & Conchello, J. A. Fluorescence microscopy. Nature Methods vol. 2 910–
919 (2005).
147. Truong, T. V., Supatto, W., Koos, D. S., Choi, J. M. & Fraser, S. E. Deep and fast live
imaging with two-photon scanned light-sheet microscopy. Nat. Methods 8, (2011).
148. Chen, B. C. et al. Lattice light-sheet microscopy: Imaging molecules to embryos at high
spatiotemporal resolution. Science 346, (2014).
149. Kredel, S. et al. mRuby, a bright monomeric red fluorescent protein for labeling of
subcellular structures. PLoS One 4, (2009).
150. Sakaue-Sawano, A. et al. Visualizing Spatiotemporal Dynamics of Multicellular Cell-Cycle
Progression. Cell 132, (2008).
151. Wade, O. K. et al. 124-Color Super-resolution Imaging by Engineering DNA-PAINT Blinking
Kinetics. Nano Lett. 19, (2019).
152. Strauss, S. & Jungmann, R. Up to 100-fold speed-up and multiplexing in optimized DNA-
PAINT. Nat. Methods 17, (2020).
153. Zimmermann, T., Rietdorf, J. & Pepperkok, R. Spectral imaging and its applications in live
cell microscopy. FEBS Lett. 546, (2003).
154. Zimmermann, T. Spectral imaging and linear unmixing in light microscopy. Adv. Biochem.
Eng. Biotechnol. 95, (2005).
155. Fereidouni, F., Bader, A. N., Colonna, A. & Gerritsen, H. C. Phasor analysis of multiphoton
spectral images distinguishes autofluorescence components of in vivo human skin. J.
Biophotonics 7, (2014).
156. Scipioni, L., Rossetta, A., Tedeschi, G. & Gratton, E. Phasor S-FLIM: a new paradigm for
fast and robust spectral fluorescence lifetime imaging. Nat. Methods 18, (2021).
157. Castello, M. et al. A robust and versatile platform for image scanning microscopy
enabling super-resolution FLIM. Nat. Methods 16, (2019).
158. Hedde, P. N., Cinco, R., Malacrida, L., Kamaid, A. & Gratton, E. Phasor-based
hyperspectral snapshot microscopy allows fast imaging of live, three-dimensional tissues
for biomedical applications. Commun. Biol. 4, (2021).
159. Lanzanò, L., Scordino, A., Privitera, S., Tudisco, S. & Musumeci, F. Spectral analysis of
Delayed Luminescence from human skin as a possible non-invasive diagnostic tool. in
European Biophysics Journal vol. 36 (2007).
160. Depasquale, J. A. Actin Microridges. Anatomical Record vol. 301 (2018).
161. Okuda, K. S. & Hogan, B. M. Endothelial Cell Dynamics in Vascular Development: Insights
From Live-Imaging in Zebrafish. Frontiers in Physiology vol. 11 (2020).
162. Isogai, S., Lawson, N. D., Torrealday, S., Horiguchi, M. & Weinstein, B. M. Angiogenic
network formation in the developing vertebrate trunk. Development 130, (2003).
163. Diaspro, A. et al. Multi-photon excitation microscopy. BioMedical Engineering Online vol.
5 (2006).
164. Diaspro, A., Chirico, G. & Collini, M. Two-photon fluorescence excitation and related
techniques in biological microscopy. Quarterly Reviews of Biophysics vol. 38 (2005).
165. Diaspro, A. & Robello, M. Two-photon excitation of fluorescence for three-dimensional
optical imaging of biological structures. Journal of Photochemistry and Photobiology B:
Biology vol. 55 (2000).
166. Wagnières, G. A., Star, W. M. & Wilson, B. C. In Vivo Fluorescence Spectroscopy and
Imaging for Oncological Applications. Photochemistry and Photobiology vol. 68 (1998).
167. Févotte, C. & Dobigeon, N. Nonlinear hyperspectral unmixing with robust nonnegative
matrix factorization. IEEE Trans. Image Process. 24, 4810–4819 (2015).
168. Heslop, D., von Dobeneck, T. & Höcker, M. Using non-negative matrix factorization in the
‘unmixing’ of diffuse reflectance spectra. Mar. Geol. 241, (2007).
169. Confocal Microscopy: Methods and Protocols. vol. 1075 (Humana New York, NY, 2014).
170. Shi, W. et al. Pre-processing visualization of hyperspectral fluorescent data with
Spectrally Encoded Enhanced Representations. Nat. Commun. 11, 1–15 (2020).
171. Parichy, D. M., Ransom, D. G., Paw, B., Zon, L. I. & Johnson, S. L. An orthologue of the kit-
related gene fms is required for development of neural crest-derived xanthophores and a
subpopulation of adult melanocytes in the zebrafish, Danio rerio. Development (2000).
172. Wagnieres, G. A., Star, W. M. & Wilson, B. C. In Vivo Fluorescence Spectroscopy and
Imaging for Oncological Applications. Photochem. Photobiol. 68, 603–632 (1998).
173. Cutrale, F. et al. Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging.
Nat. Methods (2017) doi:10.1038/nmeth.4134.
174. Taylor, R. C. Experiments in physical chemistry (Shoemaker, David P.; Garland, Carl W.). J.
Chem. Educ. 45, (1968).
175. Shimozono, S., Iimura, T., Kitaguchi, T., Higashijima, S. I. & Miyawaki, A. Visualization of
an endogenous retinoic acid gradient across embryonic development. Nature 496,
(2013).
176. Dalal, R. B., Digman, M. A., Horwitz, A. F., Vetri, V. & Gratton, E. Determination of particle
number and brightness using a laser scanning confocal microscope operating in the
analog mode. Microsc. Res. Tech. 71, 69–81 (2008).
Abstract
The ability to generate data for biological research has accelerated greatly in the past decade, especially in the field of optical microscopy, providing a wealth of information about biological structures and processes that has been instrumental to advances in biology. This plethora of data, expanded in both quantity and detail, shifts the challenge to analysis and information extraction. This is especially true for live biological imaging, where the increase from two dimensions (x, y) to five (x, y, z, channel, time) greatly increases the complexity of analysis. Custom analytical tools and workflows are required to effectively analyze these large and complex multidimensional data sets from live biological imaging. In this dissertation, I provide specific workflows and techniques for improving investigation and extraction of data in two complementary areas of image processing: (1) dynamic, structurally guided segmentation, and (2) scalable multiplexing. For dynamic segmentation, I have determined effective routines to virtually dissect visually homogeneous biological populations within complex non-orthogonal biological structures. For scalable multiplexing, I aim to increase the multiplexing capabilities of multidimensional fluorescence microscopy by developing state-of-the-art computational tools to better visualize and untangle hyperspectral imaging data. The projects developed for these two areas of image processing provide real-world tools and applications that build on and reinforce one another to overcome the challenges and pitfalls that occur when working with experimental multidimensional biological data, positioning us to better understand the complex nature of biology.
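As context for the spectral unmixing discussed in the abstract and cited in the bibliography (e.g., refs. 153–154, 167–168), linear unmixing models each pixel's measured spectrum as a nonnegative combination of known fluorophore reference spectra. The following is a minimal sketch, not the dissertation's own implementation; the reference spectra, channel count, and pixel values are synthetic illustrations.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic reference (endmember) spectra for two hypothetical fluorophores,
# sampled over 8 spectral detection channels (columns of A).
channels = np.arange(8)
spec_a = np.exp(-0.5 * ((channels - 2.0) / 1.2) ** 2)  # peaks in channel 2
spec_b = np.exp(-0.5 * ((channels - 5.0) / 1.2) ** 2)  # peaks in channel 5
A = np.column_stack([spec_a, spec_b])  # shape (8, 2)

def unmix_pixel(measured, A):
    """Nonnegative least-squares unmixing of one pixel's spectrum.

    Returns the estimated abundance of each reference spectrum.
    """
    abundances, _residual = nnls(A, measured)
    return abundances

# A pixel containing 3 parts fluorophore A and 1 part fluorophore B.
true_abundances = np.array([3.0, 1.0])
measured = A @ true_abundances
estimated = unmix_pixel(measured, A)
print(np.round(estimated, 3))  # recovers the true abundances in the noise-free case
```

In real hyperspectral data the measured spectrum also contains noise and autofluorescence, which is why the dissertation's cited works turn to more robust approaches such as nonnegative matrix factorization and phasor analysis rather than per-pixel least squares alone.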
Asset Metadata
Creator: Koo, Eun Sang (Daniel) (author)
Core Title: Analytical tools for complex multidimensional biological imaging data
School: Viterbi School of Engineering
Degree: Doctor of Philosophy, Biomedical Engineering
Degree Conferral Date: 2022-12
Defense Date: 06/03/2022
Publication Date: 09/20/2022
Publisher: University of Southern California
Advisor: Fraser, Scott (committee chair); Trinh, Le (committee member); Zavaleta, Cristina (committee member)
Language: English
Document Type: Dissertation
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC112013949
Tags: biomechanics, cardiac development, computational tools, dimensionality reduction, fluorescence, hyperspectral imaging, image processing, lateral plate mesoderm, left-right asymmetry, microscopy, multidimensional imaging, multispectral imaging, simulation, unmixing, zebrafish
Repository: University of Southern California Digital Library