\documentclass{article}
\usepackage[affil-it]{authblk}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{tabulary}
\usepackage{booktabs,array,multirow}
\usepackage{amsfonts,amsmath,amssymb}
\providecommand\citet{\cite}
\providecommand\citep{\cite}
\providecommand\citealt{\cite}
\usepackage{url}
\usepackage{hyperref}
\hypersetup{colorlinks=false,pdfborder={0 0 0}}
\usepackage{etoolbox}
\makeatletter
\patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{%
\errmessage{\noexpand\@combinedblfloats could not be patched}%
}%
\makeatother
% You can conditionalize code for latexml or normal latex using this.
\newif\iflatexml\latexmlfalse
\providecommand{\tightlist}{\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}%
\AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\begin{document}
\title{Automatic Detection and Classification of Ca2+ Release Events in
Confocal Line- and Framescan Images}
\author{Ardo Illaste}
\affil{Affiliation not available}
\date{\today}
\maketitle
\section{Introduction}\label{introduction}
Numerous methods exist for analysing sparks in confocal linescans. These
employ various approaches: noise thresholding
\cite{R_os_2001,Picht_2007}, wavelet transforms \cite{Szab__2010}, etc.
Recently, Tian et al.~\cite{Tian_2012} developed a method in which the
fluorescence time trace in each pixel is fitted, providing a
practically noise-free approximation of the original fluorescence data.
This pixel-by-pixel method has, however, several limitations that make
it impractical for Ca2+ release event detection.
Here we extend the method of Tian et al. in several ways. The new
method allows for Ca2+ release event classification based on
pixel-by-pixel denoising of the original signal.
\section{Methods}\label{methods}
\subsection{Cell isolations}\label{cell-isolations}
\subsection{Experimental}\label{experimental}
Data acquisition was performed on two confocal setups. Linescan images
were obtained on an Olympus FluoView 1000 confocal microscope. Framescan
images were recorded with a VTInfinity multi-beam confocal microscope
recording \(512\times64\) pixel images at a frequency of 150~Hz.
\subsection{Computational}\label{computational}
The detection algorithm is presented in the Results section.
\section{Results}\label{results}
\subsection{Pixel by pixel event
classification}\label{pixel-by-pixel-event-classification}
The algorithm is presented schematically in Figure~1. Each subroutine is
explained in detail below.
\subsubsection{Image preprocessing}\label{image-preprocessing}
The only preprocessing step used is convolving the image with a
\((2n+1)\times(2n+1)\) kernel where the center element is \(1/(n+1)\)
and the \(k\)-th layer surrounding the center is made up of values
\(1/{(8k\cdot(n+1))}\). For example, when \(n=1\) the kernel would be
\[
\left( \begin{array}{ccc}
\frac{1}{16} & \frac{1}{16} & \frac{1}{16}\\
\frac{1}{16} & \frac{1}{2} & \frac{1}{16}\\
\frac{1}{16} & \frac{1}{16} & \frac{1}{16} \end{array} \right)
\] Convolving the image with this kind of kernel reduces
the noise while retaining more of the original signal than simple
averaging. The total contribution from the \(k\)-th layer around the
center has the same weight as the central pixel. In this work we use the
kernel with \(n=1\).
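For illustration, a minimal NumPy/SciPy sketch of this kernel construction (function and variable names are ours, not part of any published implementation):

```python
import numpy as np
from scipy.ndimage import convolve

def layered_kernel(n):
    """Build the (2n+1)x(2n+1) smoothing kernel: the centre weighs
    1/(n+1) and each pixel in the k-th square layer around it weighs
    1/(8k*(n+1)), so every layer contributes as much as the centre."""
    size = 2 * n + 1
    kernel = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            k = max(abs(i - n), abs(j - n))  # layer index (Chebyshev distance)
            kernel[i, j] = 1.0 / (n + 1) if k == 0 else 1.0 / (8 * k * (n + 1))
    return kernel

K = layered_kernel(1)
image = np.random.default_rng(1).random((64, 64))
smoothed = convolve(image, K, mode="nearest")  # kernel sums to 1
```

Because the centre plus each of the \(n\) layers contributes \(1/(n+1)\), the kernel always sums to one and preserves mean intensity.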
\subsubsection{Region detection}\label{region-detection}
Before it is possible to fit the fluorescence signal from each pixel
with a transient function, candidate regions containing possible events
must be detected. For this we have modified a continuous wavelet
transform based peak detection algorithm by Du et al.~\cite{Du_2006}.
Whereas the original algorithm of Du et al. provides the location of
the peak, we have extended it to also yield the width of the peak.
The original algorithm works by joining the wavelet transform values
along local maxima for increasing window lengths. Ridge lines obtained
in this manner are taken to correspond to peaks if they satisfy certain
criteria (length of the ridge, SNR, etc.). In our extension, the width
of the peak is obtained by finding the first maximum of the wavelet
transform values along the ridge line.
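As an illustration, SciPy's \texttt{find\_peaks\_cwt} implements a Du-et-al.-style ridge-line detector for peak locations; the width estimate is sketched here as a separate step using a hand-rolled Ricker wavelet (the scale maximising the CWT response at the peak). All parameters below are illustrative, not the authors' settings:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Synthetic trace: one Gaussian-shaped transient on a noisy baseline
t = np.arange(512)
trace = np.exp(-((t - 200.0) ** 2) / (2 * 15.0 ** 2)) \
        + 0.05 * rng.standard_normal(512)

# Ridge-line peak detection (locations only)
peaks = signal.find_peaks_cwt(trace, widths=np.arange(5, 40))
p = int(peaks[np.argmin(np.abs(np.asarray(peaks) - 200))])

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet sampled on `points` points."""
    x = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return amp * (1 - (x / a) ** 2) * np.exp(-x ** 2 / (2 * a ** 2))

def cwt_response(y, a):
    """One CWT row at scale a, via convolution with the Ricker wavelet."""
    return np.convolve(y, ricker(min(10 * a, len(y)), a), mode="same")

# Width estimate: the scale where the CWT response at the peak is maximal
scales = np.arange(5, 40)
resp = [cwt_response(trace, int(a))[p] for a in scales]
est_width = int(scales[int(np.argmax(resp))])
```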
Region estimation is performed iteratively. This is done to ensure that
regions containing overlapping events are correctly identified. At each
pass, regions which have no overlaps, as well as regions which overlap
only with regions having a lower peak SNR, are fitted. After a
successful fit, the result is subtracted from the original signal and
region detection is performed again, followed by fitting. This process
is continued until no more regions are detected or successfully fitted.
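This fit-subtract loop can be sketched as follows, with hypothetical \texttt{detect} and \texttt{fit} callables standing in for the CWT detector and the transient fitter, and regions represented as \texttt{(start, end, snr)} tuples:

```python
import numpy as np

def overlaps(a, b):
    """(start, end, snr) regions overlap when their intervals intersect."""
    return a[0] < b[1] and b[0] < a[1]

def iterative_regions(trace, detect, fit):
    """Fit-subtract loop: each pass fits regions that overlap nothing or
    only lower-SNR regions, subtracts the fits from the residual, and
    re-detects until no region is detected or successfully fitted."""
    residual = np.asarray(trace, dtype=float).copy()
    accepted = []
    while True:
        regions = detect(residual)
        ready = [r for r in regions
                 if all(not overlaps(r, o) or r[2] > o[2]
                        for o in regions if o is not r)]
        progress = False
        for r in ready:
            model = fit(residual, r)
            if model is not None:      # fit succeeded
                residual -= model
                accepted.append(r)
                progress = True
        if not progress:
            return accepted, residual
```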
\subsubsection{Signal fitting}\label{signal-fitting}
The function used for fitting Ca2+ release events is shown in Figure
\ref{fig:fig1}. The shape of the function is described by four
parameters: amplitude (\(A\)), rise and decay time constants
(\(\tau_{rise},\tau_{decay}\)) and plateau duration (\(d\)). An
additional parameter (\(\mu\)) determines the time when the
maximum is reached. For the optimizer to perform well, the function
used for fitting should be continuously differentiable. The function
describing a transient is:
\[
g(A, d, \tau_{d}, \tau_{r}, \mu, t) =
A\cdot\left\{
\begin{array}{l l}
1-\exp\left( -\frac{t-\mu}{\tau_r}\right)\cdot\exp\left(-2\right) & \quad \mu-2\tau_r\le t<\mu\\
1-\exp(-2) & \quad \mu\le t<\mu+d\\
\exp\left( -\frac{t-\mu-d}{\tau_d}\right)\cdot\left(1-\exp(-2)\right)& \quad t\ge\mu+d\\
0 &\quad \mathrm{otherwise}
\end{array} \right.
\]
The transient consists of four phases: zero level before the onset of
the transient, an exponential increase with time constant
\(\tau_r\) starting when \(t=\mu-2\tau_r\), a plateau phase of
duration \(d\) starting at \(t=\mu\), and an
exponential decay with time constant \(\tau_d\) starting at
\(t=\mu+d\).
The transient function is convolved with a Gaussian
\(G(\sigma)\) to yield the actual fitting function:
\[
f(A, d, \tau_{d}, \tau_{r}, \mu, t, \sigma) = g \ast G
\] For notational purposes we shall represent the fit
function parameters by \(\mathbf{p}= \left[
\begin{array}{c c c c c}
A & d & \tau_{d} & \tau_{r} & \mu
\end{array}
\right]\). The smoothing parameter \(\sigma\) will be fixed for all
pixels. Therefore, for the \emph{i}-th pixel the \emph{k}-th event is
represented by \(f(\mathbf{p}_{i,k},t)\).
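A direct NumPy transcription of the piecewise transient and its Gaussian smoothing (a sketch; parameter names follow the text, and \texttt{gaussian\_filter1d} stands in for the convolution with \(G(\sigma)\)):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def transient(t, A, d, tau_d, tau_r, mu):
    """Piecewise transient g: zero before mu - 2*tau_r, exponential rise
    up to t = mu, plateau of duration d, exponential decay afterwards.
    Continuous at every breakpoint by construction."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    rise = (mu - 2 * tau_r <= t) & (t < mu)
    plateau = (mu <= t) & (t < mu + d)
    decay = t >= mu + d
    g[rise] = 1 - np.exp(-(t[rise] - mu) / tau_r) * np.exp(-2)
    g[plateau] = 1 - np.exp(-2)
    g[decay] = np.exp(-(t[decay] - mu - d) / tau_d) * (1 - np.exp(-2))
    return A * g

t = np.arange(0.0, 100.0, 0.5)
g = transient(t, A=1.0, d=5.0, tau_d=3.0, tau_r=2.0, mu=20.0)
f = gaussian_filter1d(g, sigma=2.0)   # f = g * G(sigma), sigma in samples
```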
The entire raw signal for the \emph{i}-th pixel can be represented as:
\[
F_i(t) = b(\mathbf{q}_i, t) + \sum_{k=0}^{m} f(\mathbf{p}_{i,k},t) + W + R
\]
where \(b\) is an \(n\)-th order
polynomial with \(\mathbf{q}_i\) being the polynomial coefficients
for the \emph{i}-th pixel, summation is performed over all \emph{m}
events in the pixel, \(W\) is the noise and
\(R\) is the remaining residual captured neither by the baseline
nor by the events. Ideally, \(R=0\), but achieving this is limited
by the accuracy of the event region detection (we cannot fit what we do
not detect) and by whether our fitting function is general enough
to approximate various types of events.
Because it is not known which part of the signal is the event and which
is the baseline, the first fit also has to estimate the baseline
properties. The signal in the candidate region is fitted with an extended
fit function (Figure \ref{fig:fig1}) that also depends on relaxation
baseline \(B\) and baseline offset \(C\).
The \(C\) parameter allows for the possibility of an
elevated background before the release event.
After parameter optimization with the extended fit function, the same
signal is fitted with a line. For both models the corrected Akaike
Information Criterion (AICc \cite{Burnham_2004}) is calculated and the
region is accepted only if the AICc for the fit function is less than
the AICc for the line. This ensures that the goodness of the fit
obtained with the fit function justifies the use of a more complicated
model.
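The model-selection step can be sketched as follows (least-squares form of AICc; the residual sums of squares would come from the two fits, and the parameter counts are illustrative):

```python
import math

def aicc(rss, n, k):
    """Corrected Akaike Information Criterion for a least-squares fit
    with n data points, k fitted parameters, and residual sum of
    squares rss (Burnham & Anderson form)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def accept_region(rss_model, k_model, rss_line, n):
    """Keep the transient model only if its AICc beats a straight-line
    fit (k = 2 parameters: slope and intercept)."""
    return aicc(rss_model, n, k_model) < aicc(rss_line, n, 2)
```

A much lower residual justifies the extra parameters; a marginal improvement does not.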
In order to reconstruct the entire image with all the pixels it is also
necessary to know the location of the pixels. The parameter vector
\(\mathbf{p}_i\) is extended with the coordinates of pixel \emph{i}:
\[
\mathbf{r}_i = \left[\mathbf{p}_i,x_i,y_i\right]
=\left[A_i, d_i ,\tau_{d,i},\tau_{r,i},\mu_i,x_i,y_i\right] =
\left[\mathbf{p}^s_{i}, \mathbf{p}^p_{i}\right]
\]
where \(\mathbf{p}^s=\left[A,d,\tau_{d},\tau_{r}\right]\) is the shape parameter
vector and \(\mathbf{p}^p=\left[\mu,x,y\right]\) is the position parameter vector.
All events detected in pixel \emph{i} are represented by
vectors \(\mathbf{p}^s_{i,k}\) and \(\mathbf{p}^p_{i,k}\). These vectors can
be stacked to create an event matrix for pixel \emph{i}:
\[
E_i =
\left(
\begin{array}{l l}
\mathbf{p}^s_{i,0}&\mathbf{p}^p_{i,0}\\
\mathbf{p}^s_{i,1}&\mathbf{p}^p_{i,1}\\
\vdots&\vdots\\
\mathbf{p}^s_{i,k}&\mathbf{p}^p_{i,k}
\end{array}
\right)
=
\left(
\begin{array}{l l}
E_i^s & E_i^p
\end{array}
\right)
\] The event matrix for the entire image is obtained by
stacking all pixel event matrices: \[
E=\left(
\begin{array}{l}
E_0\\
E_1\\
\vdots\\
E_i
\end{array}
\right)
=\left(
\begin{array}{l l}
E_0^s& E_0^p\\
E_1^s& E_1^p\\
\vdots&\vdots\\
E_i^s& E_i^p
\end{array}
\right)
=\left(
\begin{array}{l l}
E^s & E^p
\end{array}
\right)
\]
where
\(E^s\) and \(E^p\) are shape and position
submatrices, respectively, containing event shape and position
parameters for all events from all pixels and making up the entire image
event matrix \(E\). The separation of shape and position
parameters for events is necessary in the next clustering step.\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.70\columnwidth]{figures/fitting-1/fitting-1}
\caption{{Transient function used for fitting Ca2+ release events.%
}}
\label{fig:fig1}
\end{center}
\end{figure}
\subsubsection{Clustering}\label{clustering}
Having determined the events in each pixel it is possible to reconstruct
the image with reduced noise levels using the matrix \(E\).
However, this will not tell us anything about the properties of actual
release events (e.g., spark/wave numbers or properties) as these
macroscopic events are made up of several events from different pixels.
It is therefore necessary to combine elementary events from various
pixels into macroscopic release events.
This is achieved using the clustering method DBSCAN \cite{Sander_1998}.
The method works in the parameter space and finds clusters of arbitrary
shape based on the density of events. This is preferable to standard
clustering methods (e.g., k-means) which often yield radially symmetric
clusters.
Clustering is performed in two steps. First, pixel events are
distributed into groups according to their shape, i.e., clustering is
done on the matrix \(E^s\). This is possible because, although the
function used for fitting various release events (e.g., sparks or waves)
is the same, the shape parameters of an event approximating a spark are
likely to be more similar to those of other spark events than to those
of wave events. This is clearly visible in Figure x where \ldots{}. In
the second clustering step, the \(E^p\) matrix is clustered for each
shape group, yielding physically nearby clusters of similar events.
With this two-step approach, release events of various types consisting
of elementary events from multiple pixels are obtained.
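The two-step approach can be sketched with scikit-learn's DBSCAN on synthetic events (column layout \([A, d, \tau_{d}, \tau_{r}, \mu, x, y]\); the \texttt{eps} and \texttt{min\_samples} values are illustrative, not the settings used in this work):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def make_events(shape_means, n):
    """Synthetic events: tight shape parameters, scattered positions."""
    s = rng.normal(shape_means, 0.1, (n, 4))        # [A, d, tau_d, tau_r]
    p = np.column_stack([rng.uniform(0, 100, n),    # mu
                         rng.uniform(0, 10, (n, 2))])  # x, y
    return np.column_stack([s, p])

E = np.vstack([make_events([1.0, 5.0, 3.0, 2.0], 30),     # spark-like
               make_events([3.0, 20.0, 10.0, 5.0], 30)])  # wave-like

# Step 1: cluster by shape (E^s), after standardising the columns
shape_labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(
    StandardScaler().fit_transform(E[:, :4]))

# Step 2: cluster by position (E^p) within each shape group
position_labels = {lbl: DBSCAN(eps=10.0, min_samples=3).fit_predict(
                       E[shape_labels == lbl, 4:])
                   for lbl in set(shape_labels) - {-1}}
```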
\subsubsection{Event characterisation}\label{event-characterisation}
\subsection{Biological results}\label{biological-results}
\section{Discussion}\label{discussion}
\selectlanguage{english}
\FloatBarrier
\bibliographystyle{plain}
\bibliography{bibliography/converted_to_latex.bib%
}
\end{document}