%\documentclass[referee,usenatbib]{mn2e}
\documentclass[usenatbib]{mn2e}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{tabulary}
\usepackage{booktabs,array,multirow}
\usepackage{amsfonts,amsmath,amssymb}
\usepackage{natbib}
\usepackage{url}
\usepackage{hyperref}
\hypersetup{colorlinks=false,pdfborder={0 0 0}}
\usepackage{etoolbox}
\makeatletter
\patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{%
\errmessage{\noexpand\@combinedblfloats could not be patched}%
}%
\makeatother
% You can conditionalize code for latexml or normal latex using this.
\newif\iflatexml\latexmlfalse
\AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
%\usepackage{natbib}
\usepackage[normalem]{ulem}
\newcommand*\mean[1]{\bar{#1}}
\title[MeqSilhouette: A mm-VLBI observation and signal corruption simulator]{MeqSilhouette: A mm-VLBI observation and signal corruption simulator}
\author[Tariq Blecher]{Tariq Blecher$^{1}$, Roger Deane$^{2}$, Gianni Bernardi$^{3}$, Oleg Smirnov$^{4}$\\
$^{1}$Rhodes University
\\
$^{2}$Affiliation not available\\
$^{3}$Affiliation not available\\
$^{4}$Affiliation not available}
\begin{document}
%\date{Accepted 1988 December 15. Received 1988 December 14; in original form 1988 October 11}
%\pagerange{\pageref{firstpage}--\pageref{lastpage}} \pubyear{2002}
\maketitle
\label{firstpage}
%\begin{keywords}
%circumstellar matter -- infrared: stars.
%\end{keywords}
\selectlanguage{english}
\begin{abstract}
The Event Horizon Telescope (EHT) aims to spatially resolve the shadow (or silhouette) of the supermassive black holes in M87 and the Galactic Centre. The primary scientific objectives are to test general relativity in the strong-field regime and to probe accretion and jet-launch physics at event-horizon scales. This is made possible by the technique of Very Long Baseline Interferometry (VLBI) at millimetre wavelengths, which can achieve angular resolutions of order $10~\mu$-arcsec. However, this approach suffers from unique observational challenges, including scattering in the troposphere and interstellar medium; rapidly time-variable source structure in both polarized and total intensity; and non-negligible antenna pointing errors. In this, the first paper in a series, we present the MeqSilhouette software package, which is specifically designed to accurately simulate EHT observations. It includes realistic descriptions of a number of signal corruptions that can limit the ability to measure the key science parameters. This can be used to quantify calibration requirements, test parameter estimation and imaging strategies, and investigate systematic uncertainties that may be present. In so doing, a stronger link can be made between observational capabilities and theoretical predictions, with the goal of maximising scientific output from the upcoming order of magnitude increase in EHT sensitivity.
\end{abstract}
\begin{keywords}
instrumentation: interferometers -- submillimetre: general -- Galaxy: centre -- atmospheric effects -- techniques: high angular resolution
\end{keywords}
\section{Introduction}\label{sec:intro}
The principal goal of the Event Horizon Telescope (EHT) is to spatially resolve the gravitationally-lensed photon ring (or `silhouette') of a supermassive black hole \citep{2009astro2010S..68D}. The two primary targets are Sgr~A$^\star$ and M87, both of which are expected to have gravitationally-lensed photon rings with apparent angular diameters of $\theta_{\rm pr} \sim 20-50$~$\mu$-arcsec \citep*{Falcke_2013,Broderick_2009}. The extreme angular resolution required, blurring effects due to scattering by the interstellar medium (ISM; e.g. \citealt{Gwinn_2014}), as well as the transition to an optically thin inner accretion flow at (sub)mm-wavelengths (\citealt{Serabyn_1997}; \citealt{Falcke_1998}), necessitate that the EHT be comprised of antennas separated by thousands of kilometres and operating at high radio frequency ($\nu>200$~GHz).
Performing what is known as Very Long Baseline Interferometry (VLBI) at mm-wavelengths presents unique calibration challenges, including very short atmospheric coherence times that are typically $\lesssim$10~s \citep{Doeleman_2009}, low calibrator source sky density, complex and variable calibrator source structure, and antenna pointing accuracies that are a non-negligible fraction of the antenna primary beam. These effects may place significant limitations on the sensitivity, image fidelity, and dynamic range that can be achieved with mm-VLBI. Furthermore, unaccounted-for systematic and/or non-Gaussian uncertainties could preclude robust, accurate Bayesian parameter estimation and model selection analyses of accretion flow \citep[e.g.][]{Broderick_2016} and gravitational physics \citep[e.g.][]{Psaltis_2016}, two of the EHT's many objectives.
Over the past decade, significant effort has been devoted to advanced radio interferometric calibration and imaging algorithms for centimetre and metre-wave facilities, in response to the large number of new arrays in construction or design phase (e.g. MeerKAT, ASKAP, SKA, LOFAR, HERA). A leading software package in this pursuit is \textsc{MeqTrees}\footnote{https://ska-sa.github.io/meqtrees/} \citep*{Noordam_2010}, which was developed to simulate, understand and address the calibration issues to be faced with the greatly enhanced sensitivity, instantaneous bandwidth, and field-of-view of such facilities. \textsc{MeqTrees} is rooted in the Measurement Equation mathematical formalism \citep{Hamaker_1996}, which parameterizes the signal path into distinct $2 \times 2$ complex matrices called Jones matrices. This formalism and applications thereof are laid out in Smirnov (2011a,b,c) and can be arbitrarily generalized to model any (linear) effect, including undesired signal corruptions that often have subtle yet systematic effects. \textsc{MeqTrees} has been applied to correct direction-dependent calibration errors in JVLA and WSRT observations, achieving record-breaking high dynamic range images \citep{Smirnov_2011c}. The effectiveness of the Measurement Equation formalism in radio interferometric calibration provides a strong motivation to explore its application to the challenging goal of imaging a supermassive black hole silhouette with mm-VLBI.
Recently, there has been an increase in the attention given to simulating EHT observations of Sgr~A$^*$ (\citealt{Fish_2014}; \citealt{Lu_2014}; \citealt{2015arXiv151201413B}). However, these are primarily focused on image reconstruction and assume perfect phase calibration i.e. no troposphere-induced fringe-fitting errors; perfect antenna pointing accuracy; perfect phasing efficiency; and in most cases simple, non-variable Gaussian kernel smoothing to simulate ISM scattering. Clearly, as the EHT array is enhanced (and likely expanded), so too must the interferometric simulations evolve to provide ever-more realistic predictions on the confidence levels with which parameters can be extracted and hence exclude theoretical models of gravity and/or accretion flows.
Given the significant, yet surmountable, observational challenges that the EHT faces, we have undertaken a project to leverage metre and cm-wavelength simulation and calibration successes and build a \textsc{MeqTrees}-based mm-VLBI-specific software package called \textsc{MeqSilhouette}. While \textsc{MeqTrees} has not yet been used in the context of mm-wavelength observations, the framework is agnostic to higher frequency implementation as long as the Measurement Equation is appropriately constructed. \textsc{MeqSilhouette} enables realistic interferometric simulations of mm-VLBI observations in order to gain deeper understanding of a wide range of signal propagation and calibration effects. In this paper we describe the software package along with its main features that include the capability to simulate tropospheric, ISM scattering, and time-variable antenna pointing error effects. As will be demonstrated in a forthcoming series of papers, this technology enables us to understand a wide range of mm-VLBI signal propagation and calibration systematics, quantify their effect on accretion flow and gravitational theoretical model selection, and hence maximise the scientific utility from EHT observations.
The paper is organized as follows: in section~\ref{sec:basic_scat} we provide an introductory discussion on scattering theory; in section~\ref{sec:sim} we describe the implementation of the simulator and provide demonstrations of the most important modules; section~\ref{sec:discussion} summarises our results and outlines our current plan for investigations with and future implementations of \textsc{MeqSilhouette}; and finally we conclude in section~\ref{sec:conclusion}.
\section{Theoretical background}\label{sec:basic_scat}
Millimetre wavelength radiation originating at the Galactic Centre is repeatedly scattered along the signal path to the Earth-based observer. The first occurrence is due to electron plasma in the interstellar medium (ISM) (\citealt{Bower_2006}, \citealt{Gwinn_2014}), while the second is due to poorly-mixed water vapour in the Earth's troposphere (\citealt*{Carilli_1999}, \citealt*{Lay_1997}). It is essential that the effects of the scattering phenomena are understood for a rigorous calibration and interpretation of data. Towards this end, simulation modules approximating scattering in both media are implemented in \textsc{MeqSilhouette}. As an introduction to the separate descriptions of each, we review a simple scattering model.
An electromagnetic wave is scattered when it passes through a medium with refractive index inhomogeneities. Following \citet{Narayan_1992}, this effect can be modeled as a thin screen, located between the source and observer planes and orientated perpendicular to the line-of-sight. The screen, indexed by coordinate vector $\mathbf{x}$, adds a stochastic phase $\phi(\mathbf{x})$ to the incoming wave at each point on the screen, yielding a corrugated, outgoing wavefront. We define the Fresnel scale as $r_{\rm F} = \sqrt{\lambda D_{\rm os}/2\pi}$, where $D_{\rm os}$ is the observer-scatterer distance; equivalently, $r_{\rm F}$ is the transverse scale for which the phase corresponding to the geometric path difference, $\frac{2\pi}{\lambda} (\sqrt{D_{\rm os}^2 + r_{\rm F}^2} - D_{\rm os})$, equals $\frac{1}{2}$~rad.
To determine the resultant electric field at a point in the plane of the observer, indexed by coordinate vector $\mathbf{X}$, one has to take into account all possible ray paths from the screen to $\mathbf{X}$. To illustrate the model, a calculation of the electric field amplitude, $|E(\mathbf{X})|$, yields the Fresnel-Kirchhoff integral \citep*{BORN_1980}
\begin{equation}\label{eq:fresnel_kirchhoff}
|E(\mathbf{X})| = C \int_{\rm screen} \exp\left[i\phi(\mathbf{x}) + i \frac{(\mathbf{x}-\mathbf{X})^2}{2 r_{\rm F}^2}\right]{\rm d}\mathbf{x},
\end{equation}
where $C$ is a numerical constant.
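As an illustrative aside, the qualitative behaviour of the Fresnel-Kirchhoff integral can be explored numerically. The following sketch (not part of \textsc{MeqSilhouette}; the smoothed-noise phase screen and all scales are arbitrary choices for illustration) evaluates a one-dimensional version of the integral in units where $r_{\rm F}=1$, and reports the fractional amplitude modulation the screen induces across the observer plane:

```python
import numpy as np

# 1-D toy evaluation of the Fresnel-Kirchhoff integral.
# Screen coordinate x and observer coordinate X are in units of r_F.
# phi(x) is an arbitrary smooth random phase screen (illustrative only).
rng = np.random.default_rng(42)
x = np.linspace(-200.0, 200.0, 4001)
dx = x[1] - x[0]
phi = np.convolve(rng.normal(0.0, 5.0, x.size),
                  np.ones(101) / 101, mode="same")

def field_amplitude(X):
    """|E(X)| up to the constant C, with r_F = 1."""
    integrand = np.exp(1j * (phi + (x - X) ** 2 / 2.0))
    return np.abs(np.sum(integrand) * dx)

X_obs = np.linspace(-50.0, 50.0, 201)
amp = np.array([field_amplitude(X) for X in X_obs])
amp /= amp.mean()
print("fractional amplitude modulation (RMS):", round(amp.std(), 3))
```

Varying the smoothing length of the toy screen relative to $r_{\rm F}$ changes the strength of the resulting amplitude modulation.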
The statistics of $\phi(\mathbf{x})$ can be described by a power spectrum or equivalently the phase structure function,
\begin{equation}\label{eq:D_phi}
D_\phi (\mathbf{x},\mathbf{x'}) = \left\langle \left[ \phi(\mathbf{x} +\mathbf{x'}) - \phi(\mathbf{x})\right]^2 \right\rangle,
\end{equation}
where $\mathbf{x}$ and $\mathbf{x'}$ represent two points on the screen and $\langle \cdot \rangle$ denotes the ensemble average.
There is evidence that $D_\phi$ can be reasonably approximated by a power law dependence on the absolute distance $r$ between points on the screen \citep{Armstrong_1995,carilli_1997}
\begin{equation}
D_\phi (r) = (r/r_0)^\beta,\qquad r = |\mathbf{x'}|,
\label{kolmogorov}
\end{equation}
where $r_{\rm 0}$ is the phase coherence length scale defined such that $D_\phi(r_{\rm 0}) = 1$~rad.
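A power-law structure function of this form can be realised numerically by filtering white noise with a power-law spectrum. The sketch below is illustrative only and is not the \textsc{MeqSilhouette} implementation; it assumes the standard fractional-Brownian-motion relation $P(q) \propto q^{-(1+\beta)}$ for a one-dimensional screen, generates a screen with $\beta = 5/3$, and verifies the slope empirically:

```python
import numpy as np

# Generate a 1-D phase screen with structure function D(r) ~ r^beta,
# beta = 5/3, by shaping white noise with an amplitude spectrum
# sqrt(P(q)) ~ q^(-(1+beta)/2). Illustrative sketch only.
rng = np.random.default_rng(1)
n, beta = 2 ** 16, 5.0 / 3.0
q = np.fft.rfftfreq(n)
q[0] = q[1]                              # avoid the q = 0 singularity
shaping = q ** (-(1.0 + beta) / 2.0)
phi = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * shaping, n)

# Empirical structure function at a range of lags
lags = np.array([8, 16, 32, 64, 128, 256])
D = np.array([np.mean((phi[lag:] - phi[:-lag]) ** 2) for lag in lags])

# The log-log slope should be close to beta = 5/3
slope = np.polyfit(np.log(lags), np.log(D), 1)[0]
print("measured beta ~", round(slope, 2))
```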
Kolmogorov turbulence, which describes how kinetic energy injected at an outer length scale $r_{\rm out}$ cascades to increasingly smaller scales until finally dissipated at an inner length scale $r_{\rm in}$, predicts $\beta = 5/3$ in the domain $r_{\rm in} < r < r_{\rm out}$ \citep*{Johnson_2015a}.
To demonstrate the implementation and provide an example of intraday ISM variability, we present the results of a simulated observation of 10 minutes duration at 14:00 UTC on four consecutive days in Fig.~\ref{ISM_sequence}. To compare to published observations, we use the three-station EHT array consisting of the Submillimeter Telescope (SMT) in Arizona, the Combined Array for Research in Millimeter-wave Astronomy (CARMA) in California, and the James Clerk Maxwell Telescope (JCMT) on Mauna Kea. The relative transverse velocity between the observer and scattering screen is set at $50~\rm{km\,s}^{-1}$, to be consistent with \citet{2016arXiv160106571O}. The source is a circular Gaussian with a $\rm{FWHM}=40$~$\mu$-arcsec, approximately the angular distance that the scattering screen travels over $\sim 4$~days. The source size has been chosen such that it is consistent with the latest estimate of the size of Sgr~A$^\star$ at $230$~GHz \citep{Fish_2011}. Closure quantities are model-dependent and calculated as specified in \citet{Rogers_1995}, where the thermal noise was added based on the System Equivalent Flux Density (SEFD) table in \citet{Lu_2014}. The SEFD provides a measure of the thermal noise associated with a particular station.
\begin{figure}
\begin{center}
\includegraphics[width=1.00\columnwidth]{figures/ism/ism}
\caption{{{\bf The top panel, left to right,} shows the original $\rm FWHM = 40$~$\mu$-arcsec Gaussian and the ISM-scattered images on the first and last nights of the observation, respectively. {\bf The bottom panel, left to right,} shows the evolution of the 10 minute-averaged closure phase with epoch, the {\sl uv}-tracks for a single night, and the visibility amplitudes $|V|$ of the unscattered (red) and scattered (grey-scale) sources as a function of {\sl uv-}distance. Variations of the flux on the shortest baselines reveal total flux modulation, while flux variations $\Delta |V|$ on longer baselines and non-zero closure phases track the fluctuations in refractive noise. When compared to the latest published observations of Sgr~A$^{\star}$ \protect\citep{Fish_2016}, we see that the observed and simulated closure phase variability are consistent. Furthermore, ISM scattering simulations can constrain the variability fraction associated with the screen, enabling a more robust estimation of source variability.\label{ISM_sequence}%
}}
\end{center}
\end{figure}
\subsection{Pointing Errors}\label{sec:pointing}
All antennas suffer pointing errors to some degree due to a variety of factors including dish flexure due to gravity, wind and thermal loading, as well as drive mechanics. This corresponds to an offset primary beam, which should only translate to minor amplitude errors if the pointing error $\theta_{\rm PE}$ is significantly smaller than the primary beam (i.e. $\theta_{\rm PE} \ll \theta_{\rm PB}$). In the Measurement Equation formalism, this offset can be represented by a modified (shifted) primary beam pattern in the $\mathbf{E}$-Jones term
\begin{equation}
{\bf E}_p(l,m) = {\bf E}(l_0 + \delta l_p, m_0 + \delta m_p),
\end{equation}
where $\delta l_p, \delta m_p$ correspond to the directional cosine offsets.
We investigate the effect of pointing errors on the 50~m (i.e. fully illuminated) LMT dish configured in an eight station VLBI array. The LMT has been measured to have an absolute pointing accuracy of $\sigma_{\rm abs} = 1-3$~arcsec, where smaller offsets occur when observing sources closer to zenith, and a tracking pointing accuracy $\sigma_{\rm track} < 1$~arcsec\footnote{http://www.lmtgtm.org/telescope/telescope-description/}. We explore the observational effect of these errors through three pointing error models, each representing a different instructive and plausible scenario. The LMT has been singled out as it may well serve as a reference station for the EHT array given its sensitivity and central geographic location. The source used is a circular Gaussian of characteristic size $\Theta_{\rm src}=50~\mu$-arcsec, located at the phase centre. For this investigation, as long as $\Theta_{\rm src} \ll \theta_{\rm PB}$, the exact structure of the source is unimportant. We approximate the LMT beam profile using an analytic WSRT beam model \citep{Popping_2008} with a factor of 2 increase in the beam factor $C$ to take into account the increased dish size
\begin{equation}
E(l, m) = \cos^3(C\nu \rho),\qquad \rho = \sqrt{\delta l_p^2 + \delta m_p^2}
\end{equation}
where the beam factor $C \approx 130$~GHz$^{-1}$. Note that the power beam $EE^H$ becomes $\cos^6$, giving a $\rm{FWHM} = 6.5$~arcsec. In Fig.~\ref{fig:pointing}, we show the resulting amplitude errors for pointing offsets spanning the range $\rho \sim 0-4.5$~arcsec.
In the first case we assume a constant pointing error and plot the RMS relative visibility amplitude error $\sigma_{\Delta V/V_0}$ on baselines to LMT, where $\Delta V = V_{\rm point} - V_{0}$, $V_{\rm point}$ and $V_{0}$ are the amplitude of the visibility with and without pointing errors respectively. This simulation is meant to be instructive as to the typical amplitude error in the simplest possible scenario.
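This simplest case follows directly from the beam model; a minimal sketch (assuming the $C \approx 130$~GHz$^{-1}$ and $\nu = 230$~GHz values above, with the source at the phase centre) is:

```python
import numpy as np

# Fractional visibility-amplitude loss on a baseline to the LMT for a
# constant pointing offset rho, using the cos^3 beam with C ~ 130 GHz^-1.
C_GHZ, NU_GHZ = 130.0, 230.0
ARCSEC = np.pi / (180.0 * 3600.0)        # arcsec -> radians

def beam(rho_rad):
    return np.cos(C_GHZ * NU_GHZ * rho_rad) ** 3

for rho_as in (1.0, 2.0, 3.0):
    loss = 1.0 - beam(rho_as * ARCSEC)
    print(f"rho = {rho_as:.0f} arcsec -> Delta V / V0 = {loss:.3f}")
```

For a point-like source at the phase centre, this single-station beam factor is exactly the fractional amplitude error on all baselines to the offset antenna, since the other station's beam is unperturbed.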
It is also interesting to consider a slower, continuous time-variable pointing error associated with the tracking error $\sigma_{\rm track}$. Physically this could be attributed to changes in wind, thermal and gravitational loading, which all change with telescope pointing direction over the course of a typical few-hour observation. Using the \textsc{MeqTrees} software package, such behaviour has been demonstrated to occur with the Westerbork Synthesis Radio Telescope (WSRT) \citep{Smirnov_2011c}\footnote{See also https://indico.skatelescope.org/event/171/session/9/contribution/20}. This is modeled as a sinusoid with period sampled from a uniform distribution between 0.5 and 6 hours, and a peak amplitude $A = \sqrt{2}\,\sigma_{\rm track}$, where the factor $\sqrt{2}$ relates the amplitude to the RMS for zero-mean periodic waveforms.
Whilst a stationary phase centre is tracked, the pointing error should evolve slowly and smoothly; however, in mm-VLBI observations the phase centre is regularly shifted to another source/calibrator, typically every 5-10 minutes. This causes the pointing error to change abruptly, with a characteristic magnitude set by the absolute pointing error $\sigma_{\rm abs}$. Even though the EHT will be able to determine the pointing offset when observing a calibrator with well-known structure, when the antennas slew back to a source (e.g. Sgr~A$^\star$) with less certain or variable source structure, the pointing error could change significantly. This is exacerbated by the scarcity of mm-wavelength calibrators, which are often widely separated from the target. We simulate this effect by re-sampling the pointing error every 10 minutes from a Gaussian of standard deviation equal to the quoted pointing error. We perform 50 realisations of the simulation for each pointing offset to generate reasonable uncertainties.
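The abrupt re-pointing model can be sketched as a simple Monte Carlo. In the sketch below the beam model and constants follow the text, while the track length and 10-minute scan cadence are assumed purely for illustration:

```python
import numpy as np

# Monte Carlo sketch of the stochastic pointing model: the offset is
# re-drawn every 10 min from a Gaussian of width sigma_abs, and the RMS
# fractional amplitude error is estimated over 50 realisations.
rng = np.random.default_rng(7)
C_GHZ, NU_GHZ = 130.0, 230.0
ARCSEC = np.pi / (180.0 * 3600.0)

def rms_amp_error(sigma_abs_arcsec, n_scans=48, n_real=50):
    # n_scans = 48 assumes an 8-hour track re-pointed every 10 minutes
    dl = rng.normal(0.0, sigma_abs_arcsec, (n_real, n_scans)) * ARCSEC
    dm = rng.normal(0.0, sigma_abs_arcsec, (n_real, n_scans)) * ARCSEC
    rho = np.hypot(dl, dm)
    dV = 1.0 - np.cos(C_GHZ * NU_GHZ * rho) ** 3   # fractional error per scan
    return np.sqrt(np.mean(dV ** 2))

for sigma in (1.0, 2.0, 3.0):
    print(f"sigma_abs = {sigma:.0f} arcsec -> RMS dV/V ~ {rms_amp_error(sigma):.2f}")
```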
In Fig.~\ref{fig:pointing} we show the results for the three cases described above. Stochastic variability with $\sigma_{\rm abs}$ consistent with its measured range can yield fractional visibility amplitude errors up to $0.4$. Slow variability with $\sigma_{\rm track}$ consistent with measured values yields fractional errors only up to $0.05$. The constant error case closely follows the mean of the slow variability case. If this effect is not separable from the calibration model used, it could be misinterpreted as intrinsic variability, source substructure, and/or noise. For future observations at 345~GHz, these effects will be even more pronounced, given the narrower primary beam.
\begin{figure}
\begin{center}
\includegraphics[width=1.00\columnwidth]{figures/images2/point}
\caption{{RMS relative amplitude error induced by pointing error with the 50~m (i.e. fully illuminated) LMT antenna as a function of pointing error offset $\rho$ at 230~GHz. We assume that these errors are degenerate or non-separable from the self-calibration/fringe-fitting model used. See text for the description of the three models used. This simulation capability enables constraints on the magnitude of pointing-induced errors given a particular pointing calibration strategy.\label{fig:pointing}%
}}
\end{center}
\end{figure}
\subsection{Troposphere}
The coherence and intensity of millimetre wavelength electromagnetic waves are most severely degraded in the lowest atmospheric layer, the troposphere, which extends up to an altitude of $7-10$~km above sea level, where the temperature drops to $T \sim 218$~K \citep{Thompson_2001}. The troposphere is composed of the primary gases $\rm N_2$ and $\rm O_2$, trace gases, e.g. water vapour and $\rm CO_2$, as well as particulates such as water droplets and dust.
The absorption spectrum in the GHz range (e.g. \citealt{Pardo_2001}) is dominated by several transitions of $\rm H_2O$ and $\rm O_2$ as well as a pseudo-continuum opacity which increases with frequency. The pseudo-continuum opacity is due to the far wings of a multitude of broadened water vapour lines above 1~THz \citep{Carilli_1999}.
In contrast to the other atmospheric chemical components, water vapour mixes poorly and its time-variable spatial distribution induces rapid fluctuations in the measured visibility phase at short wavelengths. The water vapour column density is measured as the depth of the column when converted to the liquid phase and is referred to as the precipitable water vapour (PWV). The PWV is, via the real component of the refractive index, directly proportional to phase offset,
\begin{equation}\label{eq:phi-pwv}
\delta\phi \approx \frac{12.6\pi}{\lambda} \, w,
\end{equation}
\noindent where $w$ is the depth of the PWV column \citep*{Carilli_1999} and an atmospheric temperature $T=270$~K has been assumed. This relationship between phase and water vapour content has been experimentally verified \citep{hogg_1981}. At 230~GHz, the change in PWV needed to offset the phase by 1~rad is $\Delta w\approx0.03$~mm. This sensitive dependence of phase coherence on atmospheric stability is aggravated by typically low antenna observation elevation angles, uncorrelated atmospheric variations between stations, and observing with a sparse VLBI array.
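As a quick numerical check of this relation (using only the $T=270$~K coefficient stated above):

```python
import numpy as np

# PWV change that offsets the visibility phase by 1 rad at 230 GHz,
# from delta_phi ~ (12.6*pi/lambda) * w, i.e. w = lambda/(12.6*pi).
nu_hz = 230e9
lam_mm = 2.99792458e11 / nu_hz      # wavelength in mm
dw = lam_mm / (12.6 * np.pi)        # PWV for a 1 rad phase offset, in mm
print(f"lambda = {lam_mm:.2f} mm -> delta_w = {dw:.3f} mm")
```

This reproduces the $\Delta w\approx0.03$~mm figure quoted in the text.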
Our focus is to model three primary, inter-related observables which are the most relevant to mm-VLBI: turbulence-driven fluctuations in the visibility phase $\delta \phi$; signal attenuation due to the atmospheric opacity $\tau$; and the increase in system temperature due to atmospheric emission at a brightness temperature $T_{\rm atm}$.
Our approach is to model these observables as being separable into mean and turbulent components which are simulated independently. The mean tropospheric simulation module performs radiative transfer with a detailed model of the electromagnetic spectrum of each atmospheric constituent. The turbulent simulation module represents a scattering approach to account for the decoherence properties of the visibility phase.
\subsubsection{Average Troposphere}
The problem of radiative transfer through a static atmosphere is well described and implemented by the Atmospheric Transmission at Microwaves (\textsc{atm}) package \citep{Pardo_2001}. \textsc{atm} has been incorporated into \textsc{MeqSilhouette} to provide a fast and sophisticated procedure to calculate average opacities, sky brightness temperatures and time delays. Here we provide a brief summary of the theory underpinning the package but see the original paper for further detail. \textsc{atm} is commonly used in the ALMA community \citep{Curtis_2009,Nikolic_2013} and has been tested with atmospheric transmission spectra taken on Mauna Kea \citep{Serabyn_1998}.
We start from the unpolarised radiative transfer equation, which is unidirectional in the absence of scattering,
\begin{equation}\label{eq:rad_trans}
\frac{dI_\nu (s) }{ds} = \epsilon_\nu(s) -\kappa_\nu(s) I_\nu (s),
\end{equation}
where $s$ is the coordinate along the signal path through the atmosphere, $I_\nu(s)$ is the specific intensity, $\epsilon_\nu$ is the macroscopic emission coefficient and $\kappa_\nu$ is the macroscopic absorption coefficient.
The goal is to integrate this equation over the signal path, which requires $\kappa_\nu$ as a function of altitude and frequency. In practice, this involves a triple sum over altitude layer, chemical species and rotational energy transition. Atmospheric temperature and pressure profiles are calculated based on several station-dependent inputs, namely the ground temperature, ground pressure, and precipitable water vapour column depth.
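The layered integration can be illustrated with a toy plane-parallel discretisation, assuming LTE so that $\epsilon_\nu = \kappa_\nu B_\nu(T)$. The layer temperatures and opacities below are placeholder values chosen for illustration, not \textsc{atm} output:

```python
import numpy as np

# Toy layered integration of the radiative transfer equation in the
# Rayleigh-Jeans limit: each layer attenuates the incident brightness
# temperature by exp(-dtau) and adds its own emission T*(1 - exp(-dtau)).
def brightness_temp(T_layers, dtau_layers, T_bg=2.73):
    T_b = T_bg                               # cosmic background at the top
    for T, dtau in zip(T_layers, dtau_layers):
        att = np.exp(-dtau)
        T_b = T_b * att + T * (1.0 - att)
    return T_b

# 20 layers from the tropopause (220 K) down to the ground (270 K),
# sharing a total zenith opacity tau = 0.1 (placeholder values)
T_layers = np.linspace(220.0, 270.0, 20)
dtau_layers = np.full(20, 0.1 / 20)
T_sky = brightness_temp(T_layers, dtau_layers)
print(f"sky brightness temperature ~ {T_sky:.1f} K")
```

For small total opacity, the result approaches $T_{\rm sky} \approx \tau\, T_{\rm eff}$ plus the attenuated background, as expected.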
A general equation for the absorption coefficient for a transition between a lower state $l$ and an upper state $u$ is given in the original paper. Here we merely point out that it should be proportional to the energy of the photon, $h\nu_{l \to u}$; the transition probability or Einstein coefficient, $B_{l \to u}$; the line-shape, $f(\nu,\nu_{l \to u})$; and the number densities $N$ of the two level populations. Line profiles which describe pressure broadening (perturbations to the Hamiltonian due to the presence of nearby molecules) and Doppler broadening are used. The condition of detailed balance further requires that decays from the upper state are included, yielding $g_u B_{u \to l} = g_l B_{l \to u}$, where $g$ is the degeneracy of the state. Putting this together we find,
\begin{equation}
\kappa(\nu) _{l \to u} \propto h\nu B_{l \to u} \left(\frac{N_l}{g_l} - \frac{N_u}{g_u} \right) f(\nu,\nu_{l \to u}),
\end{equation}
\noindent where the Einstein coefficients are calculated from the inner product of the initial and final states with the dipole transition operator. The number densities of the two states, $N_u$ and $N_l$ in local thermodynamic equilibrium (LTE) are simply related to the local number density and temperature via Boltzmann statistics.
Typical opacities and sky brightness temperatures for ALMA, the Submillimeter Array (SMA) and SPT are shown in Fig.~\ref{fig:mean_atm}. A typical PWV range \citep{Lane_1998}, ground pressure and ground temperature were assumed for each site. Note that both the opacity and brightness temperature are inversely proportional to the ground temperature and proportional to the ground pressure.
\begin{figure}
\begin{center}
\includegraphics[width=1.00\columnwidth]{figures/opacity-1/opacity-}
\caption{{Best fit opacity (red) and sky brightness temperature (blue) line solutions at $\nu =230$~GHz as a function of precipitable water vapour (PWV) for three indicative ground pressures and temperatures which approximately represent the sites of SPT (solid), ALMA (diamond) and SMA (dashed). The legend shows the estimated input ground (pressure, temperature) parameters for each site.\label{fig:mean_atm}%
}}
\end{center}
\end{figure}
\subsubsection{Turbulent troposphere}
Visibility phase instability $\delta \phi(t)$ due to tropospheric turbulence is a fundamental limitation to producing high fidelity, science-quality maps with a mm-VLBI array \citep{Thompson_2001}. The coherence time-scale is typically too rapid ($\lesssim10$~s) for fast switching calibration, so other calibration procedures (e.g. water vapour radiometry, paired antennas, sub-arrays, and/or self-calibration) must be used. Self-calibration is the most commonly used but is limited by the integration time needed to obtain adequate SNR to fringe fit. This often leads to the use of closure quantities to perform model fitting (\citealt{Doeleman_2001}; \citealt{Fish_2011}).
Following from section~\ref{sec:basic_scat}, we can model the statistics of $\delta \phi(t)$ with a thin, frozen, Kolmogorov-turbulent phase screen moving with a bulk velocity, $v$. We set the height $h$ of the screen at the water vapour scale height of 2~km above ground. We will show later that the thickness $\Delta h$ of the atmospheric turbulent layer can be neglected in our implementation. At $1.3$~mm, the Fresnel scale is $r_F \approx 0.45$~m and experiments show annual variations of $r_0 \sim 50 - 500$~m above Mauna Kea \citep{Masson_1994} and $r_0 \sim 90 - 700$~m above Chajnantor \citep*{Radford_1998}, where both sites are considered to have excellent atmospheric conditions for millimetre astronomy. As $r_F < r_0$, this is an example of weak scattering.
The required field-of-view (FoV) of a global mm-VLBI array is typically FoV~$< 1$~mas, corresponding to $\sim10~\mu$m at a height of 2~km, which is roughly 7-8 orders of magnitude smaller than the tropospheric phase coherence length. The tropospheric corruption can therefore be considered constant across the FoV and, from the perspective of the Measurement Equation, modeled as a diagonal Jones matrix per time step. As VLBI baselines are much longer than the coherence length, $|\mathbf{b}| \ge 1000$~km~$\gg r_0$, the phase screen at each site must be simulated independently.
Our aim then is to produce a phase error time sequence $\left\{\delta \phi(t_i)\right\}$ for each station which is added to the visibility phase. We invoke the frozen screen assumption and write the structure function as a function of time, $D (t) = D(r)|_{r=vt}$. The temporal structure function provides an efficient route to sample the variability of the troposphere at the integration time of the dataset, $t_{\rm int} \sim 1$~s. This definition is only applicable in the regime $r > r_F$, to ensure diffraction effects are negligible \citep{Thompson_2001}.
The temporal variance of the phase is consequently a function of the temporal structure function \citep*{Treuhaft_1987}
\begin{equation}
\sigma^2_{\phi}(t_{\rm int}) = \frac{1}{t_{\rm int}^2} \int_{0}^{t_{\rm int}} (t_{\rm int}-t)\, D_{\phi}(t)\, {\rm d}t.
\end{equation}
Assuming power-law turbulence and integrating yields,
\begin{equation}
\sigma^2_\phi (t_{\rm int})=\left[\frac{1}{\sin\theta(\beta^2 +3\beta +2)}\right]\left(\frac{t_{\rm int}}{t_0}\right)^{\beta},
\end{equation}
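For completeness, the intermediate step is straightforward: inserting $D_\phi(t) = (1/\sin\theta)\,(t/t_0)^{\beta}$ (the airmass factor entering through $D_\phi \propto w$) into the integral above gives
\begin{align*}
\sigma^2_\phi(t_{\rm int}) &= \frac{1}{t_{\rm int}^2 \sin\theta} \int_0^{t_{\rm int}} (t_{\rm int}-t) \left(\frac{t}{t_0}\right)^{\beta} {\rm d}t \\
&= \frac{1}{\sin\theta} \left(\frac{t_{\rm int}}{t_0}\right)^{\beta} \left(\frac{1}{\beta+1} - \frac{1}{\beta+2}\right)
= \frac{1}{\sin\theta\,(\beta^2 + 3\beta + 2)} \left(\frac{t_{\rm int}}{t_0}\right)^{\beta},
\end{align*}
recovering the expression above.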
\noindent where $t_0 = r_0/v$ is the coherence time and $1/\sin\theta$ is the approximate airmass, which arises as $D_\phi \propto w$. As $r \ll \Delta h$, where $\Delta h$ is the thickness of the turbulent layer, a thin-screen exponent of $\beta = 5/3$ is justified \citep*{Treuhaft_1987}. The phase error time-series takes the form of a Gaussian random walk per antenna. At mm-wavelengths, the spectrum of water vapour is non-dispersive to within a few percent \citep{Curtis_2009}, and so we can assume a simple linear scaling across the bandwidth. Fig.~\ref{delay_plots} shows an example simulation of the turbulent and total delays at the SMA and ALMA sites.
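A minimal sketch of such a per-antenna phase time-series (illustrative only, not the \textsc{MeqSilhouette} implementation; the values $t_0 = 10$~s, $t_{\rm int} = 1$~s and zenith pointing are assumed) is:

```python
import numpy as np

# Gaussian-random-walk phase series: per-integration increments with
# variance sigma_phi^2(t_int) from the power-law result, beta = 5/3.
rng = np.random.default_rng(3)
beta, t0, t_int, n_steps = 5.0 / 3.0, 10.0, 1.0, 3600   # one-hour track
sin_theta = 1.0                                          # zenith

sigma2 = (t_int / t0) ** beta / (sin_theta * (beta ** 2 + 3 * beta + 2))
phase = np.cumsum(rng.normal(0.0, np.sqrt(sigma2), n_steps))   # radians

# RMS phase difference across the coherence time t0
k = int(t0 / t_int)
rms_t0 = np.sqrt(np.mean((phase[k:] - phase[:-k]) ** 2))
print(f"per-step sigma = {np.sqrt(sigma2):.3f} rad, RMS over t0 ~ {rms_t0:.2f} rad")
```

Note that a simple random walk reproduces the target variance at $t_{\rm int}$ but grows as $t^{1/2}$ rather than $t^{5/6}$ on longer lags; this is a limitation of the sketch, not of the structure-function formalism.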
Phase fluctuations $\delta\phi(t)$ can also be simulated by taking the inverse Fourier transform of the spatial phase power spectrum. However, this approach is much more computationally expensive: e.g. for an observation of length $t_{\rm obs}$ involving $N_{\rm ant}=8$ independent antennas with dish radii $r_{\rm dish}=15$~m, wind speed $v=10$~m\,s$^{-1}$ and pixel size equal to $r_{\rm F}$, the number of pixels $N_{\rm pix} \approx N_{\rm ant} t_{\rm obs} r_{\rm dish}^2/(v r_{\rm F}^3) \sim 10^8$. Additionally, due to the fractal nature of ideal Kolmogorov turbulence, the power spectrum becomes unbounded as the wavenumber approaches zero, which makes it difficult to determine the sampling interval of the spatial power spectrum \citep{Lane_1992}.
\begin{figure}
\begin{center}
\includegraphics[width=1.00\columnwidth]{figures/images1/delays}
\caption{{Simulation of the total delay (left) and the turbulent atmospheric delay (right) for the SMA (blue) and ALMA (green) sites towards Sgr~A$^\star$. Ground pressures and temperatures are the same as in Fig.~\ref{fig:mean_atm}, the precipitable water vapour above each station is set to $w=2$~mm, and the instantaneous zenith coherence time is set to $t_0=10$~s for both stations. Note that all tropospheric parameters are, however, independently set. The conversion from time delay to phase at 230~GHz is $1$~rad~$=0.7$~ps.\label{delay_plots}%
}}
\end{center}
\end{figure}
\subsubsection{Limitations to high-fidelity image reconstruction}\label{sec:trop_errors}
A primary objective of \textsc{MeqSilhouette} is to understand and constrain systematic errors in mm-VLBI observations. In this section, the tropospheric module is used to estimate the effect of varying amounts of residual tropospheric phase noise on image quality.
We simulate a simple scenario: a sky model consisting only of a 2.4~Jy point source, consistent with the measured flux density of Sagittarius~A$^\star$ at 230~GHz, and a phase coherence time of ten seconds above each station. We approximate the effect of imperfect calibration by adding a small fraction of the turbulent phase noise to the visibilities. For this example, we do not include the mean delay component, assuming it to be completely mitigated during calibration. Currently, we use only conventional imaging techniques, which do not make use of bispectrum quantities.\selectlanguage{english}
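The fractional residual-phase corruption used in this example can be sketched as follows. This is an illustrative stand-in, not the pipeline's actual interface: each baseline visibility is multiplied by a phasor carrying a chosen fraction of the differential station phases.

```python
import numpy as np

def apply_residual_phase(vis, phase_p, phase_q, fraction):
    """Corrupt baseline (p, q) visibilities with a fraction of each
    station's turbulent phase, mimicking fringe-fitting residuals.

    vis        : complex visibilities, shape (n_times,)
    phase_p/_q : turbulent phase series at stations p and q [rad]
    fraction   : residual fraction (e.g. 0.03 for 3 per cent)
    """
    residual = fraction * (phase_p - phase_q)
    return vis * np.exp(1j * residual)

# Example: 2.4 Jy point source with 3 per cent residual phase noise,
# using simple random-walk phases as placeholder station series.
n = 100
rng = np.random.default_rng(1)
phi_p = np.cumsum(rng.normal(0.0, 0.3, n))
phi_q = np.cumsum(rng.normal(0.0, 0.3, n))
vis = np.full(n, 2.4 + 0j)
vis_corr = apply_residual_phase(vis, phi_p, phi_q, 0.03)
```

Note that this corruption preserves per-sample amplitudes; the peak flux loss seen in Fig.~\ref{fig:trop_images} arises only upon coherent averaging of the decorrelated phases.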
\begin{figure}
\begin{center}
\includegraphics[width=0.98\columnwidth]{figures/trop-images/trop-images}
\caption{{The effect of residual tropospheric phase noise on interferometric images of a point source observed for 12 hours at 230~GHz with 4~GHz bandwidth, using the following array: SPT, ALMA, SMA, SMT, LMT and JCMT, assuming the same SEFDs as \protect\citet{Lu_2014} and an elevation limit of 15$^\circ$. For simplicity, the weather parameters at each station were set to: coherence time $t_0=10$~s; PWV depth $w=1$~mm; ground pressure $P=600$~mb; ground temperature $T=273$~K. {\bf Top left:} interferometric map with thermal noise only. {\bf Top right:} atmospheric attenuation and sky noise (due to non-zero opacity) with 1\% of the turbulent phase noise added. {\bf Bottom left:} as previous, but with a 3\% turbulent phase contribution. {\bf Bottom right:} as previous, but with a 6\% turbulent phase contribution. The fractional turbulent phase contributions are illustrative of the effect of fringe-fitting errors. Note the decrease in the source peak flux with increasing turbulent tropospheric phase noise, and that the source centroid is offset from its true position (black crosshairs). \label{fig:trop_images}%
}}
\end{center}
\end{figure}
\section{Discussion}\label{sec:discussion}
In this paper, we present the first release of the \textsc{MeqSilhouette} synthetic data simulation package. The pipeline is optimised for mm-VLBI, taking into account user-specified stages of the signal propagation path, which enables the quantification of a range of systematic effects. Focus has been placed on modelling the effects of signal transmission through the ISM and troposphere, as well as instrumental errors (i.e. pointing errors and thermal noise). Time variability in all relevant domains (source, array, ISM, troposphere) is implemented. The run time for a typical simulation with a realistic instrumental setup is of the order of minutes. Implementation of polarisation effects is planned for the next release.
The ISM scattering implementation \textsc{ScatterBrane}, based on \citet*{Johnson_2015a}, has been incorporated into the pipeline. Fig.~\ref{ISM_sequence} provides an example of closure phase and flux variability over a 4-day period for a static source. Accurate simulation of the ISM-induced closure phase variation is essential in order to make any inference on asymmetric, event-horizon scale structure \citep[e.g.][]{Fish_2016,2016arXiv160106571O}. This will become even more important as the EHT sensitivity increases by an order of magnitude in the near future. Note that if the source exhibits intrinsic spatial variability, as in the case of a hotspot model \citep{Doeleman_2009}, the ISM-induced variability will be enhanced, as the relative motion between source, screen and observer is increased.
In section~\ref{sec:pointing}, we show how antenna pointing errors at the LMT could introduce fractional RMS amplitude errors of $\sigma_{\Delta V/V_0} \le 0.4$ on the timescale of phase-centre switching. This would occur if the calibrator is widely separated from the target, as is often the case in mm-VLBI. In contrast, tracking errors are less problematic, with $\sigma_{\Delta V/V_0} \le 0.05$. If the gain error is not separable from the calibration model used, it could be misinterpreted as intrinsic variability, substructure and/or increased noise. If unaccounted for, this effect has the potential to limit the dynamic range of mm-VLBI images. Further tests to constrain the pointing uncertainties of EHT stations will lead to more accurate interferometric simulations, and hence a better quantification of their overall impact on black hole shadow parameter estimation. Here we demonstrate the capability to incorporate a range of plausible pointing error effects into a full simulation pipeline.
In section~\ref{sec:trop_errors}, we explore the observational consequences of observing through a turbulent troposphere. In this simulation, we assume a simple point source model and apply increasing levels of turbulence-induced phase fluctuations before imaging with a two-dimensional inverse fast Fourier transform. We note a rapid attenuation of the peak flux due to incoherent averaging, slight offsets in the source centroid, and the presence of spurious imaging artefacts. Surprisingly, in this configuration there was no evidence of blurring or a loss of resolution. In an upcoming paper, we perform a systematic exploration of turbulent tropospheric effects on the accuracy of fringe-fitting algorithms and strategies, using an automated calibration procedure and including the added complexity of a time-variable source.
Significant progress has been made in the theoretical and numerical modelling of the inner accretion flow and jet-launch regions near a supermassive black hole event horizon. With \textsc{MeqSilhouette}, we now have the ability to couple these with sophisticated interferometric and signal propagation simulations. This offers a tool to enable a closer and more effective interplay between theoretical predictions and observational capabilities. Moreover, detailed interferometric simulations will enable us to quantify systematic effects on black hole and/or accretion flow parameter estimation.
\section{Conclusion}\label{sec:conclusion}
In light of the science objectives of mm-VLBI observations and software advances in the broader radio interferometry community, a mm-VLBI data simulator has been developed. An important feature is that the simulation pipeline is built on the {\sc Measurement Set} format, in line with ALMA and future VLBI data formats. The focus has been placed on simulating realistic data given an arbitrary theoretical sky model. To this end, the simulator includes signal corruptions in the interstellar medium (ISM), troposphere and instrumentation. Examples of typical corruptions have been demonstrated, showing that each can significantly affect the inferred scientific parameters. Particular focus has been placed on EHT observations; however, the pipeline is completely general with respect to observation configuration and source structure. Time variability in all domains (source, array, ISM, troposphere) is implemented. Future releases of \textsc{MeqSilhouette} will include polarisation-dependent corruptions. The creation of a close interface between sophisticated theoretical and interferometric mm-VLBI simulations will enhance the scientific opportunities possible with the EHT.
\section*{Acknowledgements}
We thank Michael Johnson and Katherine Rosenfeld for making the {\sc ScatterBrane} code publicly available, and for helpful discussions. Similarly, we thank Bojan Nikolic for \textsc{atm} support. We are grateful to Monika Mo\'{s}cibrodzka for supplying theoretical simulations used in testing, and to Ilse van Bemmel and Lindy Blackburn for useful discussions. We thank Paul Galatis for assistance with the \textsc{MeqSilhouette} logo design. The financial assistance of the South African SKA Project (SKA SA) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the SKA SA (www.ska.ac.za). O.~Smirnov's research is supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation.
\selectlanguage{english}
\FloatBarrier
\bibliographystyle{mnras}
\bibliography{bibliography/converted_to_latex.bib%
}
\end{document}