The polluting plumes emitted by buildings evolve in shape and optical depth on timescales of a couple of minutes or less. We exploited this characteristic evolution by displaying groups of consecutive background-subtracted images in temporal order, allowing annotators to identify plumes as they change shape and expand, and to track their trajectories as they rise into the air. For each target image, a background was created by averaging the surrounding images; subtracting this averaged baseline from the target image leaves plumes visible in the resulting frame, along with any other transient changes such as cars, clouds, and other features that constitute potential contaminants (false positives).
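As a minimal sketch of this subtraction step, assuming grayscale frames stored as NumPy arrays and a symmetric averaging window around the target frame (the exact window size is not specified above), the baseline construction and subtraction could look like this:

```python
import numpy as np

def background_subtract(frames, target_idx, window=2):
    """Subtract an averaged-neighbor background from a target frame.

    frames: sequence of grayscale images (H, W) as NumPy arrays.
    target_idx: index of the frame to analyze.
    window: number of neighboring frames on each side used for the
        baseline (assumed value; not specified in the text).
    """
    lo = max(0, target_idx - window)
    hi = min(len(frames), target_idx + window + 1)
    # Average the surrounding frames, excluding the target itself,
    # to form the background baseline.
    neighbors = [frames[i] for i in range(lo, hi) if i != target_idx]
    baseline = np.mean(neighbors, axis=0)
    # Static scenery cancels out; transient features (plumes, cars,
    # clouds) survive the subtraction and stand out in the residual.
    return frames[target_idx].astype(np.float64) - baseline
```

Because the baseline is an average over nearby frames, anything that moves or changes between frames remains in the residual, which is why cars and clouds appear alongside plumes as potential false positives.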
The following section details the application used to tag plumes and other features. We also describe each feature type tagged and provide examples in both the original and background-subtracted images.

Image Labeling

To apply labels to the images and extract plumes for the training set, we built a web-based tool using d3.js, a JavaScript data visualization library \citep{Bostock_2011}, and Flask, a Python web application framework \citep{unterwaditzer2010}, as seen in Figures \ref{753144} and \ref{827908}. The tool allowed us to apply bounding-box labels to batches of consecutive images; collect summary statistics on the number of tags per day or per image; link tags between images to track the appearance of the same plume across multiple frames; and record who applied tags to which images to assess intrinsic human bias. Images are processed in Python with OpenCV \citep{opencv_library} before being displayed on screen.
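As a rough sketch of how such a backend might accept bounding-box tags, the hypothetical Flask endpoint below stores one record per tag, including a link to a tag on an adjacent frame so the same plume can be tracked across images. The route name, JSON schema, and in-memory storage are all illustrative assumptions; the actual routes and storage used by the tool are not described here.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
tags = []  # in-memory store for illustration only

@app.route("/tags", methods=["POST"])
def add_tag():
    # Hypothetical payload: image id, bounding box, feature class,
    # annotator id (for assessing human bias), and an optional link
    # to a tag on an adjacent frame (for tracking a plume over time).
    data = request.get_json()
    record = {
        "image_id": data["image_id"],
        "bbox": data["bbox"],            # [x, y, width, height]
        "label": data["label"],          # e.g. "plume", "cloud", "car"
        "annotator": data["annotator"],
        "linked_tag": data.get("linked_tag"),
    }
    tags.append(record)
    return jsonify(record), 201

if __name__ == "__main__":
    app.run()
```

Keeping the annotator and link fields on every record is what makes the per-user bias checks and cross-frame plume tracking described above possible.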