Shannon Jones et al.

The Heliospheric Imagers on board NASA's twin STEREO spacecraft show that coronal mass ejections (CMEs) can be visually complex structures. To explore this complexity, we created a web-based citizen science project in collaboration with the UK Science Museum, in which participants were shown pairs of differenced CME images taken by the inner cameras of the Heliospheric Imagers (HI-1) between 2008 and 2016. Participants were asked to decide which image in each pair appeared more complicated. 4,028 volunteers conducted 246,692 comparisons of 20,190 image pairs, with each pair being classified by 12 independent users. A Bradley-Terry model was then applied to these data to rank the CMEs by their visual complexity. This ranking revealed that the annual average visual complexity follows the solar activity cycle, with higher complexity observed at the peak of the cycle, and that the average complexity of CMEs observed by HI-1A was significantly higher than that of CMEs observed by HI-1B. Visual complexity was found to be associated with CME size and brightness, but the differences between HI-1A and HI-1B images suggest that complexity may also be influenced by the scale-sizes of structure within the CMEs. Whilst it might not be surprising that the complexity observed in these CME images follows the trend seen in sunspot number over the solar cycle, these results demonstrate that there is a quantifiable change in the structure of CMEs seen in the inner heliosphere.
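The abstract does not give implementation details of the ranking step; as a minimal sketch, a Bradley-Terry model can be fitted to "which image looked more complicated" comparisons with the standard minorisation-maximisation update (Hunter, 2004). The image labels and toy comparison data below are hypothetical, purely for illustration.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iter=200):
    """Fit Bradley-Terry strengths from (winner, loser) pairs.

    Returns a dict mapping item -> strength, normalised to sum to 1;
    higher strength means the item was judged more complicated more often.
    """
    wins = defaultdict(int)          # total comparisons won by each item
    pair_counts = defaultdict(int)   # number of comparisons per unordered pair
    items = set()
    for winner, loser in comparisons:
        items.update((winner, loser))
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    p = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new_p = {}
        for i in items:
            # MM update: p_i = W_i / sum_j n_ij / (p_i + p_j)
            denom = 0.0
            for pair, cnt in pair_counts.items():
                if i in pair:
                    (j,) = pair - {i}
                    denom += cnt / (p[i] + p[j])
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}
    return p

# Hypothetical toy data: each tuple records one pairwise judgement.
comparisons = ([("A", "B")] * 3 + [("B", "A")]
               + [("A", "C")] * 2 + [("C", "A")]
               + [("B", "C")] * 2 + [("C", "B")])
strengths = bradley_terry(comparisons)  # A should rank above B above C
```

Sorting items by their fitted strength then yields the complexity ranking described above; in the real project each of the 20,190 images would be an item and each of the 246,692 volunteer judgements a comparison.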

Tom Hart et al.

Many of the species in decline around the world are subject to different environmental stressors across their range, so replicated large-scale monitoring programmes are necessary to disentangle the relative impacts of these threats. At the same time as funding for long-term monitoring is being cut, studies are increasingly being criticised for lacking statistical power. For those taxa or environments where a single vantage point can observe individuals or ecological processes, time-lapse cameras can provide a cost-effective way of collecting time-series data replicated at large spatial scales that would otherwise be impossible. However, the networks of time-lapse cameras needed to cover the range of a species or process create a problem: the scale of data collection easily exceeds our ability to process the raw imagery manually. Citizen science and machine learning provide solutions for scaling up data extraction (such as locating all animals in an image). Crucially, citizen science, machine learning-derived classifiers, and the intersection between them are key to understanding how to establish monitoring systems that are sensitive to – and sufficiently powerful to detect – changes in the study system. Citizen science works relatively 'out of the box', and we regard it as a first step for many systems until machine learning algorithms are sufficiently trained to automate the process. Using Penguin Watch (www.penguinwatch.org) data as a case study, we discuss a complete workflow from images to parameter estimation and interpretation: the use of citizen science and computer vision for image processing, and parameter estimation and individual recognition for investigating biological questions. We discuss which techniques are easily generalizable to a range of questions, and where more work is needed to supplement 'out of the box' tools. We conclude with a horizon scan of advances in camera technology, such as on-board computer vision and decision making.
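The abstract does not specify how volunteer annotations are reduced to animal locations; as a hypothetical illustration of the "locating all animals in an image" step, the sketch below clusters volunteer click coordinates with a simple distance threshold and keeps clusters backed by a minimum number of independent clicks. The radius, vote threshold, and click data are all invented for the example; the real Penguin Watch pipeline is more sophisticated.

```python
import math

def cluster_clicks(clicks, radius=15.0, min_votes=3):
    """Group 2D click coordinates into clusters and return the centroids
    of clusters that received at least `min_votes` clicks, treated here
    as plausible animal positions."""
    # Union-find over clicks: clicks within `radius` pixels join a cluster.
    parent = list(range(len(clicks)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(clicks)):
        for j in range(i + 1, len(clicks)):
            if math.dist(clicks[i], clicks[j]) <= radius:
                parent[find(i)] = find(j)

    groups = {}
    for i, pt in enumerate(clicks):
        groups.setdefault(find(i), []).append(pt)

    centroids = []
    for pts in groups.values():
        if len(pts) >= min_votes:  # discard clusters with too few votes
            cx = sum(x for x, _ in pts) / len(pts)
            cy = sum(y for _, y in pts) / len(pts)
            centroids.append((cx, cy))
    return centroids

# Hypothetical clicks from several volunteers on one image: two groups of
# agreeing clicks plus one stray click that no other volunteer supports.
clicks = [(100, 100), (103, 98), (98, 102), (101, 101),
          (300, 200), (302, 199), (299, 201),
          (500, 50)]
positions = cluster_clicks(clicks)  # two consensus positions survive
```

Aggregating agreement across many volunteers in this way is also what makes the resulting labels usable as training data for the machine-learning classifiers mentioned above.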