References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., … Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. arXiv:1603.04467
Alexander, J. S., Zhang, C., Shi, K., & Riordan, P. (2016). A granular view of a snow leopard population using camera traps in Central China. Biol. Conserv., 197, pp. 27-31. https://doi.org/10.1016/j.biocon.2016.02.023
Chitwood, M. C., Lashley, M. A., Higdon, S. D., DePerno, C. S., & Moorman, C. E. (2020). Raccoon vigilance and activity patterns when sympatric with coyotes. Diversity, 12(9), 341. https://doi.org/10.3390/d12090341
Edwards, S., Gange, A. C., & Wiesel, I. (2016). An oasis in the desert: the potential of water sources as camera trap sites in arid environments for surveying a carnivore guild. J. Arid Environ., 124, pp. 304-309. https://doi.org/10.1016/j.jaridenv.2015.09.009
Ferreira-Rodríguez, N., & Pombal, M. A. (2019). Bait effectiveness in camera trap studies in the Iberian Peninsula. Mammal Res., 64(2), pp. 155-164. https://doi.org/10.1007/s13364-018-00414-1
Fink, G. A., Frintrop, S., & Jiang, X. (Eds.) (2019). Pattern recognition: 41st DAGM German Conference, Dortmund, Germany. Springer Nature. pp. 394. ISBN: 978-3-030-33676-9
Glover-Kapfer, P., Soto-Navarro, C. A., & Wearn, O. R. (2019). Camera-trapping version 3.0: current constraints and future priorities for development. Remote Sens. Ecol. Conserv., 5, pp. 209-223. https://doi.org/10.1002/rse2.106
Gomez, A., Diez, G., Salazar, A., & Diaz, A. (2016). Animal identification in low quality camera-trap images using very deep convolutional neural networks and confidence thresholds. International Symposium on Visual Computing, pp. 747-756. https://doi.org/10.1016/j.ecoinf.2017.07.004
Han, D., Liu, Q., & Fan, W. (2018). A new image classification method using CNN transfer learning and web data augmentation. Expert Syst. Appl., 95, pp. 43-56. https://doi.org/10.1016/j.eswa.2017.11.028
Jiménez, C. F., Quintana, H., Pacheco, V., Melton, D., Torrealva, J., & Tello, G. (2010). Camera trap survey of medium and large mammals in a montane rainforest of northern Peru. Rev. Peru. Biol., 17(2), pp. 191-196. https://doi.org/10.15381/RPB.V17I2.27
Karanth, K. U. (1995). Estimating tiger populations from camera-trap data using capture-recapture models. Biol. Conserv., 71(3), pp. 333-338. https://doi.org/10.1016/0006-3207(94)00057-W
Kolbert, E. (2014). The sixth extinction: An unnatural history. New York: Henry Holt and Company.
Krasin, I., Duerig, T., Alldrin, N., Ferrari, V., Abu-El-Haija, S., Kuznetsova, A., Rom, H., Uijlings, J., Popov, S., Veit, A., Belongie, S., Gomes, V., Gupta, A., Sun, C., Chechik, G., Cai, D., Feng, Z., Narayanan, D., & Murphy, K. (2017). OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Available from https://github.com/openimages
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Commun. ACM, 60(6), pp. 84-90. https://doi.org/10.1145/3065386
McCallum, J. (2013). Camera trap use and development in field ecology. Mammal Rev., 43, pp. 196-206. https://doi.org/10.1111/j.1365-2907.2012.00216.x
Norouzzadeh, M. S., Nguyen, A., Kosmala, M., Swanson, A., Palmer, M. S., Packer, C., & Clune, J. (2018). Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proc. Natl. Acad. Sci., 115(25), pp. E5716-E5725. https://doi.org/10.1073/pnas.1719367115
Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D., & Summers, R. M. (2016). Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imag., 35(5), pp. 1285-1298. https://doi.org/10.1109/tmi.2016.2528162
Silveira, L., Jacomo, A. T., & Diniz-Filho, J. A. F. (2003). Camera trap, line transect census and track surveys: a comparative evaluation. Biol. Conserv., 114(3), pp. 351-355. https://doi.org/10.1016/s0006-3207(03)00063-6
Steenweg, R., Hebblewhite, M., Kays, R., Ahumada, J., Fisher, J. T., Burton, C., Townsend, S. E., Carbone, C., Rowcliffe, J. M., Whittington, J., Brodie, J., Royle, J. A., Switalski, A., Clevenger, A. P., Heim, N., & Rich, L. N. (2017). Scaling-up camera traps: Monitoring the planet's biodiversity with networks of remote sensors. Front. Ecol. Environ., 15(1), pp. 26-34. https://doi.org/10.1002/fee.1448
Tzutalin (2015). LabelImg. Git code. Available from https://github.com/tzutalin/labelImg
Willi, M., Pitman, R. T., Cardoso, A. W., Locke, C., Swanson, A., Boyer, A., Veldthuis, M., & Fortson, L. (2019). Identifying animal species in camera trap images using deep learning and citizen science. Methods Ecol. Evol., 10, pp. 80-91. https://doi.org/10.1111/2041-210X.13099
Wolf, C., & Jolion, J. M. (2006). Object count/area graphs for the evaluation of object detection and segmentation algorithms. Int. J. Doc. Anal. Recognit., 8(4), pp. 280-296. arXiv:1807.01544v2
WWF (2018). Living Planet Report 2018: Aiming higher (eds. Grooten, M. & Almond, R. E. A.). WWF International, Gland, Switzerland.
Xie, M., Jean, N., Burke, M., Lobell, D., & Ermon, S. (2015). Transfer learning from deep features for remote sensing and poverty mapping. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence, 30(1). arXiv:1510.00098