Statistical Learning: Machine Learning Approaches and Beyond
Statistical learning (Hastie et al. 2009) is a branch of
machine learning concerned with the development and study of algorithms
to perform specific tasks with minimal instruction. The tasks involve an
explicit goal, such as parameter estimation or classification, and
require a clear objective function, such as minimizing a cost function
or correctly classifying data. To the extent that animals also have
clear objective functions (e.g., ultimately: increasing individual
fitness; proximally: eating, avoiding being eaten, reproducing), and
that these objectives might be satisfied by performing a specific
movement-related task (e.g., selecting appropriate places to forage), it
is useful to draw a general analogy between a machine-learning algorithm
and an animal that learns. As described above, we use the term task-based learning when referring to this type of process.
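To make the analogy concrete, the short sketch below frames a movement-related task as explicit minimization of a cost function. It is illustrative only: the quadratic cost, the resource peak at x = 3, and the gradient-descent update are our own assumptions, not taken from the cited works.

```python
# Illustrative sketch: "selecting where to forage" cast as minimization
# of an explicit cost function (assumed quadratic, peaking at x = 3.0).

RESOURCE_PEAK = 3.0

def cost(x):
    """Cost of foraging at location x: squared distance from the best patch."""
    return (x - RESOURCE_PEAK) ** 2

def grad_cost(x):
    """Analytical gradient of the cost with respect to location."""
    return 2.0 * (x - RESOURCE_PEAK)

x = 0.0              # initial foraging location
learning_rate = 0.1  # step size of the learner
for _ in range(100):
    x -= learning_rate * grad_cost(x)  # adjust behaviour to reduce cost

print(f"learned foraging location: {x:.4f}")  # converges towards 3.0
```

The point is not the optimizer itself but the structure of the problem: a clear objective (the cost) and a rule for improving performance on it, which is the sense in which a machine-learning algorithm and a learning animal are analogous.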
A wide range of machine learning approaches emphasize the importance of
improvement through experience (Jordan & Mitchell 2015), which is close
to some definitions of animal learning. Good examples are artificial
neural networks (ANNs), a class of biologically inspired learning
algorithms. The input of an ANN, typically the sensory perception of the
agent or animal, is propagated through a network of idealized neurons,
whose connection weights are adjusted by experience-generated reward signals. The output
of the ANN induces observable behaviour.
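As a deliberately minimal sketch of this idea, the toy network below propagates a two-dimensional "sensory" input through a single idealized neuron, emits a stochastic action, and adjusts its weights with a reward-modulated (REINFORCE-style) rule. The task rule, reward scheme, and parameter values are illustrative assumptions, not a model from the cited literature.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 2
weights = rng.normal(scale=0.1, size=n_inputs)  # initial synaptic weights

def act_probability(x):
    """Propagate sensory input through one logistic neuron."""
    return 1.0 / (1.0 + np.exp(-x @ weights))

learning_rate = 0.5
for _ in range(2000):
    x = rng.normal(size=n_inputs)              # sensory input
    p = act_probability(x)
    action = 1.0 if rng.random() < p else 0.0  # stochastic behaviour (1 = act)
    correct = 1.0 if x.sum() > 0 else 0.0      # hidden task rule (assumed)
    reward = 1.0 if action == correct else 0.0
    # Reward-modulated update: reinforce actions that yielded reward.
    weights += learning_rate * reward * (action - p) * x

print(weights)  # both weights turn positive: behaviour now tracks x.sum()
```

Even this one-neuron caricature exhibits the loop described above: sensory input in, observable behaviour out, and experience-generated reward signals reshaping the network in between.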
Another learning-like algorithm is a Bayesian probabilistic model for
inference (which can, incidentally, also drive an ANN). While Bayesian
reasoning is most often applied to statistical tasks such as parameter
estimation and complex model fitting, it is also viewed as a central
probabilistic model for human cognition and learning (Chater et
al. 2006; Tenenbaum et al. 2006). In the specific context of
animal movement, prior information represents existing knowledge or
existing preference sets (e.g., spatial memory and selection
coefficients). Bayesian perspectives readily permit such prior knowledge
to be updated with new data (experiences) gained by an animal’s movement
through the environment. For example, Michelot et al. (2019) draw
an explicit analogy between stochastic rule-based animal movement and a
Gibbs sampler performing Markov chain Monte Carlo sampling. The
resulting posterior distributions accurately reflect the animal’s
resource selection function (RSF). The equivalence between an optimizing
algorithm and an animal gaining familiarity with its landscape provides
an interesting template. One could generate a similar animal (sampler) that does update its movement coefficients based on the mismatch between its experiences and the environment; a minimal version of the underlying sampler is sketched at the end of this section. Box 3 builds on these rule-based
decision-making ideas to draw connections between mobile autonomous
robots and learning animals.
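Returning to the sampler analogy, the sketch below illustrates its simplest form with a Metropolis acceptance rule rather than the Gibbs sampler mentioned above; the habitat covariate, selection coefficient, and proposal scale are invented for illustration. Over many steps, the time the simulated animal spends at each location converges to a distribution proportional to its RSF; an animal that learns would, in addition, adjust beta from experience.

```python
import numpy as np

rng = np.random.default_rng(1)

beta = 1.5                       # selection coefficient (assumed, fixed here)

def habitat(x):
    """Invented habitat-quality covariate, peaking at x = 2."""
    return -(x - 2.0) ** 2

def rsf(x):
    """Resource selection function: w(x) = exp(beta * habitat(x))."""
    return np.exp(beta * habitat(x))

x = 0.0                          # current location
track = []
for _ in range(20_000):
    proposal = x + rng.normal(scale=0.5)    # candidate movement step
    # Metropolis rule: always step towards higher RSF values,
    # only sometimes towards lower ones.
    if rng.random() < rsf(proposal) / rsf(x):
        x = proposal
    track.append(x)

occupied = np.array(track[5_000:])              # discard the burn-in
print(f"mean location: {occupied.mean():.2f}")  # close to the peak at 2.0
```

Note that the acceptance ratio uses only the relative RSF values of the current and candidate locations, so the simulated animal needs no global knowledge of the landscape to end up using space in proportion to its preferences.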