Introduction
Biological entities invariably depend on numerous biotic and abiotic parameters. During experiments, these parameters are typically held constant so that they alter the results as little as possible. This approach of studying variables in isolation has led to numerous discoveries. Nonetheless, subtle variations in multiple parameters act cooperatively in any experimental system, and systematic multi-parametric studies have emerged with newer supportive data. Many such studies are modelled and simulated with well-defined mathematical frameworks.
Recent developments in, and application of, high-throughput technologies such as microarrays, next-generation sequencing (genomics) and mass spectrometry (proteomics and metabolomics) have enabled the generation of enormous amounts of data. These technologies capture many parameters of a biological experiment simultaneously, producing an array of associated metadata. The sheer size of such datasets gives rise to high dimensionality, with each dimension corresponding to a measured parameter. Deposited through publications and repositories, these datasets have allowed independent analysis by researchers all over the world. Such a multitude of information, however, makes it hard to interpret results and recognise biological patterns. Statistical tools developed for such conditions of data overload include dimension-reduction methods such as principal component analysis (PCA) and partial least squares (PLS). These are available through software packages like SAS (Statistical Analysis System, SAS Institute, USA) and SPSS (Statistical Package for the Social Sciences, IBM Corporation, USA), while open-source environments like the ‘R’ project and PSPP provide easier ways to analyse such datasets. These represent only a few important aspects of statistics in biology as far as data interpretation is concerned. A number of other issues must also be considered when planning a biological study, such as experimental design, sampling strategy and model selection, to keep random effects as low as possible. Here too, mathematics and statistics have played seminal roles, owing to substantial and ongoing development in the very active areas of mathematical and statistical modelling.
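To make the dimension-reduction idea concrete, the following minimal sketch projects a synthetic omics-style matrix (samples as rows, features as columns) onto its leading principal components using scikit-learn. The data and group structure here are invented purely for illustration; they do not correspond to any dataset discussed in this review.

```python
# Minimal PCA sketch on synthetic high-dimensional data.
# Rows play the role of samples; columns play the role of
# genes/proteins/metabolites measured by a high-throughput assay.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))       # 50 samples x 1000 features
X[:25] += 2.0                         # embed a crude two-group structure

pca = PCA(n_components=2)             # keep the two leading components
scores = pca.fit_transform(X)         # project samples onto them

print(scores.shape)                   # each sample now has 2 coordinates
print(pca.explained_variance_ratio_)  # fraction of variance captured
```

Plotting the two score columns against each other would reveal the embedded group separation, which is exactly how PCA is typically used for exploratory pattern recognition in omics data.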
Predictive analytics uses statistical, mathematical or machine-learning methods to make predictions about future or unknown outcomes. Traditionally these methods have been used extensively in finance and general management, but in the last few years their application in health care has become prominent. In the most basic sense, a model is an abstract representation of reality. Over recent decades, researchers have developed mathematical models to understand the spread of infectious diseases such as measles \cite{Schenzle1984}, HIV \cite{Lin_1993}, dengue \cite{Syafruddin_2013}; \cite{Rangkuti_2014}, TB \cite{Side_2015} and, recently, Zika virus \cite{Bonyah_2016}. The success of these models has led to the development of models for lifestyle diseases, which will be discussed in Section 5.
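Many of the epidemic models cited above are built on compartmental frameworks. As a generic illustration (not the specific formulation of any cited study), the classic SIR model divides a population of size $N$ into susceptible ($S$), infected ($I$) and recovered ($R$) compartments:

```latex
\begin{align}
\frac{dS}{dt} &= -\beta \,\frac{S I}{N}, \\
\frac{dI}{dt} &= \beta \,\frac{S I}{N} - \gamma I, \\
\frac{dR}{dt} &= \gamma I,
\end{align}
```

where $\beta$ is the transmission rate, $\gamma$ the recovery rate and $N = S + I + R$ is constant. The ratio $R_0 = \beta/\gamma$, the basic reproduction number, determines whether an outbreak grows ($R_0 > 1$) or dies out ($R_0 < 1$); disease-specific models such as those cited extend this basic structure with additional compartments and parameters.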
In the present review, we limit ourselves to an overview of recent algorithms and models as applied to fields affecting human welfare and health. Specifically, human welfare is addressed through bacterial quorum sensing and its inhibition, and human health through the major diseases of cancer and diabetes, as discussed in the following sections.