For the i.i.d. setting in which two groups are compared, several simulation studies have indicated that rank tests, such as the Wilcoxon rank sum test or SAMSeq \cite{li2013finding}, work well when applied to appropriately normalised data, in the sense that they control the FDR at the nominal level. These nonparametric tests, however, suffer from a few disadvantages. The first is that they are not applicable to studies with a design more complicated than the comparison of two or more groups; for example, a continuous covariate or confounder cannot be included. The second disadvantage is that these procedures are developed specifically for hypothesis testing and do not come with a corresponding estimate of a relevant effect size. Parametric methods, on the other hand, can be used for testing hypotheses that are explicitly formulated in terms of, e.g., (log) fold changes, and these methods also provide estimates of this effect size. Finally, these rank tests are often much less sensitive than their parametric competitors; this is the price they pay for being nonparametric.
Probabilistic Index Models (PIMs) are a class of semiparametric models \cite{thas2012probabilistic} that (1) generate the classical rank tests \cite{de2015regression}, (2) provide estimates of corresponding and informative parameters (effect sizes), and (3) possess the flexibility of regression models and hence allow for the inclusion of covariates. Although the original theory is asymptotic in nature, the hypothesis tests can often be implemented as permutation tests, or empirical likelihood methods can be used, with good results for sample sizes as small as 25 \cite{amorim2018small}. Thus, PIMs resolve the first two disadvantages listed above for the classical nonparametric rank tests. Regarding the third disadvantage (low sensitivity), semiparametric theory may bring a solution. Using semiparametric efficiency theory \cite{tsiatis2007semiparametric}, the estimation method can be designed so as to provide semiparametric efficient parameter estimators within a well-defined class of semiparametric models, restricted to asymptotically linear estimators. The latter class can be considered a compromise between the very wide class of nonparametric models and the very restrictive class of parametric models. For PIMs, these semiparametric efficient estimators have been constructed and studied by \cite{stijn}. Their simulation study, however, indicated that, compared to the original estimator of \cite{thas2012probabilistic}, only a small gain in precision could be obtained, and this came at a strongly increased computational cost.
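To fix ideas, a common formulation of a PIM (a sketch following \cite{thas2012probabilistic}; the particular link function and linear predictor shown here are illustrative) models, for two independent outcomes $Y$ and $Y^{*}$ with covariate vectors $\mathbf{X}$ and $\mathbf{X}^{*}$, the probabilistic index
\begin{equation*}
  \mathrm{P}\!\left(Y \preccurlyeq Y^{*} \mid \mathbf{X}, \mathbf{X}^{*}\right)
  = \operatorname{expit}\!\left\{ \left(\mathbf{X}^{*} - \mathbf{X}\right)^{\top}\boldsymbol{\beta} \right\},
  \qquad
  \mathrm{P}\!\left(Y \preccurlyeq Y^{*}\right)
  = \mathrm{P}\!\left(Y < Y^{*}\right) + \tfrac{1}{2}\,\mathrm{P}\!\left(Y = Y^{*}\right),
\end{equation*}
with $\operatorname{expit}(z) = e^{z}/(1+e^{z})$. With a single binary group indicator as covariate, $\operatorname{expit}(\beta)$ is the probability that an outcome from the first group is smaller than an outcome from the second group, and testing $\beta = 0$ recovers a Wilcoxon--Mann--Whitney-type rank test, while $\beta$ itself serves as an interpretable effect size; additional covariates enter the linear predictor as in an ordinary regression model.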
Recent advances in semiparametric estimation theory and machine learning have resulted in algorithm-driven procedures for the construction of optimal statistical procedures. For example, \cite{luedtke2020learning} developed a deep adversarial learning method for constructing optimal minimax estimators, without the need for data and without the restriction to asymptotically linear estimators. These methods are very useful in settings where a parametric model is known in advance but the optimal estimator is very hard to construct analytically. When data are available, adaptive semiparametric efficient estimators may be constructed as in \cite{bickel1993efficient} or by the method of targeted maximum likelihood learning \cite{van2006targeted,van2018targeted}. These methods adapt the estimator to the data that have to be analysed. A disadvantage these procedures have in common is that they typically require a large sample size. In high-dimensional or large-scale settings, as in microbiome studies, we are in a different situation: we have data on many similar features (taxa), but for each feature we only have a small number of observations. This largely prohibits applying adaptive procedures to each individual feature; on the other hand, information can be shared among the features, particularly because it is realistic to assume that all features have distributions that are not very dissimilar.