Rationale, aims and objectives In the US, the reluctance of the federal government to impose a national stay-at-home policy in the wake of the COVID-19 pandemic left the decision of how to achieve social distancing to individual state governors. We hypothesized that, in the absence of formal guidelines, the decision to close a state reflects the classic Weber-Fechner law of psychophysics: the amount by which a stimulus (such as the number of cases or deaths) must increase in order to be noticed is a constant fraction of the intensity of that stimulus.

Methods On April 12, 2020 we downloaded data for all 50 states and the District of Columbia from the New York Times database; by that time all but 7 states had issued stay-at-home orders. We fitted the Weber-Fechner logarithmic function by regressing the cumulative number of stay-at-home orders against the log2 of the daily case and death counts, respectively. We also conducted a Cox regression analysis to determine whether the probability of issuing a stay-at-home order increases proportionally as the number of cases or deaths increases.

Results We found that the decision to issue a stay-at-home order reflects the Weber-Fechner law. Both the number of infections (p<0.0001; R2=0.79) and the number of deaths (p<0.0001; R2=0.63) were significantly associated with the decision to issue stay-at-home orders. The results indicate that for each doubling of infections or deaths, an additional 4 to 6 states will issue stay-at-home orders. Cox regression showed that when the number of deaths reached 256 and the number of infected people exceeded 16,000, the probability of issuing a stay-at-home order was close to 100%. We found no difference in decision-making according to political affiliation; the results remained unchanged as of July 16, 2020.

Conclusions When there are no clearly articulated rules to follow, decision-makers resort to simple heuristics, in this case one consistent with the Weber-Fechner law.
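The fitting step described above can be sketched as a simple log2 regression. The data below are hypothetical and serve only to illustrate the shape of the relationship (constructed with a slope near 5 states per doubling, within the 4-to-6 range reported):

```python
import numpy as np

# Hypothetical illustration (not the study's data): cumulative case counts
# on successive days, and the cumulative number of states that had issued
# stay-at-home orders by each of those days.
cases = np.array([100, 400, 1600, 6400, 25600, 102400], dtype=float)
states = np.array([3, 13, 22, 34, 43, 51], dtype=float)

# Weber-Fechner fit: the response grows linearly in the logarithm of the
# stimulus. Regressing the state count on log2(cases) makes the slope the
# number of additional states expected to issue an order per doubling.
slope, intercept = np.polyfit(np.log2(cases), states, 1)
print(f"states per doubling of cases: {slope:.1f}")
```

Because the predictor is log2 of the case count, the fitted slope reads directly as "additional states per doubling of cases", which is how the 4-to-6 figure in the Results is expressed.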
Rationale Assessing the performance of a diagnostic test requires evaluating how much diagnostic uncertainty the test reduces (0% for a useless test, 100% for a perfect test). The statistical measures currently dominating the evidence-based medicine (EBM) field, and meta-analysis in particular (e.g. sensitivity and specificity), cannot explicitly measure this uncertainty reduction. Mutual information (MI), an information-theory statistic, is a more appropriate metric for evaluating diagnostic tests because it explicitly quantifies uncertainty and therefore lends itself to a natural interpretation of a test's value. In this paper, we propose the use of MI as a single measure of diagnostic test performance and demonstrate how it can be used in meta-analysis of diagnostic test studies.

Methods We use two cases from the literature to demonstrate the applicability of MI meta-analysis in assessing diagnostic performance: 1) a meta-analysis of studies evaluating ultrasonography (US) to detect endometrial cancer, and 2) a meta-analysis of studies evaluating magnetic resonance angiography to detect arterial stenosis.

Results The results produced by the MI meta-analyses are comparable to those of meta-analyses based on traditional statistical measures. However, the MI results are easier to understand because they relate directly to the extent of uncertainty a diagnostic test can reduce. For example, a US test diagnosing endometrial cancer is 40% specific and 94% sensitive. This combination of values is difficult to interpret and may lead to inappropriate assessment (e.g. one could favour the test for its high sensitivity while ignoring its low specificity). In terms of MI, however, the test reduces diagnostic uncertainty by only 10%, which is marginal; the test is clearly not very useful.
Conclusions We have demonstrated the suitability of MI for assessing the performance of diagnostic tests; it facilitates a more direct interpretation of their true utility.
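As a sketch of how MI quantifies the uncertainty a test removes, the following computes the mutual information between disease status and test result from sensitivity, specificity, and prevalence. The 10% prevalence is an assumption made here for illustration (the abstract does not report one); with it, the US example's relative uncertainty reduction comes out near the ~10% figure quoted above:

```python
import numpy as np

def mutual_information(prev, sens, spec):
    """Mutual information (bits) between disease status D and test result T."""
    # Joint distribution over (D, T): rows are D=1/D=0, columns T=+/T=-.
    p = np.array([
        [prev * sens,              prev * (1 - sens)],
        [(1 - prev) * (1 - spec),  (1 - prev) * spec],
    ])
    pd = p.sum(axis=1, keepdims=True)   # marginal distribution of disease
    pt = p.sum(axis=0, keepdims=True)   # marginal distribution of test result
    nz = p > 0                          # skip zero cells (0 * log 0 := 0)
    return float((p[nz] * np.log2(p[nz] / (pd @ pt)[nz])).sum())

# The abstract's US example: 94% sensitive, 40% specific.
# A 10% cancer prevalence is an assumption made here for the sketch.
prev, sens, spec = 0.10, 0.94, 0.40
mi = mutual_information(prev, sens, spec)
h_d = -(prev * np.log2(prev) + (1 - prev) * np.log2(1 - prev))  # H(D), bits
print(f"uncertainty reduced: {mi / h_d:.0%}")
```

Dividing MI by the pre-test entropy H(D) yields the fraction of diagnostic uncertainty the test removes, which is the 0%-100% scale described in the Rationale; the exact percentage varies with the assumed prevalence.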
Rationale, aims and objectives New therapies are increasingly approved by regulatory agencies such as the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) on the basis of testing in non-randomized clinical trials. These treatments have typically displayed "dramatic effects", i.e. effects considered large enough to obviate the combined effects of bias and random error. The agencies, however, have not specified how large these effects must be to avoid the need for further testing in randomized controlled trials (RCTs). We investigated the effect size that would circumvent the need for further RCT testing by the regulatory agencies. We hypothesized that the approval of therapeutic interventions by regulators is based on heuristic decision-making whose accuracy is best characterized by the application of signal detection theory (SDT).

Methods We merged the EMA and FDA databases of approvals based on non-RCT comparisons, excluding duplicate entries between the two databases, for a total of 134 approvals of drugs and devices based on non-RCTs. We integrated the Weber-Fechner law of psychophysics and recognition heuristics within SDT to provide descriptive explanations of the decisions made by the FDA and EMA to approve new treatments based on non-randomized studies without requiring further testing in RCTs.

Results Our findings suggest that when the difference between a novel treatment and the historical control spans at least one logarithm (base 10) of magnitude, the validity of testing in non-RCTs appears to be accepted.

Conclusion Drug developers and practitioners alike can use a one-logarithm change in effect size as a benchmark for deciding whether further testing in RCTs should be pursued, or as a guide to interpreting the results reported in non-randomized studies. However, further research is needed to better characterize the effect-size threshold above which testing in RCTs is not required.
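The one-log10 benchmark can be expressed as a small check. `dramatic_effect` is a hypothetical helper written for this sketch, not an FDA or EMA procedure, and the example rates are illustrative:

```python
import math

# Hypothetical helper: a "dramatic effect" in the sense suggested above is
# one where the effect measure under the new therapy (e.g. response rate)
# exceeds the historical-control value by at least one order of magnitude.
def dramatic_effect(new_effect, historical_effect, threshold_logs=1.0):
    return math.log10(new_effect / historical_effect) >= threshold_logs

print(dramatic_effect(0.40, 0.03))  # 40% response vs 3% historically: True
print(dramatic_effect(0.40, 0.10))  # only a 4-fold improvement: False
```

Working on the log10 scale makes the benchmark a simple additive threshold: a 10-fold improvement contributes exactly one logarithm, regardless of the absolute size of the historical-control effect.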