<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="discussion">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">NEJSDS</journal-id>
<journal-title-group><journal-title>The New England Journal of Statistics in Data Science</journal-title></journal-title-group>
<issn pub-type="ppub">2693-7166</issn><issn-l>2693-7166</issn-l>
<publisher>
<publisher-name>New England Statistical Society</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">NEJSDS4B</article-id>
<article-id pub-id-type="doi">10.51387/23-NEJSDS4B</article-id>
<article-categories>
<subj-group subj-group-type="heading"><subject>Commentary and/or Historical Perspective</subject></subj-group>
<subj-group subj-group-type="area"><subject>Statistical Methodology</subject></subj-group>
</article-categories>
<title-group>
<article-title>Invited Discussion of J.O. Berger: Four Types of Frequentism and Their Interplay with Bayesianism<xref ref-type="fn" rid="j_nejsds4b_fn_001"><sup>✩</sup></xref></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Pericchi</surname><given-names>Luis</given-names></name><email xlink:href="mailto:luarpr@gmail.com">luarpr@gmail.com</email><xref ref-type="aff" rid="j_nejsds4b_aff_001"/>
</contrib>
<aff id="j_nejsds4b_aff_001">Department of Mathematics, <institution>University of Puerto Rico Rio Piedras</institution>, <country>Puerto Rico</country>. E-mail address: <email xlink:href="mailto:luarpr@gmail.com">luarpr@gmail.com</email></aff>
</contrib-group>
<author-notes>
<fn id="j_nejsds4b_fn_001"><label>✩</label>
<p>Main article: <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.51387/22-NEJSDS4">10.51387/22-NEJSDS4</ext-link>.</p></fn>
</author-notes>
<pub-date pub-type="ppub"><year>2023</year></pub-date><pub-date pub-type="epub"><day>31</day><month>5</month><year>2023</year></pub-date><volume>1</volume><issue>2</issue><fpage>142</fpage><lpage>144</lpage><history><date date-type="accepted"><day>15</day><month>8</month><year>2022</year></date></history>
<permissions><copyright-statement>© 2023 New England Statistical Society</copyright-statement><copyright-year>2023</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions><related-article related-article-type="commentary-article" ext-link-type="doi" xlink:href="10.51387/22-NEJSDS4" id="j_nejsds4b_ppc_001"/>
</article-meta>
</front>
<body>
<sec id="j_nejsds4b_s_001">
<label>1</label>
<title>A Convergence of the Schools of Statistics?</title>
<p>One of the merits of this far-reaching article is to show that not all “Frequentisms” are equal. Furthermore, there are frequentist approaches that are scientifically compelling, notably the “Empirical Frequentist” principle (EP), which can be paraphrased as <italic>“The proof of the pudding is in the eating”</italic>. Somewhat surprising to some (but anticipated by Wald’s admissibility theorems in Decision Theory) is the conclusion that the easiest and best way to achieve the EP property is through Bayesian reasoning, or, more exactly, through Objective Bayesian reasoning. (I avoid the expression Empirical Bayesian reasoning, which would be appropriate were it not associated with a very particular group of methods; it is argued below that a better name would be “Bayes Empirical”.) I concentrate on Hypothesis Testing, since that is the most challenging area, with the deepest disagreement among the schools.</p>
<p>From this substantive classification of Frequentisms emerges the opportunity for a convergence between schools, which is even more satisfying than a compromise. This may only be fully achieved if the prior probabilities are known, which is not usually the case. However, particularly in Hypothesis Testing, prior probabilities can and should be estimated, and their uncertainty acknowledged in a Bayesian way. This may perhaps be termed Bayes Empirical: the systematic empirical study of Prior Probabilities based on relevant data, acknowledging their uncertainty.</p>
<sec id="j_nejsds4b_s_002">
<label>1.1</label>
<title>A General Standard for Most (If Not All) of Statistics</title>
<p>A striking, enlightening and bold affirmation in the paper, one that will be remembered, is:</p><disp-quote>
<p><italic>The empirical frequentist principle seems compelling to most statisticians</italic></p></disp-quote>
<p>Jim Berger, De Finetti’s Lecture, ISBA 2021, and 2022 (this article)</p>
<sec id="j_nejsds4b_s_003">
<label>1.1.1</label>
<title>We Focus on Hypothesis Testing</title>
<p>In this respect, perhaps the best indication of the crisis of the bad versions of frequentism is the reaction against practitioners’ upside-down interpretation of p-values, the so-called prosecutor’s fallacy: in this case, taking a p-value as the probability of the Null.</p>
<p>One of the important messages of the paper is that the usual interpretation of p-values is NOT empirical frequentist. As stated in the article: <italic>“reporting the p-value as the error probability is terrible according to the empirical frequentist principle, it is reasonable only when the (unknown)</italic> <inline-formula id="j_nejsds4b_ineq_001"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula> <italic>is very small”</italic>. This phrase leads us to two major conclusions:</p>
<p>i) p-values need calibration from an empirical point of view, and ii) prior probabilities are of paramount importance to all schools of statistics.</p>
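These two conclusions can be checked by simulation. The following minimal Python sketch (with illustrative values not taken from the article: a proportion 0.9 of true nulls and power 0.9) rejects at the 0.05 level and tallies how many rejections are of true nulls; the fraction is far above 0.05.

```python
import random

random.seed(1)
PI0, ALPHA, BETA = 0.9, 0.05, 0.9  # illustrative values: P(null true), Type I error, power
false_rej = true_rej = 0
for _ in range(200_000):
    if random.random() < PI0:        # the null hypothesis is true
        if random.random() < ALPHA:  # Type I error: false rejection
            false_rej += 1
    else:                            # the alternative is true
        if random.random() < BETA:   # correct rejection (power)
            true_rej += 1
fdr = false_rej / (false_rej + true_rej)
print(f"empirical fdr among rejections: {fdr:.3f}")  # near 1/3, not 0.05
```

The empirical false discovery proportion is close to one third, even though every rejection was made at the nominal 5% level.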
</sec>
</sec>
<sec id="j_nejsds4b_s_004">
<label>1.2</label>
<title>The World of Statistics Is Changing Regarding Significance Testing. Why?</title>
<p>The timing of the growing awareness of the flaws of current practice in Significance Testing suggests that the change in attitudes is not due to the Mathematics, despite very good mathematical reasons such as:
<list>
<list-item id="j_nejsds4b_li_001">
<label>i)</label>
<p>Wald’s Complete Class Theorems of Decision Theory.</p>
</list-item>
<list-item id="j_nejsds4b_li_002">
<label>ii)</label>
<p>Obedience of the Likelihood Principle.</p>
</list-item>
<list-item id="j_nejsds4b_li_003">
<label>iii)</label>
<p>Conditional (superior) inference.</p>
</list-item>
<list-item id="j_nejsds4b_li_004">
<label>iv)</label>
<p>Not even Stein’s paradox, etc.</p>
</list-item>
</list> 
It is because of the Science: “Why most published research findings are false”, the famous <bold>Empirical Frequentist</bold> apothegm coined by Ioannidis (2005) [<xref ref-type="bibr" rid="j_nejsds4b_ref_001">1</xref>].</p>
<p>Next we insist on a theme of foremost importance, which can be derived from the paper: Prior Probabilities are at least as important as Power.</p>
<p>I denote by False Discovery Rate (fdr) the quantity in equation (10) of the article, with known prior probability <inline-formula id="j_nejsds4b_ineq_002"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula>, and Type I Error <italic>α</italic> and Power <italic>β</italic>: 
<disp-formula id="j_nejsds4b_eq_001">
<alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">f</mml:mi>
<mml:mi mathvariant="italic">d</mml:mi>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>·</mml:mo>
<mml:mi mathvariant="italic">α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>·</mml:mo>
<mml:mi mathvariant="italic">α</mml:mi>
<mml:mo>+</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>·</mml:mo>
<mml:mi mathvariant="italic">β</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ fdr=\frac{{\pi _{0}}\cdot \alpha }{{\pi _{0}}\cdot \alpha +(1-{\pi _{0}})\cdot \beta },\]]]></tex-math></alternatives>
</disp-formula> 
from which we may construct a table changing priors and power to check their influence on fdr (Table <xref rid="j_nejsds4b_tab_001">1</xref>).</p>
<table-wrap id="j_nejsds4b_tab_001">
<label>Table 1</label>
<caption>
<p>Prior <inline-formula id="j_nejsds4b_ineq_003"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula>, Power <italic>β</italic> and False Discovery Rate (fdr), for <inline-formula id="j_nejsds4b_ineq_004"><alternatives><mml:math>
<mml:mi mathvariant="italic">α</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0.05</mml:mn></mml:math><tex-math><![CDATA[$\alpha =0.05$]]></tex-math></alternatives></inline-formula>.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: right; border-top: double; border-bottom: solid thin"><inline-formula id="j_nejsds4b_ineq_005"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-top: double; border-bottom: solid thin"><italic>β</italic></td>
<td style="vertical-align: top; text-align: left; border-top: double; border-bottom: solid thin">fdr</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: right">0.9</td>
<td style="vertical-align: top; text-align: left">0.9</td>
<td style="vertical-align: top; text-align: left">0.333</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: right">0.9</td>
<td style="vertical-align: top; text-align: left">0.5</td>
<td style="vertical-align: top; text-align: left">0.47</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: right">0.8</td>
<td style="vertical-align: top; text-align: left">0.8</td>
<td style="vertical-align: top; text-align: left">0.2</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: right">0.5</td>
<td style="vertical-align: top; text-align: left">0.9</td>
<td style="vertical-align: top; text-align: left">0.05</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: right; border-bottom: solid thin">0.5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.091</td>
</tr>
</tbody>
</table>
</table-wrap>
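The entries of Table 1 follow directly from the fdr formula above. A short Python check (the function name <monospace>fdr</monospace> is mine, for illustration):

```python
def fdr(pi0, alpha, beta):
    """fdr = pi0*alpha / (pi0*alpha + (1 - pi0)*beta), with known prior pi0,
    Type I error alpha, and power beta (equation (10) of the article)."""
    return pi0 * alpha / (pi0 * alpha + (1 - pi0) * beta)

ALPHA = 0.05
for pi0, beta in [(0.9, 0.9), (0.9, 0.5), (0.8, 0.8), (0.5, 0.9), (0.5, 0.5)]:
    print(f"pi0={pi0}, beta={beta}: fdr={fdr(pi0, ALPHA, beta):.3f}")
```

Running it reproduces the five rows of Table 1 (0.333, 0.47, 0.2, 0.05, 0.091, up to rounding).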
<p>The conclusion is that fdr is more sensitive to <inline-formula id="j_nejsds4b_ineq_006"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula> than to <italic>β</italic>. <bold>Science needs Baselines, or “prevalences” as they are called in epidemiology, i.e. prior probabilities of hypotheses</bold>. In the following figure, I graph fdr versus prior probability, for different power lines (<inline-formula id="j_nejsds4b_ineq_007"><alternatives><mml:math>
<mml:mn>0.5</mml:mn>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mi mathvariant="italic">β</mml:mi>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mn>0.9</mml:mn></mml:math><tex-math><![CDATA[$0.5\lt \beta \lt 0.9$]]></tex-math></alternatives></inline-formula>), showing high sensitivity to the prior, and less to the power. This suggests a change of emphasis in Statistics in order to achieve the Empirical Frequentist synthesis. Admittedly, this strong conclusion is based on just a few examples, but the reasoning of the article seems compelling.</p>
<fig id="j_nejsds4b_fig_001">
<label>Figure 1</label>
<caption>
<p>fdr versus prior probabilities, for different powers.</p>
</caption>
<graphic xlink:href="nejsds4b_g001.jpg"/>
</fig>
</sec>
<sec id="j_nejsds4b_s_005">
<label>1.3</label>
<title>What If the Prior Probabilities Are Unknown? Then fdr Is a Random Variable, with the Prior Probability Estimated from Surveys</title>
<p>A Bayes Empirical approach should acknowledge all sources of variability, including the information on which the prior is empirically based. In equation (10) quoted above, <inline-formula id="j_nejsds4b_ineq_008"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula> is now not assumed to be precisely known. But we can, and should, organize a survey. Suppose, then, that our knowledge is based on a small survey in which <inline-formula id="j_nejsds4b_ineq_009"><alternatives><mml:math>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>100</mml:mn></mml:math><tex-math><![CDATA[$n=100$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds4b_ineq_010"><alternatives><mml:math>
<mml:mi mathvariant="italic">S</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>90</mml:mn></mml:math><tex-math><![CDATA[$S=90$]]></tex-math></alternatives></inline-formula>, so that <inline-formula id="j_nejsds4b_ineq_011"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.9</mml:mn></mml:math><tex-math><![CDATA[${\hat{\pi }_{0}}=0.9$]]></tex-math></alternatives></inline-formula>. Thus, if the initial prior for <inline-formula id="j_nejsds4b_ineq_012"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula> is, say, the Jeffreys prior, then the posterior of <inline-formula id="j_nejsds4b_ineq_013"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\pi _{0}}$]]></tex-math></alternatives></inline-formula> is <inline-formula id="j_nejsds4b_ineq_014"><alternatives><mml:math>
<mml:mi mathvariant="italic">B</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">S</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" stretchy="false">/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">S</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" stretchy="false">/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Beta({\pi _{0}}|S+1/2,n-S+1/2)$]]></tex-math></alternatives></inline-formula>.</p>
<p>Now fdr is a random variable, and it may have a large dispersion once the variability in the priors is acknowledged.</p>
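A minimal Monte Carlo sketch of this in Python, using the survey above (S = 90, n = 100) and illustrative values α = 0.05 and β = 0.9 (the latter two are my assumptions, not specified at this point in the text):

```python
import random
import statistics

random.seed(2)
S, N = 90, 100           # survey: 90 true nulls out of 100
ALPHA, BETA = 0.05, 0.9  # illustrative Type I error and power (assumed here)

def fdr(pi0, alpha=ALPHA, beta=BETA):
    # equation (10): fdr with prior pi0, Type I error alpha, power beta
    return pi0 * alpha / (pi0 * alpha + (1 - pi0) * beta)

# Posterior of pi0 under a Jeffreys Beta(1/2, 1/2) prior: Beta(S + 1/2, N - S + 1/2)
draws = [random.betavariate(S + 0.5, N - S + 0.5) for _ in range(50_000)]
fdrs = [fdr(p) for p in draws]
print(f"posterior mean of fdr:  {statistics.mean(fdrs):.3f}")
print(f"posterior stdev of fdr: {statistics.stdev(fdrs):.3f}")
```

The posterior spread of fdr is substantial, which is the point: reporting a single fdr number hides the uncertainty inherited from the survey.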
<fig id="j_nejsds4b_fig_002">
<label>Figure 2</label>
<caption>
<p>Histogram of fdr as a function of random prior probabilities.</p>
</caption>
<graphic xlink:href="nejsds4b_g002.jpg"/>
</fig>
<p>That the “priors are unknown” (and usually they are) is not the end of Bayes; on the contrary, we model the priors as random variables and estimate their distribution, Bayesianly. In doing so we respect the variability of the information about the prior; see for example Mossman and Berger (2001) [<xref ref-type="bibr" rid="j_nejsds4b_ref_003">3</xref>].</p>
</sec>
</sec>
<sec id="j_nejsds4b_s_006">
<label>2</label>
<title>Making the p-Value Lower Bound Closer to a Bayes Factor</title>
<p>Shall we forget p-values? <italic>“p-values are just too familiar and useful to ditch”</italic> David Spiegelhalter (2017) [<xref ref-type="bibr" rid="j_nejsds4b_ref_005">5</xref>].</p>
<p>The paper studies the well-known calibration of p-values:
<disp-formula id="j_nejsds4b_eq_002">
<alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mi mathvariant="italic">B</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">u</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mi mathvariant="italic">d</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi mathvariant="italic">l</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">g</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ LowerBound(p)=-e\cdot p\cdot log(p),\]]]></tex-math></alternatives>
</disp-formula> 
which has the advantage of depending only on <italic>p</italic>, but, “as a lower bound lack[s] strict empirical frequentist justification”, as stated in the article. Another problem with the bound is that it does not change with <italic>n</italic>. But we may invoke the Bayes Factor as a function of the p-value.</p>
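As a quick numerical illustration of the calibration (a sketch; the function name is mine), note that the bound has no dependence on the sample size:

```python
import math

def lower_bound(p):
    """Calibration -e * p * log(p): a lower bound on the Bayes factor for a p-value p."""
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.005):
    print(f"p = {p}: LowerBound = {lower_bound(p):.3f}")
# p = 0.05 gives about 0.407: a p-value of 0.05 corresponds, at best,
# to odds against the null of less than 2.5 to 1.
```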
<p>The <inline-formula id="j_nejsds4b_ineq_015"><alternatives><mml:math>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mi mathvariant="italic">B</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">u</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mi mathvariant="italic">d</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$LowerBound(p)$]]></tex-math></alternatives></inline-formula> can be simply modified to approximate a Bayes Factor as (Pericchi and Perez (2017) [<xref ref-type="bibr" rid="j_nejsds4b_ref_004">4</xref>]) 
<disp-formula id="j_nejsds4b_eq_003">
<alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">A</mml:mi>
<mml:mi mathvariant="italic">B</mml:mi>
<mml:mi mathvariant="italic">F</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mi mathvariant="italic">B</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">u</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mi mathvariant="italic">d</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>·</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi mathvariant="italic">π</mml:mi>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">χ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">l</mml:mi>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mi mathvariant="italic">g</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
</mml:mrow>
</mml:msqrt>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ ABF(p,n)=LowerBound(p)\cdot \sqrt{\frac{2\pi n}{{e^{2}}\cdot ({\chi ^{2}}(1-p)+log(n))}}.\]]]></tex-math></alternatives>
</disp-formula> 
This modification has been published in [<xref ref-type="bibr" rid="j_nejsds4b_ref_006">6</xref>].</p>
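A hedged numerical sketch of ABF in Python: here I assume that χ²(1 − p) denotes the 1 − p quantile of a χ² distribution with one degree of freedom (computed as a squared standard normal quantile); if the intended degrees of freedom differ, only the marked line changes.

```python
import math
from statistics import NormalDist

def lower_bound(p):
    # the calibration -e * p * log(p)
    return -math.e * p * math.log(p)

def abf(p, n):
    """ABF(p, n) = LowerBound(p) * sqrt(2*pi*n / (e^2 * (chi2(1-p) + log(n))))."""
    # assumed: chi^2 quantile with 1 degree of freedom, via the squared normal quantile
    chi2_q = NormalDist().inv_cdf(1 - p / 2) ** 2
    return lower_bound(p) * math.sqrt(
        2 * math.pi * n / (math.e ** 2 * (chi2_q + math.log(n)))
    )

for n in (50, 100, 1000):
    print(f"n = {n}: ABF(0.05, n) = {abf(0.05, n):.3f}")
```

Unlike the plain lower bound, ABF grows with n, reflecting that a fixed p-value constitutes weaker evidence against the null in larger samples.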
</sec>
<sec id="j_nejsds4b_s_007">
<label>3</label>
<title>Is This Paper Showing the Future of Statistics as a Unified Field? I Hope So!</title>
<p>A field like Statistics divided into fighting schools is not good. The present paper implicitly presents a route for a convergence of the schools. This contrasts with other predictions, which anticipated a fully Bayesian world:</p><disp-quote>
<p><italic>THE FUTURE OF STATISTICS- A BAYESIAN 21ST CENTURY. “It had originally been my intention to follow Orwell and use 1984 in the title, but de Finetti (1974) suggests 2020”.</italic></p></disp-quote>
<p>Dennis Lindley, “Advances in Applied Probability”, (1975) [<xref ref-type="bibr" rid="j_nejsds4b_ref_002">2</xref>].</p>
<sec id="j_nejsds4b_s_008">
<label>3.1</label>
<title>Conclusion</title><disp-quote>
<p><italic>Perhaps, instead, the direction of growth of Statistics for the rest of 21st century,</italic></p>
<p><italic>will be Bayesian... and Empirical Frequentist.</italic></p></disp-quote>
</sec>
</sec>
</body>
<back>
<ref-list id="j_nejsds4b_reflist_001">
<title>References</title>
<ref id="j_nejsds4b_ref_001">
<label>[1]</label><mixed-citation publication-type="journal"> <string-name><surname>Ioannidis</surname>, <given-names>J. P.</given-names></string-name> (<year>2005</year>). <article-title>Why most published research findings are false</article-title>. <source>PLoS medicine</source> <volume>2</volume>(<issue>8</issue>) <fpage>124</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/09332480.2005.10722754" xlink:type="simple">https://doi.org/10.1080/09332480.2005.10722754</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=2216666">MR2216666</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds4b_ref_002">
<label>[2]</label><mixed-citation publication-type="journal"> <string-name><surname>Lindley</surname>, <given-names>D.</given-names></string-name> (<year>1975</year>). <article-title>Advances in Applied Probability</article-title>. <source>Supplement: Proceedings of the Conference on Directions for Mathematical Statistics</source>. <volume>7</volume> <fpage>106</fpage>–<lpage>115</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds4b_ref_003">
<label>[3]</label><mixed-citation publication-type="journal"> <string-name><surname>Mossman</surname>, <given-names>D.</given-names></string-name> and <string-name><surname>Berger</surname>, <given-names>J. O.</given-names></string-name> (<year>2001</year>). <article-title>Intervals for posttest probabilities: a comparison of 5 methods</article-title>. <source>Medical Decision Making</source> <volume>21</volume>(<issue>6</issue>) <fpage>498</fpage>–<lpage>507</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds4b_ref_004">
<label>[4]</label><mixed-citation publication-type="other"> <string-name><surname>Pericchi</surname>, <given-names>L. R.</given-names></string-name> and <string-name><surname>Perez</surname>, <given-names>M. E.</given-names></string-name> (2017). Converting P-Values in Adaptive Robust Lower Bounds of Posterior Probabilities to increase the reproducible Scientific “Findings”. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1711.06219"><italic>1711.06219</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds4b_ref_005">
<label>[5]</label><mixed-citation publication-type="journal"> <string-name><surname>Spiegelhalter</surname>, <given-names>D.</given-names></string-name> (<year>2017</year>). <article-title>Too familiar to ditch</article-title>. <source>Significance</source> <volume>14</volume>(<issue>2</issue>) <fpage>41</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds4b_ref_006">
<label>[6]</label><mixed-citation publication-type="journal"> <string-name><surname>Velez Ramos</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Guerra</surname>, <given-names>L. R.</given-names></string-name> and <string-name><surname>Perez Hernandez</surname>, <given-names>M. E.</given-names></string-name> (<year>2023</year>). <article-title>From <italic>p</italic>-Values to Posterior Probabilities of Null Hypotheses</article-title>. <source>Entropy</source> <volume>25</volume>(<issue>4</issue>) <fpage>618</fpage>. <uri>https://doi.org/10.3390/e25040618</uri>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>
