Nonparametric e-tests of symmetry

The notion of an e-value has recently been proposed as a possible alternative to critical regions and p-values in statistical hypothesis testing. In this paper we consider testing the nonparametric hypothesis of symmetry, introduce analogues for e-values of three popular nonparametric tests, define an analogue for e-values of Pitman's asymptotic relative efficiency, and apply it to the three nonparametric tests. We discuss limitations of our simple definition of asymptotic relative efficiency and list directions of further research.


Introduction
The study of the efficiency of nonparametric tests that started in the late 1940s is often regarded as a success story in statistics. Some nonparametric tests, such as Wilcoxon's signed-rank and rank-sum tests, are highly efficient even when used in the framework of popular parametric models, such as the Gaussian model. Theoretical results mostly concern asymptotic efficiency of those tests, but there is also empirical evidence for their finite-sample efficiency. While some nonparametric tests (such as Wilcoxon's) became very popular after their high efficiency had been discovered, others (such as Wald and Wolfowitz's run test) were gradually discarded from the statistical literature after their low efficiency had been demonstrated [15, Introduction]. The usual approach to hypothesis testing is based on critical regions or p-values, but in this paper we replace them with their alternative, e-values (see, e.g., [21, 18, 7]). We show that some of the old results about the efficiency of nonparametric tests carry over to hypothesis testing based on e-values. To distinguish our notions of power, tests, etc., from the standard notions, we add the prefix "e-". (The prefix "p-" is sometimes added to signify standard notions based on p-values, but in this paper we rarely need it since the key notion that we are interested in, Pitman's asymptotic relative efficiency, is defined in terms of critical regions rather than p-values.) We explain the basics of e-testing in Sect. 2, and in particular, we state an analogue of the Neyman-Pearson lemma in e-testing. In the following section, Sect. 3, we give a simple example of a parametric e-test, one for testing the null hypothesis N(0, 1) against an alternative N(θ, 1) in an IID situation.
In Sect. 4 we give the first, and in some sense most powerful, of the three examples of nonparametric e-tests that we discuss in this paper. It was introduced by Fisher in his 1935 book [5]. Our nonparametric null hypothesis is that of symmetry around 0 (and for simplicity we consider independent observations coming from a continuous distribution). After that (Sect. 5) we define the asymptotic relative efficiency of e-tests in the spirit of Pitman's definition [16]. We regard our definition of asymptotic relative efficiency as a direct translation of the classical definition. Then in Sect. 6 we compute the Pitman-type asymptotic relative efficiency of the Fisher-type test discussed in Sect. 4. This is complemented by similar computations for e-versions of the sign test in Sect. 7 and Wilcoxon's signed-rank test in Sect. 8. Our results for all three tests agree perfectly with the classical results. This is just a first step, and in Sect. 9 we discuss limitations of our approach (which are considerable) and list natural directions of further research.

General principles of e-testing
Let P be a given probability measure on a sample space Ω (a measurable space). Our null hypothesis is {P }; it is simple in the sense of containing a single probability measure.
We observe ω ∈ Ω and are interested in whether ω was generated from P. An e-variable for testing P is a [0, ∞]-valued random variable E such that ∫ E dP ≤ 1. In order to be used for testing, E needs to be chosen before we observe ω. By Markov's inequality, E can be large only with a small probability (for any threshold c > 1, P(E ≥ c) ≤ 1/c); therefore, observing a large value of E casts doubt on ω being generated from P.
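As an illustration (a sketch, not code from the paper), the following Python snippet checks the two defining properties by simulation; the alternative N(1, 1) and the threshold 5 are arbitrary choices:

```python
import math
import random

random.seed(0)
theta = 1.0  # hypothetical alternative N(theta, 1); any fixed value works

def e_variable(z):
    # Likelihood ratio dQ/dP for Q = N(theta, 1) vs P = N(0, 1);
    # its integral under P is exactly 1, so it is an e-variable.
    return math.exp(theta * z - theta ** 2 / 2)

# Draw the observation under the null P = N(0, 1) many times.
sample = [e_variable(random.gauss(0, 1)) for _ in range(100_000)]

mean_e = sum(sample) / len(sample)                       # should be close to 1
freq_large = sum(v >= 5 for v in sample) / len(sample)   # Markov: at most 1/5
print(round(mean_e, 2), freq_large <= 1 / 5)
```

The empirical frequency of {E ≥ 5} is in fact far below the Markov bound 1/5, illustrating that the bound is crude but safe.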
In the classical Neyman-Pearson approach to hypothesis testing, in addition to P we also have an alternative hypothesis Q. The e-power of an e-variable E is then defined as ∫ log E dQ. This is an analogue of the usual notion of power, but it only works in regular cases. One such regular case will be discussed in the next section.
Lemma 2.1. For given null and alternative hypotheses P and Q, respectively, such that Q ≪ P, the largest e-power is attained by the likelihood ratio dQ/dP: for any e-variable E,

∫ log E dQ ≤ ∫ log (dQ/dP) dQ.    (1)

And if Q ≪ P is violated, the largest e-power is ∞.
The likelihood ratio dQ/dP in Lemma 2.1 is understood to be the Radon-Nikodym derivative of Q w.r. to P .
Proof of Lemma 2.1. If Q ≪ P is violated, there is an event A ⊆ Ω such that P(A) = 0 and Q(A) > 0. Then the e-power of the e-variable E := ∞ 1_A is ∞. It remains to consider the case Q ≪ P. In this case, let q be the probability density function of Q w.r. to P. In terms of q, we can rewrite (1) as

∫ q log E dP ≤ ∫ q log q dP,   i.e.,   ∫ q log (E/q) dP ≤ 0.
The last inequality follows from log x ≤ x − 1: indeed, ∫ q log (E/q) dP ≤ ∫ q (E/q − 1) dP = ∫ E dP − 1 ≤ 0.
According to Lemma 2.1, which is an analogue for e-values of the Neyman-Pearson lemma, the optimal e-variable for testing a null hypothesis P against an alternative Q ≪ P is the likelihood ratio dQ/dP. The maximum e-power ∫ log (dQ/dP) dQ is the Kullback-Leibler divergence [11] of the alternative Q from the null hypothesis P.
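In the Gaussian case the e-power of every likelihood-ratio e-variable is available in closed form, which makes Lemma 2.1 easy to check numerically (an illustrative sketch; the value θ = 0.8 is arbitrary):

```python
# For P = N(0, 1) and Q = N(theta, 1), the e-variable E_lam(z) =
# exp(lam*z - lam**2/2) has e-power E_Q[log E_lam] = lam*theta - lam**2/2
# (exact, since E_Q[z] = theta).
theta = 0.8  # arbitrary example value of the alternative parameter

def e_power(lam):
    return lam * theta - lam ** 2 / 2

# Lemma 2.1: the maximum over lam is attained at lam = theta, i.e., at the
# likelihood ratio dQ/dP, where the e-power equals theta**2/2 (the
# Kullback-Leibler divergence of N(theta,1) from N(0,1)).
grid = [l / 100 for l in range(-200, 201)]
best_lam = max(grid, key=e_power)
print(best_lam, e_power(best_lam))
```

The grid search recovers the likelihood ratio (λ = θ) as the unique maximizer, with maximal e-power θ²/2.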
We will sometimes refer to log E as the observed e-power of E; the e-power is then the expectation of the observed e-power w.r. to the alternative hypothesis Q.
The notion of e-power is very close to Shafer's [18] implied target, the main difference being that the implied target only depends on the null hypothesis P and the e-variable E.

A parametric e-test
We start our discussion of specific e-tests with a very simple parametric case, that of the Gaussian statistical model Q_θ := N(θ, 1), θ ∈ R, with the variance known to be 1. We observe realizations of independent Z_1, . . . , Z_n ∼ N(θ, 1). The null hypothesis P is N(0, 1), and we are interested in the alternatives Q = Q_θ = N(θ, 1) for θ ≠ 0.
For observations z_1, . . . , z_n and a given alternative N(θ, 1), the likelihood ratio of the alternative to the null hypothesis is

E_θ(z_1, . . . , z_n) := ∏_{i=1}^n exp(θ z_i − θ²/2) = exp(θ ∑_{i=1}^n z_i − nθ²/2).    (2)

The corresponding optimal e-power is

∫ log E_θ dQ_θ^n = nθ²/2.    (3)

The interpretation of the optimal e-power (3) usually depends on the law of large numbers and its refinements (such as the central limit theorem and large deviation inequalities). The key feature of the definition ∫ log E dQ of the e-power of E under the alternative Q that requires explanation is the presence of log (it is discussed in detail by, e.g., Shafer [18, Sect. 2.2.1]). The idea is that a typical e-value is obtained by multiplying components coming from the individual observations z_i. This can be seen from (2) (and also expressions (8), (15), and (19) below, which are typical). Taking the logarithm leads to a much more regular distribution, which is, e.g., approximately Gaussian under standard regularity conditions. In the case of (2), the key component of the logarithm is ∑_{i=1}^n z_i, and we can apply, e.g., the central limit theorem to see that the observed e-power is between the narrow limits nθ²/2 ± c√n θ with probability close (in this particular case, even exactly equal) to Φ(c) − Φ(−c), where c > 0 and Φ is the standard Gaussian cumulative distribution function.
Remark 3.1. To get the full idea of the power of E under Q, we need the whole distribution of the observed e-power log E under Q, and replacing it by its expectation is a crude step. (The next step might be, e.g., complementing the expectation with the standard deviation of log E under Q.) We leave such more realistic notions of power for future research.
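A quick simulation (an illustration under the Gaussian setup above, with arbitrary parameter values) confirms both the mean nθ²/2 of the observed e-power and the Φ(c) − Φ(−c) coverage of the limits:

```python
import math
import random

random.seed(2)
n, theta, c = 200, 0.3, 1.0
mean_target = n * theta ** 2 / 2   # optimal e-power (3): here 9.0
reps = 5_000

obs_powers = []
for _ in range(reps):
    s = sum(random.gauss(theta, 1) for _ in range(n))   # sum of z_i under Q
    obs_powers.append(theta * s - n * theta ** 2 / 2)   # observed e-power, log of (2)

avg = sum(obs_powers) / reps
half_width = c * math.sqrt(n) * theta
inside = sum(abs(x - mean_target) <= half_width for x in obs_powers) / reps
print(round(avg, 1), round(inside, 2))   # avg near 9.0, inside near 0.68
```

With c = 1 the coverage Φ(1) − Φ(−1) ≈ 0.683 is reproduced up to Monte Carlo noise.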
We regard the family (2) of e-variables as a test (an e-test) of the null hypothesis N (0, 1). While for several important statistical models there are uniformly most powerful p-tests (see, e.g., [13,Chap. 3]), this is not the case for e-tests, and the e-tests considered in this paper are always families of e-variables.
The fact that the e-variable (2) depends on the unknown alternative parameter θ is a disadvantage. A natural way out is to integrate it over θ under the prior distribution N(0, 1), which gives us the e-variable

E(z_1, . . . , z_n) := ∫ E_θ(z_1, . . . , z_n) N(0, 1)(dθ) = (n + 1)^{−1/2} exp((z_1 + · · · + z_n)² / (2(n + 1)))    (4)

(cf. Remark 3.2 below). Notice that the operation of integration makes the e-variable "two-sided": while (2) is large for large values of θ ∑_{i=1}^n z_i, (4) is large whenever |∑_{i=1}^n z_i| is large. The remaining disadvantage of the e-variable (4) is that it is valid only under the simple Gaussian null hypothesis N(0, 1). In the following sections we will replace this simple null hypothesis with a composite nonparametric one.
Remark 3.2. In our computations in this paper we often use the formula

∫_{−∞}^{∞} exp(−Aθ² + Bθ) dθ = √(π/A) exp(B²/(4A)),

where A > 0 and B ∈ R.
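This is the standard Gaussian integral; a numerical spot-check (illustration only, with arbitrary values of A and B):

```python
import math

A, B = 1.7, 0.9   # arbitrary example values with A > 0

closed_form = math.sqrt(math.pi / A) * math.exp(B ** 2 / (4 * A))

# midpoint quadrature of exp(-A*theta**2 + B*theta) over [-40, 40];
# the tails beyond this range are negligible for these A, B
m, hi = 200_000, 40.0
h = 2 * hi / m
numeric = sum(math.exp(-A * t * t + B * t)
              for t in ((-hi + (i + 0.5) * h) for i in range(m))) * h
print(abs(closed_form - numeric) < 1e-6)
```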

Fisher-type nonparametric e-test of symmetry
Let Z_1, . . . , Z_n be continuous IID random variables. We are interested in the null hypothesis that their distribution is symmetric around 0. This is an example of a nonparametric hypothesis, since the distribution of Z_1, . . . , Z_n is not described in a natural way by finitely many real-valued parameters. Intuitively, we are interested in two alternatives: the one-sided alternative that Z_i, even though IID, are not symmetric but shifted to the right; and the two-sided alternative that Z_i are shifted to the right or to the left. A typical case in applications is where Z_i := Y_i − X_i, X_i is a pre-treatment measurement, Y_i is a post-treatment measurement, and we are interested in whether the treatment has any effect. Assuming that raising X_i is desirable, the one-sided alternative is that the treatment is beneficial.
We will formalize our null hypothesis in a way similar to repetitive and one-off structures [20, Sects. 11.2.4 and 11.2.5]. However, we will not need general definitions and will adapt them to our special case.
The symmetry model for a sample size n is the pair (t, b), where t is the mapping

t : (z_1, . . . , z_n) ∈ R^n ↦ (|z_1|, . . . , |z_n|) ∈ [0, ∞)^n

from the sample space R^n to the summary space [0, ∞)^n, and b is the Markov kernel that maps each summary (z_1, . . . , z_n) ∈ [0, ∞)^n to the uniform probability measure on the set

{(s_1 z_1, . . . , s_n z_n) | s_1, . . . , s_n ∈ {−1, 1}}.    (5)

An e-variable for testing the null hypothesis of symmetry is a function E : R^n → [0, ∞] such that ∫ E db(t(z_1, . . . , z_n)) ≤ 1 for all z_1, . . . , z_n. It is admissible if ≤ holds as = for all z_1, . . . , z_n; in other words, if it ceases to be an e-variable (w.r. to the symmetry model) as soon as its value is increased at any point.
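For small n the kernel b can be enumerated exactly, and normalizing any positive statistic by its b-average yields an admissible e-variable. A minimal sketch (with made-up magnitudes and an arbitrary λ, anticipating the statistic used below):

```python
import itertools
import math

mags = [0.3, 1.1, 0.7, 2.0]   # hypothetical summary t(z) = (|z_1|, ..., |z_n|)
lam = 0.5                     # arbitrary parameter

def raw(zs):
    # unnormalized statistic exp(lam * (z_1 + ... + z_n))
    return math.exp(lam * sum(zs))

# the 2^n equally likely points of b(t(z)): all sign assignments of the magnitudes
points = [[s * m for s, m in zip(signs, mags)]
          for signs in itertools.product([-1, 1], repeat=len(mags))]

norm = sum(raw(zs) for zs in points) / len(points)   # average w.r. to b

def e_var(zs):
    # admissible e-variable: averages to exactly 1 under b by construction
    return raw(zs) / norm

avg = sum(e_var(zs) for zs in points) / len(points)
print(round(avg, 12))
```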
In this section we discuss the first of our three e-tests for testing symmetry. We are interested in the e-variables of the form

E_λ(z_1, . . . , z_n) := exp(λ S(z_1, . . . , z_n) − C),    (6)

where S(z_1, . . . , z_n) := ∑_{i=1}^n z_i, λ > 0 is a positive parameter, and C is chosen to make E_λ an admissible e-variable, i.e.,

C = C(λ, t(z_1, . . . , z_n)) := log ∫ exp(λS) db(t(z_1, . . . , z_n))

(i.e., C := log E exp(λS), the expectation being under the null hypothesis). Lemma 4.1 will give a convenient formula for computing C.
The form (6) for our e-variables can be justified by the analogy with the e-variable (2) that we obtained in the Gaussian case. The expression for the normalizing constant C will, however, be different and will be derived momentarily.
The justification of the symmetry model from the point of view of standard statistical modelling is that, under the null hypothesis of symmetry, t is a sufficient statistic giving rise to b as conditional distribution.
For simplicity, we will assume that z 1 , . . . , z n are all different (under our assumption that the random variables Z 1 , . . . , Z n are continuous, the realizations will be all different almost surely).
Lemma 4.1. The value of C in (6) is given by

C = ∑_{i=1}^n log cosh(λ z_i).    (7)

Proof. We find

C = log ∫ exp(λS) db(t(z_1, . . . , z_n)) = log ∏_{i=1}^n (exp(λ z_i) + exp(−λ z_i))/2 = ∑_{i=1}^n log cosh(λ z_i).

(Alternatively, we can see straight away that the average of (8) below w.r. to b(t(z_1, . . . , z_n)) is 1.)

Plugging (7) into (6) gives the e-variable

E_λ(z_1, . . . , z_n) = ∏_{i=1}^n exp(λ z_i) / cosh(λ z_i).    (8)

The e-variable (8) dominates its simplified version

E′_λ(z_1, . . . , z_n) := exp(λ ∑_{i=1}^n z_i − (λ²/2) ∑_{i=1}^n z_i²)    (9)

in the sense E′_λ ≤ E_λ. Therefore, E′_λ is also an e-variable, albeit inadmissible in general. To check the inequality E′_λ ≤ E_λ, it suffices to check that

cosh x ≤ exp(x²/2).    (10)

Expanding both sides into Taylor's series shows that this inequality indeed holds for all x. The inequality is not excessively loose, especially for small values of x (which will be the case that we will be interested in when computing the Pitman efficiencies): cf. Figure 1.

In order to get rid of the dependence of (8) or (9) on λ, we can integrate these expressions over a prior distribution on λ. This can be easily done explicitly (see Remark 3.2) in the case of (9) and the prior distribution N(0, 1) on λ:

∫ E′_λ(z_1, . . . , z_n) N(0, 1)(dλ) = (1 + ∑_{i=1}^n z_i²)^{−1/2} exp((∑_{i=1}^n z_i)² / (2(1 + ∑_{i=1}^n z_i²))).    (11)

The right-hand side of (11) is close to the right-hand side of (4) under N(0, 1) as the null hypothesis: this follows from ∑_{i=1}^n z_i² ≈ n (for large n and with high probability). However, this relatively small change drastically changes the property of validity of the e-test: while the right-hand side of (4) is an e-test of N(0, 1) only, the right-hand side of (11) is a test of the nonparametric hypothesis of symmetry.
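Lemma 4.1 and the inequality E′_λ ≤ E_λ can be checked by exact enumeration for a small hypothetical sample (an illustrative sketch, not code from the paper):

```python
import itertools
import math

zs = [0.6, -0.2, 1.3, 0.4, -0.9]   # hypothetical observations
lam = 0.7                          # arbitrary parameter
n = len(zs)
S = sum(zs)

# C via Lemma 4.1, formula (7)
C = sum(math.log(math.cosh(lam * z)) for z in zs)

# C by direct averaging of exp(lam * S) over the 2^n sign patterns of |z_i|
vals = [math.exp(lam * sum(s * abs(z) for s, z in zip(signs, zs)))
        for signs in itertools.product([-1, 1], repeat=n)]
C_direct = math.log(sum(vals) / len(vals))

E = math.exp(lam * S - C)                                             # (8)
E_simple = math.exp(lam * S - lam ** 2 / 2 * sum(z * z for z in zs))  # (9)
print(abs(C - C_direct) < 1e-12, E_simple <= E)
```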

Results for Charles Darwin's data
In this subsection we will compute Fisher-type nonparametric e-values for data used by Darwin [3, Chap. 1] to test whether cross-fertilization of plants was advantageous to the progeny as compared with self-fertilization. This was an important question from the evolutionary point of view, and Darwin's preliminary work had convinced him that cross-fertilization was indeed advantageous; in particular, nature went to great lengths to prevent self-fertilization [2]. Table 1 reports results for a small subset of Darwin's data, those for maize. This subset was analyzed for Darwin by Francis Galton (as Darwin describes in detail in [3, Chap. 1]) and was reanalyzed by Fisher in [5, Chap. III]. Fisher offered both parametric analysis (assuming the Gaussian distribution) and novel nonparametric analysis, and his finding was that Student's t-test and his nonparametric test produce remarkably similar results. Table 1 lists the differences in height between 15 pairs of matched plants, with a cross- and self-fertilized plant in each pair (meaning a plant grown from a cross- or self-fertilized seed, respectively). A positive difference means that the cross-fertilized plant is taller, which we a priori expect to happen more often. Fisher was interested in two alternatives to the null hypothesis of symmetry: the one-sided alternative of positive observations being more common than negative ones and the two-sided alternative of asymmetry (with positive observations being either more or less common than negative ones).
Fisher's p-value for testing the one-sided hypothesis is 2.634%, and his p-value for testing the two-sided hypothesis is twice as large, 5.267%. Therefore, the one-sided p-value is significant but not highly significant, whereas the two-sided p-value is not even significant. Figure 2 shows the resulting e-values for the data in Table 1, and in order to make λ less arbitrary we normalize z_1, . . . , z_15 by dividing them by the standard deviation of these 15 numbers. Jeffreys's [10, Appendix A] rule of thumb is to consider an e-value of 10 as being analogous to a p-value of 1% and to consider an e-value of √10 ≈ 3.162 as being analogous to a p-value of 5%. This makes Figure 2 roughly comparable to Fisher's p-values, especially if we ignore the inadmissible simplified e-values. If we guess in advance that λ := 0.5 is a good parameter value, we will get an e-value of 7.651. More realistically, averaging the e-values for λ ∈ [0, 1] gives the one-sided e-value 5.149. Replacing λ ∈ [0, 1] by λ ∈ [−1, 1] gives the two-sided e-value 2.633, not reaching the threshold of √10.

Pitman-type asymptotic relative efficiency
The following definition is in the spirit of Pitman's definition, which can be found in, e.g., [19, Sect. 14.3]. Let (Q_θ | θ ∈ Θ) be a statistical model, i.e., a set of probability measures on the real line R, with the observations generated from it in an IID fashion. We assume, for simplicity, that Θ = R and regard Q_0 as the null hypothesis; informally, the alternative is either one-sided, θ > 0, or two-sided, θ ≠ 0 (for specific e-tests, we will have the same results for one-sided and two-sided Pitman efficiency). By an e-variable we mean an e-variable with respect to Q_0^n. In our asymptotic framework we consider sequences of parameter values θ_ν that depend on the "difficulty" ν = 1, 2, . . . of our testing problem; in the one-sided case we will assume θ_ν ↓ 0 (the sequence is strictly decreasing and converges to 0), and in the two-sided case we will assume θ_ν → 0.
Let E_1^n and E_2^n be families of e-variables on R^n; we are interested in the case where E_1^n is a family of interest to us (a nonparametric e-test such as (8) above, or (16) or (17) below) and E_2^n is the baseline family of all e-variables on R^n. The asymptotic relative efficiency of E_1^n w.r. to E_2^n is c if, for any β > 0 and any θ_ν ↓ 0 (one-sided case) or θ_ν → 0 (two-sided case), we have n_{ν,2}/n_{ν,1} → c, where n_{ν,j}, j = 1, 2, is the minimal number of observations n such that

∃E ∈ E_j^n : ∫ log E dQ_{θ_ν}^n ≥ β.
For example, if the asymptotic relative efficiency is 0.5, the best e-test in (E_1^n) requires twice as many observations n as the best test in (E_2^n) to achieve the same e-power (if the best e-tests exist). The idea of using an auxiliary parametric statistical model (Q_θ), such as the Gaussian model, to assay the efficiency of nonparametric e-tests is illustrated in Figure 3. We are testing a nonparametric null hypothesis (the hypothesis of symmetry in this paper), but we are afraid that for a popular parametric model (the Gaussian model Q_θ := N(θ, 1) in this paper, which plays the role of an assay statistical model) our testing method loses a lot. We are interested in the case where the intersection between the nonparametric null hypothesis and the assay model contains only one probability measure; we refer to this intersection as the parametric null hypothesis in Figure 3 (in this paper, it is {N(0, 1)}). For a given simple alternative hypothesis Q = Q_θ in the assay model (shown as the red dot in Figure 3), we are hoping to show that the best e-power achieved for testing the simple parametric null hypothesis vs Q is not much better than the best e-power achieved for testing the composite (and usually massive) nonparametric null hypothesis. Or, if the Pitman-type notion of efficiency is to be used (as in this paper), that the same e-power is attained for numbers of observations that are not wildly different.
For all three nonparametric e-tests considered in this paper (Sects. 6-8 below) we will need the number n_{ν,2} of observations required by our baseline, which is, by Lemma 2.1, the likelihood ratio dN(θ_ν, 1)/dN(0, 1). By (3), achieving an e-power of β requires approximately

2βθ_ν^{−2}    (12)

observations (namely, ⌈2βθ_ν^{−2}⌉ observations). As already mentioned, we always use the Gaussian model as our assay model, which justifies using (6) with S(z_1, . . . , z_n) := z_1 + · · · + z_n as a nonparametric e-test. The sign and Wilcoxon versions will be natural modifications (corresponding to relaxing the symmetry assumption).
Remark 5.1. In the context of regular statistical models such as Gaussian, it is natural to set θ ν = cν −1/2 . In this case the "difficulty" ν (referred to as "time" in [19,Sect. 14.3]) becomes proportional to the number of observations required to achieve a given e-power.

Asymptotic efficiency of the Fisher-type e-test
In the classical case, the relative efficiency of Fisher's test is 1 [6, Chapter 7, Example 4.1], as first shown by Hoeffding [9] (according to Mood [14]). Let us check that this remains true for the e-version as well.
First we find informally a suitable e-variable in the family (8) and then show that it requires the optimal number (12) of observations to achieve an e-power of β. Under the symmetry model, each observation z_i is split into its magnitude m_i := |z_i| and sign s_i := sign(z_i). Given the magnitudes, the signs are independent, and P(s_i = 1) = 1/2 under the null hypothesis N(0, 1), while under the alternative hypothesis N(θ_ν, 1),

P(s_i = 1 | m_i) = exp(θ_ν m_i) / (exp(θ_ν m_i) + exp(−θ_ν m_i)).

The conditional likelihood ratio for the signs is therefore

∏_{i=1}^n exp(θ_ν s_i m_i) / ((exp(θ_ν m_i) + exp(−θ_ν m_i))/2) = ∏_{i=1}^n exp(θ_ν z_i) / cosh(θ_ν m_i).    (13)

This is Fisher's e-test (8) corresponding to λ := θ_ν. Its observed e-power is

θ_ν ∑_{i=1}^n z_i − ∑_{i=1}^n log cosh(θ_ν m_i).

Since, under the alternative hypothesis N(θ_ν, 1), ∑ z_i ≈ nθ_ν and log cosh(θ_ν m_i) ≈ θ_ν² m_i²/2 with ∑ m_i² ≈ n, the e-power is ≈ nθ_ν² − nθ_ν²/2 = nθ_ν²/2. We obtain the optimal e-power (3) with θ = θ_ν, and so the asymptotic relative efficiency of Fisher's e-test is 1.
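This calculation can be checked by simulation (an illustrative sketch; n and θ are arbitrary, and the agreement is only approximate since θ is not infinitesimal):

```python
import math
import random

random.seed(3)
n, theta = 400, 0.2
target = n * theta ** 2 / 2   # optimal e-power (3): here 8.0
reps = 2_000

total = 0.0
for _ in range(reps):
    zs = [random.gauss(theta, 1) for _ in range(n)]
    # observed e-power of Fisher's e-test (8) with lam = theta
    total += theta * sum(zs) - sum(math.log(math.cosh(theta * z)) for z in zs)

e_power = total / reps
print(round(e_power, 2), round(target, 2))   # close to each other
```

The small residual gap between the two numbers is of order nθ⁴ and disappears in the Pitman limit θ_ν → 0.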

Sign e-test
In this and the following sections we use (6) for different statistics S, with C still chosen to make E_λ an admissible e-variable. In this section we make the simplest choice of S(z_1, . . . , z_n) in (6): the number k of positive z_i among z_1, . . . , z_n. This gives the sign e-test with parameter λ > 0. The use of signs for hypothesis testing goes back to [1].
To obtain a useful alternative representation of the sign e-test, let p ∈ (0, 1) be defined by the equation

λ = log (p/(1 − p))    (14)

(so that λ becomes the log-odds ratio). The e-variable (6) then becomes

E_λ(z_1, . . . , z_n) = exp(λk − C) = 2^n p^k (1 − p)^{n−k}.    (15)

The last expression is the likelihood ratio of an alternative to the null hypothesis, and so is an admissible e-variable. This gives us the representation (15) of the sign e-test. The equality between the last two terms in (15) gives an explicit expression for C, which in turn gives the alternative representation

E_λ(z_1, . . . , z_n) = e^{λk} 2^n / (1 + e^λ)^n    (16)

of the sign e-test.
In view of our informal alternative hypothesis, we are often interested in λ > 0, i.e., p > 1/2.
Notice that we are actually testing a wider null hypothesis than the symmetry model, since the magnitudes of z_i do not matter. Namely, the sign e-test is valid for testing the hypothesis that the signs of Z_1, . . . , Z_n take the values ±1 independently with equal probabilities. (A similar remark can also be made about the nonparametric e-test discussed in the following section, which in fact tests an intermediate null hypothesis.) As before, we have a dependence of the sign e-test (15) on p. To get rid of this dependence, we can, e.g., integrate (15) over the uniform probability measure on p ∈ [0, 1], obtaining

E(z_1, . . . , z_n) := 2^n B(k + 1, n − k + 1),

where B is the beta function. For testing the one-sided hypothesis we can integrate (15) over the uniform probability measure on [0.5, 1], which gives

E(z_1, . . . , z_n) := 2^{n+1} (B(k + 1, n − k + 1) − B(0.5; k + 1, n − k + 1)),

where B(0.5; k + 1, n − k + 1) := ∫_0^{0.5} t^k (1 − t)^{n−k} dt is the incomplete beta function.
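The two-sided mixture can be checked numerically (a sketch with hypothetical counts, not from the paper; B is computed via the log-gamma function):

```python
import math

n, k = 10, 8   # hypothetical: 8 positive signs out of 10 observations

def beta(a, b):
    # complete beta function via log-gamma, B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

e_two_sided = 2 ** n * beta(k + 1, n - k + 1)

# check against midpoint quadrature of 2^n p^k (1-p)^(n-k) over p in [0, 1]
m = 100_000
numeric = sum(2 ** n * ((i + 0.5) / m) ** k * (1 - (i + 0.5) / m) ** (n - k)
              for i in range(m)) / m
print(round(e_two_sided, 4))
```

For integer arguments B(k + 1, n − k + 1) = k!(n − k)!/(n + 1)!, so here the exact value is 1024/495.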

Efficiency of the sign test
In this section and the next we consider the same assay parametric model, still assuming that the null hypothesis is N(0, 1) and the alternative is N(θ_ν, 1). Suppose we only observe the signs s_i of z_i, which is the case when testing the null hypothesis with the sign e-test. By Lemma 2.1 the largest e-power for an e-variable of this kind is achieved by the likelihood ratio for the signs.
The sign of Z_i is 1 with probability 1/2 under the null hypothesis and with probability 1/2 + θ̃_ν/√(2π) under the alternative, where θ̃_ν ∼ θ_ν, by the first-order Taylor approximation of Φ. With k being the number of positive z_i, the likelihood ratio for the signs is

2^n p^k (1 − p)^{n−k},   p := 1/2 + θ̃_ν/√(2π).

This is an instance of the sign e-test (15), corresponding to p = 1/2 + θ̃_ν/√(2π). The observed e-power of this e-test is

k log(2p) + (n − k) log(2(1 − p)) ≈ 2(2k − n) θ̃_ν/√(2π) − nθ̃_ν²/π

(we have used the second-order Taylor approximation). This gives the e-power ∼ nθ_ν²/π. To achieve an e-power of β, the sign e-test needs ∼ πβθ_ν^{−2} observations. Therefore, the asymptotic efficiency of the sign e-test is 2/π ≈ 0.64, exactly the same as in the standard case [6, Example 3.1]. (In the standard case the sign test is usually compared with the t-test, but in this paper we use an even more basic assay parametric model; namely, we assume that the variance is known to be 1.) Since the asymptotic efficiency is approximately 2/3, we can say that the sign test wastes every third observation in our Gaussian setting. This is the least efficient of the three nonparametric e-tests considered in this paper when efficiency is measured using the Gaussian assay model as a yardstick.
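The 2/π limit can be seen numerically (an illustrative sketch): the per-observation e-power of the sign e-test is the Bernoulli Kullback-Leibler divergence, and its ratio to the optimal per-observation e-power θ²/2 tends to 2/π as θ → 0:

```python
import math

def kl_bern(p):
    # KL(Bernoulli(p) || Bernoulli(1/2)): per-observation e-power of the sign e-test
    return p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))

ratios = []
for theta in [0.5, 0.1, 0.02]:
    p = 0.5 + theta / math.sqrt(2 * math.pi)
    ratios.append(kl_bern(p) / (theta ** 2 / 2))

print([round(r, 4) for r in ratios], round(2 / math.pi, 4))
```

The printed ratios approach 2/π ≈ 0.6366 as θ decreases.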

Sign test for Darwin's data
It is interesting that the sign test gives the one-sided p-value of 0.00369 and the two-sided p-value of 0.00739. In contrast with Fisher's p-test, both p-values are highly significant, the reason being that the two negative numbers in Table 1 are so large in absolute value. Figure 4 is an analogue of Figure 2 for the sign test. The attainable e-values are now much larger, and the average over all p ∈ [0, 1] is 19.310. To use Jeffreys's [10, Appendix A] expressions, we have strong evidence against the null hypothesis of cross- and self-fertilization being equally efficient. The corresponding one-sided e-value, found as the average over all p ∈ [0.5, 1], is 38.544, and in Jeffreys's terminology it provides very strong evidence (for cross-fertilization tending to produce taller plants, in this context). With this amount of evidence, statistics is hardly needed to see that the evidence is really overwhelming.
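These p-values follow directly from the binomial null distribution of the number of positive signs (13 of the 15 differences in Table 1 are positive):

```python
from math import comb

n, k = 15, 13   # 13 positive differences out of 15

# one-sided: probability of at least k positive signs under Binomial(n, 1/2)
one_sided = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
two_sided = 2 * one_sided
print(round(one_sided, 5), round(two_sided, 5))   # → 0.00369 0.00739
```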

Wilcoxon's signed-rank e-tests
Wilcoxon's signed-rank test [22] is based on arranging the magnitudes |z_i| of the observations in ascending order and assigning to each its rank, a number in the range {1, . . . , n}: the observation z_i with the smallest |z_i| gets rank 1, the one with the second smallest |z_i| gets rank 2, etc. Notice that the symmetry model (i.e., the uniform probability measure on (5)) implies that for any set S ⊆ {1, . . . , n}, the probability is 2^{−n} that the observations with ranks in S will be positive and all other observations will be negative. This determines the distribution (conditional on the magnitudes |z_i|) of Wilcoxon's statistic V_n, defined as the sum of the ranks of the positive observations.
We will be interested in the nonparametric e-test (6) with S := V_n, i.e.,

E_λ(z_1, . . . , z_n) := exp(λ V_n − C).    (17)

The following lemma gives a convenient formula for computing C.
Lemma 8.1. The value of C in (17) is given by

C = ∑_{i=1}^n log ((1 + exp(λi))/2).    (18)

Proof. Using Fisher's conditional distribution (the uniform probability measure on (5)), we can write C in the form

C = log (2^{−n} ∑_{S⊆{1,...,n}} Λ^{∑_{i∈S} i}),

where Λ := exp(λ), and using the recursion

∑_{S⊆{1,...,i}} Λ^{∑_{j∈S} j} = (1 + Λ^i) ∑_{S⊆{1,...,i−1}} Λ^{∑_{j∈S} j}

(obtained by splitting all subsets of {1, . . . , i} into those that do not contain i and those that do), we obtain

∑_{S⊆{1,...,n}} Λ^{∑_{i∈S} i} = ∏_{i=1}^n (1 + Λ^i).

Plugging (18) into (17) gives the explicit product form

E_λ(z_1, . . . , z_n) = ∏_{i=1}^n 2 Λ^{i u_i} / (1 + Λ^i),    (19)

where u_i := 1 if the observation with rank i is positive and u_i := 0 otherwise.
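Lemma 8.1 can be verified by brute force for small n (an illustrative sketch with arbitrary n and λ):

```python
import itertools
import math

n, lam = 8, 0.3
Lam = math.exp(lam)

# formula (18): C = sum over i of log((1 + Lam**i) / 2)
C = sum(math.log((1 + Lam ** i) / 2) for i in range(1, n + 1))

# brute force: average Lam**(sum of ranks in S) over all 2^n subsets S
total = sum(Lam ** sum(S)
            for r in range(n + 1)
            for S in itertools.combinations(range(1, n + 1), r))
C_direct = math.log(total / 2 ** n)
print(abs(C - C_direct) < 1e-9)
```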

Efficiency of Wilcoxon's signed-rank e-test
Our derivation in this subsection follows [12, Example 3.3.6]. The statistic V_n, Wilcoxon's signed-rank statistic defined at the beginning of this section, is asymptotically normal both under the null hypothesis N(0, 1),

V_n ≈ N(n²/4, n³/12),    (21)

and under the alternative hypothesis N(θ_ν, 1),

V_n ≈ N((n²/2)(1/2 + θ_ν/√π), n³/12).    (22)

The mean value 1/2 + θ_ν/√π in (22) is found as the first-order approximation to the probability of Z_1 + Z_2 > 0, where Z_1 and Z_2 are independent and distributed according to the alternative hypothesis N(θ_ν, 1) (see [12, (3.3.40)]). Namely, it is obtained from Z_1 + Z_2 ∼ N(2θ_ν, 2) and from the standard Gaussian density being 1/√(2π) at 0. From (21) and (22) we obtain the asymptotic likelihood ratio; since the two Gaussian distributions share the variance n³/12 while their means differ by n²θ_ν/(2√π), its e-power is approximately (n²θ_ν/(2√π))² / (2n³/12) = (3/π)(nθ_ν²/2). To achieve an e-power of β, Wilcoxon's signed-rank e-test therefore needs ∼ (π/3) · 2βθ_ν^{−2} observations, and its asymptotic relative efficiency is 3/π ≈ 0.955, again in agreement with the classical result.

Limitations of our definition of efficiency

A limitation of our Pitman-type definition of asymptotic relative efficiency is that it relies on the e-power alone; implicitly, we hope that the full distribution of the optimal e-variable for the assay model, with n_{ν,2} observations, is close, in a suitable sense, to the full distribution of the e-test, such as (8), (16), or (19) (with n_{ν,1} observations and the corresponding value of the parameter). Therefore, a fuller treatment of asymptotic relative efficiency will not use e-power directly (which will make it more complicated).
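Assuming the standard asymptotics for V_n stated in this section (equal variances n³/12 and a mean shift of n²θ_ν/(2√π) between null and alternative), the e-power ratio works out to exactly 3/π for every n and θ_ν; a numerical sketch with arbitrary values:

```python
import math

theta, n = 0.05, 10_000   # arbitrary example values

delta = n ** 2 / 2 * (theta / math.sqrt(math.pi))   # difference of the two means
var = n ** 3 / 12                                   # common asymptotic variance

# e-power of the asymptotic likelihood ratio: the KL divergence between two
# Gaussians with equal variance is delta**2 / (2 * var)
e_power = delta ** 2 / (2 * var)
ratio = e_power / (n * theta ** 2 / 2)   # compare with the optimal e-power (3)
print(round(ratio, 4), round(3 / math.pi, 4))
```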

Definition of efficiency in terms of mixtures
Our definition of Pitman-type efficiency is close to being a direct translation of the classical one. It considers the alternatives N(θ_ν, 1) that approach the null hypothesis N(0, 1) as the difficulty ν increases. In the classical case, this works perfectly for many popular assay models because of the existence of a uniformly most powerful test: the optimal size-α critical region does not depend on ν (assuming θ_ν > 0). In the e-case, on the contrary, the optimal e-variable does depend on ν.
A possible alternative definition would be to replace N(θ_ν, 1) by a mixture ∫ N(θ, 1) µ_ν(dθ) of N(θ, 1) with respect to a probability measure µ_ν(dθ) that is increasingly concentrated around θ = 0 as ν → ∞. In a sense, the assay statistical model considered in this paper is "pure" in that it consists of pure Gaussian distributions. Considering mixtures ∫ N(θ, 1) µ_ν(dθ) would make the results more realistic but would also make the definitions much more complicated.

Other assay models
In our efficiency results, the Gaussian model can be replaced by other statistical models. It is particularly interesting to compare nonparametric e-tests with the optimal e-tests under those models; nowadays, comparison with the t-test, which was done in many of the classical papers (e.g., [8]), looks less convincing for non-Gaussian assay models.
Our choice of the form (6) of the nonparametric e-tests considered in this paper was motivated by the Gaussian assay model: see the likelihood ratio (2). Using other assay models would lead to other nonparametric e-tests. Therefore, varying the assay model may be a useful design tool for nonparametric e-tests.

Other notions of efficiency
The Pitman-type notion of efficiency is "local", in the sense of being defined in terms of progressively more difficult alternatives that tend to the null hypothesis as ν → ∞. It is the most popular notion of efficiency for nonparametric tests, but it would be interesting to develop e-versions of other, non-local, notions of asymptotic relative efficiency (see, e.g., [15,Chap. 1]).