1 Introduction
Suppose we observe sequentially a stream of random variables ${X_{1}},{X_{2}},\dots $ , whose marginal distributions may change at some unknown time, or changepoint, ν. To take one concrete example that we generalize later, denote the data stream by ${X_{1}},\dots ,{X_{\nu }}\sim {P_{{\mu _{0}}}}$ and ${X_{\nu +1}},{X_{\nu +2}},\dots \sim {P_{\mu }}$, where ${P_{{\mu _{0}}}}$ and ${P_{\mu }}$ are probability distributions with parameters ${\mu _{0}},\mu \in \mathbb{R}$, respectively. Let ${\mathbb{P}_{\nu }}$ and ${\mathbb{E}_{\nu }}$ denote probability and expectation, respectively, with respect to the distribution of the entire infinite data stream when the change occurs at time ν. If there is no change, we think of ν as being equal to ∞, and we let ${\mathbb{P}_{\infty }}$ and ${\mathbb{E}_{\infty }}$ refer to the corresponding probability and expectation.
We are concerned with designing sequential changepoint detection procedures that determine, at each time, whether a changepoint has occurred in the near past, and that (i) provide non-asymptotic false alarm guarantees, (ii) allow for non-parametric classes of pre- and post-change distributions, and (iii) are computationally efficient. Formally, a sequential changepoint detection algorithm consists of a data-dependent stopping rule ${N^{\ast }}\ge 1$. If not specified explicitly, the underlying filtration with respect to which ${N^{\ast }}$ is defined is assumed to be the natural filtration generated by the data stream ${X_{1}},{X_{2}},\dots $ , but in some cases it is beneficial to coarsen the filtration. If the stopping time ${N^{\ast }}$ is finite, then we declare that a changepoint has been detected, in the sense that sufficient evidence has accumulated to support the hypothesis that the data generating distribution has changed. If the algorithm never stops, we set ${N^{\ast }}=\infty $ and no changepoint is proclaimed. To evaluate a detection algorithm, we check how quickly it can detect a change in the distribution while controlling the frequency of false alarms.
To control false alarms, we adopt the average run length (ARL) metric [23], defined by
(1.1)
\[ \mathrm{ARL}({N^{\ast }}):={\mathbb{E}_{\infty }}[{N^{\ast }}].\]
We say that the “ARL is controlled at level α” if ${\mathbb{E}_{\infty }}{N^{\ast }}\ge 1/\alpha $. An equivalent error metric is the False Alarm Rate (FAR), the reciprocal of the ARL; we would like to ensure that the FAR is at most α, and we call a sequential change detection procedure “valid” if it satisfies this constraint.
A widely-used measure of the speed of detection after a changepoint is the worst average delay, conditioned either on the least favorable observations before the change [18], or on the event that the algorithm stops after the changepoint [25]. These are defined by
(1.2)
\[ {\mathcal{J}_{L}}({N^{\ast }}):=\underset{\nu \ge 0}{\sup }\hspace{2.5pt}\operatorname{ess\,sup}\hspace{2.5pt}{\mathbb{E}_{\nu }}\big[{({N^{\ast }}-\nu )^{+}}\mid {\mathcal{F}_{\nu }}\big],\](1.3)
\[ {\mathcal{J}_{P}}({N^{\ast }}):=\underset{\nu \ge 0}{\sup }\hspace{2.5pt}{\mathbb{E}_{\nu }}\big[{N^{\ast }}-\nu \mid {N^{\ast }}\gt \nu \big],\]
where the subscripts indicate the authors, and it is known that ${\mathcal{J}_{P}}({N^{\ast }})\le {\mathcal{J}_{L}}({N^{\ast }})$ [22].
Our implicit objective is to (approximately) minimize ${\mathcal{J}_{L}}({N^{\ast }})$ or ${\mathcal{J}_{P}}({N^{\ast }})$ while guaranteeing that the ARL is controlled at a prespecified level $\alpha \in (0,1)$. We differ from other work in that our focus is on nonparametric and composite pre- and post-change distributions, as well as on deriving nonasymptotic guarantees. In several such settings, it is a priori unclear how to define any valid sequential change detection algorithm, let alone an optimal one. Accordingly, we first address the design problem, and then move to questions involving (approximate) optimality for detection delays.
1.1 Prior Work and Our Contributions
If both pre- and post-change parameters ${\mu _{0}},\mu $ are known and the distributions have densities ${p_{{\mu _{0}}}}$ and ${p_{\mu }}$ with respect to some common reference measure, then the CUSUM procedure [23] is known to achieve the optimal worst average delay (exactly for ${\mathcal{J}_{L}}({N^{\ast }})$ and asymptotically for ${\mathcal{J}_{P}}({N^{\ast }})$ as $\alpha \to 0$) among all procedures controlling the ARL at the same level [18, 21, 31, 14]. Recall that the CUSUM procedure is defined by the stopping time ${N_{\mathrm{CU}}^{\ast }}:=\inf \{n\ge 1:{M_{n}^{\mathrm{CU}}}\ge {c_{\alpha }^{\mathrm{CU}}}\}$, where ${c_{\alpha }^{\mathrm{CU}}}\gt 0$ is a constant chosen so that ${\mathbb{E}_{\infty }}({N_{\mathrm{CU}}^{\ast }})=1/\alpha $, and the test statistic ${M_{n}^{\mathrm{CU}}}$ is defined by the recursive formula (1.4) below.
The Shiryaev–Roberts (SR) procedure [37, 32] is defined by the stopping time ${N_{\mathrm{SR}}^{\ast }}:=\inf \{n\ge 1:{M_{n}^{\mathrm{SR}}}\ge {c_{\alpha }^{\mathrm{SR}}}\}$, where ${c_{\alpha }^{\mathrm{SR}}}\gt 0$ is a constant chosen so that ${\mathbb{E}_{\infty }}({N_{\mathrm{SR}}^{\ast }})=1/\alpha $, and the test statistic ${M_{n}^{\mathrm{SR}}}$ is obtained recursively as
Unlike CUSUM, the SR procedure does not achieve exact minimax optimality for ${\mathcal{J}_{L}}({N^{\ast }})$. However, the SR procedure and its generalized versions enjoy strong asymptotic optimality guarantees [26, 27, 41, 39].
(1.4)
\[ {M_{n}^{\mathrm{CU}}}=\frac{{p_{\mu }}({X_{n}})}{{p_{{\mu _{0}}}}({X_{n}})}\cdot \max \big\{{M_{n-1}^{\mathrm{CU}}},1\big\},\hspace{2.5pt}\hspace{2.5pt}{M_{0}^{\mathrm{CU}}}:=0.\]
(1.5)
\[ {M_{n}^{\mathrm{SR}}}=\frac{{p_{\mu }}({X_{n}})}{{p_{{\mu _{0}}}}({X_{n}})}\cdot \big[{M_{n-1}^{\mathrm{SR}}}+1\big],\hspace{2.5pt}\hspace{2.5pt}{M_{0}^{\mathrm{SR}}}:=0.\]

The literature sometimes assumes that the pre-change distribution is known or can be approximated with high precision by using the previous history. However, the post-change distribution is typically unknown and is assumed to belong to a family of distributions $\mathcal{P}:=\{{p_{\mu }}:\mu \in \Theta \}$. In this case, one natural approach would be to replace the unknown parameter μ with an estimator $\widehat{\mu }$. If we use the maximum likelihood estimator (MLE), then we obtain the CUSUM procedure based on the generalized likelihood ratio (GLR) rule [e.g., see 1, 49, 38]. For those not familiar with sequential change detection, [15] provides a good overview.
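As a concrete illustration of the recursions (1.4) and (1.5), the following sketch runs both procedures for a unit-variance Gaussian mean shift. The specific model, parameter values, and threshold here are illustrative choices for exposition, not prescriptions from the text.

```python
import math

def likelihood_ratio(x, mu0=0.0, mu=1.0):
    """Likelihood ratio p_mu(x) / p_mu0(x) for N(mu, 1) versus N(mu0, 1)."""
    return math.exp(x * (mu - mu0) - (mu**2 - mu0**2) / 2)

def run_cusum_sr(xs, threshold, mu0=0.0, mu=1.0):
    """Run the CUSUM (1.4) and SR (1.5) recursions and return the first
    times each statistic crosses the threshold (None if it never does)."""
    m_cu = m_sr = 0.0
    t_cu = t_sr = None
    for n, x in enumerate(xs, start=1):
        lr = likelihood_ratio(x, mu0, mu)
        m_cu = lr * max(m_cu, 1.0)  # CUSUM recursion (1.4)
        m_sr = lr * (m_sr + 1.0)    # SR recursion (1.5)
        if t_cu is None and m_cu >= threshold:
            t_cu = n
        if t_sr is None and m_sr >= threshold:
            t_sr = n
    return t_cu, t_sr
```

On a stream whose mean jumps from 0 to a large value, both statistics typically cross a high threshold within a few post-change observations, while on pre-change data they remain small.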
Limitations of Prior Work Though the use of the GLR statistic for sequential change detection has a long history and often yields good empirical performance, the current literature has a few main limitations.
First, most existing methodology relies on parametric assumptions on the family of distributions (e.g., exponential families), both pre-change and post-change. There have been attempts to move away from this setting, which we discuss later. However, a general framework for deriving sequential change detection procedures in nonparametric or composite settings has not previously been presented at the level of generality we achieve here.
Second, the study of statistical properties has typically focused on the asymptotic regime of $\alpha \to 0$, unless the GLR statistic is defined on a well-separated post-change parameter space. In this paper, we guarantee nonasymptotic control on the ARL at a prespecified level (such as $\alpha =0.001$). In fact, in many settings considered, we do not know of any existing method to guarantee (even asymptotic) ARL control.
(We believe that, in both theory and practice, the first limitation is a bigger issue than the second. Fortunately, our solution to the first automatically handles the second. Indeed, in the composite and nonparametric settings we consider, it is quite unclear how to control the ARL even in an asymptotic sense.)
Finally, a direct online implementation of the GLR rule is infeasible since the memory and computation time both increase at least linearly as $n\to \infty $. One natural approach to tackle this online implementation issue is to use window-limited versions of the GLR rule [49, 13]. For instance, a simple form of the window-limited GLR rule can be defined by, at each time n, computing $\widehat{\mu }$ over only times $n-W$ to n for a properly chosen window size $W\gt 0$. However, the optimal choice of window size W has been studied only in the asymptotic setting ($\alpha \to 0$). For a fixed α, the optimal window size depends on the difference between the pre- and post-change distributions, which is unknown.
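To make the window-limited idea concrete, here is a hedged sketch of one simple form of the statistic, for a Gaussian mean-shift model with known variance; the model choice and the function name `window_glr` are illustrative assumptions on our part, not the construction of any specific prior work.

```python
def window_glr(xs, W, sigma=1.0, mu0=0.0):
    """At each time n, compute the maximum over candidate changepoints k in
    the last W steps of the log generalized likelihood ratio for a Gaussian
    mean shift, with the unknown post-change mean replaced by its MLE (the
    sample mean of x_{k+1}, ..., x_n).  Per-step time and memory are O(W)
    rather than O(n)."""
    stats = []
    for n in range(1, len(xs) + 1):
        lo = max(0, n - W)
        best, s = 0.0, 0.0
        for k in range(n - 1, lo - 1, -1):  # scan backwards over the window
            s += xs[k] - mu0                # s = sum_{i=k+1}^{n} (x_i - mu0)
            best = max(best, s * s / (2 * sigma**2 * (n - k)))
        stats.append(best)
    return stats
```

A pre-change stream at the null mean keeps the statistic near zero, while a sustained shift inside the window drives it up; the tension noted above is that a good choice of W depends on the unknown size of the shift.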
Our Contributions We present a general framework for sequential change detection, focusing on (parametric and nonparametric) settings with composite pre- and post-change distributions, and nonasymptotic guarantees on the ARL:
1. We introduce the concept of an e-detector that underlies our construction of sequential change detection procedures. The e-detector utilizes a generalization of the underlying martingale structure of likelihood ratios in classical sequential change detection procedures. This e-detector framework is applicable in nonparametric settings including sub-Gaussian, sub-exponential, and bounded random variables, among many others. In such settings, there is no common reference measure and likelihood ratios cannot be directly defined, thus composite nonnegative supermartingales, or more generally “e-processes”, must be employed in their place.
2. Despite handling composite pre- and post-change distributions, even without an iid (independent and identically distributed) assumption on the data, our e-CUSUM and e-SR sequential change detection procedures based on e-detectors can always nonasymptotically control the ARL at level α.
3. Nonasymptotic bounds on the worst average delay are derived in special cases for nonparametric distributions with exponential tail decay (such as sub-Gaussian or sub-exponential), and they match the rate of known lower bounds for exponential families as $\alpha \to 0$.
4. Computationally feasible algorithms are presented, in order to run our procedures in an online fashion without windowing. Practical strategies to choose hyperparameters are discussed. These are based on an adaptive mixture method, with the number of mixture components growing slowly over time.
Our procedures have natural gambling interpretations, and our work can be viewed as setting the foundations for a game-theoretic approach to sequential change detection. Before discussing the general framework in detail, in the following subsection, we present a motivating real-world example involving bounded random variables to illustrate how our nonparametric framework can be easily used in settings in which it is nontrivial to apply other common methods.
1.2 Example: A Changepoint in Cleveland Cavaliers 2011–2018
The Cleveland Cavaliers are an American professional basketball team. We use the Cavaliers’ game point records from the 2010–11 to the 2017–18 NBA seasons to illustrate how our proposed nonparametric sequential change detection algorithm can be applied to detect an interesting changepoint in the Cavaliers’ recent history.
The left plot in Figure 1 shows the difference between the scores of the Cavaliers and those of their opposing teams (also known as Plus-Minus) in all the games from the 2010–11 to the 2017–18 regular seasons. Each red line shows the average difference score in the corresponding season. Roughly, this value shows how well the Cavaliers performed against their opponents in terms of scoring. Typically, if a seasonal average is positive (or negative), then we may say that the Cavaliers showed a strong (or poor) performance in the corresponding season. The right plot shows a changepoint detected in early 2015; NBA fans may recall one major cause of the sharp improvement — LeBron James returned to the Cavaliers in 2014. However, how can we detect such a change on the fly by only tracking the Plus-Minus for each game?
This type of question fits well into the sequential change detection framework. Let ${X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be the sequence of Plus-Minus stats we observe sequentially. After observing the Cavaliers’ poor performance in the 2010–11 season, we define the pre-change distribution on the Plus-Minus stats as follows: the average Plus-Minus of the team is less than or equal to ${\mu _{0}}:=-1$. We are then interested in detecting a meaningful performance improvement on the fly, defining the post-change distribution as follows: the average Plus-Minus of the team is greater than ${\mu _{1}}:=1$. Here, the gap $|{\mu _{1}}-{\mu _{0}}|$ between the pre- and post-change averages of Plus-Minus represents the degree of improvement we consider significant.
Figure 1
Left: Plus-Minus of the Cavaliers from 2010–11 to 2017–18 seasons. Each horizontal red line corresponds to the seasonal average. Right: The sample path of (the logarithm of) one of our e-detectors. The horizontal red line is the threshold equal to $\log (1/\alpha )$ controlling the ARL by $1/\alpha =1000$. In this example, the procedure detects the changepoint in the Plus-Minus of the Cavaliers at the end of 2014–15 season.
Although the problem formulation above is simple, it is still nontrivial to fit it into commonly used sequential change detection procedures, for the following reasons. First, it is not easy to choose a proper parametric model for the observed Plus-Minus stats, since they are integer-valued samples whose mean and variance vary over seasons, as Figure 1 illustrates. Second, even if we could choose a proper model, it is difficult to find a threshold to detect the changepoint, since we are interested in detecting any change larger than $|{\mu _{1}}-{\mu _{0}}|$ rather than a fixed post-change distribution. Many common methods rely on a high-quality simulator or a large enough sample history of pre-change observations to compute a valid threshold. For this example, however, such tools are out of reach: the Cavaliers’ overall Plus-Minus stats are difficult to model directly, and it is tricky to justify using existing records to get a valid threshold since the team’s overall performance varied a lot around the 2010–11 season.
Based on our framework of sequential change detection using e-detectors, we circumvent the difficulties of the commonly used methods described above as follows. First, due to the nonparametric nature of the new framework, we do not need to choose any parametric model for the data. Instead, we simply assume the absolute value of each Plus-Minus stat is bounded by a large number — in this example, we set 80 as the bound. Though we set a conservatively large bound, our detection procedure is variance-adaptive, so we can detect the changepoint efficiently without specifying the variance of the observations. Second, the nonasymptotic analysis of the new framework makes it possible to choose an explicit detection boundary, equal to $\log (1/\alpha )$, yielding a sequential change detection procedure that controls the ARL by $1/\alpha $ for any given $\alpha \in (0,1)$. In this example, we choose $\alpha ={10^{-3}}$ so that the ARL is larger than at least 10 regular seasons of games. The right plot in Figure 1 shows the logarithm of the e-detector on which we build the sequential change detection procedure. The red horizontal line corresponds to the detection boundary $\log (1/\alpha )$. The procedure detects the changepoint in the Plus-Minus stat of the Cavaliers in the middle of the 2014–15 season. See Section 5.2 for a detailed explanation of how we construct the sequential change detection procedure from the general methodology we introduce in this paper.
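To give a flavor of the computation (the full variance-adaptive construction is deferred to Section 5.2), here is a simplified sketch of an SR-type e-detector for bounded data built from a "betting" increment. The fixed bet `lam`, its default value, and the function name are illustrative assumptions of ours, not the paper's actual procedure.

```python
def bounded_sr_e_detector(xs, mu0=-1.0, B=80.0, lam=None, alpha=1e-3):
    """Simplified SR-type e-detector for observations bounded in [-B, B].
    The increment L_n = 1 + lam * (x_n - mu0) stays nonnegative whenever
    0 <= lam <= 1/(B + abs(mu0)), and has conditional mean at most 1
    whenever the pre-change (conditional) mean is at most mu0.
    Declares a change at the first time the detector reaches 1/alpha."""
    if lam is None:
        lam = 0.5 / (B + abs(mu0))  # a conservative, illustrative bet
    m_sr = 0.0
    for n, x in enumerate(xs, start=1):
        m_sr = (1.0 + lam * (x - mu0)) * (m_sr + 1.0)  # SR-style update
        if m_sr >= 1.0 / alpha:
            return n  # changepoint declared at time n
    return None
```

On a stream whose mean stays below $\mu_0$, the detector remains bounded and no alarm fires; once the mean rises well above $\mu_0$, the detector grows geometrically until it crosses $1/\alpha$.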
Paper Outline The rest of the paper is organized as follows. In Section 2, we introduce a general framework for building composite, nonparametric, and nonasymptotic sequential change detection procedures using e-detectors. Section 3 extends this framework to the case where we have a set of e-detectors, and explains how to use a mixture method to combine multiple e-detectors effectively. In Section 4, we introduce an exponential structure of e-detectors that makes it possible to design a near-optimal detection procedure with an explicit upper bound on worst average delays. Based on the proposed framework, Section 5 presents two canonical examples, Bernoulli (parametric) and bounded (nonparametric) random variables, with a real data application to the Cavaliers’ 2011–2018 statistics. We conclude with a discussion, and defer proofs to the supplement.
2 Nonparametric Sequential Change Detection Using E-detectors
2.1 Problem Setup
Let $\mathcal{P}$ denote the set of possible pre-change distributions, which could in general be a nonparametric class. We do not assume the observations in the sequence to be independent or identically distributed: each $P\in \mathcal{P}$ is a distribution over an infinite sequence of random variables.
We will assume throughout that up to the unknown changepoint ν, the observations ${X_{1}},\dots ,{X_{\nu }}$ follow a distribution $P\in \mathcal{P}$. The remaining observations ${X_{\nu +1}},{X_{\nu +2}},\dots $ are drawn from a distribution Q in a class of post-change distributions $\mathcal{Q}$. In this case, we let ${\mathbb{P}_{P,\nu ,Q}},{\mathbb{E}_{P,\nu ,Q}}$ and ${\mathbb{V}_{P,\nu ,Q}}$ denote probability, expectation and variance operators over the entire data stream.
If there never is a change, we will use the notation ${\mathbb{P}_{P,\infty }},{\mathbb{E}_{P,\infty }}$ and ${\mathbb{V}_{P,\infty }}$. Also, if a change occurs at the beginning ($\nu =0)$ then we use ${\mathbb{P}_{0,Q}},{\mathbb{E}_{0,Q}}$ and ${\mathbb{V}_{0,Q}}$. Note that technically ${\mathbb{P}_{0,Q}}={\mathbb{P}_{Q,\infty }}$, but we use the former to denote that Q is a post-change distribution and a changepoint has occurred at the very start, and the latter to denote that Q is a pre-change distribution and a changepoint never occurs.
Let $\mathcal{F}:={\{{\mathcal{F}_{n}}\}_{n\ge 0}}$ be a filtration where we let ${\mathcal{F}_{0}}:=\{\varnothing ,\Omega \}$ for simplicity. Let $M:={\{{M_{n}}\}_{n\ge 0}}$ be a nonnegative adapted process with respect to the filtration $\mathcal{F}$. If required, we define ${M_{\infty }}:={\limsup _{n\to \infty }}{M_{n}}$ and ${\mathcal{F}_{\infty }}:=\sigma ({\textstyle\bigcup _{n\ge 0}}{\mathcal{F}_{n}})$ [see, for example, 4]. It is common to consider the natural filtration ${\mathcal{F}_{n}}:=\sigma ({X_{1}},\dots ,{X_{n}})$, but there are situations where restricting the filtration could be advantageous (for example, when there are nuisance parameters). There are yet other situations when enlarging the filtration with external randomness can be useful.
Let $\mathcal{T}$ denote the set of all stopping times with respect to $\mathcal{F}$, but we will later see that it typically suffices to consider finite stopping times, or those with finite expectation.
Remark 2.1.
In our paper, the changepoint ν and the post-change observations need not be independent of the pre-change observations, nor identically distributed. In other words, ν could be a stopping time, and Q could itself depend on the pre-change data. It may be helpful to imagine an adversary who adaptively decides at each step whether or not to “stop” the pre-change data. If they choose to stop at time ν, there are two options for the post-change data: (a) they can pick a distribution Q, draw a sequence ${Y_{1}},{Y_{2}},\dots \hspace{0.1667em}$ from Q, and reveal ${X_{\nu +i}}={Y_{i}}$ sequentially, or (b) they can pick a distribution Q and draw ${X_{\nu +1}},{X_{\nu +2}},\dots \hspace{0.1667em}$ from $Q\mid {\mathcal{F}_{\nu }}$. In situations without a common reference measure, setting (b) may be tricky to formally define (especially if the pre-change data have probability zero under Q), so one may think of setting (a). None of our results on the ARL require any assumptions on the changepoint ν or the post-change distribution of the data. When analyzing detection delay, we will typically assume that ${X_{\nu +1}},{X_{\nu +2}},\dots \hspace{0.1667em}$ are independent of the pre-change data, and are drawn from Q as if the time ν was reset to zero; we believe this can be relaxed in future work. But for much of the paper, it is also fine to assume that the post-change data are drawn from a data-dependent Q (or drawn from $Q\mid {\mathcal{F}_{\nu }}$).
With the appropriate definitions and setup in place, we can now define our central concept, an e-detector.
2.2 What Is an E-detector?
Definition 2.2 ($\mathcal{P}$-e-detector).
The process M is called an e-detector with respect to the class of pre-change distributions $\mathcal{P}$ if it satisfies the property
(2.1)
\[ {\mathbb{E}_{P,\infty }}[{M_{\tau }}]\le {\mathbb{E}_{P,\infty }}[\tau ]\hspace{1em}\text{for all }\tau \in \mathcal{T}\text{ and all }P\in \mathcal{P}.\]
For brevity, we refer to M as “an e-detector for $\mathcal{P}$”, or if $\mathcal{P}$ is understood from context, then simply “an e-detector”. If a stopping time τ has a nonzero probability of being infinite, then inequality (2.1) is trivially satisfied. Thus, the condition is only really required to hold for stopping times with finite expectation under some P. This latter set of stopping times depends on $\mathcal{P}$, and so in order to not complicate notation, we continue to simply consider all stopping times $\mathcal{T}$. (Also note that the condition can only be satisfied if process M is integrable under any $P\in \mathcal{P}$, so this is implicitly assumed to be the case in what follows.)
By the linearity of expectation and Tonelli’s theorem, an average (or “a mixture”) of e-detectors is also an e-detector. More formally, if ${\{{M^{a}}\}_{a\in A}}$ is a set of e-detectors (where a is a tuning parameter), then so is $\textstyle\int {M^{a}}d\mu (a)$ for any fixed probability distribution μ over A. Later in this paper, we will in particular use finite mixtures of the form $({M^{1}}+{M^{2}}+\cdots +{M^{K}})/K$ in order to adapt to the unknown post-change distribution. (In fact, we will develop more sophisticated mixtures whose number of components grows slowly with time.) For later reference, we state the above as a proposition:
Proposition 2.3.
Let ${\{{M^{a}}\}_{a\in A}}$ be a set of e-detectors. Then for any probability measure μ on A, the mixture of e-detectors, $\textstyle\int {M^{a}}d\mu (a)$ forms a valid e-detector.
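Proposition 2.3 translates into a one-line computation: given the sample paths of K e-detectors, their fixed-weight average at each time is again the path of an e-detector. A minimal sketch (the function name and the uniform default weights are our illustrative choices):

```python
def mixture_path(paths, weights=None):
    """Pointwise mixture of K e-detector sample paths of equal length:
    returns the weighted average of the detector values at each time step."""
    K = len(paths)
    if weights is None:
        weights = [1.0 / K] * K  # uniform mixture (M^1 + ... + M^K) / K
    assert abs(sum(weights) - 1.0) < 1e-9  # weights form a probability vector
    return [sum(w * p[t] for w, p in zip(weights, paths))
            for t in range(len(paths[0]))]
```

The mixed path can then be thresholded exactly as in Theorem 2.4, which is how the adaptive mixtures later in the paper hedge over unknown post-change distributions.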
An e-detector provides a quantification of evidence for whether a changepoint has occurred or not, and may be continuously monitored, stopped, and easily interpreted – e.g., a steep and steady increase of the process in recent times should be taken as an indication that a change has taken place. The following theorem shows how one can immediately obtain a sequential change detection procedure from an e-detector M.
Theorem 2.4.
For any $\alpha \in (0,1)$ and e-detector M, if we declare a changepoint at the stopping time
(2.2)
\[ {N^{\ast }}:=\inf \big\{n\ge 1:{M_{n}}\ge 1/\alpha \big\},\]
then we have
(2.3)
\[ \underset{P\in \mathcal{P}}{\inf }\hspace{2.5pt}{\mathbb{E}_{P,\infty }}[{N^{\ast }}]\ge 1/\alpha .\]
That is, the sequential change detection procedure in (2.2) controls the ARL at level α.
The informal proof is one line long: dropping subscripts, the definition of an e-detector implies that $\mathbb{E}{N^{\ast }}\ge \mathbb{E}{M_{{N^{\ast }}}}$, but by definition of ${N^{\ast }}$, we know that ${M_{{N^{\ast }}}}\ge 1/\alpha $ if ${N^{\ast }}$ is finite. (If ${N^{\ast }}$ is not almost surely finite, the claim holds anyway.) The full proof is in Appendix A.1.
2.3 Constructing an E-detector Based on a Sequence of E-processes
The central building block of our e-detector is called an e-process. E-processes are newly-developed tools that have been shown to play a fundamental role in sequential hypothesis testing, especially in composite, nonparametric settings. E-processes are generalizations of nonnegative martingales and supermartingales, and in particular, e-processes are nonparametric and composite generalizations of likelihood ratios. They have strong game-theoretic roots, and have found utility in meta-analysis, as well as for the purposes of anytime-valid inference in the presence of continuous monitoring [28, 29, 7, 11, 12]. The properties of e-processes have not yet been explored in changepoint analysis, and we undertake this effort here.
To understand their definition, we briefly forget about sequential change detection and consider testing the null hypothesis that ${X_{1}},{X_{2}},\dots \sim P$ for some $P\in \mathcal{P}$. An e-process for $\mathcal{P}$, called a $\mathcal{P}$-e-process, is a sequence of nonnegative random variables ${({E_{t}})_{t\ge 1}}$ such that for any $P\in \mathcal{P}$ and any stopping time τ, we have ${\mathbb{E}_{P}}[{E_{\tau }}]\le 1$. As before, the underlying filtration can be that of the data or that of $({E_{t}})$, or some enlargement of these. The value of ${E_{t}}$ measures evidence against the null (larger values, more evidence). A level-α sequential test can be obtained by rejecting the null as soon as ${E_{t}}$ exceeds $1/\alpha $; this is a consequence of Ville’s inequality [42, 11].
As a result of the optional stopping theorem, nonnegative $\mathcal{P}$-martingales (i.e., the process is a nonnegative P-martingales simultaneously for every $P\in \mathcal{P}$) and $\mathcal{P}$-supermartingales are examples of e-processes. However, e-processes are a distinct and more general class of processes. In fact, there exist natural classes $\mathcal{P}$ for which the only $\mathcal{P}$-martingales are constants, and the only $\mathcal{P}$-supermartingales are decreasing sequences, but there are e-processes for $\mathcal{P}$ that can increase to infinity when the data are not from $\mathcal{P}$. See [29] for one such example, arising from sequentially testing exchangeability of a binary sequence, and [6] for another example arising from testing log-concavity.
In this subsection, we show how to leverage e-processes to build e-detectors.
Definition 2.5 (${\mathrm{e}_{j}}$-process).
For any $j\ge 1$, ${\Lambda ^{(j)}}:={\{{\Lambda _{n}^{(j)}}\}_{n\ge 1}}$ is called an ${\mathrm{e}_{j}}$-process for $\mathcal{P}$ if it is a nonnegative adapted process such that ${\Lambda _{1}^{(j)}}=\cdots ={\Lambda _{j-1}^{(j)}}=1$, and
(2.4)
\[ {\mathbb{E}_{P,\infty }}\big[{\Lambda _{\tau }^{(j)}}\mid {\mathcal{F}_{j-1}}\big]\le 1\hspace{1em}\text{for all }\tau \in \mathcal{T}\text{ and all }P\in \mathcal{P}.\]
When $j=1$, the ${\mathrm{e}_{j}}$-process is simply a standard e-process, which tests whether the data distribution is different from the proclaimed null (pre-change) distribution. For $j\gt 1$, each ${\mathrm{e}_{j}}$-process can be viewed as an e-process that begins at time j, and tests whether the data occurring after time j are well explained by the null hypothesis.
Definition 2.6 (SR and CUSUM e-detectors).
Based on a sequence of e-processes ${\{{\Lambda ^{(j)}}\}_{j\ge 1}}$, define SR and CUSUM e-detectors ${M^{\mathrm{SR}}}$ and ${M^{\mathrm{CU}}}$, respectively, by ${M_{0}^{\mathrm{SR}}}={M_{0}^{\mathrm{CU}}}:=0$ and for each $n\ge 1$,
(2.5)
\[ {M_{n}^{\mathrm{SR}}}:={\sum \limits_{j=1}^{n}}{\Lambda _{n}^{(j)}}\hspace{1em}\text{and}\hspace{1em}{M_{n}^{\mathrm{CU}}}:=\underset{1\le j\le n}{\max }{\Lambda _{n}^{(j)}}.\]
It is not hard to check that the above processes are indeed e-detectors, meaning they satisfy (2.1).
Remark 2.7.
If each e-process starts with an initial weight smaller than 1, such that the sum of all initial weights is at most 1, then the corresponding SR and CUSUM procedures control the probability of a false alarm, ${\sup _{P\in \mathcal{P}}}{\mathbb{P}_{P,\infty }}(\tau \lt \infty )\le \alpha $, which is a much more stringent error metric than the ARL. The price to pay is that if there is a change at time ν, then the detection delay will no longer be independent of ν, and will typically increase logarithmically with ν (so that the worst average delay is unbounded).
2.4 Constructing Computationally Efficient E-detectors Using Baseline Increments
In general, it may take $O(n)$ time to update the aforementioned e-detectors at time n. In order to construct e-detectors that can be updated online in sublinear time and memory (or even near-constant time and memory), it turns out to be computationally convenient to use a common “baseline” increment in order to build the underlying ${\mathrm{e}_{j}}$-processes, as we do below. Effectively, this amounts to using ${\mathrm{e}_{j}}$-processes that are $\mathcal{P}$-supermartingales, which is a special case of particular interest.

Definition 2.8 (Baseline increment).

A nonnegative adapted process $L:={\{{L_{n}}\}_{n\ge 1}}$ is called a baseline increment for $\mathcal{P}$ if
(2.6)
\[ {\mathbb{E}_{P,\infty }}\big[{L_{n}}\mid {\mathcal{F}_{n-1}}\big]\le 1\hspace{1em}\text{for all }n\ge 1\text{ and all }P\in \mathcal{P}.\]
It is easy to check that if ${L^{1}}$ and ${L^{2}}$ are baseline increments, and ${A^{1}}$ and ${A^{2}}$ are nonnegative and predictable processes (meaning that ${A_{n}^{1}}$ and ${A_{n}^{2}}$ are both ${\mathcal{F}_{n-1}}$-measurable) such that ${A^{1}}+{A^{2}}$ is strictly positive, then the mixture $({A^{1}}{L^{1}}+{A^{2}}{L^{2}})/({A^{1}}+{A^{2}})$ also forms a baseline increment. In short, “predictable mixtures” retain the baseline increment property.
Comparing (2.6) to (2.4), we see that a baseline increment L is not itself an ${\mathrm{e}_{j}}$-process, because the expectation in (2.6) applies only at fixed times n, with the conditioning being on the previous step $n-1$, but (2.4) calculates expectations at any stopping time, and conditions on $j-1$. It is best to think about the baseline increment as the multiplicative increment that forms the ${\mathrm{e}_{j}}$-process, as follows.
Definition 2.9 (Baseline ${\mathrm{e}_{j}}$-process).
For a given baseline increment $L:={\{{L_{n}}\}_{n\ge 1}}$, we define the corresponding “baseline ${\mathrm{e}_{j}}$-process” ${\Lambda ^{(j)}}$, for each $j,n\in \mathbb{N}$, as below:
(2.7)
\[ {\Lambda _{n}^{(j)}}:={\prod \limits_{i=j}^{n}}{L_{i}}\hspace{1em}\text{for }n\ge j,\hspace{1em}\text{and}\hspace{1em}{\Lambda _{n}^{(j)}}:=1\hspace{1em}\text{for }n\lt j.\]
Under any pre-change distribution $P\in \mathcal{P}$, each ${\Lambda ^{(j)}}$ is a nonnegative supermartingale by definition of the baseline increment ${L_{i}}$. Therefore, a straightforward application of the optional stopping theorem implies that each ${\Lambda ^{(j)}}$ satisfies condition (2.4), and thus is a valid ${\mathrm{e}_{j}}$-process.
As an example of a baseline ${\mathrm{e}_{j}}$-process, consider the case where we have iid observations ${X_{1}},{X_{2}},\dots $ from a distribution ${p_{\theta }}$ parameterized by $\theta \in \Theta $, and the pre-change distribution is given by ${\theta _{0}}$. Then, for any post-change distribution ${p_{{\theta _{1}}}}$ with ${\theta _{1}}\ne {\theta _{0}}$, the likelihood ratio between the two distributions, ${L_{n}}:={p_{{\theta _{1}}}}({X_{n}})/{p_{{\theta _{0}}}}({X_{n}})$, yields a baseline increment process with the inequality in (2.6) holding with equality. Then, each ${\Lambda _{n}^{(j)}}$ is the likelihood ratio based on ${X_{j}},\dots ,{X_{n}}$. Further, instead of using a fixed post-change parameter ${\theta _{1}}$, we can also plug in a running MLE, or any other online nonanticipating estimator based only on the previous history ${\mathcal{F}_{n-1}}$, say ${\widehat{\theta }_{n-1}}$, into the likelihood ratio. In this case, although the value ${L_{n}}:={p_{{\widehat{\theta }_{n-1}}}}({X_{n}})/{p_{{\theta _{0}}}}({X_{n}})$ of the resulting process at time n may depend on the previous history ${\mathcal{F}_{n-1}}$, the inequality in (2.6) will be satisfied as an equality, yielding a valid baseline ${\mathrm{e}_{j}}$-process.
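As a sketch of the plug-in construction just described, assume a unit-variance Gaussian model with pre-change mean `theta0`; the running-mean estimator and the initial guess `theta_init` are illustrative choices, not requirements of the method.

```python
import math

def plugin_increments(xs, theta0=0.0, theta_init=1.0):
    """Baseline increments L_n = p_{theta_hat_{n-1}}(X_n) / p_{theta0}(X_n)
    for N(theta, 1), where theta_hat_{n-1} is the running mean of
    X_1, ..., X_{n-1}.  The estimator is nonanticipating (measurable with
    respect to F_{n-1}), so E[L_n | F_{n-1}] = 1 under the pre-change law."""
    increments, total = [], 0.0
    theta_hat = theta_init
    for n, x in enumerate(xs, start=1):
        # log of the Gaussian likelihood ratio at the plugged-in parameter
        log_l = x * (theta_hat - theta0) - (theta_hat**2 - theta0**2) / 2
        increments.append(math.exp(log_l))
        total += x
        theta_hat = total / n  # update only after theta_hat is used at time n
    return increments
```

Crucially, `theta_hat` is updated only after the increment for time n is computed, which is what makes each increment conditionally fair under the pre-change distribution.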
Remark 2.10.
While baseline increment processes provide a natural and computationally convenient way to construct ${\mathrm{e}_{j}}$-process, we emphasize that any e-detector, even one that does not use baseline increments, will automatically control the ARL by Theorem 2.4. To elaborate, baseline ${\mathrm{e}_{j}}$-processes are composite $\mathcal{P}$-supermartingales (meaning P-supermartingales for every $P\in \mathcal{P}$), but there exist other $\mathcal{P}$-e-processes—which are not $\mathcal{P}$-supermartingales—that naturally arise and these can be used to form e-detectors; for example using universal inference [47, Section 8].
Definition 2.11 (Baseline SR and CUSUM e-detectors).
When an SR or CUSUM e-detector is constructed using a sequence of baseline ${\mathrm{e}_{j}}$-processes (2.7), we call it a “baseline SR or CUSUM e-detector”.
Each baseline SR or CUSUM e-detector can be computed recursively like their classical analogs:
\[ {M_{n}^{\mathrm{SR}}}={L_{n}}\cdot \big[{M_{n-1}^{\mathrm{SR}}}+1\big]\hspace{1em}\text{and}\hspace{1em}{M_{n}^{\mathrm{CU}}}={L_{n}}\cdot \max \big\{{M_{n-1}^{\mathrm{CU}}},1\big\},\]
with ${M_{0}^{\mathrm{SR}}}={M_{0}^{\mathrm{CU}}}=0$ for each $n\in \mathbb{N}$. The above computational benefit is the primary reason to consider baseline e-detectors, but as mentioned in Remark 2.10 and when introducing e-processes, more general e-detectors are sometimes necessary for certain classes $\mathcal{P}$.
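The recursive updates above run in constant time and memory per observation. A minimal sketch of the resulting monitoring loop, with the $1/\alpha$ threshold from Theorem 2.4 (the function name and interface are our illustrative choices):

```python
def run_e_detectors(increments, alpha=0.001):
    """Given a stream of baseline increments L_1, L_2, ..., update the SR and
    CUSUM e-detectors in O(1) time per step and return the first time either
    reaches 1/alpha (None if neither ever does)."""
    m_sr = m_cu = 0.0
    for n, l in enumerate(increments, start=1):
        m_sr = l * (m_sr + 1.0)    # M_n^SR = L_n [M_{n-1}^SR + 1]
        m_cu = l * max(m_cu, 1.0)  # M_n^CU = L_n max{M_{n-1}^CU, 1}
        if max(m_sr, m_cu) >= 1.0 / alpha:
            return n  # changepoint declared
    return None
```

As a sanity check, with the trivial increments $L_n \equiv 1$ the SR e-detector equals n, so the alarm fires at time $\lceil 1/\alpha \rceil$, matching the trivial e-process example discussed later in this section.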
We briefly verify below that the processes ${M^{\mathrm{SR}}}$ and ${M^{\mathrm{CU}}}$ defined above are valid e-detectors. Indeed, for any stopping time τ and pre-change distribution $P\in \mathcal{P}$, if ${\mathbb{P}_{P,\infty }}(\tau =\infty )\gt 0$ then the condition of the e-detector in (2.1) holds trivially. If not, then τ is finite almost surely, and we have by linearity of expectation and the tower rule:
\[\begin{aligned}{}{\mathbb{E}_{P,\infty }}{M_{\tau }^{\mathrm{CU}}}& \le {\mathbb{E}_{P,\infty }}{M_{\tau }^{\mathrm{SR}}}\\ {} & ={\mathbb{E}_{P,\infty }}{\sum \limits_{j=1}^{\infty }}{\Lambda _{\tau }^{(j)}}1(j\le \tau )\\ {} & ={\sum \limits_{j=1}^{\infty }}{\mathbb{E}_{P,\infty }}{\Lambda _{\tau }^{(j)}}1(j\le \tau )\\ {} & ={\sum \limits_{j=1}^{\infty }}{\mathbb{E}_{P,\infty }}\big[1(j\le \tau ){\mathbb{E}_{P,\infty }}\big[{\Lambda _{\tau }^{(j)}}\mid {\mathcal{F}_{j-1}}\big]\big]\\ {} & \le {\sum \limits_{j=1}^{\infty }}{\mathbb{E}_{P,\infty }}1(j\le \tau )={\mathbb{E}_{P,\infty }}\tau ,\end{aligned}\]
where the first inequality comes from the nonnegativity of e-processes, and the second inequality comes from the definition of the e-process ${\Lambda ^{(j)}}$ for each $j\ge 1$. Note that this proof is also applicable to general SR and CUSUM e-detectors.
2.5 Sequential Change Detection Procedures by Thresholding E-detectors
The value of any e-detector process, like ${M^{\mathrm{SR}}}$ or ${M^{\mathrm{CU}}}$, is directly interpretable without specifying an explicit threshold: a larger value signals a greater accumulation of evidence for a changepoint. These processes can be monitored continuously and stopped adaptively. Nevertheless, to explicitly control the ARL at level α, we define SR- and CUSUM-style change detection procedures, called “e-SR” and “e-CUSUM” procedures, as follows.
Definition 2.12 (e-SR and e-CUSUM procedures).
Given SR and CUSUM e-detectors ${M^{\mathrm{SR}}}$ and ${M^{\mathrm{CU}}}$, define e-SR and e-CUSUM procedures by the stopping times
where ${c_{\alpha }}$ is a constant chosen to control the ARL of the e-CUSUM procedure by $1/\alpha $ for some $\alpha \in (0,1)$. By Theorem 2.4, $1/\alpha $ is a valid choice for ${c_{\alpha }}$.
We note that ${c_{\alpha }}=1/\alpha $ may be a very conservative choice for the e-CUSUM procedure. Indeed, suppose we use the trivial e-processes, that is, we set ${\Lambda _{n}^{(j)}}:=1$ for all $j,n$. In this setting, the SR e-detector is given by ${M_{n}^{\mathrm{SR}}}=n$ for each n, whereas the CUSUM e-detector ${M_{n}^{\mathrm{CU}}}$ is equal to 1 for all n. Therefore, any valid threshold for the e-SR procedure must be larger than $\lfloor 1/\alpha \rfloor $. On the other hand, any threshold above 1 makes ${N_{\mathrm{CU}}^{\ast }}=\infty $, which of course controls the ARL by $1/\alpha $, but the true ARL is far above the target. Building from this trivial example, it is possible to construct nontrivial examples in which letting $\alpha \to 0$ makes the gap between tight thresholds of the e-SR and e-CUSUM procedures arbitrarily large.
Remark 2.13.
Unless we assume the pre-change distribution is time-stationary, known and parametric, or we can access a good sample of the pre-change distribution or large enough historical data, computing a tight or even valid threshold ${c_{\alpha }}$ can be a challenging task. In the application sections below, we will mainly deal with non-stationary pre-change settings, where pre-change observations may not be identically distributed and thus all observations before the changepoint may follow different distributions. In this case, setting ${c_{\alpha }}=1/\alpha $ seems to be the only reasonable choice, and we recommend using the e-SR procedure rather than the e-CUSUM procedure: with the same threshold, the former always detects the changepoint at least as fast as the latter while provably controlling the ARL at the same level.
2.6 Some Nontrivial Instantiations of E-detectors
E-detectors can be thought of as a general reduction of change detection to sequential testing. Given the recent advances in nonparametric, composite sequential testing using nonnegative supermartingales and e-processes, our e-detectors now make new classes of change detection problems possible. We detail below some interesting nontrivial examples of change detection problems that can now be solved using e-detectors.
Example 1 (When Likelihood Ratios Are Well-defined).
Consider first the parametric case, when $\mathcal{P},\mathcal{Q}$ have a common reference measure and likelihood ratios are well-defined. When the pre- and post-change distributions are known (meaning $\mathcal{P},\mathcal{Q}$ are singletons), then the standard likelihood-ratio based CUSUM and SR processes from (1.4) and (1.5) are both e-detectors. If $\mathcal{Q}$ is composite, then taking a mixture likelihood ratio yields e-detectors (using either a non-anticipating predictable mixture or using a fixed mixture distribution). If $\mathcal{P}$ is also composite, but maximum likelihood estimation is efficient over $\mathcal{P}$, then e-processes can be constructed that take the ratio of mixture likelihoods over the alternative to maximum likelihood under the null, as done in universal inference [47, Section 8] or in [40]. If the “reverse information projection” (RIPr) is computable (analytically or numerically), then one can use the method of [7], though there are some subtleties: by default the method produces e-values over blocks of observations, which can be multiplied across independent blocks to produce a nonnegative supermartingale (and thus e-process) to be used within our framework. But sometimes, the sequence of RIPr’s over increasing sample sizes (nested blocks) automatically produces an e-process, and when this happens, it is more powerful than universal inference (see [7, 30] for details).
Example 2 (Change in Distribution).
In this example, $\mathcal{P}$ is the set of all iid product distributions over infinite sequences (or its convex closure, the set of all exchangeable distributions), so $\mathcal{P}=\{{\mu ^{\infty }}:\mu \hspace{2.5pt}\text{is a probability distribution}\}$. The conformal sequential change detection procedures of [44, 43] are designed to test deviations from exchangeability: they develop a test martingale (and thus an e-process) for $\mathcal{P}$, so their procedure fits neatly into our framework. Importantly, their filtration is restricted, and is smaller than the natural filtration of the data. This allows nonparametric martingales to exist, but the e-detector property only holds with respect to a smaller class of stopping times. Nevertheless, thresholding our e-detector at $1/\alpha $ still controls the ARL at level α, as the latter property is independent of the filtration used to construct the e-detector. As a side note, if one wanted to construct an e-detector that was valid at all stopping times with respect to the natural filtration of the data, e-detectors based on martingales provably do not suffice, but e-detectors based on e-processes can be constructed using the techniques from [29] (at least for categorical distributions).
Example 3 (Nonparametric Two-sample Testing).
Suppose we have two streams of (general multivariate) data: ${X_{1}},{X_{2}},\dots \sim {P_{X}}$ and ${Y_{1}},{Y_{2}},\dots \sim {P_{Y}}$. For simplicity below, assume that at time t, we observe one point from each stream (${X_{t}},{Y_{t}}$). Before the changepoint (if one exists), the distributions of ${X_{t}}$ and ${Y_{t}}$ are equal, meaning that $\mathcal{P}=\{{({P_{X}},{P_{Y}})^{\infty }}:{P_{X}}={P_{Y}}\}$. This is a very nonparametric class, since it specifies nothing about the distributions except for the fact that they are equal before the changepoint. After the changepoint, the streams have different distributions (maybe the distribution of X changes, or that of Y changes, or both), thus $\mathcal{Q}=\{{({P_{X}},{P_{Y}})^{\infty }}:{P_{X}}\ne {P_{Y}}\}$. For this very general nonparametric two-sample testing setup, [34] construct test martingales for $\mathcal{P}$ that are provably consistent against $\mathcal{Q}$ under minimal assumptions (in particular not requiring any minimum separation between the different distributions after the changepoint, since the tests automatically adapt to the closeness of the unknown alternative). These test martingales fit seamlessly into our e-detector framework, yielding new and practicable e-detectors for detecting a change from homogeneity to non-homogeneity between the streams.
Example 4 (Nonparametric Independence Testing).
In this problem setting, we observe a pair of random variables $({X_{t}},{Y_{t}})\sim {P_{XY}}$ at each step t, where ${X_{t}},{Y_{t}}$ can each lie in a general space. Before the changepoint (if one exists), the data are independent, meaning that $\mathcal{P}=\{{P_{XY}^{\infty }}:{P_{XY}}={P_{X}}\times {P_{Y}}\}$. Beyond saying that the joint distribution factorizes into the product of marginals, there is no further structure assumed, making this a rich nonparametric composite class. As before, after the changepoint, $(X,Y)$ become dependent, meaning that $\mathcal{Q}=\{{P_{XY}^{\infty }}:{P_{XY}}\ne {P_{X}}\times {P_{Y}}\}$. For this general nonparametric independence testing problem, [24] construct test martingales for $\mathcal{P}$ that are provably consistent against $\mathcal{Q}$ under minimal assumptions (as before, not requiring any separation). Again as before, in the testing problem the power of these tests automatically adapts to the difficulty of the unknown alternative. When plugged into our framework, these deliver a novel e-detector for a change from independence to dependence.
We briefly remark that in the two preceding examples (homogeneity and independence), we can move past the iid assumption. The same methods work even when the distribution is allowed to drift within $\mathcal{P}$ before the changepoint, and drift within $\mathcal{Q}$ after the changepoint. We refer to the original aforementioned papers for details.
Example 5 (Log-concavity).
Here, the data before the changepoint come from a log-concave distribution (in a general dimension $d\ge 1$), so $\mathcal{P}=\{{\mu ^{\infty }}:\mu \hspace{2.5pt}\text{has a log-concave Lebesgue density}\}$. This is a rich, nonparametric, shape-constrained class. The post-change class of distributions $\mathcal{Q}$ consists of, for example, any distribution that has a nonzero KL-divergence and Hellinger distance from every distribution in $\mathcal{P}$. For testing $\mathcal{P}$ against $\mathcal{Q}$, [6] show that there exist no nontrivial nonnegative supermartingales, but they design a powerful (and computationally efficient) e-process using universal inference. When plugged into our e-detector, this yields a nontrivial procedure that can detect a deviation from log-concavity.
Example 6 (Symmetry).
Suppose $\mathcal{P}=\{{\mu ^{\infty }}:\mu \hspace{2.5pt}\text{is symmetric around}\hspace{2.5pt}0\}$ consists of the set of all distributions (in a general dimension $d\ge 1$) that are symmetric around the origin, while $\mathcal{Q}$ consists of its complement (that is, distributions which are not symmetric around the origin). [28] characterize all processes that are nonnegative martingales for $\mathcal{P}$. When used with our e-detector, these provide a clean way to detect a change from symmetry to asymmetry.
Example 7 (Change in Mean).
Suppose ${\mathcal{P}^{C}}=\{{\mu ^{\infty }}:{\mathbb{E}_{X\sim \mu }}[X]\le 0,X\hspace{2.5pt}\text{satisfies}\hspace{2.5pt}C\}$ consists of the set of all univariate distributions with mean less than or equal to zero satisfying some constraint C, while ${\mathcal{Q}^{C}}=\{{\mu ^{\infty }}:{\mathbb{E}_{X\sim \mu }}[X]\gt 0,X\hspace{2.5pt}\text{satisfies}\hspace{2.5pt}C\}$ consists of those with positive mean. [11] provides a large variety of nonnegative supermartingales under various conditions C, such as when X is sub-Gaussian, bounded from above, bounded from below, or has only two or three finite moments; see also [45] for heavy-tailed supermartingales. These can be plugged into the e-detector to yield new nonparametric schemes for detecting changes in mean.
Example 8 (Huber-robust Change Detection).
As a final example, suppose we wish to detect a change in mean of heavy-tailed data (as above). But now, suppose that an adversary can also arbitrarily corrupt an ϵ fraction of the data. [46] develop Huber-robust supermartingales for this setting, which can be plugged into e-detectors to yield a valid e-detector in the presence of adversarial corruptions.
Note that in Examples 3, 4, 5 and 6, if $\mathcal{P}$ and $\mathcal{Q}$ were swapped, the testing problem would be much harder, and we are not aware of any powerful test or change detection method. However, if one were simply interested in detecting a change in homogeneity or in dependence, i.e. from some distribution in $\mathcal{P}\cup \mathcal{Q}$ to some other one, there are two possible change detection methods that come to mind. First, an e-detector based on the conformal change detection methods in Example 2 would detect any change from any distribution to any other, though choosing the conformity score may be tricky. As a second and more direct option, one can choose a measure of homogeneity or dependence (like the kernel maximum mean discrepancy or energy distance, or the Hilbert-Schmidt independence criterion or distance covariance), construct a confidence sequence for that measure (see [19] for several specific, tight constructions), and plug it into the recent change detection scheme of [35].
Finally, it is worth noting that it is possible to define e-detectors in cases where there is no common reference measure amongst the pre-change and post-change distributions, and thus no easily-defined likelihood ratio process, and also when there is no nontrivial martingale that can be constructed. This is precisely the utility of e-processes, which are nonparametric and composite generalizations of likelihood ratios. We develop one more interesting nonparametric example (not described above) in the simulations section: detecting change in mean of a bounded random variable.
2.7 Warm-up: Bounds on Worst Average Delays for Baseline E-detectors, when Q is Known
Recall that our objective is to minimize the worst average delays ${\mathcal{J}_{L}}({N^{\ast }})$ or ${\mathcal{J}_{P}}({N^{\ast }})$, given above in (1.2) and (1.3) respectively, while controlling the ARL so that ${\mathbb{E}_{P,\infty }}({N^{\ast }})\ge 1/\alpha $. Note that the worst average delays in (1.2) and (1.3) were implicitly defined for a fixed pair of pre- and post-change distributions. In our setting, where the class of pre-change distributions may be composite, we take an additional supremum over all pre-change distributions when defining both worst average delays for each fixed post-change distribution.
To derive bounds on the worst average delays, we further assume that the post-change observations ${X_{\nu +1}},{X_{\nu +2}},\dots $ are independent of the pre-change observations and form a strongly stationary process. That is, we assume that, for any finite subset $I\subset \mathbb{N}$ and any $j\in \mathbb{N}$, the joint distributions of ${\{{X_{\nu +i}}\}_{i\in I}}$ and ${\{{X_{\nu +i+j}}\}_{i\in I}}$ are equal. Regarding the underlying baseline increment, we further assume that there exist a function f and an integer $m\ge 0$ such that ${L_{n}}=f({X_{n}},{X_{n-1}},\dots ,{X_{n-m}})$ for each n. In this warm-up section, assume that we know the post-change distribution Q. Then, as we shall soon see, an optimal choice for ${L_{n}}$ would set $m=0$; but if m is strictly positive, then we implicitly assume that we can access m observations ${X_{0}},{X_{-1}},\dots ,{X_{1-m}}$ from the pre-change distribution in order to build sequential change detection procedures. Under these conditions, the following proposition provides analytically more tractable upper bounds on worst average delays for e-SR and e-CUSUM procedures.
Proposition 2.14.
For a given $\alpha \in (0,1)$, let ${N_{\mathrm{SR}}^{\ast }}$ and ${N_{\mathrm{CU}}^{\ast }}$ be e-SR and e-CUSUM procedures using baseline e-detectors. Under the settings described above, their worst average delays are upper bounded as
respectively, where ${N_{c}}$ is the stopping time defined for any $c\gt 1$ as ${N_{c}}:=\inf \{n\ge 1:{\textstyle\sum _{i=1}^{n}}\log {L_{i}}\ge \log (c)\}$, and ${c_{\alpha }}\le 1/\alpha $ is any threshold that ensures the e-CUSUM procedure (2.11) has an ARL no smaller than $1/\alpha $. Furthermore, if the post-change observations are iid and each ${L_{n}}$ is a function of ${X_{n}}$ only (i.e. $m=0$) with ${\mathbb{E}_{0,Q}}\log {L_{1}}\gt 0$, then
The proof can be found in Appendix A.1.
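To make the second part of Proposition 2.14 concrete, the following sketch considers a hypothetical instance: a known Gaussian mean shift with exact likelihood-ratio increments, so that ${\mathbb{E}_{0,Q}}\log {L_{1}}={\delta ^{2}}/2$. It estimates ${\mathbb{E}_{0,Q}}{N_{1/\alpha }}$ by Monte Carlo; all names and parameter choices are illustrative.

```python
import math
import random

def expected_detection_delay(delta, alpha, trials=2000, seed=0):
    """Monte Carlo estimate of E_{0,Q} N_{1/alpha} when the post-change
    data are iid N(delta, 1), the pre-change distribution is N(0, 1), and
    L_i is the exact likelihood ratio, so log L_i = delta*X_i - delta^2/2
    and E_{0,Q} log L_1 = delta^2/2 (the KL divergence)."""
    rng = random.Random(seed)
    log_c = math.log(1.0 / alpha)
    total = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s < log_c:  # N_c = inf{n : sum_i log L_i >= log c}
            n += 1
            s += delta * rng.gauss(delta, 1.0) - delta ** 2 / 2.0
        total += n
    return total / trials
```

By Wald's identity, the estimate should be close to $\log (1/\alpha )/({\delta ^{2}}/2)$ plus an overshoot term, illustrating how the delay shrinks as ${\mathbb{E}_{0,Q}}\log {L_{1}}$ grows.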
Remark 2.15.
The stopping time ${N_{1/\alpha }}$ delivers a level-α sequential test for the null hypothesis ${H_{0}}:P\in \mathcal{P}$. On the other hand, the stopping time ${N_{{c_{\alpha }}}}$ may not control the type-1 error at level α, since the threshold ${c_{\alpha }}$ can be significantly smaller than $1/\alpha $, as discussed earlier.
The expected stopping time in the upper bounds (2.12) and (2.13) depends on the parameter $m\ge 0$. When Q is known, $m=0$ suffices because (2.14) suggests that ${L_{i}}$ should simply be chosen to maximize ${\mathbb{E}_{0,Q}}\log {L_{1}}$, which is identical to the log-optimality criterion used for testing $\mathcal{P}$ against Q (as discussed in many recent works, like [33, 7, 48]).
In applications when Q is unknown, the underlying baseline increment may require a long enough sample history (large m) in order to achieve a reasonably small expected stopping time by “learning” Q or using an empirical distribution plug-in for Q. Then, the above results suggest that a reasonable way to choose the window size m is to pick the one minimizing the upper bound on the worst average delays. However, since the optimal choice of the window size should depend on the unknown post-change Q, it remains difficult to minimize the upper bound directly. In our simulations, we often encounter cases where a larger window size is better. In this case, we would choose a window size as large as possible while keeping the procedure computationally tractable. However, the right way to handle unknown Q is dealt with in detail next.
3 Combining Baseline E-detectors Using the Method of Mixtures
In the previous section, we discussed how one can construct a valid e-detector and derive upper bounds on worst average delays. However, in most composite and nonparametric sequential change detection scenarios, there is no single optimal e-detector but instead there are often several applicable e-detectors to choose from. In this section, we introduce a practicable and computationally efficient strategy to construct a good e-detector for minimizing the upper bound on worst average delays in Proposition 2.14, especially for the upper bound (2.14) in the $m=0$ case.
In detail, suppose we have a set of baseline increments ${\{{L^{\lambda }}\}_{\lambda \in \Pi }}$ parametrized by $\lambda \in \Pi $. Then, under the additional condition assumed in the second part of Proposition 2.14 (namely, that $m=0$ and the post-change observations are from an iid sequence), an ideal choice of the parameter ${\lambda ^{\mathrm{op}}}$ for a post-change distribution Q is given by
minimizing the first term of the upper bound in (2.14), which often becomes a leading term especially for small enough α. In turn, this term is inversely proportional to
where the second argument $\mathcal{P}$ in $D(Q||\mathcal{P})$ makes explicit the dependence of the baseline increment ${L^{{\lambda ^{\mathrm{op}}}}}$ on the class of pre-change distributions $\mathcal{P}$. For the rest of the paper, we will assume that the set of baseline increments ${\{{L^{\lambda }}\}_{\lambda \in \Pi }}$ is rich enough that $D(Q||\mathcal{P})\gt 0$ for all $Q\in \mathcal{Q}$. As we observe later, in many canonical cases we have $D(Q||\mathcal{P})={\inf _{P\in \mathcal{P}}}\mathrm{KL}(Q||P)$, where $\mathrm{KL}(Q||P)$ is the Kullback-Leibler (KL) divergence from Q to P.
(3.1)
\[ {\lambda ^{\mathrm{op}}}(Q)=\underset{\lambda \in \Pi }{\operatorname{arg\,max}}{\mathbb{E}_{0,Q}}\log {L_{1}^{(\lambda )}},\]
Generally, computing ${\lambda ^{\mathrm{op}}}$ is not feasible since it depends on the unknown post-change distribution Q. Next, we show how to build a mixture of baseline e-detectors that can detect the changepoint nearly as quickly as the one with ${\lambda ^{\mathrm{op}}}$, when known lower and upper bounds ${\lambda _{L}}$ and ${\lambda _{U}}$ on ${\lambda ^{\mathrm{op}}}$ are available.
Notice that an average of e-detectors is also a valid e-detector, in the sense of satisfying condition (2.1). Therefore, for any mixing distribution W supported on $[{\lambda _{L}},{\lambda _{U}}]$, we can define mixtures of e-SR and e-CUSUM procedures by the following stopping times:
where ${c_{\alpha }}\gt 1$ is a fixed constant which controls the ARL for some $\alpha \in (0,1)$. Since the mixture of e-CUSUM procedures is based on a valid e-detector, we can always set the threshold ${c_{\alpha }}$ equal to $1/\alpha $, the same as the threshold of the mixture of e-SR procedures.
3.1 Computational and Analytical Aspects of Mixtures of Baseline E-detectors
Though any mixing distribution yields a valid e-detector, for computational efficiency we only consider discrete mixtures, where the support of the mixing distribution has at most countably many elements. To be specific, let ${\{{\omega _{k}}\}_{k\ge 1}}$ be a set of nonnegative mixing weights with ${\textstyle\sum _{k\ge 1}}{\omega _{k}}=1$ and let ${\{{\lambda _{k}}\}_{k\ge 1}}$ be the corresponding support. For ease of notation, we write $L(k):={L^{({\lambda _{k}})}}$ for each $k\ge 1$. Based on the set of nonnegative mixing weights and the corresponding set of baseline increments, we define mixtures of SR and CUSUM e-detectors as ${M_{0}^{\mathrm{mSR}}}={M_{0}^{\mathrm{mCU}}}:=1$, and for each $n\in \mathbb{N}$,
Let $K:=|\{k:{\omega _{k}}\gt 0\}|$ be the number of nonzero mixing weights.
Finite Mixtures If $K\lt \infty $, we may for simplicity assume that the first K weights ${\omega _{1}},\dots ,{\omega _{K}}$ are the only nonzero values. In this case, we can compute mixtures of SR and CUSUM e-detectors by
where ${M_{n}^{\mathrm{SR}}}(k)$ and ${M_{n}^{\mathrm{CU}}}(k)$ are computed recursively as
with ${M_{0}^{\mathrm{SR}}}(k)={M_{0}^{\mathrm{CU}}}(k)=0$ for each $k\in [K]$ and $n\in \mathbb{N}$. Therefore, if each computation of ${L_{n}}(k)$ has constant time and space complexities, then the evaluation of mixtures of SR and CUSUM e-detectors at each time n requires $O(K)$ time and space complexity.
(3.7)
\[\begin{aligned}{}{M_{n}^{\mathrm{mSR}}}& ={\sum \limits_{k=1}^{K}}{\omega _{k}}{M_{n}^{\mathrm{SR}}}(k),\hspace{5pt}\text{and}\hspace{5pt}{M_{n}^{\mathrm{mCU}}}={\sum \limits_{k=1}^{K}}{\omega _{k}}{M_{n}^{\mathrm{CU}}}(k),\end{aligned}\]
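The $O(K)$ computation can be sketched as follows, assuming the same per-component recursions as in the baseline case, ${M_{n}^{\mathrm{SR}}}(k)=(1+{M_{n-1}^{\mathrm{SR}}}(k)){L_{n}}(k)$ and ${M_{n}^{\mathrm{CU}}}(k)=\max (1,{M_{n-1}^{\mathrm{CU}}}(k)){L_{n}}(k)$; all names are illustrative.

```python
def mixture_e_detectors(increment_streams, weights):
    """Finite mixtures of K baseline SR and CUSUM e-detectors. Each
    component k keeps its own recursion,
        M_n^SR(k) = (1 + M_{n-1}^SR(k)) * L_n(k),
        M_n^CU(k) = max(1, M_{n-1}^CU(k)) * L_n(k),
    and the mixtures are the weighted sums, costing O(K) time and space
    per step. `increment_streams` is a sequence over time n; each element
    is the length-K vector of increments L_n(1), ..., L_n(K)."""
    K = len(weights)
    m_sr, m_cu = [0.0] * K, [0.0] * K
    path = []
    for L in increment_streams:
        for k in range(K):
            m_sr[k] = (1.0 + m_sr[k]) * L[k]
            m_cu[k] = max(1.0, m_cu[k]) * L[k]
        path.append((sum(w * m for w, m in zip(weights, m_sr)),
                     sum(w * m for w, m in zip(weights, m_cu))))
    return path

def first_crossing(values, c):
    """Thresholding rule: the first n at which the detector reaches c,
    or None if it never does within the observed horizon."""
    for n, v in enumerate(values, start=1):
        if v >= c:
            return n
    return None
```

With trivial increments ${L_{n}}(k)\equiv 1$ and weights summing to one, the mixture SR detector again equals n while the mixture CUSUM detector stays at 1, matching the baseline behavior.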
Infinite Mixtures, Scheduling Functions and Adaptive Re-weighting If $K=\infty $, or if K is to be chosen adaptively as an increasing function of n, we modify our strategy as follows. We first choose an increasing function $K:\mathbb{N}\to \mathbb{N}$, and let ${K^{-1}}:\mathbb{N}\to \mathbb{N}$ be the generalized inverse of K defined by ${K^{-1}}(k):=\inf \{j\ge 1:K(j)\ge k\}$ for each $k\in \mathbb{N}$. Note that ${K^{-1}}$ is also an increasing function. We call such a function K a scheduling function. We intentionally overload notation: in what follows, $K(n)$ plays the same role as the constant K in the case of finite support. Note that ${K^{-1}}(k)\le n$ for any $k\le K(n)$; we will use this simple fact below when defining nested summations.
Based on a scheduling function K and its generalized inverse ${K^{-1}}$, we define adaptive SR and CUSUM e-detectors, ${M_{n}^{\mathrm{aSR}}}$ and ${M_{n}^{\mathrm{aCU}}}$, respectively, as
where each ${\gamma _{j}}:=1/{\textstyle\sum _{k=1}^{K(j)}}{\omega _{k}}\ge 1$ is the adaptive re-weighting factor at time j, ensuring that the mixing weights always sum to one at each time. Here, we restrict not only the range of the index k from $[1,\infty ]$ to $[1,K(n)]$ but also the range of the index j from $[1,n]$ to $[{K^{-1}}(k),n]$. This choice makes it possible to compute both ${M_{n}^{\mathrm{aSR}}}$ and ${M_{n}^{\mathrm{aCU}}}$ efficiently, since ${M_{n}^{\mathrm{SR}}}(k)$ and ${M_{n}^{\mathrm{CU}}}(k)$ have the following recursive representations
for each $n\ge {K^{-1}}(k)$ and ${M_{n}^{\mathrm{aSR}}}(k)={M_{n}^{\mathrm{aCU}}}(k)=0$ for all $n=0,1,\dots ,{K^{-1}}(k)-1$. Therefore, if each computation of ${L_{n}}(k)$ has constant time and space complexities, then the computations of adaptive SR and CUSUM e-detectors at each time n have $O(K(n))$ time and space complexities as well. For the purpose of implementing an online algorithm, we are typically interested in the case $K(n)=O(\log (n))$.
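The scheduling machinery can be sketched as follows, under the assumption that the per-component recursion takes the form ${M_{n}}(k)=({\gamma _{n}}+{M_{n-1}}(k)){L_{n}}(k)$ for $n\ge {K^{-1}}(k)$, which reduces to the finite-mixture recursion when $\gamma \equiv 1$, consistent with Remark 3.2; the logarithmic schedule and all names are illustrative choices.

```python
def K(n):
    """Example logarithmic scheduling function: K(n) = floor(log2(n)) + 1,
    so component k becomes active at time K^{-1}(k) = 2^(k-1)."""
    return n.bit_length()  # floor(log2(n)) + 1 for n >= 1

def K_inv(k):
    """Generalized inverse: inf{j >= 1 : K(j) >= k} = 2^(k-1)."""
    return 1 << (k - 1)

def adaptive_sr(increment_fn, weights, n_max):
    """Adaptive SR e-detector sketch. `increment_fn(n, k)` returns L_n(k);
    `weights` is a list of omega_k, assumed long enough for K(n_max).
    Per-component recursion (an assumption stated in the lead-in):
        M_n(k) = (gamma_n + M_{n-1}(k)) * L_n(k)  for n >= K^{-1}(k),
    with M_n(k) = 0 before component k activates. Each step costs O(K(n))."""
    m = {}
    path = []
    for n in range(1, n_max + 1):
        kn = K(n)
        gamma = 1.0 / sum(weights[:kn])  # re-weighting factor gamma_n >= 1
        for k in range(1, kn + 1):
            if K_inv(k) > n:  # component not yet active (never triggers here)
                continue
            m[k] = (gamma + m.get(k, 0.0)) * increment_fn(n, k)
        path.append(sum(w * m.get(k + 1, 0.0)
                        for k, w in enumerate(weights[:kn])))
    return path
```

With trivial increments ${L_{n}}(k)\equiv 1$, the re-weighting factor exactly compensates for the inactive components, so the adaptive SR path again equals n at each time.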
Remark 3.2.
Both mixtures of SR and CUSUM e-detectors can be viewed as special cases of their adaptive counterparts where the scheduling function K is understood as a constant function. In this case, we have ${\gamma _{j}}={\textstyle\sum _{k=1}^{K}}{\omega _{k}}=1$ for each j, and thus ${M_{n}^{\mathrm{aSR}}}={M_{n}^{\mathrm{mSR}}}$ and ${M_{n}^{\mathrm{aCU}}}={M_{n}^{\mathrm{mCU}}}$ for each $n\in \mathbb{N}$.
Unlike finite mixtures, the mixing distribution deployed in the adaptive SR and CUSUM e-detectors varies over time. Hence, we cannot simply apply Proposition 2.3 to check whether this adaptive scheme yields valid e-detectors. The following proposition formally states the validity of adaptive SR and CUSUM e-detectors. The proof can be found in Appendix A.2.
Now, based on ${M_{n}^{\mathrm{aCU}}}$ and ${M_{n}^{\mathrm{aSR}}}$, the adaptive e-SR and e-CUSUM procedures are defined by the stopping times:
where $\alpha \in (0,1)$ is a fixed constant and ${c_{\alpha }}$ is a positive value controlling the ARL of the adaptive e-CUSUM procedure by $1/\alpha $. As with the usual e-CUSUM procedure discussed before, we can always set ${c_{\alpha }}=1/\alpha $. In this case, from the fact that ${N_{\mathrm{aSR}}^{\ast }}\le {N_{\mathrm{aCU}}^{\ast }}$, which is implied by ${M_{n}^{\mathrm{aSR}}}\ge {M_{n}^{\mathrm{aCU}}}$, we have
where the last inequality comes from Theorem 2.4 with the fact that ${M^{\mathrm{aSR}}}$ is a valid e-detector. However, the threshold ${c_{\alpha }}$ for the adaptive e-CUSUM procedure can be chosen to be a significantly smaller value if we have enough knowledge about the pre-change distribution, as discussed in Section 2.5.
(3.16)
\[ {\mathbb{E}_{P,\infty }}{N_{\mathrm{aCU}}^{\ast }}\ge {\mathbb{E}_{P,\infty }}{N_{\mathrm{aSR}}^{\ast }}\ge 1/\alpha ,\]
3.2 Worst Average Delay Analysis for Adaptive Mixtures of E-detectors
We now derive general upper bounds on worst average delays of the adaptive e-SR and e-CUSUM procedures. As before, we assume that the post-change observations ${X_{\nu +1}},{X_{\nu +2}},\dots $ are independent of the pre-change observations and form a strongly stationary process. We also assume that there exist functions ${f_{k}}$ and an integer $m\ge 0$ such that ${L_{n}}(k)={f_{k}}({X_{n}},{X_{n-1}},\dots ,{X_{n-m}})$ for each k and n. Again, if m is strictly positive, then we implicitly assume that there exist m observations ${X_{0}},{X_{-1}},\dots ,{X_{1-m}}$ from the pre-change distribution that we can use to build sequential change detection procedures. Under these additional conditions, the following theorem provides analytically more tractable upper bounds on worst average delays for ${N_{\mathrm{aSR}}^{\ast }}$ and ${N_{\mathrm{aCU}}^{\ast }}$.
Theorem 3.4.
Under the additional conditions described above, the worst average delays for ${N_{\mathrm{aSR}}^{\ast }}$ and ${N_{\mathrm{aCU}}^{\ast }}$ can be upper bounded as follows:
where, for $j\in \mathbb{N}$ and $c\gt 0$, ${N_{c}}(j)$ is the stopping time
Here, ${c_{\alpha }}$ is the same threshold used to build the adaptive e-CUSUM procedure in (3.15). Note that for mixtures of the SR and CUSUM e-detectors, where the scheduling function K is a constant function, the stopping times in the upper bounds do not depend on the index j, and thus the upper bounds reduce to
where, for $c\gt 0$, ${N_{c}}$ is the stopping time
(3.19)
\[\begin{aligned}{}{N_{c}}(j)& :=\inf \Bigg\{n\ge 1:{\sum \limits_{k=1}^{K(j)}}{\omega _{k}}{\prod \limits_{i=1}^{n}}{L_{i}}(k)\ge c\Bigg\}.\end{aligned}\]
The proof of the upper bounds on worst average delays can be found in Appendix A.2.
Unlike the baseline e-detector case, however, due to mixing weights, it is nontrivial to get further simplified upper bounds on worst average delays as we did in Section 2.7. Next, we present specific adaptive e-SR and e-CUSUM procedures based on exponential baseline increments where we can compute both procedures efficiently and derive upper bounds on worst average delays in explicit forms.
4 Exponential Baseline E-detectors and Their Mixtures
Building upon recent advances in time-uniform concentration inequalities and sequential testing developed in [11] and [36], below we consider an exponential structure on baseline e-detectors. We show that, in this setting, it is possible to approximate the “oracle” e-SR and e-CUSUM procedures based on knowledge of the optimal (but unknown) ${\lambda ^{\mathrm{op}}}$ by adaptive procedures built using a mixture of a carefully chosen set of baseline increments ${\{{L^{{\lambda _{k}}}}\}_{k\ge 1}}$ with mixing weights ${\{{\omega _{k}}\}_{k\ge 1}}$.
To be specific, assume there exists an extended real-valued convex function ψ on $\mathbb{R}$ that is finite and strictly convex on a set $\Pi \subset \mathbb{R}$ containing 0 in its interior ${\Pi ^{\mathrm{o}}}$. Furthermore, assume ψ is continuously differentiable on ${\Pi ^{\mathrm{o}}}$ with $\nabla \psi (0)=0=\psi (0)$. Then define the “exponential baseline increment” as follows.
Definition 4.1 (Exponential baseline increment).
Above, s and v are mnemonics for sum and variance. For each $Q\in \mathcal{Q}$, define
where we assume that all expectations are finite. The following proposition provides an explicit expression for $D(Q||\mathcal{P}):={\max _{\lambda \in \Pi }}{\mathbb{E}_{0,Q}}\log {L_{1}^{(\lambda )}}$ and a sufficient condition to have $D(Q||\mathcal{P})\gt 0$ when the underlying baseline increments have the form specified in (4.1).
(4.2)
\[ \mu (Q)\hspace{-0.1667em}:=\hspace{-0.1667em}{\mathbb{E}_{0,Q}}s({X_{1}}),\hspace{2.5pt}{\sigma ^{2}}(Q):={\mathbb{E}_{0,Q}}v({X_{1}})\hspace{2.5pt}\text{and}\hspace{2.5pt}{\Delta ^{\mathrm{op}}}(Q):=\frac{\mu (Q)}{{\sigma ^{2}}(Q)},\]
Proposition 4.2.
For a fixed $Q\in \mathcal{Q}$, suppose there exists ${\lambda ^{\mathrm{op}}}\in {\Pi ^{\mathrm{o}}}$ such that ${\Delta ^{\mathrm{op}}}(Q)=\nabla \psi ({\lambda ^{\mathrm{op}}})$. Then,
where ${\psi ^{\ast }}$ is the convex conjugate of ψ. Thus, if ${\Delta ^{\mathrm{op}}}(Q)\ne 0$, we have $D(Q||\mathcal{P})\gt 0$.
(4.3)
\[ D(Q||\mathcal{P})={\mathbb{E}_{0,Q}}\log {L_{1}^{({\lambda ^{\mathrm{op}}})}}={\psi ^{\ast }}\big({\Delta ^{\mathrm{op}}}(Q)\big){\sigma ^{2}}(Q)\ge 0,\]
The proof of Proposition 4.2 can be found in Appendix B.1. For the rest of the section, we assume that
\[\begin{aligned}{}& \big\{\lambda \in \mathbb{R}:{\Delta ^{\mathrm{op}}}(Q)=\nabla \psi (\lambda ),Q\in \mathcal{Q}\big\}\subset \Pi ,\\ {} & {\Delta ^{\mathrm{op}}}(Q)\ne 0,\hspace{2.5pt}\hspace{2.5pt}\forall Q\in \mathcal{Q}.\end{aligned}\]
Then, Proposition 4.2 implies $D(Q||\mathcal{P})\gt 0$ for all $Q\in \mathcal{Q}$. Also, for ease of notation, we will drop the dependence on Q from the related parameters and simply write ${\lambda ^{\mathrm{op}}},\mu ,{\sigma ^{2}}$ and Δ.
The exponential structure of the baseline increment in (4.1) results in a simple form of ${\lambda ^{\mathrm{op}}}$ such that
where the second equality comes from the fact that $\lambda =\nabla {\psi ^{\ast }}\circ \nabla \psi (\lambda )$ for each $\lambda \in \Pi $. Although ${\lambda ^{\mathrm{op}}}$ still depends on the unknown post-change distribution Q via ${\Delta ^{\mathrm{op}}}$, in many cases, we can find upper and lower bounds on ${\Delta ^{\mathrm{op}}}$. In this section, we explain how to use the knowledge of the range of ${\Delta ^{\mathrm{op}}}$ to build a mixture of exponential baseline e-detectors that has explicit upper bounds on worst average delays.
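For a concrete illustration of this formula, consider the sub-Gaussian case $\psi (\lambda )={\lambda ^{2}}/2$ on $\Pi =\mathbb{R}$ (an assumption for this example, not required in general). Then ${\psi ^{\ast }}(u)={\sup _{\lambda }}\{\lambda u-\psi (\lambda )\}={u^{2}}/2$ and $\nabla {\psi ^{\ast }}(u)=u$, so that
\[ {\lambda ^{\mathrm{op}}}=\nabla {\psi ^{\ast }}\big({\Delta ^{\mathrm{op}}}\big)={\Delta ^{\mathrm{op}}}=\frac{\mu (Q)}{{\sigma ^{2}}(Q)}\hspace{5pt}\text{and}\hspace{5pt}D(Q||\mathcal{P})={\psi ^{\ast }}\big({\Delta ^{\mathrm{op}}}\big){\sigma ^{2}}(Q)=\frac{{\mu ^{2}}(Q)}{2{\sigma ^{2}}(Q)},\]
by (4.3), which matches the KL divergence between $N(\mu ,{\sigma ^{2}})$ and $N(0,{\sigma ^{2}})$, recovering the classical Gaussian mean-change setting.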
(4.4)
\[ {\lambda ^{\mathrm{op}}}={(\nabla \psi )^{-1}}\big({\Delta ^{\mathrm{op}}}\big)=\nabla {\psi ^{\ast }}\big({\Delta ^{\mathrm{op}}}\big),\]4.1 Separated Pre- and Post-change Distributions
Suppose we have knowledge of upper and lower bounds on the parameter ${\Delta ^{\mathrm{op}}}$ given in (4.2), i.e., we know a pair $({\Delta _{L}},{\Delta _{U}})$ such that ${\Delta _{L}}\lt {\Delta ^{\mathrm{op}}}\lt {\Delta _{U}}$. It then follows that ${\lambda _{L}}\lt {\lambda ^{\mathrm{op}}}\lt {\lambda _{U}}$, where ${\lambda _{L}}=\nabla {\psi ^{\ast }}({\Delta _{L}})$, ${\lambda ^{\mathrm{op}}}=\nabla {\psi ^{\ast }}({\Delta ^{\mathrm{op}}})$ and ${\lambda _{U}}=\nabla {\psi ^{\ast }}({\Delta _{U}})$. To simplify the presentation, we only consider the one-sided, well-separated case: $0\lt {\lambda _{L}}\lt {\lambda _{U}}$.
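The mapping from the bounds $({\Delta _{L}},{\Delta _{U}})$ to $({\lambda _{L}},{\lambda _{U}})$ via $\lambda =\nabla {\psi ^{\ast }}(\Delta )$ can be sketched numerically. Below is a minimal illustration, assuming the Gaussian-type case $\psi (\lambda )={\lambda ^{2}}/2$ (so ${\psi ^{\ast }}(u)={u^{2}}/2$ and $\nabla {\psi ^{\ast }}(u)=u$); the numeric bounds are hypothetical stand-ins for $({\Delta _{L}},{\Delta _{U}})$:

```python
# Sketch: map bounds on Delta^op to bounds on lambda^op via (4.4),
# lambda^op = grad psi*(Delta^op), in the Gaussian-type case
# psi(lambda) = lambda^2 / 2, psi*(u) = u^2 / 2, grad psi*(u) = u.
psi_star = lambda u: 0.5 * u ** 2
grad_psi_star = lambda u: u  # inverse of grad psi for this psi

delta_L, delta_U = 0.02, 0.41  # hypothetical bounds on Delta^op
lam_L, lam_U = grad_psi_star(delta_L), grad_psi_star(delta_U)
# This ratio of conjugates reappears inside the threshold bound (4.7).
ratio = psi_star(delta_U) / psi_star(delta_L)
print(lam_L, lam_U, ratio)
```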
Let $1/\alpha $ be the target level of ARL control for a fixed $\alpha \in (0,1)$. Let ${\{L(k)\}_{k\in [K]}}$ and ${\{{\omega _{k}}\}_{k\in [K]}}$ be K exponential baseline increments and mixing weights whose specific values will be defined later in this subsection. Since each ${L_{n}}(k)$ is a function of the n-th observation ${X_{n}}$ for each $k\in [K]$, Theorem 3.4 implies that, if the post-change observations form a strongly stationary process, then the worst average delays for the mixtures of e-SR and e-CUSUM procedures, ${N_{\mathrm{mSR}}^{\ast }}$ and ${N_{\mathrm{mCU}}^{\ast }}$, can be upper bounded by ${\mathbb{E}_{0,Q}}{N_{1/\alpha }}$ and ${\mathbb{E}_{0,Q}}{N_{{c_{\alpha }}}}$, respectively, where ${N_{c}}$ is the stopping time defined in (3.22). Furthermore, as we can always set the threshold for the e-CUSUM procedure to be ${c_{\alpha }}\le 1/\alpha $, we have ${\mathbb{E}_{0,Q}}{N_{{c_{\alpha }}}}\le {\mathbb{E}_{0,Q}}{N_{1/\alpha }}$. Therefore, in this subsection, we construct a set of baseline increments for which we can derive a tight bound on ${\mathbb{E}_{0,Q}}{N_{1/\alpha }}$.
Algorithm 1 describes our methodology for computing mixtures of e-SR procedures in detail. The inputs to the algorithm are the upper and lower bounds ${\Delta _{U}}$ and ${\Delta _{L}}$ on ${\Delta ^{\mathrm{op}}}$ and the maximal number of baseline processes ${K_{\max }}$. Mixtures of e-CUSUM procedures can be computed similarly by replacing Line 10 with
(4.5)
\[ {M_{n}^{\mathrm{CU}}}(k)\gets \exp \big\{{\lambda _{k}}s({X_{n}})-{\psi _{k}}v({X_{n}})\big\}\cdot \max \big\{{M_{n-1}^{\mathrm{CU}}}(k),1\big\}.\]
Also, for the mixture of e-CUSUM procedures, we can replace the threshold $1/\alpha $ with a smaller value ${c_{\alpha }}$ if we have enough information about the pre-change distribution. For both e-SR and e-CUSUM, at each time n, updates of the mixtures of e-detectors have $O({K_{\alpha }})$ time and space complexity, which does not depend on n.
Algorithm 1 relies critically on the function computeBaseline in Line 2, which returns a set of parameters and weights used to compute a mixture of e-detectors, along with a threshold value ${g_{\alpha }}\gt 0$ that appears in the upper bound on worst average delays given in Theorem 4.3 below. The details of computeBaseline are fairly technical and are given in Algorithm 3 in Appendix B.1.
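The per-step updates can be sketched in a few lines. The following is a minimal, hypothetical illustration (Gaussian-type increments with $s(x)=x$, $v(x)\equiv 1$, ${\psi _{k}}={\lambda _{k}^{2}}/2$, and arbitrary mixing weights), using the standard SR recursion ${M_{n}}={L_{n}}({M_{n-1}}+1)$ alongside the e-CUSUM rule (4.5); it is a sketch of the update structure, not a calibrated instance of Algorithm 1:

```python
import math

# Sketch of mixture e-SR / e-CUSUM updates with K exponential baselines.
# Hypothetical setup: s(x) = x, v(x) = 1, psi_k = lambda_k^2 / 2.
def run_mixture(xs, lambdas, weights, alpha, cusum=False):
    M = [0.0] * len(lambdas)  # one e-detector per baseline increment
    for n, x in enumerate(xs, start=1):
        for k, lam in enumerate(lambdas):
            incr = math.exp(lam * x - 0.5 * lam ** 2)  # L_n(k)
            if cusum:
                M[k] = incr * max(M[k], 1.0)  # e-CUSUM update (4.5)
            else:
                M[k] = incr * (M[k] + 1.0)    # e-SR update
        if sum(w * m for w, m in zip(weights, M)) >= 1.0 / alpha:
            return n  # alarm: mixture e-detector crossed 1/alpha
    return None  # no alarm

lambdas = [0.25, 0.5, 1.0]
weights = [1 / 3] * 3
xs = [0.0] * 20 + [1.5] * 30  # mean shift after time 20
print(run_mixture(xs, lambdas, weights, alpha=0.01))
```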
In the main result of this section, we provide bounds on the ARL and worst average delays for the mixtures of e-SR and e-CUSUM procedures obtained with Algorithm 1, expressed as functions of the parameter ${\lambda ^{\mathrm{op}}}$ and the threshold value ${g_{\alpha }}$. The proof can be found in Appendix B.1.
Theorem 4.3.
Let ${N_{\mathrm{mSR}}^{\ast }}$ and ${N_{\mathrm{mCU}}^{\ast }}$ be the stopping times corresponding to the mixtures of e-SR and e-CUSUM procedures in Algorithm 1 and its variant, respectively. Then, both procedures control the ARL by $1/\alpha $. If we further assume that the post-change observations ${X_{\nu +1}},{X_{\nu +2}},\dots $ are iid samples from a post-change distribution Q, then the worst average delays for ${N_{\mathrm{mSR}}^{\ast }}$ and ${N_{\mathrm{mCU}}^{\ast }}$ can be bounded as
(4.6)
\[ \begin{aligned}{}& \max \big\{{\mathcal{J}_{L}}\big({N_{\mathrm{mSR}}^{\ast }}\big),{\mathcal{J}_{L}}\big({N_{\mathrm{mCU}}^{\ast }}\big)\big\}\\ {} & \le \frac{{g_{\alpha }}}{D(Q||\mathcal{P})}+\frac{{\mathbb{V}_{0,Q}}[\log {L_{1}^{({\lambda ^{\mathrm{op}}})}}]}{{[D(Q||\mathcal{P})]^{2}}}+1.\end{aligned}\]
The same bound also holds for ${\mathcal{J}_{P}}({N_{\mathrm{mSR}}^{\ast }})$ and ${\mathcal{J}_{P}}({N_{\mathrm{mCU}}^{\ast }})$.
In Proposition B.2 in Appendix B.1, we show that if the number of baseline processes ${K_{\max }}$ in Algorithm 1 is chosen large enough, then the quantity ${g_{\alpha }}$ returned by computeBaseline is at most
(4.7)
\[ \underset{\eta \gt 1}{\inf }\eta \bigg[\log (1/\alpha )+\log \bigg(1+\bigg\lceil {\log _{\eta }}\frac{{\psi ^{\ast }}({\Delta _{U}})}{{\psi ^{\ast }}({\Delta _{L}})}\bigg\rceil \bigg)\bigg],\]
which can be easily evaluated numerically. Expression (B.15) in Appendix B.1 provides a precise formula for how large ${K_{\max }}$ needs to be for the above bound to be in effect. In most practical cases, ${K_{\max }}=1000$ is a large enough choice. Moreover, in many of the canonical examples presented later, if we choose ${K_{\max }}$ large enough to satisfy condition (B.15), then the first term $\frac{{g_{\alpha }}}{D(Q||\mathcal{P})}$ of the upper bound on worst average delays in Theorem 4.3 becomes the leading term. In this case, inequality (4.7) shows that this leading term is $O(\log (1/\alpha )/D(Q||\mathcal{P}))$ as $\alpha \to 0$.
Remark 4.4.
If there is only one pre-change distribution P and one post-change distribution Q, both from a natural univariate exponential family, then their likelihood ratio forms an exponential baseline increment. In this case, the above upper bound becomes $O(\log (1/\alpha )/\mathrm{KL}(Q||P))$ as $\alpha \to 0$, matching the rate of the known lower bounds [18].
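The infimum over $\eta \gt 1$ in the bound (4.7) is one-dimensional, so a simple grid search suffices to evaluate it. A minimal sketch, assuming the Gaussian-type conjugate ${\psi ^{\ast }}(u)={u^{2}}/2$ and hypothetical values for α, ${\Delta _{L}}$ and ${\Delta _{U}}$:

```python
import math

# Numerically evaluate the bound (4.7) on g_alpha by a grid search over eta.
def g_alpha_bound(alpha, delta_L, delta_U, psi_star=lambda u: 0.5 * u ** 2):
    ratio = psi_star(delta_U) / psi_star(delta_L)
    def objective(eta):
        k = math.ceil(math.log(ratio) / math.log(eta))  # ceil(log_eta ratio)
        return eta * (math.log(1 / alpha) + math.log(1 + k))
    etas = [1.0 + 0.001 * i for i in range(1, 5000)]  # grid over eta in (1, 6)
    return min(objective(eta) for eta in etas)

b = g_alpha_bound(alpha=1e-3, delta_L=0.02, delta_U=0.41)
print(b)  # moderately above log(1/alpha) ~ 6.9, as expected
```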
The bound on worst average delays in Theorem 4.3 is obtained by analyzing an auxiliary stopping time
(4.8)
\[ {\bar{N}_{g}}:=\inf \Bigg\{n\ge 1:\underset{\lambda \in ({\lambda _{L}},{\lambda _{U}})}{\sup }{\sum \limits_{i=1}^{n}}\log {L_{i}^{(\lambda )}}\ge g\Bigg\},\hspace{1em}g\gt 1.\]
Using the same arguments as in the proof of Proposition 2.14, we immediately have that if the post-change observations are iid from Q, then, for any $g\gt 1$,
(4.9)
\[ {\mathbb{E}_{0,Q}}{\bar{N}_{g}}\le \frac{g}{D(Q||\mathcal{P})}+\frac{{\mathbb{V}_{0,Q}}[\log {L_{1}^{({\lambda ^{\mathrm{op}}})}}]}{{[D(Q||\mathcal{P})]^{2}}}+1.\]
The bound (4.6) is finally established by showing that the stopping time ${\bar{N}_{{g_{\alpha }}}}$, obtained by using the threshold ${g_{\alpha }}$ produced by Algorithm 3, is a deterministic upper bound on the stopping times ${N_{{c_{\alpha }}}}$ and ${N_{1/\alpha }}$ corresponding to mixtures of SR and CUSUM e-detectors. In detail, for any stream of observations ${X_{1}},{X_{2}},\dots $ , it holds that
(4.10)
\[ \max \big\{{N_{{c_{\alpha }}}},{N_{1/\alpha }}\big\}\le {\bar{N}_{{g_{\alpha }}}}.\]
This nontrivial result is formally stated in Lemma B.1 in Appendix B.1. Its proof leverages geometric arguments used in [36, Theorem 2] to analyze sequential generalized likelihood ratio tests.
4.2 Non-separated Pre- and Post-change Distributions
The previous subsection discussed how to build mixtures of e-SR and e-CUSUM procedures with an explicit upper bound on worst average delays when we have known, positive boundary values ${\lambda _{L}}$ and ${\lambda _{U}}$ on the unknown ${\lambda ^{\mathrm{op}}}$, via the knowledge that ${\Delta _{L}}\lt {\Delta ^{\mathrm{op}}}\lt {\Delta _{U}}$. However, in many cases, we may not be fully certain about the boundary values. In this subsection, we discuss how to generalize the previous argument to the non-separated case, in which we only know the sign of ${\lambda ^{\mathrm{op}}}$ ($\gt 0$) but do not have specific boundary values.
Recall that, for the well-separated case, we calibrated the mixtures of finitely many exponential baseline e-detectors using the stopping time ${\bar{N}_{{g_{\alpha }}}}$ in (4.8), which is in turn based on the maximum of the underlying baseline increments over the known range of ${\lambda ^{\mathrm{op}}}$. Since we no longer have knowledge of the boundary values ${\lambda _{L}}$ and ${\lambda _{U}}$, we may use similar stopping times in which the range of the maximum and the threshold slowly increase over time. In this case, we need an infinite sequence of baseline increments ${\{L(k)\}_{k\in \mathbb{N}}}$ and mixing weights ${\{{\omega _{k}}\}_{k\in \mathbb{N}}}$ to build adaptive e-SR and e-CUSUM procedures.
The bound in Theorem 3.4, along with the fact that ${\gamma _{j}}\ge 1$ for all $j\in \mathbb{N}$, implies that, for any given scheduling function $K:\mathbb{N}\to \mathbb{N}$, if the post-change observations form a strongly stationary process, then the worst average delays for the adaptive e-SR and e-CUSUM procedures can be upper bounded by ${\min _{j\ge 1}}[{\mathbb{E}_{0,Q}}{N_{1/\alpha }}(j)+j-1]$ and ${\min _{j\ge 1}}[{\mathbb{E}_{0,Q}}{N_{{c_{\alpha }}}}(j)+j-1]$, respectively, where we recall that ${N_{c}}(j)$ is defined for $c\gt 0$ by
(4.11)
\[ {N_{c}}(j):=\inf \Bigg\{n\ge 1:{\sum \limits_{k=1}^{K(j)}}{\omega _{k}}{\prod \limits_{i=1}^{n}}{L_{i}}(k)\ge c\Bigg\}.\]
Again, since we can set the threshold for the e-CUSUM procedure in such a manner that ${c_{\alpha }}\le 1/\alpha $ (so that ${\mathbb{E}_{0,Q}}{N_{{c_{\alpha }}}}(j)\le {\mathbb{E}_{0,Q}}{N_{1/\alpha }}(j)$), in this subsection we focus on constructing a set of baseline increments for which we can derive a tight upper bound on ${\min _{j\ge 1}}[{\mathbb{E}_{0,Q}}{N_{1/\alpha }}(j)+j-1]$.
To derive the set of baseline increments, we use a time-varying boundary function g. Here, we intentionally overload notation: the constant g used in the previous subsection for the well-separated case can be viewed as a constant function in what follows. Let $g:[1,\infty )\to [0,\infty )$ be a nonnegative, nondecreasing continuous function such that the mapping $t\mapsto g(t)/t$ is nonincreasing and ${\lim \nolimits_{t\to \infty }}g(t)/t=0$. For a chosen positive number ${\Delta _{0}}\gt 0$, let
\[ {D_{0}}:={\psi ^{\ast }}({\Delta _{0}})\hspace{2.5pt}\text{and}\hspace{2.5pt}{V_{0}}:=\inf \{t\ge 1:{D_{0}}\ge g(t)/t\}.\]
Now, for any fixed $\eta \gt 1$ and $j\in \mathbb{N}$, define ${\Delta _{1}}\gt {\Delta _{2}}\gt \cdots \hspace{0.1667em}$ as the positive solutions of the equations
(4.12)
\[ {\psi ^{\ast }}({\Delta _{k}})=\frac{g({V_{0}}{\eta ^{k}})}{{V_{0}}{\eta ^{k}}},\hspace{2.5pt}\hspace{2.5pt}k=0,1,2,\dots .\]
Finally, based on the sequence ${\{{\Delta _{k}}\}_{k\ge 0}}$, define
\[ {\lambda _{k}}:=\nabla {\psi ^{\ast }}({\Delta _{k}}),\hspace{1em}k=0,1,2,\dots ,\]
and set ${\omega _{0}}:={\alpha ^{-1}}{e^{-g({V_{0}})}}1(g({V_{0}})\gt {v_{\min }}{D_{0}})$ and ${\omega _{k}}:={\alpha ^{-1}}{e^{-g({V_{0}}{\eta ^{k}})/\eta }}$ for each $k\in \mathbb{N}$, where ${v_{\min }}:={\min _{x}}v(x)$, recalling the function v from Definition 4.1.
Based on the quantities defined above, we can construct the stopping time ${N_{1/\alpha }}(j)$ for each j. The following lemma shows that we can upper bound the stopping time ${N_{1/\alpha }}(j)$ by another stopping time ${\bar{N}_{g}}(j)$, whose expectation admits an explicit upper bound.
Lemma 4.5.
For any fixed $j\ge 1$, ${\Delta _{0}}\gt 0$, and tuning parameter $\eta \gt 1$, let ${N_{1/\alpha }}(j)$ be the stopping time based on the parameters defined above. Then, we have
where ${\bar{N}_{g}}(j)$ is a stopping time defined by
Note that the chosen set of weights ${\{{\omega _{k}}\}_{k\ge 0}}$ yields valid adaptive e-SR and e-CUSUM procedures if
(4.15)
\[ {e^{-g({V_{0}})}}1\big(g({V_{0}})\gt {v_{\min }}{D_{0}}\big)+{\sum \limits_{k=1}^{\infty }}{e^{-g({V_{0}}{\eta ^{k}})/\eta }}\le \alpha .\]
Once the above condition is satisfied, we can use the worst average delay analysis in Section 3.2 together with the bound in Lemma 4.5 to obtain an explicit upper bound on the worst average delays of the adaptive e-SR and e-CUSUM procedures.
In detail, let ${j^{\mathrm{op}}}$ be the smallest integer satisfying ${\lambda _{K({j^{\mathrm{op}}})}}\lt {\lambda ^{\mathrm{op}}}$ and set ${K^{\mathrm{op}}}:=K({j^{\mathrm{op}}})$. If we also have ${\lambda ^{\mathrm{op}}}\lt {\lambda _{0}}$, then Lemma 4.5 implies
(4.16)
\[ {\mathbb{E}_{0,Q}}{N_{1/\alpha }}\big({j^{\mathrm{op}}}\big)\le {\mathbb{E}_{0,Q}}{\bar{N}_{g}}\big({j^{\mathrm{op}}}\big)\le {\mathbb{E}_{0,Q}}{N_{\mathrm{op}}},\]
where the stopping time ${N_{\mathrm{op}}}$ is defined by
(4.17)
\[ {N_{\mathrm{op}}}:=\inf \Bigg\{n\ge 1:{\sum \limits_{i=1}^{n}}\log {L_{i}^{({\lambda ^{\mathrm{op}}})}}\ge g\big({V_{0}}{\eta ^{{K^{\mathrm{op}}}}}\big)\Bigg\},\]
and the expectation ${\mathbb{E}_{0,Q}}{N_{\mathrm{op}}}$ is typically of order $g({V_{0}}{\eta ^{{K^{\mathrm{op}}}}})/D(Q||\mathcal{P})$. Based on this observation, in the rest of this subsection we introduce a practical and interpretable way to choose a boundary function g and the related tuning parameters so as to minimize the leading term $g({V_{0}}{\eta ^{{K^{\mathrm{op}}}}})$ while satisfying condition (4.15) on the set of mixing weights.
First note that, although we have no bounds on ${\Delta ^{\mathrm{op}}}$ in the non-separated case, we can still choose ${\Delta _{L}}$ and ${\Delta _{0}}$ with ${\Delta _{L}}\lt {\Delta _{0}}$ as tuning parameters representing our initial guess of the range of the unknown ${\Delta ^{\mathrm{op}}}$. Since the unknown parameter ${\Delta ^{\mathrm{op}}}$ of the post-change distribution may fall outside the interval $({\Delta _{L}},{\Delta _{0}})$, instead of assigning the entire α to the inside of the guessed interval, we split it into two parts, $r\alpha $ and $(1-r)\alpha $, where $r\in (0,1)$ is another tuning parameter called the importance weight. Roughly speaking, a larger r means a higher bet that the unknown ${\Delta ^{\mathrm{op}}}$ lies inside our chosen boundaries $({\Delta _{L}},{\Delta _{0}})$.
Now, given the tuning parameters ${\Delta _{L}},{\Delta _{0}}$ and r, we compute the set $\{{g_{r\alpha }},{K_{L}},\eta \}$ by executing the function computeBaseline, just as in Algorithm 1, except that α is replaced by $r\alpha $. Then, we can extend the boundary function g to accommodate the case in which the unknown ${\Delta ^{\mathrm{op}}}$ is not inside the initial interval we guessed. To be specific, we use the boundary function
(4.18)
\[ t\in [1,\infty )\mapsto g(t):={g_{r\alpha }}+s\eta \log \bigg(1+{\log _{\eta }}\bigg(\frac{t}{{V_{0}}{\eta ^{{K_{L}}}}}\vee 1\bigg)\bigg),\]
where ${V_{0}}:={g_{r\alpha }}/{D_{0}}$ and $s\gt 1$ is a constant obtained as the solution of the equation
(4.19)
\[ \begin{aligned}{}\zeta (s)-1& :={\sum \limits_{k=1}^{\infty }}\frac{1}{{(1+k)^{s}}}\\ {} & ={e^{{g_{r\alpha }}/\eta }}\big[\alpha -\big\{{e^{-{g_{r\alpha }}}}1({g_{r\alpha }}\gt {D_{0}})+{K_{L}}{e^{-{g_{r\alpha }}/\eta }}\big\}\big].\end{aligned}\]
Note that the right-hand side of the above equation is approximately equal to $(1-r)\alpha {e^{{g_{r\alpha }}/\eta }}$. Therefore,
(4.20)
\[ s\approx {\zeta ^{-1}}\big(1+(1-r)\alpha {e^{{g_{r\alpha }}/\eta }}\big).\]
In Algorithm 2, we provide the detailed steps for the adaptive e-SR procedure based on the boundary function in (4.18). The algorithm can be easily modified for the adaptive e-CUSUM procedure by replacing the update in Line 15 with the rule
(4.21)
\[ {M_{n}^{\mathrm{CU}}}(k)\gets \exp \big\{{\lambda _{k}}s({X_{n}})-{\psi _{k}}v({X_{n}})\big\}\cdot \max \big\{{M_{n-1}^{\mathrm{CU}}}(k),\gamma \big\}.\]
Also, for the adaptive e-CUSUM procedure, we can replace the threshold $1/\alpha $ with a smaller value ${c_{\alpha }}$ if we have enough information about the pre-change distribution.
In terms of computational complexity, in Algorithm 2 we set the scheduling function $K:\mathbb{N}\to \mathbb{N}$ as
where $m\ge 1$ is a tuning parameter. Therefore, for both adaptive e-SR and e-CUSUM procedures, updates of the statistics have $O(m{\log _{\eta }}n)$ time and space complexity at each time n. Although this is not a fully online algorithm, logarithmic time and space complexities make it feasible to run the adaptive e-SR and e-CUSUM procedures in most practical online settings.
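The boundary function (4.18) and the weight condition (4.15) can be checked numerically. Below is a sketch in which all parameter values (${g_{r\alpha }}$, η, s, ${K_{L}}$, ${D_{0}}$, ${v_{\min }}$) are hypothetical; the infinite sum in (4.15) is truncated, which is harmless since the tail is summable for $s\gt 1$:

```python
import math

# Extended boundary function from (4.18).
def boundary(t, g_ra, s, eta, V0, K_L):
    inner = max(t / (V0 * eta ** K_L), 1.0)
    return g_ra + s * eta * math.log(1.0 + math.log(inner) / math.log(eta))

alpha = 1e-3
g_ra, eta, s, K_L = 25.0, 1.5, 2.0, 30  # hypothetical tuning values
D0, vmin = 0.5, 0.0
V0 = g_ra / D0

# Left-hand side of the weight condition (4.15), truncated at k = 1499.
g0 = boundary(V0, g_ra, s, eta, V0, K_L)
lhs = math.exp(-g0) * (1.0 if g0 > vmin * D0 else 0.0)
lhs += sum(math.exp(-boundary(V0 * eta ** k, g_ra, s, eta, V0, K_L) / eta)
           for k in range(1, 1500))
print(lhs <= alpha)  # the chosen weights are valid at level alpha
```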
From Section 3.1, we know that both procedures control the ARL by $1/\alpha $. The following corollary gives explicit bounds on the worst average delays for both procedures.
Corollary 4.6.
Let ${N_{\mathrm{aSR}}^{\ast }}$ and ${N_{\mathrm{aCU}}^{\ast }}$ be the stopping times corresponding to the adaptive e-SR procedure in Algorithm 2 and its e-CUSUM variant, respectively. Then, both procedures control the ARL by $1/\alpha $. If we further assume that the post-change observations ${X_{\nu +1}},{X_{\nu +2}},\dots $ are iid samples from a post-change distribution Q, then the worst average delays for ${N_{\mathrm{aSR}}^{\ast }}$ and ${N_{\mathrm{aCU}}^{\ast }}$ can be upper bounded as
(4.23)
\[ \begin{aligned}{}& \max \big\{{\mathcal{J}_{L}}\big({N_{\mathrm{aSR}}^{\ast }}\big),{\mathcal{J}_{L}}\big({N_{\mathrm{aCU}}^{\ast }}\big)\big\}\\ {} & \le \left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{g_{r\alpha }}}{D(Q||\mathcal{P})}\frac{{\psi ^{\ast }}({\Delta ^{\mathrm{op}}})}{{\psi ^{\ast }}({\Delta _{0}})}\hspace{1em}\\ {} +\frac{{\mathbb{V}_{0,Q}}[\log {L_{1}^{({\lambda _{0}})}}]}{{[D(Q||\mathcal{P})]^{2}}}{[\frac{{\psi ^{\ast }}({\Delta ^{\mathrm{op}}})}{{\psi ^{\ast }}({\Delta _{0}})}]^{2}}+1\hspace{1em}& \textit{if}\hspace{2.5pt}{\Delta ^{\mathrm{op}}}\ge {\Delta _{0}}\\ {} \frac{{g_{r\alpha }}}{D(Q||\mathcal{P})}+\frac{{\mathbb{V}_{0,Q}}[\log {L_{1}^{({\lambda ^{\mathrm{op}}})}}]}{{[D(Q||\mathcal{P})]^{2}}}+1\hspace{1em}& \textit{if}\hspace{2.5pt}{\Delta ^{\mathrm{op}}}\hspace{-0.1667em}\in \hspace{-0.1667em}({\Delta _{L}},{\Delta _{0}})\\ {} \frac{{g_{r\alpha }}+s\eta \log (1+{K^{\mathrm{op}}}-{K_{L}})}{D(Q||\mathcal{P})}\hspace{1em}\\ {} +\frac{{\mathbb{V}_{0,Q}}[\log {L_{1}^{({\lambda ^{\mathrm{op}}})}}]}{{[D(Q||\mathcal{P})]^{2}}}\hspace{1em}\\ {} +{[\frac{{\psi ^{\ast }}({\Delta _{L}})}{{\psi ^{\ast }}({\Delta ^{\mathrm{op}}})}\frac{{g_{r\alpha }}+s\eta \log (1+{K^{\mathrm{op}}}-{K_{L}})}{{g_{r\alpha }}}]^{1/m}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{1em}& \textit{if}\hspace{2.5pt}{\Delta ^{\mathrm{op}}}\le {\Delta _{L}}.\end{array}\right.\end{aligned}\]
Note that $\eta ,s\gt 1$ and $r\in (0,1)$ do not depend on the unknown ${\Delta ^{\mathrm{op}}}$.
5 Application to Real Data and Simulation Study
5.1 Bernoulli Random Variables with Dependent, Time-varying Means
Winning Rates of the Cavaliers To illustrate how sequential change detection procedures based on e-detectors work, we revisit the example of the Cleveland Cavaliers, an American professional basketball team, introduced in Section 1.2. Instead of using Plus-Minus stats, in this example we monitor the performance of the Cavaliers by keeping track of wins and losses over all games. Let ${X_{1}},{X_{2}},\dots \in \{0,1\}$ be the sequence of win indicators during the 2010–11 to 2017–18 regular seasons, where ${X_{i}}=1$ if the Cavaliers won game i. Though Figure 2 presents monthly and seasonal averages for the purpose of visualization, we use the underlying binary sequence to build the sequential change detection procedure.
Modeling Winning Probabilities as a Dependent Sequence of Bernoullis To detect a significant improvement in the performance of the Cavaliers, we assume that, before an unknown changepoint $\nu \in \mathbb{N}\cup \{\infty \}$, the conditional winning probability given the sample history is less than or equal to ${p_{0}}:=0.49$. That is, under any pre-change distribution P we have ${p_{n}}:={\mathbb{E}_{P,\infty }}[{X_{n}}\mid {\mathcal{F}_{n-1}}]\le {p_{0}}$. (For simplicity, $\mathcal{F}$ is taken to be the natural filtration of the data.) Thus, the pre-change class of distributions is
where we parameterize each distribution P over binary sequences by the sequence of conditional probabilities.
Our objective is to build mixtures of e-SR and e-CUSUM procedures tuned to quickly detect any significantly improved win rate larger than ${q_{0}}:=0.51$ after the changepoint. This can be modeled by assuming that after some changepoint ν, the distribution Q is such that ${\mathbb{E}_{P,\nu ,Q}}[{X_{n}}\mid {\mathcal{F}_{n-1}},n\gt \nu ]:={q_{n}}\ge {q_{0}}$. Thus, we may think of the post-change class of distributions as being
In particular, this formalization allows for the winning probabilities to fluctuate over time before and after the changepoint (accounting for factors like form, injuries, etc.).
Deriving Exponential Baseline Processes For each $\lambda \gt 0$, define a baseline increment process ${L^{(\lambda )}}:={\{{L_{n}^{(\lambda )}}\}_{n\ge 1}}$ as
\[ {L_{n}^{(\lambda )}}:=\exp \big\{\lambda ({X_{n}}-{p_{0}})-{\psi _{B}}(\lambda )\big\},\]
where ${\psi _{B}}(\lambda ):=\log (1-{p_{0}}+{p_{0}}{e^{\lambda }})-\lambda {p_{0}}$ is the Bernoulli cumulant generating function. Note that each ${L^{(\lambda )}}$ is a valid baseline increment, as it satisfies inequality (2.6) in Definition 2.8. That is, under any pre-change distribution P, we have
\[\begin{aligned}{}& {\mathbb{E}_{P,\infty }}\big[{L_{n}^{(\lambda )}}\mid {\mathcal{F}_{n-1}}\big]\\ {} & ={\mathbb{E}_{P,\infty }}\big[\exp \big\{\lambda {X_{n}}-\log \big(1-{p_{0}}+{p_{0}}{e^{\lambda }}\big)\big\}\mid {\mathcal{F}_{n-1}}\big]\\ {} & =\frac{{\mathbb{E}_{P,\infty }}[{e^{\lambda {X_{n}}}}\mid {\mathcal{F}_{n-1}}]}{1-{p_{0}}+{p_{0}}{e^{\lambda }}}=\frac{1-{p_{n}}+{p_{n}}{e^{\lambda }}}{1-{p_{0}}+{p_{0}}{e^{\lambda }}}\le 1,\hspace{2.5pt}\hspace{2.5pt}\forall \lambda \ge 0.\end{aligned}\]
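The expectation computed in the display above has a simple closed form, which makes the e-detector property easy to verify numerically. A minimal check, using the ${p_{0}}=0.49$ from the text and an arbitrary λ:

```python
import math

# Conditional expectation of the Bernoulli baseline increment when the
# current winning probability is p:
# E_p exp{lam X - log(1 - p0 + p0 e^lam)} = (1 - p + p e^lam) / (1 - p0 + p0 e^lam).
def expected_increment(p, p0, lam):
    return (1 - p + p * math.exp(lam)) / (1 - p0 + p0 * math.exp(lam))

p0, lam = 0.49, 0.3
print(expected_increment(p0, p0, lam))   # exactly 1 at the boundary p = p0
print(expected_increment(0.40, p0, lam)) # strictly below 1 for p < p0
```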
To derive exponential baseline processes, we first consider a simplified post-change distribution Q under which each post-change observation is identically distributed with ${\mathbb{E}_{0,Q}}[{X_{1}}]:=q\ge {q_{0}}\gt {p_{0}}$. In this case, the optimal choice of $\lambda \ge 0$ is given by
(5.2)
\[ {\lambda ^{\mathrm{op}}}:=\underset{\lambda \ge 0}{\operatorname{arg\,max}}{\mathbb{E}_{0,Q}}\exp \big\{\lambda ({X_{1}}-{p_{0}})-{\psi _{B}}(\lambda )\big\}.\]
Since the baseline increment has the exponential structure, Proposition 4.2 yields
(5.3)
\[ D(Q||\mathcal{P}):={\mathbb{E}_{0,Q}}\log {L_{1}^{({\lambda ^{\mathrm{op}}})}}={\psi _{B}^{\ast }}(q-{p_{0}})=\mathrm{KL}(q||{p_{0}}),\]
where $\mathrm{KL}(q||{p_{0}})$ is the Kullback-Leibler (KL) divergence between Bernoulli distributions with parameters q and ${p_{0}}$, written as
\[ \mathrm{KL}(q||{p_{0}})=q\log \frac{q}{{p_{0}}}+(1-q)\log \frac{1-q}{1-{p_{0}}},\]
for $q,{p_{0}}\in (0,1)$. The appearance of the KL divergence in (5.3) is not a coincidence, as the baseline increment can be viewed as a re-parametrized likelihood ratio between two Bernoulli processes. However, the simple geometric structure of the baseline increment makes it possible to utilize prior knowledge about the post-change distribution via Algorithms 1 and 2.
For instance, suppose we know upper and lower bounds on the conditional means of the post-change distribution, i.e., ${q_{n}}\in ({q_{L}},{q_{U}}),\forall n\gt \nu $. Let ${N_{\mathrm{mSR}}^{\ast }}$ and ${N_{\mathrm{mCU}}^{\ast }}$ be the stopping times of the mixtures of e-SR and e-CUSUM procedures in Algorithm 1. In this case, the derived sequential change detection procedures do not rely on a specific choice of post-change distribution $Q\in \mathcal{Q}$. However, these procedures can still perform almost as well as one optimized for a specific post-change distribution within the same range $({q_{L}},{q_{U}})$. Specifically, if the post-change observations are iid samples from a post-change distribution Q with ${\mathbb{E}_{0,Q}}[{X_{1}}]:=q\in ({q_{L}},{q_{U}})$, then by Theorem 4.3, the worst average delays have the following explicit bound:
\[\begin{aligned}{}& \max \big\{{\mathcal{J}_{L}}\big({N_{\mathrm{mSR}}^{\ast }}\big),{\mathcal{J}_{L}}\big({N_{\mathrm{mCU}}^{\ast }}\big)\big\}\\ {} & \le \frac{{g_{\alpha }}}{\mathrm{KL}(q||{p_{0}})}+\frac{q(1-q){[\log (\frac{1-{p_{0}}}{{p_{0}}}\frac{q}{1-q})]^{2}}}{{[\mathrm{KL}(q||{p_{0}})]^{2}}}+1.\end{aligned}\]
For small $\alpha \ll 1$, Proposition B.2 lets us simplify the above upper bound to
\[\begin{aligned}{}& \max \big\{{\mathcal{J}_{L}}\big({N_{\mathrm{mSR}}^{\ast }}\big),{\mathcal{J}_{L}}\big({N_{\mathrm{mCU}}^{\ast }}\big)\big\}\\ {} & \lessapprox \frac{{\inf _{\eta \gt 1}}\eta [\log (1/\alpha )+\log (1+\lceil {\log _{\eta }}\frac{\mathrm{KL}({q_{U}}||{p_{0}})}{\mathrm{KL}({q_{L}}||{p_{0}})}\rceil )]}{\mathrm{KL}(q||{p_{0}})},\end{aligned}\]
which matches the rate, $O(\log (1/\alpha )/\mathrm{KL}(q||{p_{0}}))$, of the worst average delay of the oracle sequential change detection procedure as $\alpha \to 0$.
Figure 2
Left: Monthly win rates of the Cavaliers from 2010–11 to 2017–18 seasons (the raw data is Bernoulli, which is harder to visualize). Each red line corresponds to the seasonal average. Right: Paths of log e-detectors (SR: red; CUSUM: green). The horizontal line is the threshold (common to both procedures) equal to $\log (1/\alpha )$, ensuring that the ARL is at least $1/\alpha ={10^{3}}$, larger than the number of games in 12 seasons (82 per season). The e-SR procedure detects a changepoint during the 2014–15 season.
Implementation of Algorithm 1 and Its Results The lower bound ${q_{L}}$ can be chosen as ${q_{0}}=0.51$ since it is the minimum winning rate we consider as a significant improvement from before the changepoint, when the rates are upper bounded by ${p_{0}}=0.49$. We can also safely assume that the win rate cannot be too high given the competitiveness of the NBA, so that the improved win rates cannot be larger than 0.9. In our framework, these considerations can be encoded by setting ${\Delta _{L}}:={q_{0}}-{p_{0}}=0.02$ and ${\Delta _{U}}:=0.41$ as input parameters of Algorithm 1. As in Section 1.2, we set $\alpha :={10^{-3}}$ to ensure that the ARL is at least $1/\alpha :={10^{3}}$, which is more than the total number of games over 12 years of regular seasons. Finally, we set the maximum number of baselines ${K_{\max }}:=1000$. In fact, the computeBaseline function of Algorithm 3 returns only 69 baseline processes, and thus the resulting mixtures of e-SR and e-CUSUM procedures of Algorithm 1 can be computed efficiently in an online fashion.
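As a rough sanity check on the detection times above, the leading term $\log (1/\alpha )/\mathrm{KL}(q||{p_{0}})$ of the delay bound can be evaluated directly; here ${p_{0}}=0.49$ and $\alpha ={10^{-3}}$ are the values from the text, while $q=0.65$ is a hypothetical post-change win rate:

```python
import math

# Bernoulli KL divergence KL(q || p), the key quantity in (5.3).
def bernoulli_kl(q, p):
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

alpha, p0, q = 1e-3, 0.49, 0.65
leading = math.log(1 / alpha) / bernoulli_kl(q, p0)
print(leading)  # rough order of the detection delay, in games
```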
The right plot in Figure 2 presents the log e-detector values for the mixtures of e-SR (red) and e-CUSUM (green) procedures. Although there were a few months in which the monthly win rate was higher than ${p_{0}}$, the log e-detectors remained at a stable level over the first four seasons. However, once the 2014–15 season starts, the log e-detectors increase rapidly and both procedures detect a changepoint during the 2014–15 season, the season that marked the return of LeBron James to the Cavaliers.
5.2 Mean-shift Detection in General Bounded Random Variables
Plus-Minus of the Cavaliers Revisited We return to the Cavaliers 2011–2018 example from Section 1.2. Let ${\widetilde{X}_{1}},{\widetilde{X}_{2}},\dots \hspace{0.1667em}$ be the sequence of Plus-Minus stats from each game. We assume that the average Plus-Minus of the team is less than or equal to ${\mu _{\lt }}:=-1$ before the changepoint (if any), while after the changepoint it is greater than ${\mu _{\gt }}:=1$. Here, the gap $|{\mu _{\gt }}-{\mu _{\lt }}|$ between the pre- and post-change averages of the Plus-Minus stats represents the degree of improvement we consider significant.
For convenience, we first normalize the observed sequence. We assume that the absolute value of each Plus-Minus is bounded by 80, meaning that no team beats another by over 80 points (such an extreme game has never happened in NBA history). Accordingly, define the normalized Plus-Minus, ${X_{n}}:=({\widetilde{X}_{n}}+80)/160\in [0,1]$ for each n. Then, the pre-change observations have conditional mean at most $m:=({\mu _{\lt }}+80)/160=0.494$ and the minimum gap to detect is equal to $\delta :=|{\mu _{\gt }}-{\mu _{\lt }}|/160=0.0125$.
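The normalization above is a single affine map, which can be checked directly (0.494 in the text is the rounded value of $79/160=0.49375$):

```python
# Normalized Plus-Minus: X_n = (X~_n + 80) / 160 maps [-80, 80] into [0, 1].
def normalize(x_tilde):
    return (x_tilde + 80) / 160

m = normalize(-1)            # pre-change conditional-mean bound
delta = abs(1 - (-1)) / 160  # minimum detectable gap delta
print(m, delta)  # 0.49375 (reported as 0.494) and 0.0125
```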
Modeling Plus-Minus Stats as a Bounded Sequence with Time-varying, Dependent Means After the normalization above, the Plus-Minus stats form a sequence of bounded random variables ${X_{1}},{X_{2}},\dots $ on $[0,1]$. Each observation may have a different distribution (due to seasonal effects, injuries, form, etc.), but we assume that all observations before an unknown changepoint ν have a conditional mean, given the past sample history, less than or equal to a known boundary $m\in (0,1)$. That is, under any pre-change distribution P, we have ${\mu _{n}}:={\mathbb{E}_{P,\infty }}[{X_{n}}\mid {\mathcal{F}_{n-1}}]\le m$, $\forall n\ge 1$. In other words, we use
where other characteristics about P (outside of its sequence of conditional means) are irrelevant. But after the changepoint, all observations have (conditional) mean larger than the boundary m with the minimum gap equal to δ. Thus,
To build an e-SR or e-CUSUM procedure, we need to choose a baseline increment. To derive it, we first consider a simplified setting in which both pre- and post-change observations are independently and identically distributed with ${\mathbb{E}_{P,\infty }}[X]\le m$ and ${\mathbb{E}_{0,Q}}[X]\ge m+\delta $, respectively. In this simplified case, we use P and Q to denote the marginal pre- and post-change distributions, and $\mathcal{P}$ and $\mathcal{Q}$ to denote their collections. Then, define ${\mathrm{KL}_{\inf }}(Q,m):={\inf _{P\in \mathcal{P}}}\mathrm{KL}(Q,P)$ to be the smallest KL divergence between Q and $\mathcal{P}$. It is known (see, e.g., [9, 10]) that ${\mathrm{KL}_{\inf }}$ has the following variational representation:
(5.5)
\[ \begin{aligned}{}{\mathrm{KL}_{\inf }}(Q,m)& =\underset{\lambda \in (0,1)}{\sup }{\mathbb{E}_{0,Q}}\log \bigg(1+\lambda \bigg(\frac{X}{m}-1\bigg)\bigg)\\ {} & =:D(Q||\mathcal{P}).\end{aligned}\]
Accordingly, for each $\lambda \in (0,1)$, define the baseline increment ${L^{\lambda }}:={\{{L_{n}^{\lambda }}\}_{n\ge 1}}$ as
(5.6)
\[ {L_{n}^{\lambda }}:=1+\lambda \bigg(\frac{{X_{n}}}{m}-1\bigg),\]
for each $n\in \mathbb{N}$. Though the baseline increment above has been derived in the simplified iid setting, it can be checked that ${L^{\lambda }}$ is also a valid baseline increment in the general case of time-varying, dependent means, since it is nonnegative whenever ${X_{n}},m\in [0,1]$, as assumed in our setup, and, for each pre-change distribution $P\in \mathcal{P}$, we have
\[\begin{aligned}{}{\mathbb{E}_{P,\infty }}\big[{L_{n}^{\lambda }}\mid {\mathcal{F}_{n-1}}\big]& =1+\lambda \bigg(\frac{{\mathbb{E}_{P,\infty }}[{X_{n}}\mid {\mathcal{F}_{n-1}}]}{m}-1\bigg)\le 1,\end{aligned}\]
where the inequality comes from the condition ${\mu _{n}}\le m$ for any pre-change distribution.
Interestingly, the baseline increments in (5.6) correspond to rescaled increments of the capital process used in [48] to design test martingales for confidence sequences for means of bounded random variables. Though the expressions are essentially identical, ours was obtained via a variational representation of the KL divergence between distributions of bounded random variables, while the derivation in [48] is based on a betting interpretation of hypothesis testing.
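Both defining properties of the capital-process increment, nonnegativity on $[0,1]$ and conditional expectation at most 1, follow from linearity in x and can be checked in a few lines; m and λ below are hypothetical values (m matches the 0.494 used in this example):

```python
# Capital-process increment L_n^lambda = 1 + lambda (x / m - 1) from (5.6).
def increment(x, m, lam):
    return 1.0 + lam * (x / m - 1.0)

m, lam = 0.494, 0.4
# Linearity in x means checking the endpoints x = 0 and x = 1 suffices
# for nonnegativity on [0, 1].
worst = min(increment(0.0, m, lam), increment(1.0, m, lam))
# By linearity, E[L] = 1 + lam (E[X]/m - 1) <= 1 whenever E[X] <= m.
mean_x = 0.45  # hypothetical pre-change mean below the boundary m
expected = increment(mean_x, m, lam)
print(worst, expected)
```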
For any $Q\in \mathcal{Q}$, let ${\lambda ^{\mathrm{op}}}$ be the optimal choice of $\lambda \in [0,1]$ given by
(5.7)
\[ {\lambda ^{\mathrm{op}}}=\underset{\lambda \in [0,1]}{\operatorname{arg\,max}}{\mathbb{E}_{0,Q}}\log \bigg(1+\lambda \bigg(\frac{X}{m}-1\bigg)\bigg).\]
Unfortunately, it is typically difficult to compute the optimal ${\lambda ^{\mathrm{op}}}$, since it depends on the unknown post-change distribution Q in a complicated way. In this case, we use a sub-exponential lower bound from [5, 12], given by
(5.8)
\[ \begin{aligned}{}{\widetilde{L}_{n}^{\lambda }}& :=\exp \bigg\{\lambda \bigg(\frac{{X_{n}}}{m}-1\bigg)-{\psi _{E}}(\lambda ){\bigg(\frac{{X_{n}}}{m}-1\bigg)^{2}}\bigg\}\\ {} & \le 1+\lambda \bigg(\frac{{X_{n}}}{m}-1\bigg)={L_{n}^{\lambda }},\end{aligned}\]
where ${\psi _{E}}(\lambda ):=-\log (1-\lambda )-\lambda $ for $\lambda \in (0,1)$. For each $\lambda \in (0,1)$, the process ${\widetilde{L}^{\lambda }}$ is itself a valid exponential baseline increment with $s(x):=x/m-1$ and $v(x):={(x/m-1)^{2}}$.
The lower bound in (5.8) also implies the lower bound
(5.9)
\[ {\mathrm{KL}_{\inf }}(Q,m)\ge \underset{\lambda \in [0,1]}{\sup }\big\{\lambda \mu -{\psi _{E}}(\lambda ){\sigma ^{2}}\big\}={\sigma ^{2}}{\psi _{E}^{\ast }}\big({\Delta ^{\mathrm{op}}}\big),\]
where ${\psi _{E}^{\ast }}(u):=u-\log (1+u)$ is the convex conjugate of ${\psi _{E}}$, while $\mu ,{\sigma ^{2}}$ and ${\Delta ^{\mathrm{op}}}$ from Section 4 are:
\[\begin{aligned}{}\mu & :={\mathbb{E}_{0,Q}}s(X)=\frac{{\mathbb{E}_{0,Q}}X-m}{m},\\ {} {\sigma ^{2}}& :={\mathbb{E}_{0,Q}}v(X)=\frac{{\mathbb{E}_{0,Q}}{(X-m)^{2}}}{{m^{2}}},\\ {} {\Delta ^{\mathrm{op}}}& :=\frac{\mu }{{\sigma ^{2}}}=\frac{m[{\mathbb{E}_{0,Q}}X-m]}{{\mathbb{E}_{0,Q}}{(X-m)^{2}}}.\end{aligned}\]
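To make the domination in (5.8) and the moment formulas concrete, here is a small numerical sketch of our own; a Bernoulli(0.6) post-change distribution and $m=0.5$ are illustrative choices, not values from the paper:

```python
import numpy as np

# Verify pointwise that tilde-L^lambda <= L^lambda, and that Delta^op
# depends on Q only through its first and second moments.
rng = np.random.default_rng(1)
m, q = 0.5, 0.6
X = rng.binomial(1, q, size=100_000).astype(float)
s = X / m - 1                          # s(x) = x/m - 1

def psi_E(lam):
    # psi_E(lambda) = -log(1 - lambda) - lambda
    return -np.log1p(-lam) - lam

for lam in (0.2, 0.5, 0.8):
    L = 1 + lam * s
    L_tilde = np.exp(lam * s - psi_E(lam) * s**2)
    assert np.all(L_tilde <= L + 1e-12)   # the lower bound in (5.8)

mu = s.mean()                # sample analogue of (E X - m)/m
sigma2 = np.mean(s**2)       # sample analogue of E(X - m)^2 / m^2
delta_op = mu / sigma2       # population value here is 2*(q - m) = 0.2
assert abs(delta_op - 0.2) < 0.02
```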
Noting that ${\psi _{E}^{\ast }}(u)\approx {u^{2}}/2$ for small u, we see that for ${\Delta ^{\mathrm{op}}}\ll 1$, one has ${\mathrm{KL}_{\inf }}(Q,m)\gtrsim {\sigma ^{2}}{({\Delta ^{\mathrm{op}}})^{2}}/2={\mu ^{2}}/(2{\sigma ^{2}})$.
Note that the oracle ${\Delta ^{\mathrm{op}}}$ depends on the unknown post-change distribution only via its first and second moments. Therefore, in contrast to the original set of baseline increments ${\{{L^{\lambda }}\}_{\lambda \in (0,1)}}$, the exponential baseline increments ${\{{\widetilde{L}^{\lambda }}\}_{\lambda \in (0,1)}}$ that lower bound them allow us to more easily set a range $({\Delta _{L}},{\Delta _{U}})$ to build mixtures of the e-SR and e-CUSUM procedures. For example, if we assume that the post-change distribution has mean at least $m+\delta $ for a positive δ, then we can lower and upper bound ${\Delta ^{\mathrm{op}}}$ by
\[ {\Delta _{L}}:=\frac{m\delta }{\max {\{m,1-m\}^{2}}}\le {\Delta ^{\mathrm{op}}}\le \frac{m(1-m)}{{\delta ^{2}}}=:{\Delta _{U}},\]
since $|X-m|\le \max \{m,1-m\}$ for $X\in [0,1]$ and ${\mathbb{E}_{0,Q}}{(X-m)^{2}}\ge {({\mathbb{E}_{0,Q}}X-m)^{2}}\ge {\delta ^{2}}$.
Now, given ${\Delta _{L}}$ and ${\Delta _{U}}$, we can use Algorithm 1 to run the mixture of e-SR or e-CUSUM procedures to detect the changepoint based on the exponential baseline increments ${\{{\widetilde{L}^{\lambda }}\}_{\lambda \in (0,1)}}$. It is also straightforward to build the corresponding mixtures of e-SR and e-CUSUM procedures for the original baseline increments ${\{{L^{\lambda }}\}_{\lambda \in (0,1)}}$, which are always more sample-efficient.
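To illustrate, here is a minimal sketch of our own of mixture e-SR and e-CUSUM updates over a uniform grid of λ values using the exponential baseline increments. It is not the paper's Algorithm 1 (in particular, we use a plain uniform grid and uniform mixture weights rather than computeBaseline), and the data-generating choices are arbitrary; the recursions are our paraphrase of the SR and CUSUM updates in (2.5) for a common increment:

```python
import numpy as np

def psi_E(lam):
    # psi_E(lambda) = -log(1 - lambda) - lambda
    return -np.log1p(-lam) - lam

def first_alarm(xs, m, lambdas, alpha):
    """Uniform mixtures of e-SR and e-CUSUM over a lambda grid.

    Uses M_n = (1 + M_{n-1}) * e_n (SR) and M_n = max(M_{n-1}, 1) * e_n
    (CUSUM); returns the first time the SR mixture crosses 1/alpha,
    or None if it never does.
    """
    sr = np.zeros_like(lambdas)
    cu = np.zeros_like(lambdas)
    for t, x in enumerate(xs, start=1):
        s = x / m - 1
        e = np.exp(lambdas * s - psi_E(lambdas) * s**2)
        sr = (1 + sr) * e
        cu = np.maximum(cu, 1) * e
        if sr.mean() >= 1 / alpha:  # an average of e-detectors is an e-detector
            return t
    return None

rng = np.random.default_rng(2)
xs = np.concatenate([rng.binomial(1, 0.494, 200),   # pre-change segment
                     rng.binomial(1, 0.75, 200)])   # post-change segment
t = first_alarm(xs.astype(float), m=0.494,
                lambdas=np.linspace(0.05, 0.6, 20), alpha=1e-3)
assert t is not None
```

With these (arbitrary) choices the alarm typically fires shortly after the change at time 200.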
Figure 3
Left: E-detectors (SR: red; CUSUM: green) over eight seasons (82 games per season). Right: Logarithm of e-detectors against date (the sharp rise of e-SR is simply due to the log scale). In both plots, horizontal lines are thresholds equal to $1/\alpha $ (left) and $\log (1/\alpha )$ (right) controlling the ARL by $1/\alpha ={10^{3}}$. The e-SR procedure detects a change during the 2014–15 season, while e-CUSUM takes longer (as expected).
Implementation of Algorithm 1 and Its Results Recall that in the plus-minus stats running example, we use the pre-change mean $m=0.494$ and the minimum gap $\delta =0.0125$, which bounds ${\Delta ^{\mathrm{op}}}$ by ${\Delta _{L}}:=0.024$ and ${\Delta _{U}}:=\frac{m(1-m)}{{\delta ^{2}}}=1600$. As before, we choose $\alpha ={10^{-3}}$ to make the ARL larger than 12 regular seasons, and set the maximum number of baselines ${K_{\max }}=1000$. Based on these parameters, we can build mixtures of e-SR and e-CUSUM procedures. Though the gap between ${\Delta _{L}}$ and ${\Delta _{U}}$ may seem large, the actual number of baselines returned by the function computeBaseline in Algorithm 1 is 190, which is small enough to update the procedure efficiently on the fly.
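The two endpoints can be reproduced in a couple of lines (a sketch of our own; the formula for ${\Delta _{L}}$ below is our reading of how the value 0.024 arises from m and δ, while the ${\Delta _{U}}$ formula is stated in the text):

```python
# Reproduce Delta_L and Delta_U for the plus-minus example.
m, delta = 0.494, 0.0125
Delta_L = m * delta / max(m, 1 - m) ** 2   # assumed form; gives ~0.024
Delta_U = m * (1 - m) / delta ** 2         # as stated in the text
assert round(Delta_L, 3) == 0.024
assert round(Delta_U) == 1600
```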
Figure 3 shows the e-detectors (left) and their logarithms (right). The horizontal lines correspond to the detection boundaries $1/\alpha $ (left) and $\log (1/\alpha )$ (right). Similar to the winning rate example, the log e-detectors remained stable during the first four regular seasons, although the difference between the SR and CUSUM e-detectors is larger than before. After the 2014–15 season started, both e-detectors increased rapidly; the e-SR procedure detects a changepoint during the 2014–15 season, while e-CUSUM detects it only in the following season (as expected, since both procedures use the same threshold, but the CUSUM e-detector, a maximum rather than a sum of e-processes, is never larger than the SR e-detector).
5.3 Simulation-based Comparison with Parametric Methods
In the Bernoulli example of Section 5.1, we showed that, in the simple iid Bernoulli setup, our mixtures of e-SR and e-CUSUM procedures match the rate $O(\log (1/\alpha )/\mathrm{KL}(q||{p_{0}}))$ of the worst average delay of the oracle sequential change detection procedure as $\alpha \to 0$. In this subsection, we conduct a simulation study to compare the efficiency of our e-SR procedure with the oracle CUSUM procedure with the exact threshold [21, 31], given by
(5.12)
\[ \begin{aligned}{}{N_{\mathrm{CU}}^{\ast }}& :=\inf \Bigg\{n\ge 1:\underset{0\le j\lt n}{\max }{\sum \limits_{i=j}^{n}}\log \frac{{f_{{p^{\ast }}}}({X_{i}})}{{f_{{p_{0}}}}({X_{i}})}\ge {c_{\alpha }^{\ast }}\Bigg\},\end{aligned}\]
where ${p^{\ast }}$ is the true post-change distribution parameter (hence the oracle designation) and ${c_{\alpha }^{\ast }}$ is the value of the threshold so that the ARL is exactly $1/\alpha $. It is well known that the oracle CUSUM procedure (with the appropriate choice of the stopping threshold that controls the ARL exactly) minimizes the worst average delay.
We also compare our method with a version of the GLR procedure based on the stopping time
(5.13)
\[ \begin{aligned}{}{N_{\mathrm{GL}}^{\ast }}& :=\inf \Bigg\{n\ge 1:\underset{0\le j\lt n}{\max }\underset{p}{\sup }{\sum \limits_{i=j}^{n}}\log \frac{{f_{p}}({X_{i}})}{{f_{{p_{0}}}}({X_{i}})}\ge {c_{\alpha }}\Bigg\}\\ {} & =\inf \Bigg\{n\ge 1:\underset{0\le j\lt n}{\max }{\sum \limits_{i=j}^{n}}\log \frac{{f_{{\widehat{p}_{j:n}}}}({X_{i}})}{{f_{{p_{0}}}}({X_{i}})}\ge {c_{\alpha }}\Bigg\},\end{aligned}\]
where each ${\widehat{p}_{j:n}}$ is the MLE of the post-change parameter and the exact threshold ${c_{\alpha }}$ is tuned to control the ARL exactly at $1/\alpha $ (this is typically only possible in such simple parametric settings, either by analytic derivations or simulations). Unlike the oracle CUSUM procedure, the GLR procedure does not have an iterative update rule, as we need to recompute the MLE of the post-change parameter at each time. As a result, its computational cost at time n is $O(n)$, which makes an online implementation very costly. In practice, we may want to use a window-limited GLR procedure to overcome this computational challenge. However, in our study, we deploy the full GLR procedure to avoid the additional challenge of picking a window size.
Figure 4
Average detection delay for each changepoint $\nu =0,100,\dots ,500$ (each experiment has exactly one changepoint at ν). Three of the methods use an exact threshold calculated via simulation (only possible in this simple, parametric example). Only the Oracle CUSUM method knows the post-change distribution. Even though e-SR uses the conservative $\log (1/\alpha )$ threshold, its detection delay is excellent, often even better than the Oracle CUSUM method (which has optimal average worst-case (across ν) delay).
Figure 5
Pre-change false alarm rates of detection methods for each changepoint $\nu =0,100,\dots ,500$ (each experiment has exactly one changepoint at ν). Three of the methods use an exact threshold calculated via simulation (only possible in this simple, parametric example). Only the Oracle CUSUM method knows the post-change distribution exactly. e-SR has the smallest false alarm rate (defined in the text).
Simulation Details Throughout this simulation, we draw pre-change observations as iid Bernoulli random variables with ${p_{0}}=0.5$, and post-change observations using ${p_{1}}=0.6$. For non-oracle methods, we will only assume that the post-change parameter is known to be in the interval $[0.51,0.99]$. The e-SR and e-CUSUM procedures in Algorithm 1 will use this range to set ${\Delta _{L}}:=0.01$ and ${\Delta _{U}}:=0.49$. For the GLR procedure, the MLE of the post-change parameter is
\[ {\widehat{p}_{j:n}}=\min \big\{\max \big\{{\bar{X}_{j:n}},0.51\big\},0.99\big\},\]
where ${\bar{X}_{j:n}}$ is the sample average of the last $n-j+1$ observations and the clipping enforces the assumed range for the post-change parameter. Our ARL target is equal to $1/\alpha :=500$, and each simulation is repeated 5000 times to estimate average delays. The time of the changepoint ν varies in $\{0,100,\dots ,500\}$. For simplicity, each run ends no later than time $n=1000$.
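For concreteness, here is a compact sketch of our own of the two parametric competitors in this Bernoulli setting, highlighting that the oracle CUSUM statistic in (5.12) admits an O(1) per-step recursion while the GLR statistic in (5.13) rescans all candidate changepoints; the threshold value 6.0 below is an arbitrary placeholder rather than a calibrated one:

```python
import math
import random

p0, p_star = 0.5, 0.6   # pre- and post-change Bernoulli parameters

def llr(x, p, q):
    # log f_p(x) - log f_q(x) for a Bernoulli observation x in {0, 1}
    return x * math.log(p / q) + (1 - x) * math.log((1 - p) / (1 - q))

random.seed(0)
xs = [int(random.random() < p0) for _ in range(300)] + \
     [int(random.random() < p_star) for _ in range(700)]

# Oracle CUSUM: O(1) update of max over j of the post-j log-likelihood sum.
w, n_cusum = 0.0, None
for n, x in enumerate(xs, start=1):
    w = max(w, 0.0) + llr(x, p_star, p0)
    if n_cusum is None and w >= 6.0:   # 6.0 is an uncalibrated placeholder
        n_cusum = n

# GLR at a single time n: O(n) scan over candidate changepoints j,
# with the post-change MLE clipped to the assumed range [0.51, 0.99].
def glr_stat(xs, n):
    best = -math.inf
    for j in range(n):
        xbar = sum(xs[j:n]) / (n - j)
        p_hat = min(max(xbar, 0.51), 0.99)
        best = max(best, sum(llr(x, p_hat, p0) for x in xs[j:n]))
    return best

assert n_cusum is not None          # the recursion eventually raises an alarm
assert glr_stat(xs, len(xs)) > 0.0  # evidence accumulates after the change
```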
For the oracle CUSUM, GLR, and e-CUSUM procedures (but not e-SR), we use the same simulation setup to find the exact threshold value that controls the ARL at exactly $1/\alpha $ for each method. For the e-SR procedure, we simply use the universal threshold of $\log (1/\alpha )$ to demonstrate its efficiency. In particular, practitioners are not required to resort to expensive simulation to identify a good threshold value.
Figure 4 shows the average delays of the oracle CUSUM, GLR, e-CUSUM and e-SR procedures for each changepoint $\nu \in \{0,100,\dots ,500\}$ (each experiment has only one change at ν). The vertical bar on each point represents a 95% confidence interval for the average delay. As the theory guarantees, the oracle CUSUM procedure with an exact threshold achieves the smallest worst average delay ($91.3\pm 1.7$), while, surprisingly, the GLR procedure with an exact threshold shows the largest worst average delay ($123.7\pm 2.7$) despite its high computational cost. The two e-detectors perform reasonably well, and the e-SR detector in particular performs quite favorably overall despite using its conservative $\log (1/\alpha )$ threshold.
As the value of the changepoint approaches the ARL target of 500, the average delay of the e-SR procedure tends to decrease sharply, even falling below that of the oracle CUSUM method. This is plausibly because e-SR sums the underlying e-processes, in contrast to the other CUSUM-style procedures, which take the maximum of the underlying e-processes. We may also intuitively expect the oracle CUSUM delay to be relatively flat across ν because it is known to be minimax optimal, in the sense of minimizing the worst-case delay, and minimax procedures often have constant “risk profiles”. Proving these fine-grained behaviors is beyond the scope of the current paper.
Figure 5 illustrates the “pre-change false alarm rate”: the fraction of simulation runs in which the detection procedures stopped before the changepoint at ν (if a change occurs at $\nu =0$, then it is zero by definition). This is not a common metric, since we provably control ARL at the target level. However, it is an interesting metric, so we plot it. As the time of the true changepoint ν becomes closer to the ARL target of 500, false alarm rates increase across all methods. We notice that the e-SR procedure with the $\log (1/\alpha )$ threshold results in the smallest false alarm rates in most cases.
6 Discussion
6.1 Game-theoretic Interpretation of an E-detector
We briefly mention here a game-theoretic interpretation of an e-detector along the lines of the game-theoretic interpretations of martingales and supermartingales as the wealth of a gambler playing a fair game (well known since the time of [42]). We first summarize the game-theoretic interpretation of a $\mathcal{P}$-e-process, as described in [29].
The standard game-theoretic setup of [33] involves three players: a forecaster, a skeptic, and reality. The forecaster claims at the beginning that $\mathcal{P}$ is a plausible model for the yet-to-be-observed data; meaning that the observations are in accordance with (or generated by) some $P\in \mathcal{P}$. The skeptic plays (in parallel) a family of games indexed by $P\in \mathcal{P}$ against reality and begins with one dollar in each game. The objective of the skeptic in the P-th game is to sequentially test whether P is a good explanation for the data by betting against P. At each time step, the skeptic places fair bets (relative to P, in the P-th game) about the next outcome. Then reality reveals the next outcome, and the skeptic’s wealth in every game is updated. The magnitude of the skeptic’s wealth in the P-th game is direct evidence against P being a good explanation; the higher the wealth, the more unlikely the data came from P. Thus in each game, the skeptic places different bets, but reality’s moves (the outcomes) are identical across all games. The skeptic’s overall evidence against $\mathcal{P}$ is measured by their worst wealth across all the games. If this evidence exceeds $1/\alpha $, it means that the skeptic multiplied their initial capital by at least $1/\alpha $ in every game, and if we reject $\mathcal{P}$ when this happens, Ville’s inequality implies that we have a valid level-α sequential test.
Since our e-detectors are constructed to be cumulative sums of e-processes started at consecutive times, their game-theoretic interpretation builds on the aforementioned one. Informally, the forecaster not only claims that the data sequence follows $\mathcal{P}$ from the start, but that this will not change after some amount of time. The skeptic now wishes to detect a change, if one occurs, as soon as possible. To accomplish this task, the skeptic is provided with one extra dollar every day that they invest (using a $\mathcal{P}$ e-process) into testing whether the data from that day onwards is still explained well by $\mathcal{P}$. E-detectors use the wealth in all these games (one against each $P\in \mathcal{P}$, starting at each time) as a measure of evidence against the forecaster’s claims. The SR e-detector uses the sum (across time) of the minimum wealth (across P at each time), though it could use the amount that this wealth exceeds n, which is the total dollar amount invested up to time n. The CUSUM e-detector uses the max-min wealth; the maximum (across time) of the minimum wealth (across P). These are only two ways of constructing e-detectors, and we leave other constructions to future work.
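In symbols, writing ${W_{n}^{(j)}}(P)$ for the skeptic’s wealth at time n in the game against P that was started on day j (notation introduced here for illustration only), the two constructions described above can be summarized as
\[ {M_{n}^{\mathrm{SR}}}={\sum \limits_{j=1}^{n}}\underset{P\in \mathcal{P}}{\inf }{W_{n}^{(j)}}(P),\qquad {M_{n}^{\mathrm{CU}}}=\underset{1\le j\le n}{\max }\underset{P\in \mathcal{P}}{\inf }{W_{n}^{(j)}}(P).\]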
6.2 Viewing Lorden’s Reduction to Sequential Testing as an E-detector
Lorden [18] proposed a simple method to construct a change detection method with ARL control via a reduction to sequential testing. We describe this below, first defining a sequential test formally.
A sequential test ϕ is a mapping from increasing amounts of data to a sequence of zeros and ones, where a one represents a rejection of the null hypothesis, and a zero means “continue collecting data”. Formally, define the decision at time t as ${\phi _{t}}:{\mathcal{X}^{t}}\to \{0,1\}$, and let $\phi :={\{{\phi _{t}}\}_{t\ge 0}}$ be the collection of such decisions made one at a time based on the first t datapoints, with ${\phi _{0}}=0$ by default. The sequence of tests ϕ is called a level-α sequential test for $\mathcal{P}$ if
\[ \underset{P\in \mathcal{P}}{\sup }P\big(\exists t\ge 1:{\phi _{t}}({X_{1}},\dots ,{X_{t}})=1\big)\le \alpha ,\]
i.e. if the probability of ever falsely rejecting the null is at most α. By convention, if ${\phi _{t}}=1$, we set ${\phi _{s}}=1$ for $s\gt t$. This is equivalent to requiring that, for each $P\in \mathcal{P}$,
\[ \underset{\tau }{\sup }P({\phi _{\tau }}=1)\le \alpha ,\]
where the supremum is taken over all stopping times τ.
Let ${\phi ^{(j)}}$ denote a sequential test that is started at time j; that is, for ${\phi ^{(j)}}$, the first observation is actually ${X_{j}}$ (and not ${X_{1}}$), but the test itself can depend on the first $j-1$ observations (for example, we can choose our betting strategy based on the first $j-1$ points, even though our betting score will only be evaluated from time j onwards). Note that ${\phi ^{(1)}}$ is simply a standard sequential test as defined above. Ideally, these tests are powerful against the alternative $\mathcal{Q}$.
Lorden’s change detection procedure is simple and works as follows. At each time t, start a new sequential test ${\phi ^{(t)}}$, in addition to the ones that are already running. In other words, consider a sequence of level-α sequential tests ${\phi ^{(1)}},{\phi ^{(2)}},\dots $ , starting at consecutive times. Lorden declares a change if any of those sequential tests rejects the null $\mathcal{P}$:
\[\begin{aligned}{}& {N^{\text{Lorden}}}:=\inf \Big\{n\ge 1:\underset{1\le j\le n}{\max }{\phi _{n}^{(j)}}({X_{n-j+1}},\dots ,{X_{n}})=1\Big\}.\end{aligned}\]
Lorden proved that this method controls the ARL at $1/\alpha $ if the data are iid, and if the same test ${\phi ^{(j)}}$ is deployed at each j (i.e. apart from the delayed start, the tests are identical). We first observe that Lorden’s method is a special case of an e-detector. Indeed, with each level-α sequential test $\phi \equiv {\{{\phi _{t}}\}_{t\ge 0}}$, we can associate an e-process ${\Lambda _{t}^{\text{Lorden}}}:=\frac{\mathbf{1}({\phi _{t}}=1)}{\alpha }=\frac{{\phi _{t}}}{\alpha }$. Note that Λ only takes on two values, 0 and $1/\alpha $, and once it reaches the latter, it stays there. Furthermore, note that ${\mathbb{E}_{P}}[{\Lambda _{\tau }^{\text{Lorden}}}]\le 1$ for any stopping time τ and any $P\in \mathcal{P}$, which makes it an e-process as claimed.
Last, note that if we form a “Lorden e-detector” using either the SR or CUSUM methods in (2.5), then both e-detectors start out at 0, and either e-detector jumps to level $1/\alpha $ if and only if one of the (delayed start) sequential tests rejects the null, and further our e-detector declares a changepoint at exactly the same instant that Lorden’s does. Thus, Lorden’s procedure can be subsumed within our e-detector framework without any loss of generality or performance.
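As a toy illustration of our own of this equivalence, the SR e-detector built from the indicator e-processes ${\Lambda _{t}^{(j)}}={\phi _{t}^{(j)}}/\alpha $ first crosses $1/\alpha $ exactly when the first sequential test rejects; the rejection times below are made-up values:

```python
alpha = 0.05
# Hypothetical (absolute) rejection times for tests started at j = 1..8;
# None means the test never rejects.
reject_at = {1: None, 2: None, 3: 7, 4: None, 5: 6, 6: None, 7: None, 8: None}

def lorden_e_detector_alarm(reject_at, horizon, alpha):
    # SR e-detector: sum over started tests of the indicator e-process
    # Lambda^{(j)}_n = 1(test j has rejected by time n) / alpha.
    for n in range(1, horizon + 1):
        m_sr = sum(1 / alpha
                   for j in range(1, n + 1)
                   if reject_at.get(j) is not None and reject_at[j] <= n)
        if m_sr >= 1 / alpha:
            return n
    return None

# The alarm fires at time 6, when the test started at j = 5 rejects.
assert lorden_e_detector_alarm(reject_at, 10, alpha) == 6
```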
There are two benefits to viewing Lorden’s method as an e-detector. First, we can dispense with both the aforementioned conditions that Lorden required to prove ARL control: the iid assumption, and the condition that the underlying tests ${\phi ^{(j)}}$ are identical across j. Indeed, our main result guarantees ARL control for any e-detector no matter what the underlying ${e_{j}}$-processes are, or whether the data are iid or not.
Second, this viewpoint allows us to see why e-detectors could have much smaller detection delay than Lorden’s method (that is, Lorden’s e-detector). In Lorden’s e-detector, there is no sharing of evidence across different ${e_{j}}$-processes: each sequential test acts alone without help from the others, and we need a single ${e_{j}}$-process to reach $1/\alpha $ before we can declare a change. When a general (say SR-type) e-detector crosses $1/\alpha $, the reason it does so will usually be a collaborative effect across various e-processes (caused by the nontrivial sum of ${e_{j}}$-processes in the definition of the e-detector), none of which have yet individually reached $1/\alpha $. This will happen much sooner than any individual one reaches that threshold, causing an earlier detection than Lorden’s method. In fact, every level-α sequential test in some sense must be based on thresholding an e-process at level $1/\alpha $ [28], and using our e-detector with those underlying e-processes will be more statistically efficient (shorter delay) than directly using Lorden’s reduction.
For the sake of future reference, we summarize the above observations below in a “generalized Lorden’s lemma”, whose proof follows immediately from the properties of an e-detector and the discussion above.
Lemma 6.1 (Generalized Lorden’s Lemma).
Suppose the data initially come from a distribution in the pre-change class $\mathcal{P}$ and, if a change occurs, they later come from a distribution in the post-change class $\mathcal{Q}$ (note the lack of any iid assumption). For each j, let ${\phi ^{(j)}}$ denote a (one-sided) level-α sequential test for $\mathcal{P}$ against $\mathcal{Q}$ that is started at time j (but need not be identical or related in any way to any other ${\phi ^{(k)}}$, for $k\ne j$). If we declare a change at the first time when any one of these sequential tests rejects the null, the resulting change detection procedure has ARL at most $1/\alpha $. Further, this generalization of Lorden’s change detector is a special case of an e-detector.
6.3 Future Directions
There remain a whole host of follow-up directions; we mention only a few below. First, our sequential change detection framework can be straightforwardly generalized to the multi-stream setting where we are monitoring a large number of data streams. In the classical parametric setting, the minimum or the sum of local CUSUM statistics for multi-stream data was proposed and its asymptotic optimality was studied [8, 20]. Since both the minimum and the scaled sum (average) of e-detectors form valid e-detectors, we can apply the framework in this paper to the multi-stream setting seamlessly. It is interesting to investigate how the framework can be further generalized to structured multi-stream settings [50, 51, 2].
Kernel-based sequential change detection is an important class of sequential change detection methods [3, 16]. It is an interesting open direction to instantiate existing kernel-based methods within our general framework, which would make it possible to analyze kernel sequential change detection algorithms in a nonasymptotic way. As it currently stands, neither framework is more general than the other, because the kernel methods often assume iid data before the changepoint, while we abstain from such strong assumptions.
Last, throughout this paper, we have only focused on detecting whether a changepoint happened or not but have not dealt with inferential questions surrounding when the change occurred. Future work could study how to perform such inference with e-detectors in our nonparametric settings, either online or post-hoc.
7 Summary
We have presented a general framework for sequential change detection based on a new concept called e-detectors. The proposed framework is nonparametric as it does not rely on a parametric assumption on the data-generating distribution (though, when such assumptions are made, we recover well-known parametric methods as special cases). Also, the framework comes with nonasymptotic guarantees, since every component of the framework can be chosen and analyzed explicitly without any asymptotic approximations. By introducing additional structure, such as baseline increments and exponential e-detectors, on top of the general framework, we can construct computationally and statistically efficient online algorithms that have explicit upper bounds on worst average delays. Finally, through examples involving Bernoulli and bounded random variables, we explained how one can apply the presented framework in practical settings, with NBA data serving as a running case study.