The biggest risk in new drug development is being unaware of, or underestimating, the potential risks in designing clinical trials. Among all the challenges in drug development, the most critical is finding the appropriate dose(s) of the study drug for treating patients. Designing dose-finding clinical trials involves many potential risks. In practice, most of the expensive failures in drug development originated not from “We did not know” but from “We thought we knew”; in other words, the greatest risks came from a lack of awareness of underlying assumptions. This manuscript discusses some of these risks and makes recommendations for reducing them in the design of dose-finding clinical trials. The list is not complete; it is intended only to start the discussion.
Dose proportionality is an essential aspect of pharmacokinetics (PK). We aim to enhance the efficiency of PK studies by incorporating interim analyses and utilizing data from past trials to increase precision and enable early termination of studies where applicable. In this paper, we extend the multisource exchangeability model (MEM) to settings with correlated data and interim analyses. Simulation results indicate that the MEM estimators are efficient even at smaller sample sizes, although small samples may incur higher mean square error (MSE) and bias due to early stopping and more liberal data borrowing from non-exchangeable supplementary sources. We recommend a constrained MEM approach for small sample sizes, with additional caution around the equivalence boundary to better control the inflated type I error rate, bias, and MSE. This research extends the application of MEMs from linear regression models to settings with correlated data using linear mixed-effects regression models. It also innovatively applies MEMs to equivalence testing in dose proportionality studies, thereby enhancing their efficiency.
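The equivalence-testing criterion that underlies dose-proportionality assessment can be illustrated with the standard power-model approach (this is the classical building block, not the MEM machinery described in the abstract). The sketch below, with hypothetical doses and a hypothetical noise level, fits ln(AUC) = α + β·ln(dose) and checks whether a 90% confidence interval for the slope β lies within equivalence bounds derived from the 0.80–1.25 criterion; all numeric choices are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Power model: ln(AUC) = alpha + beta * ln(dose); proportionality <=> beta = 1.
doses = np.repeat([10.0, 20.0, 40.0, 80.0], 6)          # hypothetical 4-dose design
log_auc = 0.5 + 1.0 * np.log(doses) + rng.normal(0, 0.2, doses.size)

x = np.log(doses)
n = x.size
res = stats.linregress(x, log_auc)
beta_hat, se_beta = res.slope, res.stderr

# 90% CI for the slope (TOST-style equivalence at level 0.05)
tcrit = stats.t.ppf(0.95, n - 2)
ci = (beta_hat - tcrit * se_beta, beta_hat + tcrit * se_beta)

# Equivalence bounds for beta implied by the 0.80-1.25 criterion over the dose
# ratio r = max dose / min dose
r = doses.max() / doses.min()
lo, hi = 1 + np.log(0.8) / np.log(r), 1 + np.log(1.25) / np.log(r)
proportional = lo < ci[0] and ci[1] < hi
print(ci, (lo, hi), proportional)
```

Dose proportionality is concluded when the entire CI for β sits inside (lo, hi); an interim-analysis or borrowing scheme such as the MEM would operate on top of this criterion.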
MCP-Mod (Multiple Comparison Procedure-Modelling) is an efficient statistical method for the analysis of Phase II dose-finding trials, although it requires specialised expertise to pre-specify plausible candidate models along with model parameters. This can be problematic given limited knowledge of the agent/compound being studied, and misspecification of candidate models and model parameters can severely degrade its performance. To circumvent this challenge, in this work we introduce LiMAP-curvature, a Bayesian model-free approach for the detection of the dose-response signal in Phase II dose-finding trials. LiMAP-curvature is built upon a Bayesian hierarchical framework incorporating information about the total curvature of the dose-response curve. Through extensive simulations, we show that LiMAP-curvature has comparable performance to MCP-Mod if the true underlying dose-response model is included in the candidate model set of MCP-Mod. Otherwise, LiMAP-curvature can achieve performance superior to that of MCP-Mod, especially when the true dose-response model drastically deviates from the candidate models in MCP-Mod.
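To make the model-specification burden concrete, the sketch below illustrates the multiple-comparison (MCP) step of MCP-Mod: each pre-specified candidate shape yields an optimal contrast, and a maximum contrast statistic tests for a dose-response signal. The candidate shapes, parameter guesses (e.g., the ED50), and data are hypothetical assumptions, and the simplified contrast formula applies only under equal allocation and equal variance; LiMAP-curvature itself is not implemented here.

```python
import numpy as np

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])

# Candidate standardized mean vectors (hypothetical parameter guesses)
candidates = {
    "linear": doses / doses.max(),
    "emax": doses / (doses + 0.5),                              # ED50 guess = 0.5
    "exponential": (np.exp(doses / 2.0) - 1) / (np.exp(doses.max() / 2.0) - 1),
}

def optimal_contrast(mu):
    # With equal allocation and equal variance, the optimal contrast reduces to
    # the centered, normalized candidate mean vector.
    c = mu - mu.mean()
    return c / np.linalg.norm(c)

contrasts = {k: optimal_contrast(v) for k, v in candidates.items()}

# Max-contrast test on hypothetical observed arm means
rng = np.random.default_rng(0)
n_per_arm = 20
true_mean = 0.2 + 1.0 * doses / (doses + 1.0)   # true Emax-type curve (unknown in practice)
ybar = true_mean + rng.normal(0, 1 / np.sqrt(n_per_arm), doses.size)
sigma_hat = 1.0                                  # known unit SD assumed for the sketch

tstats = {k: (c @ ybar) / (sigma_hat * np.sqrt(np.sum(c**2) / n_per_arm))
          for k, c in contrasts.items()}
print(max(tstats, key=tstats.get), tstats)
```

The maximum of these contrast statistics is compared against a multiplicity-adjusted critical value; if the true curve resembles none of the candidates, all contrasts can be poorly aligned with it, which is the failure mode the abstract describes.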
We study local change point detection in variance using generalized likelihood ratio tests. Building on [24], we utilize the multiplier bootstrap to approximate the unknown, non-asymptotic distribution of the test statistic and introduce a multiplicative bias correction that improves upon the existing additive version. This proposed correction offers a clearer interpretation of the bootstrap estimators while significantly reducing computational costs. Simulation results demonstrate that our method performs comparably to the original approach. We apply it to the growth rates of U.S. inflation, industrial production, and Bitcoin returns.
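The multiplier-bootstrap idea can be sketched on a simpler cousin of the statistic studied here: a CUSUM-type scan of centered squared observations, with i.i.d. Gaussian multipliers used to approximate the null distribution. This is a generic illustration under assumed independent data; it implements neither the paper's local GLR statistic nor its multiplicative bias correction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Series with a variance change at t = 150 (hypothetical example)
x = np.concatenate([rng.normal(0, 1.0, 150), rng.normal(0, 2.0, 150)])
n = x.size

u = x**2 - np.mean(x**2)              # centered squared observations

def cusum_max(v):
    # Maximum absolute normalized partial sum over all candidate change points
    s = np.cumsum(v) / np.sqrt(len(v))
    return np.max(np.abs(s))

t_obs = cusum_max(u)

# Multiplier bootstrap: perturb the centered terms with i.i.d. N(0,1)
# multipliers to mimic the fluctuations of the statistic under "no change".
B = 500
t_boot = np.array([cusum_max(rng.normal(size=n) * u) for _ in range(B)])
p_value = np.mean(t_boot >= t_obs)
print(t_obs, p_value)
```

Because the bootstrap replicates reuse the observed centered terms, no asymptotic limit distribution is needed; a bias correction of the kind the abstract proposes would adjust these bootstrap quantiles further.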
Historically, the primary objective of Phase I clinical trials has been to pick an optimal dose in terms of patient safety, referred to as the maximum tolerated dose (MTD). Most of these trials recommend a “one-size-fits-all” dose for the patient population being studied, while also focusing solely on short-term adverse events. Often patient heterogeneity exists, so that group-specific dose selection is of interest. To address the issue of patient heterogeneity, several dose-finding methods have been proposed, including the shift model framework based on the Continual Reassessment Method (CRM). Additionally, for many cancer therapies, relevant toxicities may be defined by participants experiencing adverse events at any point over a long evaluation window, resulting in pending outcomes when new participants need to be assigned a dose. By leveraging the CRM, the time-to-event continual reassessment method (TITE-CRM) provides a feasible approach for addressing this issue. Motivated by a Phase I radiotherapy trial conducted at the University of Virginia that included two patient groups, we have developed a hybrid design that combines elements from the TITE-CRM and the shift model framework. This approach addresses patient heterogeneity and late-onset toxicity simultaneously. We illustrate how to perform a dose-finding trial using the proposed design, and compare its operating characteristics to other suggested methods in the field through a simulation study. The shift model TITE-CRM is shown to be a practical design with good operating characteristics with regard to selecting the correct MTD in each group. An R package is also being developed, allowing investigators to obtain group-specific MTD recommendations by applying the proposed design, in addition to providing operating characteristics for custom simulation settings.
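The TITE-CRM component can be sketched as follows: pending patients enter the likelihood with a weight proportional to their observed follow-up, so dose recommendations can be updated before every toxicity window closes. The skeleton, prior, evaluation window, and patient data below are all hypothetical assumptions, and the sketch covers a single group rather than the shift-model extension.

```python
import numpy as np

skeleton = np.array([0.05, 0.12, 0.25, 0.40])   # hypothetical prior DLT guesses per dose
target = 0.25
T_obs = 90.0                                     # toxicity evaluation window (days)

# Accrued patients: (dose index, DLT indicator, follow-up in days)
patients = [(0, 0, 90.0), (1, 0, 90.0), (2, 1, 30.0), (2, 0, 45.0)]

def posterior_dose(patients):
    # One-parameter power model p_d = skeleton_d ** exp(beta) with a N(0, 1.34^2)
    # prior, combined with the TITE-CRM weighted likelihood: a pending patient
    # without a DLT gets weight w = follow-up / window instead of weight 1.
    beta = np.linspace(-4, 4, 2001)
    logpost = -beta**2 / (2 * 1.34**2)
    for d, y, fup in patients:
        p = skeleton[d] ** np.exp(beta)
        w = 1.0 if y == 1 else min(fup / T_obs, 1.0)
        logpost += y * np.log(w * p) + (1 - y) * np.log(1 - w * p)
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    # Posterior-mean toxicity estimate at each dose; recommend the closest to target
    p_hat = np.array([np.sum(skeleton[d] ** np.exp(beta) * post)
                      for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target))), p_hat

rec, p_hat = posterior_dose(patients)
print(rec, np.round(p_hat, 3))
```

The shift-model hybrid of the abstract would additionally link the dose-toxicity curves of the two patient groups, so that information is shared while each group receives its own MTD recommendation.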
The high cost of drug development and the relatively low success rates of phase III clinical trials highlight the need for improved and reasonably sized phase II trial designs, especially when the responses observed in treatment and control do not lead to a clear-cut decision warranting further studies. To this end, we propose a three-outcome dual-criterion randomized (TDR) trial design, which implements inconclusive-region sculpting using boundaries defined by both statistically significant differences between treatment and control and the clinical relevance of treatment responses. We provide statistical justifications for the TDR design in both one-stage and two-stage trial settings. Additionally, we evaluate its operating characteristics through a comparison with existing designs. The proposed design is shown to achieve sample-size savings and type II error reduction while controlling the type I error, at a marginal cost of power reduction. Lastly, robustness under various deviations from the assumed control response rate is also demonstrated.
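A dual-criterion three-outcome rule of this general form can be sketched directly: "go" requires both a statistically significant difference and a clinically relevant observed effect, "no-go" requires neither, and everything in between is inconclusive. The significance level, clinical margin, and the pooled-variance z-test below are illustrative assumptions, not the TDR design's actual boundaries.

```python
import numpy as np
from scipy import stats

def tdr_decision(x_t, n_t, x_c, n_c, alpha=0.10, delta_clin=0.15):
    # Dual criterion: (1) one-sided z-test of treatment > control response rate,
    # (2) observed rate difference exceeding a clinically relevant margin.
    p_t, p_c = x_t / n_t, x_c / n_c
    pbar = (x_t + x_c) / (n_t + n_c)
    se = np.sqrt(pbar * (1 - pbar) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    significant = z > stats.norm.ppf(1 - alpha)
    relevant = (p_t - p_c) >= delta_clin
    if significant and relevant:
        return "go"
    if not significant and not relevant:
        return "no-go"
    return "inconclusive"      # exactly one criterion met

print(tdr_decision(18, 40, 10, 40))     # 45% vs 25% observed response
```

Note how a large trial can be statistically significant yet clinically unimpressive (or vice versa); the inconclusive region is precisely where the two criteria disagree, which is what the TDR design sculpts to trade sample size against decision risk.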
In oncology therapy development, Simon’s two-stage design is commonly employed in early-phase clinical trials to assess the preliminary efficacy of a single dose, typically the maximum tolerated dose (MTD) or the maximum assessed dose (MAD). In this design, a dose may be terminated at the first stage if the anti-tumor activity is insufficient, or it may proceed to the second stage for further evaluation with more subjects. To enhance the design for better benefit-risk-profile dose selection and to meet the increasing need for study designs that explore dose-response relationships, we extend Simon’s two-stage design to evaluate two doses and to include early termination for success in addition to futility. The proposed method derives decision rules and sample sizes for optimal study designs that minimize the expected or overall sample size while controlling the type I error and meeting the desired power.
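The classical single-dose design being extended here has closed-form operating characteristics under the binomial model, which is also the machinery an extension to two doses must generalize. The sketch below computes the probability of early termination, the rejection probability, and the expected sample size for Simon's published optimal design for p0 = 0.10 vs p1 = 0.30 (α = β = 0.10): stop for futility after stage 1 if at most r1 = 1 of n1 = 12 patients respond; declare efficacy if more than r = 5 of n = 35 respond.

```python
from scipy.stats import binom

def simon_oc(p, n1, r1, n, r):
    # Operating characteristics of Simon's two-stage design:
    # stage 1 futility stop if responses <= r1 among n1 patients;
    # efficacy declared if total responses > r among n patients.
    pet = binom.cdf(r1, n1, p)                       # prob. of early termination
    reject = sum(binom.pmf(x, n1, p) * binom.sf(r - x, n - n1, p)
                 for x in range(r1 + 1, n1 + 1))     # prob. of declaring efficacy
    en = n1 + (1 - pet) * (n - n1)                   # expected sample size
    return pet, reject, en

pet0, alpha, en0 = simon_oc(0.10, 12, 1, 35, 5)      # under the null p0 = 0.10
pet1, power, _ = simon_oc(0.30, 12, 1, 35, 5)        # under the alternative p1 = 0.30
print(round(alpha, 3), round(power, 3), round(en0, 1))
```

Evaluating under p0 gives the type I error and expected null sample size (the quantity Simon's optimal design minimizes); evaluating under p1 gives the power. The proposed two-dose extension with success stopping enlarges this search over (r1, n1, r, n)-type rules to joint rules across doses.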
Observations of groundwater pollutants, such as arsenic or Perfluorooctane sulfonate (PFOS), are riddled with left censoring. These measurements have an impact on the health and lifestyle of the populace. Left censoring of these spatially correlated observations is usually addressed by applying Gaussian processes (GPs), which have theoretical advantages. However, this comes with a challenging computational complexity of $\mathcal{O}({n^{3}})$, impractical for large datasets. Additionally, a sizable proportion of left-censored data creates further bottlenecks, since the likelihood computation then involves an intractable high-dimensional integral of the multivariate Gaussian density. In this article, we tackle these two problems simultaneously by approximating the GP with a Gaussian Markov random field (GMRF) approach that exploits an explicit link between a GP with Matérn correlation function and a GMRF using stochastic partial differential equations (SPDEs). We introduce a GMRF-based measurement error into the model, which alleviates the likelihood computation for the censored data, drastically improving the computational speed while maintaining admirable accuracy. Our approach demonstrates robustness and substantial computational scalability compared to state-of-the-art methods for censored spatial responses across various simulation settings. Finally, the fit of this fully Bayesian model to the concentration of PFOS in groundwater available at 24,959 sites across California, where 46.62% of responses are censored, produces a prediction surface and uncertainty quantification in real time, thereby substantiating the applicability and scalability of the proposed method. Code for implementation is made available via GitHub.
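The censored-likelihood building block at the heart of this problem can be illustrated in the i.i.d. case (the spatial GMRF/SPDE machinery is omitted): observed values contribute the Gaussian density, while a value reported only as "below the limit of detection (LOD)" contributes P(Y < LOD), avoiding any ad hoc imputation. The data, LOD, and fixed variance below are hypothetical assumptions.

```python
import numpy as np
from scipy import stats

def censored_normal_loglik(mu, sigma, y_obs, lod_cens):
    # Observed values contribute the log-density; left-censored values contribute
    # log P(Y < LOD) = log Phi((LOD - mu) / sigma).
    ll_obs = stats.norm.logpdf(y_obs, mu, sigma).sum()
    ll_cens = stats.norm.logcdf((lod_cens - mu) / sigma).sum()
    return ll_obs + ll_cens

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, 500)            # latent concentrations (log scale)
lod = 1.5                                # hypothetical limit of detection
y_obs = y[y >= lod]
lod_cens = np.full((y < lod).sum(), lod)  # censored entries: only the LOD is known

# Grid-search MLE for mu with sigma fixed at its true value (illustration only)
grid = np.linspace(0, 4, 401)
mu_hat = grid[np.argmax([censored_normal_loglik(m, 1.0, y_obs, lod_cens)
                         for m in grid])]
print(round(mu_hat, 2))
```

In the spatial setting these univariate CDF terms become a high-dimensional multivariate Gaussian integral over all censored sites, which is exactly the bottleneck the GMRF-based measurement-error formulation is designed to break apart.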
Up-and-Down designs (UDDs) are ubiquitous for dose-finding in a wide variety of scientific, engineering, and clinical fields. They are defined by a few simple rules that generate a random walk around the target percentile. UDDs’ combination of robust, tractable behavior, straightforward usage, and good dose-finding performance has won the trust of practitioners and their consulting analysts across fields and continents. In contrast, in recent decades the statistical dose-finding design field has turned a cold shoulder to UDDs, and it is quite possible that many younger dose-finding methods researchers are not even aware of this design approach.
We present a concise overview of UDDs and their current state-of-the-art methodology, with references for further inquiry. We also revisit the performance comparison between UDDs and novel, more complicated design approaches such as the Continual Reassessment Method and the Bayesian Optimal Interval design, which we group under the term “Aim-for-Target” designs. UDDs fare very well in the comparison, particularly in terms of robustness to sources of variability.
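The "few simple rules" can be shown in full: the classic (simple) UDD steps down one dose level after a toxicity and up one level after a non-toxicity, producing a random walk whose stationary distribution concentrates around the dose with 50% toxicity probability. The dose grid and true dose-toxicity curve below are hypothetical; targeting other percentiles requires variants (e.g., k-in-a-row or biased-coin rules) covered in the UDD literature.

```python
import numpy as np

rng = np.random.default_rng(5)

dose_levels = np.arange(1, 8)                                 # hypothetical dose grid
tox_prob = np.array([.05, .10, .20, .35, .50, .65, .80])      # true dose-toxicity curve

def classic_udd(n, start=0):
    # Classic ("simple") up-and-down rule: step down after a toxicity, step up
    # after a non-toxicity (reflecting at the boundaries). The resulting walk
    # concentrates around the dose with P(toxicity) = 0.5.
    level, path = start, []
    for _ in range(n):
        path.append(level)
        tox = rng.random() < tox_prob[level]
        level = max(level - 1, 0) if tox else min(level + 1, len(dose_levels) - 1)
    return np.array(path)

path = classic_udd(2000)
mode_level = np.bincount(path, minlength=7).argmax()
print(dose_levels[mode_level], np.bincount(path, minlength=7) / path.size)
```

The walk's visit frequencies peak near the level whose toxicity probability straddles 0.5, which is why simple averaging estimators built on UDD trajectories behave so robustly, in contrast to "Aim-for-Target" designs whose performance depends on model fitting.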