The New England Journal of Statistics in Data Science

Particle Swarm Optimization for Finding Efficient Longitudinal Exact Designs for Nonlinear Models
Volume 1, Issue 3 (2023), pp. 299–313
Ping-Yang Chen   Ray-Bing Chen   Weng Kee Wong  

https://doi.org/10.51387/23-NEJSDS45
Pub. online: 10 August 2023      Type: Methodology Article      Open Access
Area: Biomedical Research

Accepted: 13 May 2023
Published: 10 August 2023

Abstract

Designing longitudinal studies is generally very challenging because of the complex optimization problems involved. We show that the popular nature-inspired metaheuristic algorithm, Particle Swarm Optimization (PSO), can find different types of optimal exact designs for longitudinal studies with different correlation structures for different types of models. In particular, we demonstrate that PSO-generated D-optimal designs for longitudinal studies with the widely used Michaelis-Menten model and various correlation structures agree with the analytically derived locally D-optimal designs reported in the literature when there are only 2 observations per subject, and with the reported numerical D-optimal designs when there are 3 or 4 observations per subject. We further show the usefulness of PSO by applying it to generate new locally D-optimal designs for estimating the model parameters when there are 5 or more observations per subject. Additionally, we find various optimal longitudinal designs for a growth curve model commonly used in animal studies and for a nonlinear HIV dynamic model for studying T-cells in AIDS subjects. In particular, c-optimal exact designs for estimating one or more functions of the model parameters (c-optimality) were found, along with other types of multiple-objective optimal designs.

1 Introduction

Longitudinal studies are common in clinical research and involve multiple measurements of each subject over time. For example, subjects in a clinical trial are recruited and randomized into treatment groups, and responses from each patient are observed multiple times over a user-selected period. There is a huge literature on analyzing longitudinal models using different methods, but design issues for such studies have lagged behind. What seems to be generally known is that having excessive time points does not necessarily improve the quality of the statistical inference, while having too few observations per patient may not meet the scientific requirements of the study.
Given a statistical model and a design criterion, design issues for a longitudinal study concern the optimal number of time points at which to observe the outcomes, where those time points lie in the study period, and the number of replicates at each time point. For statistical inference to be as accurate as possible, these decisions have to be made judiciously and at minimum cost. Analytical approaches are difficult because they invariably involve integer (number-theoretic) constraints in the optimization problem. Even when it is possible to derive the results mathematically, they can be limiting because they are valid for a single statistical model and do not apply to even a slightly changed model. Software and efficient computational methods are therefore useful for answering such design questions.
We propose using nature-inspired metaheuristic algorithms to tackle these challenging questions. This class of algorithms is commonly used in engineering and computer science to tackle hard-to-solve optimization problems but is surprisingly under-used in mainstream statistical research. We focus on particle swarm optimization (PSO), a popular member of this class of algorithms, and apply it to design longitudinal studies with different mean functions and various correlation structures. Such algorithms typically have variants, which are enhancements motivated by the desire to improve certain aspects of their performance. We also compare results from PSO with those from several of its enhancements and determine whether the latter are markedly superior to the results from the original PSO.
To fix ideas, we discuss the proposed methodology for the 2-parameter nonlinear Michaelis-Menten model, which is widely used to study enzyme-substrate dose relationships in kinetic biological systems. Its simplicity and usefulness make it one of the most widely used models across many disciplines, such as agriculture [37], biochemistry [19], biology [4], and environmental studies [36]. [35] notes that the Michaelis-Menten model is a special case of the 3-parameter logistic model, which is more widely used in practice.
In its simplest form, the Michaelis-Menten model for a chemical reaction is
\[ E(\upsilon )=\eta (\theta ,t)=\frac{at}{b+t},\hspace{2.5pt}t\in T,\hspace{2.5pt}\theta ={(a,b)^{\top }},\]
where υ is the observed velocity of the reaction when the substrate concentration applied is $t\in T$, and T is the user-selected range from which concentrations are selected. The mean response is a nonlinear function $\eta (\theta ,t)$ with 2 parameters a and b. The parameter b is called the Michaelis-Menten constant; it controls the rate of the reaction and so, between the two parameters, it is the more biologically meaningful one. The maximum velocity attainable is a, which is reached as the substrate concentration increases without bound. Errors in this model are assumed to be independent with mean 0 and constant variance ${\sigma ^{2}}$.
This model can also be used to study the growth curves of animals or subjects by taking repeated measurements on the subjects [17]. Suppose that there are n subjects, and for each subject, m repeated observations are allowed over a period of time. [7] addressed design issues when the responses are correlated using the following model
(1.1)
\[\begin{aligned}{}\upsilon & =\eta (\theta ,{t_{i,j}})+{\epsilon _{i,j}}\\ {} & =\frac{a{t_{i,j}}}{b+{t_{i,j}}}+{\epsilon _{i,j}},\hspace{2.5pt}j=1,\dots ,m,\hspace{2.5pt}i=1,\dots ,n,\end{aligned}\]
where the error terms ${\epsilon _{i,j}}$ are normally distributed, each with mean 0 and constant variance; ${\epsilon _{i,j}}$ and ${\epsilon _{{i^{\prime }},{j^{\prime }}}}$ are independent if $i\ne {i^{\prime }}$, and each pair ${\epsilon _{i,j}}$ and ${\epsilon _{i,{j^{\prime }}}}$ has correlation coefficient ${\lambda ^{\mid {t_{i,j}}-{t_{i,{j^{\prime }}}}\mid }}$, $j,{j^{\prime }}=1,\dots ,m$. Here $\lambda \in (0,1)$ and ${t_{i,{j^{\prime }}}}\in T$. Given n and an optimality criterion, design issues for the model concern how to optimally select m, the number of time points, and the values of the ${t_{i,j}}$'s at which to observe outcomes so that the criterion is optimized. A typical goal is to estimate the parameters a and b, or some function thereof, as accurately as possible.
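As a concrete illustration of this error structure, the within-subject covariance matrix implied by model (1.1) can be assembled directly. The short pure-Python sketch below uses our own names (`build_sigma`, `times`); it is not code from the paper.

```python
# Minimal sketch: entry (j, k) of the within-subject covariance matrix is
# sigma^2 * lambda^{|t_j - t_k|}, as in model (1.1). Names are illustrative.

def build_sigma(times, lam, sigma2=1.0):
    """Covariance matrix of the m responses of one subject."""
    m = len(times)
    return [[sigma2 * lam ** abs(times[j] - times[k]) for k in range(m)]
            for j in range(m)]

# Example: three time points with lambda = 0.5.
Sigma = build_sigma([0.0, 0.5, 1.0], lam=0.5)
```

Observations taken closer together in time are more strongly correlated, which is why, as seen later, optimal designs under this structure tend to spread their support points out.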
Throughout, we assume the following setup. We are given a total of N observations for the study and a regression model defined on a given design space S. Given an optimality criterion, finding an optimal exact design means optimizing the criterion value by choosing the optimal number k of distinct time points at which to observe the response, the time points ${t_{i}}$ themselves, and the number of replicates ${n_{i}}$ at each time point ${t_{i}}$, so that ${n_{1}}+{n_{2}}+\cdots +{n_{k}}=N$. A complicating feature of such an optimization problem is that there is neither a general theory for finding an optimal exact design nor a theoretical tool for confirming the optimality of an exact design. Consequently, optimal exact designs are rarely studied in the literature, especially for nonlinear models. As an early example, [3] found D-optimal exact designs numerically for homoscedastic polynomial models of low degrees.
Our goal is to investigate the effectiveness of a nature-inspired metaheuristic algorithm, particle swarm optimization (PSO), for finding D-optimal exact designs for the Michaelis-Menten model and related models with correlated errors, c-optimal designs for estimating a function of the parameters in a growth curve model, and a maximin optimal exact design for optimizing efficiencies in an HIV dynamic model. We show that our results agree with the theoretical designs in [7], which are available only for small values of m and n, and that PSO is able to find the optimum in all problems quite quickly. However, there is no general theory for finding optimal exact designs or confirming the optimality of an exact design, which may, in part, explain why exact designs are much less studied than approximate designs, for which there is a general framework with general methods for finding an approximate design and confirming its optimality when the criterion is a convex functional.
The rest of the article is organized as follows. Section 2 briefly reviews particle swarm optimization. Section 3 applies PSO to search for locally D-optimal exact designs for the Michaelis-Menten model with various correlation structures. The Michaelis-Menten model is a nonlinear model and, as we explain below, its information matrix, and hence the design criterion, depends on the unknown model parameters that we wish to estimate. The simplest design strategy is to use results from a pilot study or similar studies to obtain a good guess of the true parameters that we wish to estimate. These nominal values, which may also be available from experts, are then substituted into the design criterion, and the problem then involves optimizing the design variables only. We next discuss a few PSO enhancements in Section 3.2 and compare their performance in finding locally D-optimal designs for the Michaelis-Menten model with various error structures. Additionally, we apply PSO to find locally c-optimal designs for estimating one or more functions of the parameters of a generalized version of the Michaelis-Menten model for studying growth rates in Section 4. Section 5 considers an alternative to locally optimal designs and constructs maximin optimal exact designs for a longitudinal study to estimate parameters in an HIV dynamic model. Section 6 concludes with a short discussion.

2 Particle Swarm Optimization

Recently, a class of algorithms called nature-inspired metaheuristic algorithms has proved very popular in the optimization literature; [31, 32] provide reasons for the rapid rise of interest in these algorithms. Early users of such algorithms for finding optimal exact designs for linear models include [12], who used an annealing algorithm to search for optimal designs for linear regression models, and [20], who used a genetic algorithm to construct exact D-optimal designs. Of particular note is particle swarm optimization (PSO), introduced by [8] for tackling optimization problems. PSO is increasingly used across disciplines to solve hard optimization problems. PSO is essentially assumptions-free, and it searches in a simple and effective way. For example, unlike many algorithms, PSO does not require the objective function to be differentiable or convex and can solve non-convex, high-dimensional optimization problems.
PSO is a metaheuristic optimization algorithm inspired by the way animals, such as birds and fish, search for food. The birds fly continuously in the sky to look for food on the ground. Each has its own perception of where the food is (local optimum), but it communicates with the rest, and collectively the flock decides where the food is (global optimum). Accordingly, each bird flies toward the global optimum while leaning in the direction of its local optimum (not giving up completely on where it thinks the food is). Birds are referred to as particles, and each bird represents a candidate solution to the optimization problem. Velocities and locations of each bird are adjusted at each iteration, and if and when the flock converges, the perceived global optimum is found. To efficiently identify the optimal points, we initiate a flock of birds in the pre-defined search space. Let $X(k)$ be the locations of the particles at the k-th iteration. Define ${t_{L}}(k-1)$ to be the locations with the best objective function values discovered by each particle before the k-th iteration, and ${t_{G}}(k-1)$ to be the location of the best value found by the whole swarm before the k-th iteration. At the k-th iteration, the particles are updated by
(2.1)
\[ X(k)=X(k-1)+V(k),\]
and
(2.2)
\[\begin{aligned}{}V(k)& =wV(k-1)+{c_{1}}{R_{1}}\otimes [{t_{L}}(k-1)-X(k-1)]\\ {} & \hspace{1em}+{c_{2}}{R_{2}}\otimes [{t_{G}}(k-1)-X(k-1)],\end{aligned}\]
where $V(k)$ is the velocity of the particle. There are several parameters in (2.2). The inertia weight, denoted by w, represents how active the birds are. This parameter may be chosen to be a positive constant, but more typically its value changes over the iterations and eventually decreases to 0. The parameters ${c_{1}}$ and ${c_{2}}$ are two positive constants, commonly recommended to be 2, and ${R_{1}}$ and ${R_{2}}$ are two random vectors whose components are independently drawn from the uniform distribution on $[0,1]$. In practice, the number of iterations and the swarm size are the most influential parameters in PSO. A large swarm size gives better coverage of the search area, so PSO reaches the global optimum with a higher chance, and more iterations allow the particles to accumulate search experience through the random perturbations. More details on PSO and related metaheuristic optimization algorithms are available in [34].
For an individual patient with $n=1$ and $N=m$, we determine the optimal N time points at which to observe responses from the patient. We allow replications, i.e. the N time points need not be distinct. To search for an N-point optimal exact design, we search for an optimal $N\times 1$ vector $({t_{1}},{t_{2}},\dots ,{t_{N}})$, allowing some of the ${t_{i}}$'s to be the same. Here N is user-specified, and the objective function is the design criterion $\Phi (\cdot )$, which may not be a convex function. A key challenge is the dimension of the problem when N is large. When there are multiple optimal solutions, we report only one of them. At the beginning of the PSO run, we randomly generate initial particles (designs) in the design space. Then at each iteration, we update the particles (designs) based on (2.1) and (2.2). We also watch for particles that fly outside the search boundaries and, when this happens, adjust them so that they lie in the design space properly. The hope is that after a number of iterations, the particles converge to a point, which is supposedly the global best solution, or the optimal design we are after. Algorithm 1 below is pseudocode of the PSO algorithm for finding an optimal design.
Algorithm 1. PSO for the optimal design search problem.
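The update rules (2.1)-(2.2) and the steps of Algorithm 1 can be sketched in a few dozen lines. The following is a generic, self-contained Python illustration, not the authors' MATLAB implementation; all function and parameter names are ours, and out-of-range particles are simply clipped back to the search space, one of several common repair strategies.

```python
import random

# Illustrative sketch of Algorithm 1; not the authors' code.
def pso(objective, dim, lo=0.0, hi=1.0, n_particles=64, n_iter=300,
        c1=2.0, c2=2.0, w_start=0.95, w_end=0.4, seed=0):
    """Maximize `objective` over the box [lo, hi]^dim via updates (2.1)-(2.2)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                  # t_L: each particle's best
    pval = [objective(x) for x in X]
    g = max(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]         # t_G: swarm's best
    for k in range(n_iter):
        # inertia weight decreasing linearly, then held constant
        w = max(w_end, w_start + (w_end - w_start) * k / (0.7 * n_iter))
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # clip particles that fly outside the design space
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = objective(X[i])
            if val > pval[i]:
                pbest[i], pval[i] = X[i][:], val
                if val > gval:
                    gbest, gval = X[i][:], val
    return gbest, gval
```

For instance, `pso(lambda x: -(x[0] - 0.3) ** 2, dim=1)` should return a point close to 0.3.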
In the next section and beyond, we apply PSO-based algorithms to find various types of optimal exact designs for models with uncorrelated errors and with various correlation structures, and we compare the designs. Throughout, the hardware we used is a PC with a 3.50 GHz Intel(R) Core(TM) i7-4770K CPU.

3 Locally D-Optimal Exact Designs for the Michaelis-Menten Model

Suppose in a longitudinal study, we have n subjects and each is observed at m time points ${t_{ij}},j=1,\dots ,m$ from a time interval T. We assume T, m and n are given. The normalized information matrix is proportional to
\[ M(\xi ,{\theta _{0}})=\frac{1}{N}{\sum \limits_{i=1}^{n}}{f_{i}^{\top }}{\Sigma _{i}^{-1}}{f_{i}},\]
where $N=nm$, ${\theta _{0}^{\top }}=(a,b,{\sigma ^{2}},\lambda )$ and ${f_{i}}$ is the $m\times 2$ matrix whose j-th row contains the derivative of the mean function $\eta (\theta ,{t_{i,j}})$ with respect to ${(a,b)^{\top }}$, which is
\[ \left(\begin{array}{c@{\hskip10.0pt}c}\frac{{t_{i,j}}}{b+{t_{i,j}}}& -\frac{a{t_{i,j}}}{{(b+{t_{i,j}})^{2}}}\end{array}\right),\hspace{2.5pt}j=1,\dots ,m.\]
The $m\times m$ covariance matrix of the m responses from the i-th subject is ${\Sigma _{i}}$ and its $(j,k)$-th element is
\[ {\sigma ^{2}}\left({\lambda ^{\mid {t_{i,j}}-{t_{i,k}}\mid }}\right),\hspace{2.5pt}j,k=1,\dots ,m.\]
Thus, given fixed parameters ${\theta _{0}}={(a,b,{\sigma ^{2}},\lambda )^{\top }}$, the locally D-optimal design for estimating the model parameters as accurately as possible is the one that maximizes
\[ \Phi (\xi ,{\theta _{0}})=\log |M(\xi ,{\theta _{0}})|\]
over all designs ξ on T. Following the argument in [7], we may, without loss of generality, set $T=[0,1]$. For the Michaelis-Menten model, locally D-optimal designs do not depend on a and ${\sigma ^{2}}$ and so, we choose $a={\sigma ^{2}}=1$. We also assume that the experimental conditions for each subject are the same and each subject is observed at the same set of time points, ${t_{1j}}={t_{2j}}=\cdots ={t_{nj}}$, for $j=1,\dots ,m$. Thus, to search for the locally D-optimal exact design via PSO, we can set $n=1$, i.e. $N=1\times m=m$, and the target design is represented as an $m\times 1$ vector.
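For concreteness, the criterion above can be evaluated directly. The following pure-Python sketch (helper names `grad`, `solve` and `d_criterion` are ours, not the paper's code) computes $\Phi (\xi ,{\theta _{0}})=\log |M(\xi ,{\theta _{0}})|$ for one subject ($n=1$, $N=m$) under the correlation structure of model (1.1), with $a={\sigma ^{2}}=1$ by default.

```python
import math

# Illustrative sketch of the D-criterion; not the authors' MATLAB code.
def grad(t, a, b):
    """Row of f_i: derivative of a*t/(b+t) with respect to (a, b)."""
    return [t / (b + t), -a * t / (b + t) ** 2]

def solve(A, B):
    """Solve A X = B (B given column-wise as rows) by Gauss-Jordan elimination."""
    m = len(A)
    M = [A[i][:] + B[i][:] for i in range(m)]      # augmented matrix [A | B]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))   # partial pivoting
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[m:] for row in M]                  # X = A^{-1} B

def d_criterion(times, b, lam, a=1.0, sigma2=1.0):
    """log|M| with M = (1/N) f^T Sigma^{-1} f for a single subject, N = m."""
    m = len(times)
    Sigma = [[sigma2 * lam ** abs(ti - tj) for tj in times] for ti in times]
    F = [grad(t, a, b) for t in times]
    SinvF = solve(Sigma, F)                        # Sigma^{-1} f
    M2 = [[sum(F[r][i] * SinvF[r][j] for r in range(m)) / m
           for j in range(2)] for i in range(2)]
    return math.log(M2[0][0] * M2[1][1] - M2[0][1] * M2[1][0])
```

A design with well-separated support points yields a larger criterion value than one whose points are bunched together, which is what PSO exploits when it moves particles toward better designs.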
We first consider the case where we have 2 observations from a single subject, and the design questions are at which two time points to take observations and how to spread the observations between the two points. [7] showed analytically that the locally D-optimal exact design is supported at two points, u and 1, when $b\ge \frac{1}{3}$, and the design point u solves the equation
(3.1)
\[ \frac{b-(2b+1)u}{u(1-u)(b+u)}=\frac{\log (\lambda ){\lambda ^{2(1-u)}}}{1-{\lambda ^{2(1-u)}}}.\]
No closed-form solution is available when $b\lt 1/3$. When 3 or 4 observations are to be taken from each subject, locally D-optimal exact designs cannot be described analytically, and only numerical results were provided in [7]; their numerical approach was to check all possible designs on a pre-specified grid.
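For $b\ge 1/3$, equation (3.1) can be solved numerically for u. A minimal pure-Python sketch using bisection is shown below (function names `eq31` and `solve_u` are ours; a library root-finder such as `scipy.optimize.brentq` would serve equally well). The left side of (3.1) tends to $+\infty$ as $u\to 0$ and dominates the right side's divergence as $u\to 1$, so a sign change is bracketed on $(0,1)$.

```python
import math

# Illustrative solver for equation (3.1); names are ours.
def eq31(u, b, lam):
    """Left side minus right side of (3.1); a root is the support point u."""
    lhs = (b - (2 * b + 1) * u) / (u * (1 - u) * (b + u))
    p = lam ** (2 * (1 - u))
    rhs = math.log(lam) * p / (1 - p)
    return lhs - rhs

def solve_u(b, lam, lo=1e-9, hi=1 - 1e-9, tol=1e-12):
    """Find a root of eq31 on (0, 1) by bisection."""
    flo = eq31(lo, b, lam)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * eq31(mid, b, lam) <= 0:
            hi = mid                    # root lies in [lo, mid]
        else:
            lo, flo = mid, eq31(mid, b, lam)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```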
We next use Algorithm 1 to find locally D-optimal exact designs systematically. We used 256 particles and set the maximum number of iterations equal to 500. The PSO parameters ${c_{1}}$ and ${c_{2}}$ were set to their default values, both equal to 2. For the inertia weight w, we started from 0.95, decreased it linearly to 0.4 over the first 350 iterations, and then fixed it at 0.4 for the remaining iterations. We implemented the algorithm and found PSO-generated D-optimal designs for several values of b and λ, namely $b=0.2,0.7,1.2,1.7,2.2,2.7$ and $\lambda =0.1,0.5,0.9$. We first considered the cases where the number of observations per subject was $m=2,3,4$; a direct application of PSO shows the generated designs coincide with the analytical designs in [7] when $m=2$ and $b\ge 1/3$. The PSO-generated designs when there are 3 or 4 observations per subject also agree with the numerically found optimal designs in [7]. For space considerations, we do not display them. We next use Algorithm 1 to search for D-optimal designs when there are more observations per subject. Table 1 displays the PSO-generated optimal exact designs for 5 and 6 observations per subject. Unlike the global numerical search approach in [7], we are not constrained by the grid size imposed on the experimental region, and PSO can still identify the best designs efficiently. The CPU time required by Algorithm 1 to find the optimal design is short: on average, our MATLAB codes take around 4.11, 5.54, 6.72, 7.83 and 8.71 seconds to find the optimal designs when there are 2, 3, 4, 5 and 6 observations, respectively.
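The inertia-weight schedule just described is easy to state in code; a one-function sketch (the name `inertia` is ours):

```python
# Sketch of the schedule above: w decreases linearly from 0.95 to 0.4 over
# the first 350 iterations, then stays at 0.4.
def inertia(k, w_start=0.95, w_end=0.4, decay_iters=350):
    return max(w_end, w_start + (w_end - w_start) * k / decay_iters)
```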
In summary, Algorithm 1 seems efficient for finding locally D-optimal exact designs for the Michaelis-Menten model. Tables 2 to 4 show that the PSO-generated designs have more design points as the value of m increases. To conserve space, we consider particular cases of ${(\lambda ,b)^{\top }}$ with λ fixed at 0.1, 0.5 and 0.9, and $b=0.5,1.0,1.5,2.0$ and 2.5.
Table 1
PSO-generated 5 and 6-point locally D-optimal exact designs for the Michaelis-Menten model with autocorrelated errors on the design interval $[0,1]$.
Parameters ${\xi _{5}^{\ast }}$ ${\xi _{6}^{\ast }}$
$\lambda =0.1$ $b=0.2$ 0 0.0361 0.1085 0.5361 1 0 0.0274 0.0724 0.1761 0.5813 1
0.7 0 0.0968 0.2493 0.5397 1 0 0.0764 0.1848 0.3546 0.6593 1
1.2 0 0.1322 0.3113 0.5761 1 0 0.1047 0.2375 0.4134 0.6649 1
1.7 0 0.1565 0.3506 0.6089 1 0 0.1242 0.2708 0.4499 0.6838 1
2.2 0 0.1738 0.3768 0.6316 1 0 0.1382 0.2937 0.4747 0.6996 1
2.7 0 0.1866 0.3955 0.6475 1 0 0.1488 0.3103 0.4924 0.7117 1
$\lambda =0.5$ $b=0.2$ 0 0.0323 0.0935 0.4020 1 0 0.0235 0.0595 0.1303 0.4399 1
0.7 0 0.0719 0.1774 0.3646 1 0 0.0563 0.1318 0.2424 0.4410 1
1.2 0 0.1042 0.2482 0.4752 1 0 0.0810 0.1843 0.3252 0.5423 1
1.7 0 0.1278 0.2965 0.5422 1 0 0.0996 0.2227 0.3824 0.6085 1
2.2 0 0.1452 0.3302 0.5845 1 0 0.1135 0.2500 0.4209 0.6493 1
2.7 0 0.1585 0.3549 0.6131 1 0 0.1242 0.2707 0.4487 0.6764 1
$\lambda =0.9$ $b=0.2$ 0 0.0317 0.0918 0.3778 1 0 0.0235 0.0598 0.1330 0.4156 1
0.7 0 0.0700 0.1720 0.3523 1 0 0.0547 0.1274 0.2332 0.4211 1
1.2 0 0.1010 0.2409 0.4624 1 0 0.0784 0.1784 0.3146 0.5260 1
1.7 0 0.1241 0.2893 0.5329 1 0 0.0968 0.2168 0.3733 0.5983 1
2.2 0 0.1418 0.3241 0.5781 1 0 0.1108 0.2447 0.4140 0.6426 1
2.7 0 0.1549 0.3492 0.6086 1 0 0.1216 0.2655 0.4429 0.6729 1
Table 2
PSO-generated 8-point locally D-optimal exact designs for the Michaelis-Menten model with autocorrelated errors on the design interval $[0,1]$.
Parameters ${\xi _{8}^{\ast }}$
$\lambda =0.1$; $b=0.5$ 0.0000 0.0416 0.0963 0.1728 0.2910 0.4961 0.7608 1.0000
1.0 0.0000 0.0662 0.1463 0.2454 0.3720 0.5412 0.7683 1.0000
1.5 0.0000 0.0826 0.1770 0.2862 0.4152 0.5729 0.7732 1.0000
2.0 0.0000 0.0942 0.1976 0.3124 0.4425 0.5946 0.7807 1.0000
2.5 0.0000 0.1030 0.2126 0.3311 0.4617 0.6106 0.7881 1.0000
$\lambda =0.5$; $b=0.5$ 0.0000 0.0322 0.0725 0.1255 0.2013 0.3288 0.6296 1.0000
1.0 0.0000 0.0499 0.1095 0.1825 0.2763 0.4055 0.6091 1.0000
1.5 0.0000 0.0644 0.1392 0.2283 0.3376 0.4783 0.6753 1.0000
2.0 0.0000 0.0754 0.1614 0.2614 0.3805 0.5272 0.7190 1.0000
2.5 0.0000 0.0839 0.1783 0.2859 0.4110 0.5604 0.7470 1.0000
$\lambda =0.9$; $b=0.5$ 0.0000 0.0314 0.0707 0.1222 0.1956 0.3185 0.6043 1.0000
1.0 0.0000 0.0481 0.1052 0.1751 0.2645 0.3872 0.5808 1.0000
1.5 0.0000 0.0623 0.1348 0.2212 0.3276 0.4653 0.6606 1.0000
2.0 0.0000 0.0734 0.1573 0.2554 0.3727 0.5186 0.7117 1.0000
2.5 0.0000 0.0818 0.1743 0.2804 0.4046 0.5544 0.7432 1.0000
Table 3
PSO-generated 9-point locally D-optimal exact designs for the Michaelis-Menten model with autocorrelated errors on the design interval $[0,1]$.
Parameters ${\xi _{9}^{\ast }}$
$\lambda =0.1$; $b=0.5$ 0.0000 0.0360 0.0814 0.1415 0.2262 0.3572 0.5655 0.7931 1.0000
1.0 0.0000 0.0575 0.1252 0.2064 0.3059 0.4314 0.5950 0.7998 1.0000
1.5 0.0000 0.0719 0.1526 0.2440 0.3489 0.4720 0.6206 0.8033 1.0000
2.0 0.0000 0.0822 0.1713 0.2686 0.3763 0.4979 0.6392 0.8087 1.0000
2.5 0.0000 0.0900 0.1849 0.2861 0.3955 0.5161 0.6532 0.8144 1.0000
$\lambda =0.5$; $b=0.5$ 0.0000 0.0277 0.0613 0.1034 0.1590 0.2391 0.3761 0.6676 1.0000
1.0 0.0000 0.0434 0.0938 0.1538 0.2272 0.3210 0.4495 0.6497 1.0000
1.5 0.0000 0.0558 0.1193 0.1928 0.2798 0.3858 0.5209 0.7069 1.0000
2.0 0.0000 0.0654 0.1388 0.2221 0.3182 0.4317 0.5700 0.7479 1.0000
2.5 0.0000 0.0729 0.1536 0.2437 0.3459 0.4637 0.6029 0.7741 1.0000
$\lambda =0.9$; $b=0.5$ 0.0000 0.0271 0.0598 0.1008 0.1548 0.2324 0.3647 0.6435 1.0000
1.0 0.0000 0.0417 0.0900 0.1473 0.2172 0.3061 0.4276 0.6174 1.0000
1.5 0.0000 0.0540 0.1155 0.1868 0.2713 0.3745 0.5071 0.6918 1.0000
2.0 0.0000 0.0637 0.1352 0.2167 0.3111 0.4233 0.5613 0.7408 1.0000
2.5 0.0000 0.0710 0.1499 0.2386 0.3397 0.4570 0.5970 0.7705 1.0000
Table 4
PSO-generated 10-point locally D-optimal exact designs for the Michaelis-Menten model with autocorrelated errors on the design interval $[0,1]$.
Parameters ${\xi _{10}^{\ast }}$
$\lambda =0.1$; $b=0.5$ 0.0000 0.0316 0.0704 0.1195 0.1849 0.2775 0.4187 0.6172 0.8168 1.0000
1.0 0.0000 0.0507 0.1093 0.1779 0.2595 0.3586 0.4821 0.6390 0.8235 1.0000
1.5 0.0000 0.0636 0.1340 0.2124 0.3008 0.4016 0.5190 0.6592 0.8262 1.0000
2.0 0.0000 0.0729 0.1511 0.2355 0.3275 0.4291 0.5434 0.6754 0.8305 1.0000
2.5 0.0000 0.0798 0.1634 0.2517 0.3459 0.4478 0.5602 0.6874 0.8349 1.0000
$\lambda =0.5$; $b=0.5$ 0.0000 0.0245 0.0535 0.0886 0.1328 0.1916 0.2774 0.4272 0.7008 1.0000
1.0 0.0000 0.0383 0.0821 0.1330 0.1932 0.2667 0.3603 0.4881 0.6842 1.0000
1.5 0.0000 0.0493 0.1046 0.1673 0.2396 0.3247 0.4276 0.5575 0.7335 1.0000
2.0 0.0000 0.0578 0.1218 0.1932 0.2738 0.3664 0.4748 0.6056 0.7716 1.0000
2.5 0.0000 0.0644 0.1349 0.2125 0.2988 0.3961 0.5073 0.6376 0.7959 1.0000
$\lambda =0.9$; $b=0.5$ 0.0000 0.0239 0.0521 0.0863 0.1292 0.1861 0.2690 0.4128 0.6771 1.0000
1.0 0.0000 0.0368 0.0788 0.1274 0.1847 0.2545 0.3430 0.4633 0.6493 1.0000
1.5 0.0000 0.0477 0.1012 0.1620 0.2320 0.3146 0.4150 0.5426 0.7179 1.0000
2.0 0.0000 0.0562 0.1185 0.1882 0.2673 0.3585 0.4659 0.5968 0.7646 1.0000
2.5 0.0000 0.0627 0.1315 0.2076 0.2928 0.3893 0.5005 0.6319 0.7924 1.0000

3.1 PSO-Generated Locally D-Optimal Exact Designs for Other Correlation Structures

In this section, we use PSO to find the 2-, 3- and 4-point D-optimal exact designs for the nonlinear Michaelis-Menten model on the domain $t\in [0,1]$ under various correlation structures. Following [38], we find optimal exact designs when the observations have one of the following correlation structures:
Exponential function:
\[ C({t_{i}},{t_{j}},\lambda )=\exp \left\{-\lambda |{t_{i}}-{t_{j}}|\right\},\hspace{2.5pt}\lambda \gt 0,\]
Triangular function:
\[ C({t_{i}},{t_{j}},\lambda )=\max \{0,1-\lambda |{t_{i}}-{t_{j}}|\},\hspace{2.5pt}\lambda \gt 0,\]
Gaussian function:
\[ C({t_{i}},{t_{j}},\lambda )=\exp \left\{-\lambda {({t_{i}}-{t_{j}})^{2}}\right\},\hspace{2.5pt}\lambda \gt 0,\]
Rational function:
\[ C({t_{i}},{t_{j}},\lambda )={\left(1+\lambda |{t_{i}}-{t_{j}}|\right)^{-1/2}},\hspace{2.5pt}\lambda \gt 0.\]
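In code, these four kernels are direct one-liners. The sketch below (our function names) returns the correlation between observations at times $t_i$ and $t_j$:

```python
import math

# Direct transcriptions of the four correlation functions above.
def exponential(ti, tj, lam):
    return math.exp(-lam * abs(ti - tj))

def triangular(ti, tj, lam):
    return max(0.0, 1.0 - lam * abs(ti - tj))

def gaussian(ti, tj, lam):
    return math.exp(-lam * (ti - tj) ** 2)

def rational(ti, tj, lam):
    return (1.0 + lam * abs(ti - tj)) ** -0.5
```

Each kernel equals 1 at $t_i = t_j$ and decays with the time gap; the Triangular kernel is the only one that reaches exactly 0, at gaps of $1/\lambda$ or more.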
Table 5 shows the resulting D-optimal exact designs under the Exponential correlation structure with $\lambda \in \{1,2,5\}$. For all cases, the upper limit of the design space is a support point of the D-optimal design. When λ is small and we are looking for a 3- or 4-point D-optimal exact design, we observe that the smallest support point is at the lower limit of the design space; it becomes larger when $\lambda =5$. In addition, we notice that for each value of λ, the values of the middle support points increase as b increases.
Table 5
PSO-generated 2, 3 and 4-point locally D-optimal exact designs for the Michaelis-Menten model with Exponential correlation structure on the design interval $[0,1]$.
Parameters ${\xi _{2}^{\ast }}$ ${\xi _{3}^{\ast }}$ ${\xi _{4}^{\ast }}$
$\lambda =1.0$; $b=0.5$ 0.2735 1.0000 0.0000 0.1390 1.0000 0.0000 0.0802 0.2322 1.0000
1.0 0.3768 1.0000 0.0000 0.2283 1.0000 0.0000 0.1341 0.3599 1.0000
1.5 0.4308 1.0000 0.0000 0.2854 1.0000 0.0000 0.1712 0.4347 1.0000
2.0 0.4637 1.0000 0.0000 0.3238 1.0000 0.0000 0.1975 0.4814 1.0000
2.5 0.4859 1.0000 0.0000 0.3510 1.0000 0.0000 0.2169 0.5129 1.0000
$\lambda =2.0$; $b=0.5$ 0.2579 1.0000 0.0000 0.1526 1.0000 0.0000 0.0935 0.2843 1.0000
1.0 0.3497 1.0000 0.0000 0.2509 1.0000 0.0000 0.1541 0.4104 1.0000
1.5 0.3974 1.0000 0.0000 0.3088 1.0000 0.0000 0.1928 0.4737 1.0000
2.0 0.4267 1.0000 0.0000 0.3456 1.0000 0.0000 0.2189 0.5113 1.0000
2.5 0.4464 1.0000 0.0000 0.3707 1.0000 0.0000 0.2376 0.5362 1.0000
$\lambda =5.0$; $b=0.5$ 0.2502 1.0000 0.2060 0.5193 1.0000 0.1788 0.3800 0.7108 1.0000
1.0 0.3340 1.0000 0.2597 0.5433 1.0000 0.2250 0.4281 0.7068 1.0000
1.5 0.3761 1.0000 0.2882 0.5675 1.0000 0.2473 0.4502 0.7060 1.0000
2.0 0.4015 1.0000 0.3058 0.5834 1.0000 0.2609 0.4639 0.7085 1.0000
2.5 0.4184 1.0000 0.3178 0.5945 1.0000 0.2701 0.4734 0.7115 1.0000
Table 6 displays the resulting D-optimal exact designs when the errors have a Triangular correlation structure and the values of λ are 1, 2 and 5. For all cases, the upper limit of the design space is a support point of the D-optimal exact design. When λ is small, i.e. $\lambda =1$ or 2, and a 3- or 4-point D-optimal design is sought using PSO, we found that its smallest support point is at the lower limit of the design space. When $\lambda =5$, the D-optimal exact design is no longer supported at the lower limit of the design space, and for the 4-point design, the larger of the two middle points, interestingly, appears to be unaffected by the value of b.
Table 6
PSO-generated locally 2, 3 and 4-point D-optimal exact designs for the Michaelis-Menten model with the Triangular correlation structure on the design interval $[0,1]$.
Parameters ${\xi _{2}^{\ast }}$ ${\xi _{3}^{\ast }}$ ${\xi _{4}^{\ast }}$
$\lambda =1.0$; $b=0.5$ 0.2725 1.0000 0.0000 0.1340 1.0000 0.0000 0.0755 0.2136 1.0000
1.0 0.3820 1.0000 0.0000 0.2192 1.0000 0.0000 0.1265 0.3383 1.0000
1.5 0.4403 1.0000 0.0000 0.2753 1.0000 0.0000 0.1627 0.4166 1.0000
2.0 0.4760 1.0000 0.0000 0.3139 1.0000 0.0000 0.1888 0.4673 1.0000
2.5 0.5000 1.0000 0.0000 0.3417 1.0000 0.0000 0.2083 0.5019 1.0000
$\lambda =2.0$; $b=0.5$ 0.2500 1.0000 0.0000 0.1340 1.0000 0.0000 0.1054 0.6054 1.0000
1.0 0.3333 1.0000 0.2560 0.7560 1.0000 0.0000 0.1506 0.6506 1.0000
1.5 0.3750 1.0000 0.2798 0.7798 1.0000 0.0000 0.1742 0.6742 1.0000
2.0 0.4000 1.0000 0.2918 0.7918 1.0000 0.0000 0.1889 0.6889 1.0000
2.5 0.4167 1.0000 0.2987 0.7987 1.0000 0.0000 0.1987 0.6987 1.0000
$\lambda =5.0$; $b=0.5$ 0.2500 1.0000 0.1973 0.3973 1.0000 0.1838 0.3838 0.8000 1.0000
1.0 0.3333 1.0000 0.2659 0.4659 1.0000 0.2446 0.4446 0.8000 1.0000
1.5 0.3750 1.0000 0.3015 0.5015 1.0000 0.2770 0.4770 0.8000 1.0000
2.0 0.4000 1.0000 0.3233 0.5233 1.0000 0.2962 0.4962 0.8000 1.0000
2.5 0.4167 1.0000 0.3378 0.5378 1.0000 0.3101 0.5101 0.8000 1.0000
Table 7 shows the resulting D-optimal exact designs when the errors have a Gaussian correlation structure and the parameter values for λ are 7, 8 and 9. For all cases, the upper limit of the design space is a support point of the D-optimal exact design. When we sought a D-optimal exact design with 3 or 4 points, we found that PSO gave only a two-point design supported at the lower and upper limits of the design space. This suggests that the design requires replications at these two points.
Table 7
PSO-generated 2, 3 and 4-point locally D-optimal exact designs for the Michaelis-Menten model with the Gaussian correlation structure on the design interval $[0,1]$.
Parameters ${\xi _{2}^{\ast }}$ ${\xi _{3}^{\ast }}$ ${\xi _{4}^{\ast }}$
$\lambda =7.0$; $b=0.5$ 0.2503 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0007 1.0000
1.0 0.3352 1.0000 0.0000 0.0000 1.0000 0.0000 0.0002 0.0002 1.0000
1.5 0.3794 1.0000 0.0000 0.0000 1.0000 0.0000 0.0036 1.0000 1.0000
2.0 0.4071 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0002 1.0000
2.5 0.4263 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 1.0000 1.0000
$\lambda =8.0$; $b=0.5$ 0.2501 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0006 1.0000
1.0 0.3342 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0004 1.0000
1.5 0.3772 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 1.0000 1.0000
2.0 0.4038 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 1.0000 1.0000
2.5 0.4219 1.0000 0.3822 1.0000 1.0000 0.0000 0.0000 1.0000 1.0000
$\lambda =9.0$; $b=0.5$ 0.2500 1.0000 0.0000 0.0000 1.0000 0.0000 0.0003 0.0003 1.0000
1.0 0.3337 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0002 1.0000
1.5 0.3761 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.5868 1.0000
2.0 0.4020 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.5958 1.0000
2.5 0.4196 1.0000 0.4181 1.0000 1.0000 0.3920 0.9995 1.0000 1.0000
Table 8 shows the resulting D-optimal exact designs when the errors have a Rational correlation structure and the parameter values for λ are 1, 2 and 5. For all cases, the upper limit of the design space is a support point of the D-optimal design. We observe from the table that the smallest support point of a D-optimal exact design with 3 or 4 points is at the lower limit of the design space and that the middle support points become larger as the value of the parameter b increases.
Table 8
PSO-generated locally 2, 3 and 4-point D-optimal exact designs for the Michaelis-Menten model with the Rational correlation structure on the design interval $[0,1]$.
Parameters ${\xi _{2}^{\ast }}$ ${\xi _{3}^{\ast }}$ ${\xi _{4}^{\ast }}$
$\lambda =1.0$; $b=0.5$ 0.2821 1.0000 0.0000 0.1426 1.0000 0.0000 0.0824 0.2376 1.0000
1.0 0.3893 1.0000 0.0000 0.2341 1.0000 0.0000 0.1384 0.3650 1.0000
1.5 0.4444 1.0000 0.0000 0.2914 1.0000 0.0000 0.1766 0.4378 1.0000
2.0 0.4778 1.0000 0.0000 0.3294 1.0000 0.0000 0.2034 0.4828 1.0000
2.5 0.5000 1.0000 0.0000 0.3561 1.0000 0.0000 0.2230 0.5130 1.0000
$\lambda =2.0$; $b=0.5$ 0.2713 1.0000 0.0000 0.1553 1.0000 0.0000 0.0920 0.2637 1.0000
1.0 0.3709 1.0000 0.0000 0.2526 1.0000 0.0000 0.1548 0.3901 1.0000
1.5 0.4220 1.0000 0.0000 0.3096 1.0000 0.0000 0.1953 0.4562 1.0000
2.0 0.4531 1.0000 0.0000 0.3459 1.0000 0.0000 0.2226 0.4959 1.0000
2.5 0.4739 1.0000 0.0000 0.3707 1.0000 0.0000 0.2420 0.5222 1.0000
$\lambda =5.0$; $b=0.5$ 0.2605 1.0000 0.0000 0.1856 1.0000 0.0000 0.1184 0.3114 1.0000
1.0 0.3520 1.0000 0.0000 0.2854 1.0000 0.0000 0.1913 0.4259 1.0000
1.5 0.3986 1.0000 0.0000 0.3380 1.0000 0.0000 0.2326 0.4807 1.0000
2.0 0.4268 1.0000 0.0000 0.3700 1.0000 0.0000 0.2587 0.5127 1.0000
2.5 0.4457 1.0000 0.0000 0.3915 1.0000 0.0000 0.2766 0.5337 1.0000

3.2 PSO Variants and Their Performance Relative to PSO

PSO is a nature-inspired metaheuristic algorithm and, like all such algorithms, does not guarantee that it finds the global optimum; sometimes it does not perform well. For a well-known algorithm like PSO, many improvements have been made to the original version to enhance its performance. These modified PSO algorithms, called variants, aim to make the original PSO more effective in different ways, for example, by making it converge faster or by controlling more cleverly how particles that fly out of range are brought back to the region of interest. Some variants are tailored to special types of optimization problems, and many offer improved ways to tune the PSO parameters for accelerated convergence.
There are now probably 20–30 or more PSO variants, and it is good practice to compare the performance of PSO with some of them. We selected the following variants because they appear to be popular: Guaranteed Convergence PSO [30], Quantum PSO [24, 23], Locally Convergent Rotationally Invariant PSO [2] and Competitive Swarm Optimization [5]. For space considerations, we do not provide details of these variants and refer the interested reader to the cited references.
We compare these 5 PSO-based algorithms using different setups of the design problem for the Michaelis-Menten model. The model has one of the 5 types of correlation functions in Section 3.1, and we seek exact D-optimal designs with $N=4,12$ and 20. Each algorithm is run 100 times and, in each run, all algorithms start from the same initial swarm. Table 9 summarizes the D-criterion values obtained by the 5 algorithms over the 100 replications. All algorithms have similar performance, except that, for the Gaussian correlation function, the QPSO algorithm finds a better design.
Table 9
Performances of 5 PSO variants for finding locally D-optimal exact designs for the Michaelis-Menten model with various correlation structures.
$N=4$ $N=12$ $N=20$
Corr. Algorithm Min. Mean Max. Min. Mean Max. Min. Mean Max.
AR PSO −5.180 −5.180 −5.180 −5.076 −5.074 −5.074 −5.069 −5.069 −5.068
GCPSO −5.180 −5.180 −5.180 −5.076 −5.074 −5.074 −5.070 −5.069 −5.068
QPSO −5.180 −5.180 −5.180 −5.074 −5.074 −5.074 −5.069 −5.068 −5.068
LcRiPSO −5.180 −5.180 −5.180 −5.075 −5.074 −5.074 −5.070 −5.069 −5.068
CSO −5.180 −5.180 −5.180 −5.076 −5.074 −5.074 −5.071 −5.070 −5.069
EXP PSO −5.655 −5.655 −5.655 −5.542 −5.540 −5.540 −5.536 −5.534 −5.534
GCPSO −5.655 −5.655 −5.655 −5.542 −5.540 −5.540 −5.535 −5.534 −5.534
QPSO −5.655 −5.655 −5.655 −5.541 −5.540 −5.540 −5.535 −5.534 −5.534
LcRiPSO −5.655 −5.655 −5.655 −5.541 −5.540 −5.540 −5.536 −5.535 −5.534
CSO −5.655 −5.655 −5.655 −5.542 −5.541 −5.540 −5.537 −5.536 −5.535
TRI PSO −5.868 −5.868 −5.868 −5.772 −5.771 −5.771 −5.767 −5.766 −5.766
GCPSO −5.868 −5.868 −5.868 −5.772 −5.771 −5.771 −5.767 −5.766 −5.766
QPSO −5.868 −5.868 −5.868 −5.771 −5.771 −5.771 −5.766 −5.766 −5.766
LcRiPSO −5.868 −5.868 −5.868 −5.772 −5.771 −5.771 −5.767 −5.766 −5.766
CSO −5.868 −5.868 −5.868 −5.772 −5.771 −5.771 −5.768 −5.767 −5.766
GAU PSO −6.048 −6.047 −5.893 −4.906 −1.195 14.635 −4.470 −4.170 −4.101
GCPSO −6.048 −6.044 −5.846 −5.093 −1.395 14.351 −4.281 −4.109 −0.149
QPSO −6.048 −5.987 −5.293 −4.636 0.604 15.803 −4.320 −4.079 1.675
LcRiPSO −6.048 −6.046 −5.877 −4.904 −1.874 12.766 −4.334 −4.187 −2.603
CSO −6.048 −6.048 −6.048 −5.267 −4.695 12.133 −4.329 −4.236 −4.157
RAT PSO −4.370 −4.370 −4.370 −4.255 −4.253 −4.253 −4.248 −4.247 −4.246
GCPSO −4.370 −4.370 −4.370 −4.253 −4.253 −4.253 −4.248 −4.247 −4.246
QPSO −4.370 −4.370 −4.370 −4.254 −4.253 −4.253 −4.247 −4.247 −4.246
LcRiPSO −4.370 −4.370 −4.370 −4.254 −4.253 −4.253 −4.248 −4.247 −4.247
CSO −4.370 −4.370 −4.370 −4.255 −4.253 −4.253 −4.249 −4.248 −4.247
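For readers who want to experiment with the comparisons above, a minimal global-best PSO can be sketched as follows. The inertia weight and acceleration constants are common textbook defaults, not the tuned settings used in our runs, and the function name pso_minimize is ours.

```python
import numpy as np

def pso_minimize(f, lower, upper, n_particles=64, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over the box [lower, upper]^d."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, d))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                        # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)                   # keep in the box
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

To search for an N-point exact design, one would take d = N coordinates as the observation times and let f be the negative of the design criterion.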
Fedorov-type algorithms are commonly used to find optimal designs, and it is interesting to compare their performance with that of metaheuristic algorithms. To compare their performances, we fix, as an example, ${(b,\lambda )^{\top }}={(1.7,0.5)^{\top }}$ and apply the algorithms to find D-optimal designs with different sample sizes, say, $N=8,10$ and 20. Table 10 shows the results of comparing two PSO-type algorithms with two Fedorov exchange-type algorithms: Fed, Fedorov’s exchange algorithm [10], and mFed, the modified Fedorov exchange algorithm [6]. The two PSO-type algorithms are comparable and slightly outperform the two Fedorov-type algorithms in terms of the design criterion value $\log |M({\xi ^{\ast }},\theta )|$. For space considerations, we show only the results for the autoregressive correlation structure; results are similar for the other correlation structures.
Table 10
Performances of PSO Variants versus exchange algorithms for finding locally D-optimal exact designs for the Michaelis-Menten model with autoregressive correlation structure.
$N=8$ $N=10$ $N=20$
Corr. Algorithm Min. Median Max. Min. Median Max. Min. Median Max.
AR PSO −4.779 −4.773 −4.773 −4.495 −4.493 −4.492 −3.699 −3.698 −3.697
QPSO −4.774 −4.773 −4.773 −4.496 −4.493 −4.492 −3.699 −3.698 −3.697
Fed −5.026 −4.791 −4.773 −4.702 −4.524 −4.494 −3.811 −3.724 −3.699
m-Fed −4.808 −4.785 −4.773 −4.512 −4.502 −4.493 −3.704 −3.701 −3.697
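The flavor of a Fedorov exchange-type algorithm can be illustrated on a toy problem: a simple linear model with regressor $f(x)=(1,x)^{\top}$, independent errors, and a candidate grid on $[0,1]$. This is only a sketch of the exchange idea, with our own helper names; it is not the correlated-error implementation compared in Table 10.

```python
import numpy as np

def d_criterion(design, f):
    """log|X'X| for an exact design under independent errors."""
    X = np.stack([f(x) for x in design])
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

def fedorov_exchange(candidates, N, f, n_pass=20, seed=0):
    """Basic exchange: repeatedly swap a design point for any candidate
    that improves log|X'X|, until a full pass yields no improvement."""
    rng = np.random.default_rng(seed)
    design = list(rng.choice(candidates, size=N, replace=True))
    best = d_criterion(design, f)
    for _ in range(n_pass):
        improved = False
        for i in range(N):
            for c in candidates:
                trial = design.copy()
                trial[i] = c                      # exchange point i for c
                val = d_criterion(trial, f)
                if val > best + 1e-12:
                    design, best, improved = trial, val, True
        if not improved:
            break
    return design, best
```

On a grid over $[0,1]$ with 4 runs, the exchange pushes all points to the two endpoints, the known D-optimal support for the straight-line model.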

4 Optimal Exact Designs for Fitting a Growth Curve

A growth curve describes the change of an outcome over time. To estimate a growth curve model, we take multiple observations at different time points from the same subject and so the observed responses are correlated.
A generalized version of the Michaelis-Menten model to study the biomass W of an animal over time was introduced by [17]. At time t, the model is as follows:
(4.1)
\[ W(t;\theta )=\frac{{W_{0}}{K^{h}}+{W_{f}}{t^{h}}}{{K^{h}}+{t^{h}}},t\gt 0,\theta ={({W_{0}},{W_{f}},K,h)^{\top }}\]
where ${W_{0}}$ and ${W_{f}}$ are the zero- and infinite-time values of the biomass, respectively. If $K\gt 0$, it is the time when half-maximal growth is achieved. When $h=1$ and ${W_{0}}=0$, model (4.1) reduces to the usual Michaelis-Menten model.
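Model (4.1) is straightforward to transcribe; the sketch below evaluates it and illustrates two algebraic checks that follow directly from (4.1): $W(0;\theta )={W_{0}}$ and $W(K;\theta )=({W_{0}}+{W_{f}})/2$. The function name growth is ours.

```python
import numpy as np

def growth(t, W0, Wf, K, h):
    """Generalized Michaelis-Menten growth curve, model (4.1)."""
    t = np.asarray(t, dtype=float)
    return (W0 * K**h + Wf * t**h) / (K**h + t**h)
```

With $h=1$ and ${W_{0}}=0$ the function reduces to the usual Michaelis-Menten curve ${W_{f}}t/(K+t)$, as noted above.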
We consider the data set from [17] consisting of 17 weight (kg) records of one castrated male Percheron horse from birth to 220 weeks of age. Its growth pattern was described in [17] by the generalized Michaelis-Menten model with nominal values ${\theta _{0}}={({W_{0}},{W_{f}},K,h)^{\top }}={(85.50,731.00,56.70,1.39)^{\top }}$. Using ${\theta _{0}}$, a locally D-optimal exact design ${\xi _{D}^{\ast }}$ for estimating the parameters in (4.1) is found by maximizing the determinant of the Fisher information matrix. For model (4.1), the elements of the covariance matrix are ${\Sigma _{ij}}={\sigma ^{2}}\exp \{-\lambda |{t_{i}}-{t_{j}}|\}$ and
\[\begin{aligned}{}{f_{i}}\hspace{0.1667em}=\hspace{0.1667em}& \left[\frac{{K^{h}}}{{K^{h}}+{t_{i}^{h}}},\frac{{t_{i}^{h}}}{{K^{h}}+{t_{i}^{h}}},\frac{h{K^{h-1}}{W_{0}}}{{K^{h}}+{t_{i}^{h}}}-\frac{h{K^{h-1}}({K^{h}}{W_{0}}+{t_{i}^{h}}{W_{f}})}{{({K^{h}}+{t_{i}^{h}})^{2}}},\right.\\ {} & \frac{({K^{h}}{W_{0}}\log K+{t_{i}^{h}}{W_{f}}\log {t_{i}})}{{K^{h}}+{t_{i}^{h}}}\\ {} & -{\left.\frac{({K^{h}}\log K+{t_{i}^{h}}\log {t_{i}})({K^{h}}{W_{0}}+{t_{i}^{h}}{W_{f}})}{{({K^{h}}+{t_{i}^{h}})^{2}}}\right]^{\top }}.\end{aligned}\]

4.1 Single-Objective Optimal Designs

To fix ideas, we assume ${(\lambda ,{\sigma ^{2}})^{\top }}={(0.5,1.0)^{\top }}$ and suppose we are interested in finding D-optimal exact designs with sample sizes $N=4,10$ and 17. Table 11 displays the PSO-generated D-optimal exact designs found with 256 particles and 200 iterations. The logarithms of their D-criterion values are 10.919, 13.984 and 15.599, respectively.
Table 11
PSO-generated locally D-optimal exact designs for the horse example.
N ${\xi _{D}^{\ast }}$
4 0.00 19.69 74.63 220.00
10 0.00 6.21 17.10 25.59 46.38 73.65 86.38 101.81 209.47 220.00
17 0.04 5.72 13.23 19.82 24.47 31.20 39.45 50.83 65.11 77.33
91.57 101.87 113.10 159.22 204.19 211.12 220.00
Model (4.1) is quite common in animal growth modelling because it yields growth parameters that are functions of θ. In [17], interest was in estimating five growth parameters: two components of θ, namely ${W_{0}}$ and ${W_{f}}$, representing the initial and final weights of the animal, respectively, and three functions of θ defined below. The maximum growth rate is the slope at the point of inflection, where the animal grows the fastest, and is defined by
\[ \Delta {W_{max}}=\frac{dW}{dt}{\bigg|_{t={t^{\ast }}}}\hspace{2.5pt}\hspace{2.5pt}\text{where}\hspace{2.5pt}\hspace{2.5pt}\frac{{d^{2}}W}{d{t^{2}}}{\bigg|_{t={t^{\ast }}}}=0.\]
For $h\gt 1$, ${t^{\ast }}=K{\left(\frac{h-1}{h+1}\right)^{1/h}}$ and hence
\[ \Delta {W_{max}}(\theta )=\frac{h({W_{f}}-{W_{0}}){\left(\frac{h-1}{h+1}\right)^{1-1/h}}}{K{\left(\frac{2h}{h+1}\right)^{2}}}.\]
The average growth rate during postnatal life,
\[\begin{aligned}{}\langle \Delta W\rangle (\theta )& =\frac{1}{{W_{f}}-{W_{0}}}{\int _{{W_{0}}}^{{W_{f}}}}\frac{dW}{dt}\hspace{2.5pt}dW\\ {} & =\frac{1}{{W_{f}}-{W_{0}}}{\int _{0}^{\infty }}{\left(\frac{dW}{dt}\right)^{2}}dt\\ {} & =\frac{({W_{f}}-{W_{0}})({h^{2}}-1)\pi \csc (\pi /h)}{6K{h^{2}}}.\end{aligned}\]
It is also interesting to estimate the time at which 50% of the final weight is achieved, ${t_{50}}$. By substituting W with $0.5{W_{f}}$ on the left hand side of (4.1), we have
\[ {t_{50}}(\theta )={\left(\frac{{K^{h}}({W_{f}}-2{W_{0}})}{{W_{f}}}\right)^{1/h}}.\]
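The closed form for ${t_{50}}$ can be checked by substituting it back into (4.1); the short sketch below does so with the nominal horse values. The function names are ours.

```python
def t50(W0, Wf, K, h):
    """Time at which the growth curve reaches half the final weight Wf."""
    return (K**h * (Wf - 2 * W0) / Wf) ** (1 / h)

def growth(t, W0, Wf, K, h):
    """Generalized Michaelis-Menten growth curve, model (4.1)."""
    return (W0 * K**h + Wf * t**h) / (K**h + t**h)
```

At the nominal values $({W_{0}},{W_{f}},K,h)=(85.5,731,56.7,1.39)$ this gives ${t_{50}}\approx 46.8$ weeks, and the curve evaluated there equals $0.5{W_{f}}$ up to rounding.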
The design criterion for estimating a single model parameter is the asymptotic variance of the estimated parameter, and we seek an exact design ${\xi _{c}^{\ast }}$ that minimizes this criterion over all possible exact designs on the design space. The resulting optimal designs are c-optimal and solve
(4.2)
\[ \underset{\xi }{\min }\hspace{2.5pt}{c^{\top }}{M^{-1}}(\xi ,\theta )c,\]
where ${c^{\top }}={e_{1}^{\top }}=(1,0,0,0)$ for estimating ${W_{0}}$ and ${c^{\top }}={e_{2}^{\top }}=(0,1,0,0)$ for estimating ${W_{f}}$.
More generally, to estimate a function $g(\theta )$ of the model parameters, we find an exact design that minimizes the asymptotic variance of the estimated function. By the Delta method, this variance is proportional to
(4.3)
\[ \text{var}\left[g(\hat{\theta })\right]={\left[{\nabla _{\theta }}g(\theta )\right]^{\top }}{M^{-1}}(\xi ,\theta )\left[{\nabla _{\theta }}g(\theta )\right].\]
Hence, for estimating $\Delta {W_{max}}$, $\langle \Delta W\rangle $ and ${t_{50}}$, the required gradients in (4.3) are, respectively,
\[\begin{array}{r}\displaystyle \begin{aligned}{}& {\nabla _{\theta }}\left(\Delta {W_{max}}\right)=\frac{{h^{2}}-1}{4hK}{\left(\frac{h+1}{h-1}\right)^{1/h}}\cdot \\ {} & \hspace{2em}{\left[-1,1,-\frac{{W_{f}}-{W_{0}}}{K},\frac{{W_{f}}-{W_{0}}}{{h^{2}}}\left(h+\log \frac{h-1}{h+1}\right)\right]^{\top }},\end{aligned}\\ {} \displaystyle \begin{aligned}{}& {\nabla _{\theta }}\left(\langle \Delta W\rangle \right)=\frac{({h^{2}}-1)\pi }{6{h^{2}}K}\csc \left(\frac{\pi }{h}\right)\cdot \\ {} & \hspace{0.1667em}{\left[-1,1,-\frac{{W_{f}}\hspace{0.1667em}-\hspace{0.1667em}{W_{0}}}{K},\frac{{W_{f}}\hspace{0.1667em}-\hspace{0.1667em}{W_{0}}}{{h^{4}}({h^{2}}\hspace{0.1667em}-\hspace{0.1667em}1)}\left(2h\hspace{0.1667em}+\hspace{0.1667em}({h^{2}}\hspace{0.1667em}-\hspace{0.1667em}1)\pi \cot \left(\frac{\pi }{h}\right)\right)\right]^{\top }}\hspace{-0.1667em}\end{aligned}\end{array}\]
and
\[\begin{aligned}{}{\nabla _{\theta }}\left({t_{50}}\right)=& \left[-\frac{2{t_{50}}}{h({W_{f}}-2{W_{0}})},\frac{2{K^{h}}{W_{0}}{t_{50}^{1-h}}}{h{W_{f}^{2}}},\frac{{t_{50}}}{K},\right.\\ {} & {\left.\frac{{t_{50}}}{h}\log \left(\frac{K}{{t_{50}}}\right)\right]^{\top }}.\end{aligned}\]
Using ${\theta _{0}^{\top }}=({W_{0}},{W_{f}},K,h)=(85.50,731.00,56.70,1.39)$, we have ${\nabla _{{\theta _{0}}}}{\left(\Delta {W_{max}}\right)^{\top }}=(-7.03,7.03,-80.06,-993.65)$, ${\nabla _{{\theta _{0}}}}{\left(\langle \Delta W\rangle \right)^{\top }}=(-3.73,3.73,-42.43,253.32)$ and ${\nabla _{{\theta _{0}}}}{\left({t_{50}}\right)^{\top }}=(-0.12,0.01,0.83,6.46)$. The correlation structure is autoregressive with ${(\lambda ,{\sigma ^{2}})^{\top }}={(0.5,1.0)^{\top }}$.
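The reported gradient values can be checked numerically; for example, a central finite-difference gradient of the closed form for ${t_{50}}$ at ${\theta _{0}}$ agrees with ${\nabla _{{\theta _{0}}}}\left({t_{50}}\right)$ above to roughly two decimals. The helper fd_gradient is ours, for illustration only.

```python
import numpy as np

def t50(theta):
    """Closed form for t50 as a function of theta = (W0, Wf, K, h)."""
    W0, Wf, K, h = theta
    return (K**h * (Wf - 2 * W0) / Wf) ** (1 / h)

def fd_gradient(f, theta, rel_step=1e-6):
    """Central finite-difference gradient of a scalar function of theta."""
    theta = np.asarray(theta, dtype=float)
    g = np.zeros_like(theta)
    for i in range(theta.size):
        step = rel_step * max(abs(theta[i]), 1.0)
        tp, tm = theta.copy(), theta.copy()
        tp[i] += step
        tm[i] -= step
        g[i] = (f(tp) - f(tm)) / (2 * step)
    return g

theta0 = np.array([85.50, 731.00, 56.70, 1.39])
grad = fd_gradient(t50, theta0)  # close to the reported (-0.12, 0.01, 0.83, 6.46)
```

The same check applies to the gradients of $\Delta {W_{max}}$ and $\langle \Delta W\rangle $ once those functions are transcribed.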
We implemented PSO, with 256 particles and 200 iterations, to search for the exact c-optimal designs for estimating ${W_{0}}$, ${W_{f}}$, $\Delta {W_{max}}$, $\langle \Delta W\rangle $ and ${t_{50}}$. Table 12 displays the c-optimal exact designs for different sample sizes $N=4,10$ and 17.
Table 12
PSO-generated locally c-optimal exact designs for the horse example.
N ${\xi _{{W_{0}}}^{\ast }}$
4 0.00 45.18 45.18 148.60
10 0.00 1.78 3.96 6.81 26.76 33.37 94.72 104.66 112.02 220.00
17 0.01 1.44 3.02 5.30 7.70 29.58 35.65 39.87 90.73 95.92
105.07 127.63 127.93 133.01 193.43 209.98 215.55
N ${\xi _{{W_{f}}}^{\ast }}$
4 0.00 14.38 75.22 220.00
10 0.00 15.50 21.37 70.52 78.21 85.44 93.23 205.94 213.65 220.00
17 0.08 13.02 16.50 20.45 22.14 24.51 67.10 71.56 79.51 84.84
87.40 95.21 103.50 202.47 208.64 215.18 220.00
N ${\xi _{\Delta {W_{max}}}^{\ast }}$
4 0.00 41.70 50.95 220.00
10 0.00 2.30 5.40 27.51 33.13 38.42 43.83 49.89 56.57 216.66
17 0.05 1.75 4.10 6.78 9.49 23.56 27.49 30.98 33.43 37.40
41.51 45.79 48.45 54.31 59.58 70.93 196.03
N ${\xi _{\langle \Delta W\rangle }^{\ast }}$
4 3.03 63.77 73.70 220.00
10 0.00 3.16 6.42 10.10 49.62 56.88 63.05 69.34 77.32 218.88
17 0.12 4.31 9.71 12.37 39.24 45.39 50.81 56.63 60.76 63.16
68.32 77.52 83.38 92.41 195.56 207.90 217.80
N ${\xi _{{t_{50}}}^{\ast }}$
4 0.00 80.64 90.01 220.00
10 0.01 13.92 59.86 66.57 74.02 81.02 89.28 205.78 214.52 220.00
17 0.37 13.77 18.20 52.09 60.03 65.82 72.88 83.66 91.80 97.65
104.36 112.58 195.40 202.98 210.25 215.42 220.00

4.2 Multi-Objective Optimal Designs

Sometimes there are two or more objectives in the study. For example, we may want to estimate the individual parameters and functions of the model parameters simultaneously, such as, in our case, ${W_{0}}$, ${W_{f}}$, $\Delta {W_{max}}$, $\langle \Delta W\rangle $ and ${t_{50}}$. Let ${C^{\top }}=[{e_{1}};{e_{2}};{\nabla _{{\theta _{0}}}}\left(\Delta {W_{max}}\right);{\nabla _{{\theta _{0}}}}\left(\langle \Delta W\rangle \right);{\nabla _{{\theta _{0}}}}\left({t_{50}}\right)]$ and let ${\xi _{L}^{\ast }}$ be the $L$-optimal design that minimizes
\[ \text{tr}\left\{{C^{\top }}{M^{-1}}(\xi )C\right\}\]
over all designs on the design interval. For the horse example, using the same set of nominal values, Table 13 presents the PSO-generated L-optimal designs with $N=4,10$ and 17 observations.
Table 13
PSO-generated locally L-optimal exact designs for the horse example.
N ${\xi _{L}^{\ast }}$
4 0.00 34.38 73.62 220.00
10 0.01 2.75 6.66 25.40 33.40 43.65 53.72 61.71 71.08 220.00
17 0.07 3.08 6.27 7.86 22.47 30.73 35.35 38.24 45.10 50.29
58.74 69.75 83.15 95.63 173.97 212.07 217.16
Another approach is to construct a maximin design that maximizes the minimum efficiency of the design across the five criteria. Recall that if ${\xi _{c}^{\ast }}$ is the c-optimal design, the c-efficiency of a design ξ is given by
(4.4)
\[ {\text{Eff}_{c}}(\xi )=\frac{{c^{\top }}{M^{-1}}({\xi _{c}^{\ast }},\theta )c}{{c^{\top }}{M^{-1}}(\xi ,\theta )c},\]
and the minimization in (4.5) below is taken over all plausible values of θ in some given compact parameter space. Operationally, the maximin design is defined by
(4.5)
\[\begin{aligned}{}{\xi _{MM}^{\ast }}=\underset{\xi }{\arg \max }\hspace{2.5pt}\underset{\theta }{\min }\big\{& {\text{Eff}_{{W_{0}}}}(\xi ),{\text{Eff}_{{W_{f}}}}(\xi ),{\text{Eff}_{\Delta {W_{max}}}}(\xi ),\\ {} & {\text{Eff}_{\langle \Delta W\rangle }}(\xi ),{\text{Eff}_{{t_{50}}}}(\xi )\big\}.\end{aligned}\]
The search for the maximin design is a two-step task. First, for each value of $\theta \in \Theta $, where Θ is a known set containing all plausible values of θ, we use PSO to find the locally c-optimal designs ${\xi _{{W_{0}}}^{\ast }}$, ${\xi _{{W_{f}}}^{\ast }}$, ${\xi _{\Delta {W_{max}}}^{\ast }}$, ${\xi _{\langle \Delta W\rangle }^{\ast }}$ and ${\xi _{{t_{50}}}^{\ast }}$ under the five criteria. Table 12 presents these locally c-optimal designs for the horse example for the selected sample sizes $N=4,10$ and 17. These c-optimal designs are then used in (4.5) to compute the c-efficiencies before another PSO run searches for the optimal maximin design ${\xi _{MM}^{\ast }}$.
Table 14
PSO-generated maximin optimal exact designs for the horse example.
N ${\xi _{MM}^{\ast }}$
4 0.00 14.78 61.44 220.00
10 0.01 4.32 21.96 29.37 55.60 66.19 74.61 84.06 212.45 220.00
17 0.00 3.54 17.92 25.37 30.51 35.90 48.38 55.14 65.46 71.51
80.07 83.88 87.71 199.29 209.10 213.49 219.22
Table 14 displays the PSO-generated optimal maximin designs for the three sample sizes $N=4,10$ and 17 for the horse example. How do the optimal maximin designs compare with the locally optimal designs? A direct calculation shows that the c-efficiencies of the 4-point optimal maximin design for estimating ${W_{0}},{W_{f}},\Delta {W_{max}},\langle \Delta W\rangle $ and ${t_{50}}$ are $99.99\% $, $93.48\% $, $68.07\% $, $68.07\% $ and $78.23\% $, respectively. This 4-point design has higher c-efficiencies for estimating ${W_{0}}$ and ${W_{f}}$ and lower c-efficiencies for estimating $\Delta {W_{max}}$ and $\langle \Delta W\rangle $. As the number of support points of the exact optimal design increases, the estimation performance of the exact maximin design across the growth parameters becomes more balanced. The corresponding c-efficiency values of the 10-point optimal maximin design are $84.22\% $, $77.09\% $, $77.09\% $, $84.43\% $ and $77.23\% $, respectively, and those of the 17-point optimal design are $83.04\% $, $88.23\% $, $82.22\% $, $82.35\% $ and $83.79\% $, respectively.
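The maximin comparison amounts to ranking designs by their worst-case efficiency. A one-line helper, with our own naming, makes this concrete using the c-efficiencies quoted above for the horse example.

```python
def maximin_value(efficiencies):
    """Criterion value of a design under the maximin rule:
    its worst efficiency across the competing objectives."""
    return min(efficiencies)

# c-efficiencies (%) of the maximin designs reported for the horse example,
# ordered as (W0, Wf, dW_max, <dW>, t50)
eff_4pt = [99.99, 93.48, 68.07, 68.07, 78.23]
eff_10pt = [84.22, 77.09, 77.09, 84.43, 77.23]
eff_17pt = [83.04, 88.23, 82.22, 82.35, 83.79]
```

Here `maximin_value(eff_4pt)` is 68.07 while `maximin_value(eff_10pt)` is 77.09, reflecting the more balanced estimation performance as the sample size grows.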

5 Optimal Designs for Estimating Parameters in an HIV Dynamic Model

Human immunodeficiency viruses (HIV) are a group of retroviruses that cause infection resulting in progressive failure of the immune system. The presence of the virus is usually detected when CD4 cell counts are low, and monitoring the cell counts over time is key to understanding disease progression. Various statistical models have been proposed to model CD4 cell counts longitudinally. A common choice is a fixed-effects model defined on a time line in hours over a pre-specified time interval, say, $t\in T=[0,6\frac{11}{12}]=[0,6.917]$,
(5.1)
\[\begin{aligned}{}& E({Y_{j}}\mid {V_{0}},c,\delta ,{t_{j}})=\eta ({V_{0}},c,\delta ,{t_{j}})\\ {} & =\log {V_{0}}+\\ {} & \log \left[\frac{{c^{2}}}{{(c-\delta )^{2}}}{e^{-\delta {t_{j}}}}-\frac{{c^{2}}-{(c-\delta )^{2}}}{{(c-\delta )^{2}}}{e^{-c{t_{j}}}}-\frac{c\delta }{c-\delta }{t_{j}}{e^{-c{t_{j}}}}\right].\end{aligned}\]
The time interval is taken from Table 1 in [13], which reports HIV RNA copies per milliliter of plasma; time is measured from treatment initiation, and the predictor t is the time after the pharmacologic delay. Frequently, interest is in estimating only one or a few of the model parameters, since not every parameter has a meaningful biological interpretation. When all parameters are of interest, the D-criterion discussed previously is an appropriate design objective; when only a subset of the parameters is of interest, an appropriate design criterion is c-optimality. For the above model, we seek an optimal design to estimate ${\theta ^{\top }}=(\log {V_{0}},\log c,\log \delta )$ and optimal designs to best estimate each of its components. Additionally, we also find a maximin optimal design that maximizes the lowest of the three efficiencies for estimating the three individual parameters.
For such studies, there is usually a standard protocol recommended by physicians for implementation. The protocol specifies how many times and when to sample blood to determine CD4 counts and for other laboratory tests. However, the protocol is usually not based on statistical considerations. We now apply PSO to find various optimal designs and ascertain how efficient the protocol design is under various scenarios, including situations with multiple objectives and different emphases on each of the study objectives.
We compute various optimal exact designs and evaluate the efficiencies of the recommended protocol design, which serves as the baseline design for such a study. For a given number of points, a uniform design distributes them evenly on the time interval T. For example, such a design with 8 observations is ${\xi _{8}}=\{0.000,0.917,1.917,2.917,3.917,4.917,5.917,6.917\}$. A direct calculation shows that its Fisher information matrix is proportional to
(5.2)
\[ M(\xi ,\theta )=\frac{1}{N}{F^{\top }}F\]
where the ${(ij)^{th}}$ element of the matrix F is ${F_{ij}}=\partial \eta (\theta ,{t_{i}})/\partial {\theta _{j}}$ and $\eta (\theta ,t)$ is the mean response at time $t\in T$. For estimating all parameters in the model, D-optimality is commonly used; the criterion is the determinant of $M(\xi ,\theta )$. Because the information matrix contains unknown parameters, we replace θ by its nominal values before maximizing the determinant using PSO. Using the nominal values ${\theta ^{\top }}={(\log {V_{0}},\log c,\log \delta )^{\top }}=(11.0,1.1,-1.0)$, the resulting PSO-generated locally D-optimal design with 8 observations is ${\xi _{D}}=\left\{0.000,0.000,0.000,2.083,2.083,6.917,6.917,6.917\right\}$. In practice, the implemented design takes 3 replicates at time 0, 2 replicates at 2.08 and 3 replicates at 6.92.
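The computation behind ${\xi _{D}}$ and the efficiencies in Table 15 can be sketched as follows: the mean function (5.1) in terms of $\theta =(\log {V_{0}},\log c,\log \delta )$, the information matrix (5.2) with the Jacobian obtained by central finite differences (our shortcut; analytic derivatives work equally well), and the D-efficiency of the uniform design ${\xi _{8}}$ relative to ${\xi _{D}}$. The function names are ours, and the exact efficiency value depends on matching the paper's setup.

```python
import numpy as np

def eta(theta, t):
    """Mean log viral load (5.1) with theta = (log V0, log c, log delta)."""
    logV0, logc, logd = theta
    c, d = np.exp(logc), np.exp(logd)
    g = (c**2 / (c - d) ** 2 * np.exp(-d * t)
         - (c**2 - (c - d) ** 2) / (c - d) ** 2 * np.exp(-c * t)
         - c * d / (c - d) * t * np.exp(-c * t))
    return logV0 + np.log(g)

def info_matrix(theta, times, step=1e-6):
    """M = F'F / N as in (5.2), with F_ij = d eta / d theta_j at time t_i,
    approximated by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    F = np.zeros((len(times), 3))
    for j in range(3):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += step
        tm[j] -= step
        F[:, j] = [(eta(tp, t) - eta(tm, t)) / (2 * step) for t in times]
    return F.T @ F / len(times)

theta0 = (11.0, 1.1, -1.0)
xi_8 = [0.0, 0.917, 1.917, 2.917, 3.917, 4.917, 5.917, 6.917]
xi_D = [0.0, 0.0, 0.0, 2.083, 2.083, 6.917, 6.917, 6.917]
d_eff = (np.linalg.det(info_matrix(theta0, xi_8))
         / np.linalg.det(info_matrix(theta0, xi_D))) ** (1 / 3)
```

Note that $\eta (\theta ,0)=\log {V_{0}}$ exactly, since the bracketed term in (5.1) equals 1 at $t=0$; this gives a quick correctness check of the transcription.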
To estimate $\log c$ or $\log \delta $, we find an exact design that minimizes the asymptotic variance of the corresponding estimate. Setting ${c^{\top }}=(0,1,0)$ to estimate $\log c$ and ${c^{\top }}=(0,0,1)$ to estimate $\log \delta $ in (4.2), and using the given nominal values for the model parameters, the PSO-generated designs for estimating $\log c$ and $\log \delta $ are
\[ {\xi _{\log c}}\hspace{0.1667em}=\hspace{0.1667em}\{0.000,0.000,0.000,2.113,2.113,2.113,2.113,6.917\}\]
and
\[ {\xi _{\log \delta }}\hspace{0.1667em}=\hspace{0.1667em}\{0.000,1.923,1.923,1.923,1.923,6.917,6.917,6.917\}.\]
These designs can be implemented in practice in the same way as the PSO-generated 8-point D-optimal exact design. When there are multiple objectives in the study, one considers them simultaneously at the design stage. For example, when we wish to estimate θ, $\log c$ and $\log \delta $, we want a design that provides high efficiencies under all three criteria. One way is to use a maximin optimality criterion and find an exact design ${\xi _{MM}}$ that maximizes the minimal efficiency across the three criteria:
(5.3)
\[\begin{aligned}{}& \min \bigg\{{\bigg\{\frac{\det M(\xi ,\theta )}{\det M({\xi _{D}},\theta )}\bigg\}^{\frac{1}{3}}},\hspace{0.1667em}\\ {} & \frac{{\mathbf{e}_{2}^{\top }}{M^{-1}}({\xi _{\log c}},\theta ){\mathbf{e}_{2}}}{{\mathbf{e}_{2}^{\top }}{M^{-1}}(\xi ,\theta ){\mathbf{e}_{2}}},\hspace{0.1667em}\frac{{\mathbf{e}_{3}^{\top }}{M^{-1}}({\xi _{\log \delta }},\theta ){\mathbf{e}_{3}}}{{\mathbf{e}_{3}^{\top }}{M^{-1}}(\xi ,\theta ){\mathbf{e}_{3}}}\bigg\}.\end{aligned}\]
The first of the three terms in the outer curly brackets is the D-efficiency (D-eff) of the design ξ, and the last two terms are the c-efficiencies (c-eff) of ξ for estimating $\log c$ and $\log \delta $. In practice, designs with high efficiencies are sought. If the efficiency of a design ξ is 0.5, or $50\% $, the design ξ needs to be replicated twice to perform as well as the optimal design.
The search for the locally maximin optimal design is a two-step approach. First, we identify the three locally D- and c-optimal designs required to compute the efficiency values in (5.3). Second, we use PSO to search for the maximin optimal design that maximizes (5.3). In particular, because the criterion is the minimal value among the three efficiencies, at each iteration PSO must compute the three relative efficiencies of each candidate design and take the minimal value as the criterion value.
Algorithm 2
The PSO algorithm for searching maximin optimal designs.
For the given set of nominal values, Algorithm 2 produced the 8-point PSO-generated maximin design: ${\xi _{MM}}=\left\{0.000,0.000,1.847,1.847,1.847,1.849,6.917,6.917\right\}$, implying that the implemented design requires 2 replicates at 0, 4 replicates at 1.85 and 2 replicates at 6.92.
Table 15 displays the various relative efficiencies among the protocol design, ${\xi _{8}}$, the locally D-optimal design, ${\xi _{D}}$, the locally c-optimal designs, ${\xi _{\log c}}$ and ${\xi _{\log \delta }}$ and the locally maximin optimal design, ${\xi _{MM}}$.
The protocol design ${\xi _{8}}$ has low c-efficiencies for estimating $\log c$ and $\log \delta $, averaging about $45\% $, and a substantially higher D-efficiency of about $72\% $ for estimating all parameters in the model. The locally D-optimal design has at least 67% c-efficiency, which is acceptable for estimating the individual parameters $\log c$ and $\log \delta $. When only one of the two parameters $\log c$ or $\log \delta $ is of interest, the c-efficiency of the corresponding optimal design for estimating the other parameter is not high; Table 15 shows the values are $48.33\% $ and $54.25\% $, suggesting that these designs are not robust to mis-specification of the optimality criterion. This is in contrast to the maximin optimal design, which has at least $81.31\% $ efficiency across all three criteria. This suggests that when we are unsure at the onset which of the parameters are more interesting to estimate, it may be desirable to implement a maximin optimal design.
Table 15
Comparisons among the competing designs.
Criterion Design D-eff. (%) $\log c$-eff. (%) $\log \delta $-eff. (%)
Baseline ${\xi _{8}}$ 72.21 44.96 46.94
D-optimal ${\xi _{D}}$ 100.00 69.63 67.88
$\log c$-optimal ${\xi _{\log c}}$ 87.35 100.00 48.33
$\log \delta $-optimal ${\xi _{\log \delta }}$ 87.04 54.25 100.00
maximin optimal ${\xi _{MM}}$ 95.37 81.31 81.31

6 Conclusions

Optimal exact designs are rarely reported in the literature because they are harder to find and study analytically, even when the sample size is small. Finding them is essentially a number-theoretical problem where, invariably, solutions have to be derived for each problem and, frequently, for different values of N as well. Further, the analytical optimal design derived for one model is inapplicable to a slightly changed model, and the few algorithms for finding optimal exact designs are restrictive and usually apply to relatively simple linear models only; see, for example, [3].
This paper investigates PSO’s capability to generate a variety of exact designs for various biomedical nonlinear models, when errors in the nonlinear model may be correlated and there are one or more design criteria in the study. We implemented PSO in R code to find them; for example, Algorithm 1 generates the locally D-optimal designs for the Michaelis-Menten model in this paper.
In all cases, PSO was able to identify the best designs efficiently, and they agreed with the theoretical exact D-optimal designs when the latter were available for simpler setups. Unfortunately, there is no theoretical check of whether a PSO-generated exact design is truly optimal. One way to assess its validity is to compare the optimal exact design with the corresponding optimal approximate design, which can be found using standard algorithms and confirmed theoretically via an equivalence theorem. To this end, we also applied PSO to find the corresponding optimal approximate designs for the models studied in the paper, and we confirmed that the PSO-generated exact designs are close to the optimal approximate designs. For example, for the design problem discussed in the last section, the relative D-efficiency is 98.28% and the c-efficiencies for estimating $\log c$ and $\log \delta $ are $99.78\% $ and $94.29\% $, respectively.
Because PSO is a general optimization technique and requires little or no assumption on the problem for it to be applicable, we expect PSO should also perform well in finding other optimal exact designs for different types of models with various correlation structures, including high dimensional models. We also encourage further exploration and application of PSO to solve general optimization problems in statistics.

Acknowledgements

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

[1] 
AbdelAziz, A. M., Alarabi, L., Basalamah, S. and Hendawi, A. (2021). A multi-objective optimization method for hospital admission problem – a case study on COVID-19 patients. Algorithms 14(2) 38. https://doi.org/10.3390/a14020038.
[2] 
Bonyadi, M. R. and Michalewicz, Z. (2014). A locally convergent rotationally invariant particle swarm optimization algorithm. Swarm Intelligence 8(3) 159–198.
[3] 
Boon, J. E. (2007). Generating exact D-optimal designs for polynomial models. Spring Simulation Multiconference 2 121–126.
[4] 
Butler, G. and Wolkowicz, G. (1985). A mathematical model of the chemostat with a general class of functions describing nutrient uptake. SIAM Journal on Applied Mathematics 45(1) 138–151. https://doi.org/10.1137/0145006. MR0775486
[5] 
Cheng, R. and Jin, Y. (2015). A competitive swarm optimizer for large scale optimization. IEEE Transactions on Cybernetics 45(2) 191–204.
[6] 
Cook, R. D. and Nachtsheim, C. J. (1980). A comparison of algorithms for constructing exact D-optimal designs. Technometrics 22(3) 315–324. https://doi.org/10.2307/1267577. MR0653111
[7] 
Dette, H. and Kunert, J. (2014). Optimal designs for the Michaelis-Menten model with correlated observations. Statistics: A Journal of Theoretical and Applied Statistics 48(6) 1254–1267. https://doi.org/10.1080/02331888.2013.839680. MR3269733
[8] 
Eberhart, R. and Kennedy, J. (1995). A new optimizer using particle swarm theory. In Micro Machine and Human Science, 1995. MHS’95., Proceedings of the Sixth International Symposium on 39–43. IEEE.
[9] 
Falco, I. D., Cioppa, A. D., Scafuri, U. and Tarantino, E. (2020). Coronavirus COVID-19 spreading in Italy: optimizing an epidemiological model with dynamic social distancing through Differential Evolution. arXiv preprint arXiv:2004.00553v3.
[10] 
Fedorov, V. (1972). Theory of optimal design. Academic, New York. MR0403103
[11] 
Gomes, D. C. D. S. and Serra, G. L. D. O. (2021). Machine learning model for computational tracking forecasting the COVID-19 dynamic propagation. IEEE Journal of Biomedical and Health Informatics 25(3) 515–622.
[12] Haines, L. M. (1987). The application of the annealing algorithm to the construction of exact optimal designs for linear-regression models. Technometrics 29 439–447. MR2637962
[13] Han, C., Chaloner, K. and Perelson, A. S. (2002). Bayesian analysis of a population HIV dynamic model. In Case Studies in Bayesian Statistics (C. Gatsonis, R. E. Kass, A. Carriquiry, A. Gelman, D. Higdon, D. K. Pauler and I. Verdinelli, eds.), Lecture Notes in Statistics Vol. VI, 223–337. Springer. https://doi.org/10.1007/978-1-4612-2078-7_10. MR1959658
[14] He, S., Peng, Y. and Sun, K. (2020). SEIR modeling of the COVID-19 and its dynamics. Nonlinear Dynamics 101 1667–1680.
[15] Hosseini, E., Ghafoor, K. Z., Sadiq, A. S., Guizani, M. and Emrouznejad, A. (2020). COVID-19 optimizer algorithm, modeling and controlling of coronavirus distribution process. IEEE Journal of Biomedical and Health Informatics 24(10) 2765–2775.
[16] Lai, T. L., Choi, K. P., Tong, T. X. and Wong, W. K. (2021). A statistical approach to adaptive parameter tuning in nature-inspired optimization and optimal sequential design of dose-finding trials. Statistica Sinica. In press. https://doi.org/10.5705/ss.20. MR4338089
[17] Lopez, S., France, J., Gerrits, W., Dhanoa, M., Humphries, D. and Dijkstra, J. (2000). A generalized Michaelis-Menten equation for the analysis of growth. Journal of Animal Science 78(7) 1816–1828.
[18] Makade, R. G., Chakrabarti, S. and Jamil, B. (2020). Real time estimation and prediction of the mortality caused due to COVID-19 using particle swarm optimization and finding the most influential parameter. Infectious Disease Modelling 5 772–782.
[19] Maloney, J. and Heidel, J. (2003). An analysis of a fractal kinetics curve of Savageau. ANZIAM Journal 45 261–269. https://doi.org/10.1017/S1446181100013316. MR2017748
[20] Montepiedra, G., Myers, D. and Yeh, A. B. (1998). Application of genetic algorithms to the construction of exact D-optimal designs. Journal of Applied Statistics 25 817–826.
[21] Pinter, G., Felde, I., Mosavi, A., Ghamisi, P. and Gloaguen, R. (2020). COVID-19 pandemic prediction for Hungary; a hybrid machine learning approach. Mathematics 8 890.
[22] Singh, D., Kumar, V., Vaishali and Kaur, M. (2020). Classification of COVID-19 patients from chest CT images using multi-objective Differential Evolution-based convolutional neural networks. European Journal of Clinical Microbiology and Infectious Diseases 39 1379–1389.
[23] Sun, J., Feng, B. and Xu, W. (2004). Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation (CEC2004) 1 325–331. IEEE.
[24] Sun, J., Xu, W. and Feng, B. (2004). A global search strategy of quantum-behaved particle swarm optimization. In Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems 1 111–116. IEEE.
[25] Tian, Y., Yang, S. and Zhang, X. (2020). An evolutionary multiobjective optimization based fuzzy method for overlapping community detection. IEEE Transactions on Fuzzy Systems 28(11) 2841–2855.
[26] Tian, Y., Zhang, X., Wang, C. and Jin, Y. (2018). An evolutionary algorithm for large-scale sparse multiobjective optimization problems. IEEE Transactions on Evolutionary Computation 24(2) 380–393.
[27] Tian, Y., Su, X., Su, Y. and Zhang, X. (2020). EMODMI: a multi-objective optimization based method to identify disease modules. IEEE Transactions on Emerging Topics in Computational Intelligence.
[28] Tian, Y., Liu, R., Zhang, X., Ma, H., Tan, K. C. and Jin, Y. (2020). A multi-population evolutionary algorithm for solving large-scale multi-modal multi-objective optimization problems. IEEE Transactions on Evolutionary Computation. MR4361281
[29] Tong, T. X., Choi, K. P., Lai, T. L. and Wong, W. K. (2021). Stability bounds and almost sure convergence of improved particle swarm optimization methods. Research in the Mathematical Sciences. https://doi.org/10.1007/s40687-020-00241-4. MR4257863
[30] van den Bergh, F. and Engelbrecht, A. P. (2002). A new locally convergent particle swarm optimiser. In Proceedings of the 2002 IEEE International Conference on Systems, Man and Cybernetics, Vol. 3. IEEE.
[31] Whitacre, J. M. (2011). Recent trends indicate rapid growth of nature-inspired optimization in academia and industry. Computing 93 121–133. https://doi.org/10.1007/s00607-011-0154-z. MR2860177
[32] Whitacre, J. M. (2011). Survival of the flexible: explaining the recent popularity of nature-inspired optimization within a rapidly evolving world. Computing 93 135–146. https://doi.org/10.1007/s00607-011-0156-x. MR2860178
[33] Wu, M., Hu, W., Hu, J. and Yen, G. G. (2021). Adaptive multiobjective particle swarm optimization based on evolutionary state estimation. IEEE Transactions on Cybernetics 51(7) 3738–3751.
[34] Yang, X. S. (2010). Particle swarm optimization. In Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley and Sons.
[35] Yeap, B. Y., Catalano, P. J., Ryan, L. M. and Davidian, M. (2003). Robust two-stage approach to repeated measurements analysis of chronic ozone exposure in rats. Journal of Agricultural, Biological, and Environmental Statistics 8(4) 438–454.
[36] Yu, R. C. and Rappaport, S. M. (1996). Relation between pulmonary clearance and particle burden: a Michaelis-Menten-like kinetic model. Occupational and Environmental Medicine 53(8) 567–572.
[37] Yu, X. Z. and Gu, J. D. (2007). Differences in Michaelis-Menten kinetics for different cultivars of maize during cyanide removal. Ecotoxicology and Environmental Safety 67(2) 254–259.
[38] Zhigljavsky, A., Dette, H. and Pepelyshev, A. (2010). A new approach to optimal design for linear models with correlated observations. Journal of the American Statistical Association 105 1093–1103. https://doi.org/10.1198/jasa.2010.tm09467. MR2752605