1 Introduction

Generally, it is quite difficult to obtain complete data in social surveys (). Consequently, the available data are not only inefficient due to the reduced sample size, but also potentially biased due to systematic differences between respondents and non-respondents, making statistical inference invalid. Since Rubin (), multiple imputation has been regarded as the standard method of handling missing data (; ; ; ).

While the theoretical concept of multiple imputation has been around for decades, implementation is difficult because making random draws from the posterior distribution is a complicated matter. Accordingly, several computational algorithms are available in software (; ; ). The most traditional algorithm is Data Augmentation (DA), followed by two newer algorithms, Fully Conditional Specification (FCS) and Expectation-Maximization with Bootstrapping (EMB). Although an abundant literature exists on comparisons between joint modeling (DA, EMB) and conditional modeling (FCS), no comparisons have been made regarding the relative superiority of the MCMC algorithms (DA, FCS) and the non-MCMC algorithm (EMB), where MCMC stands for Markov chain Monte Carlo. This study assesses the effects of between-imputation iterations on the performance of the three multiple imputation algorithms, using Monte Carlo experiments.

By way of organization, Section 2 introduces the notations in this article. Section 3 gives a motivating example of missing data analysis in social sciences. Section 4 presents the assumptions of imputation methods. Section 5 shows the traditional methods of handling missing data. Section 6 introduces the three multiple imputation algorithms. Section 7 surveys the literature on multiple imputation. Section 8 gives the results of the Monte Carlo experiments, showing the impact of between-imputation iterations on multiple imputation. Section 9 concludes with the findings and the limitations in the current research.

2 Notations

D is an n × p data matrix, where n is the sample size and p is the number of variables. The distribution of D is multivariate-normal with mean vector μ and variance-covariance matrix Σ, i.e., D ~ Np(μ, Σ), where all of the variables are continuous. Let i refer to an observation index (i = 1, …, n) and j refer to a variable index (j = 1, …, p). Let D = {Y1, …, Yp}, where Yj is the j-th column in D and Y–j is the complement of Yj, i.e., all columns in D except Yj. Also, let Yobs be the observed data and Ymis be the missing data: D = {Yobs, Ymis}.

At the imputation stage, there is no concept of the dependent and independent variables, because imputation is not a causal model, but a predictive model (). Therefore, all of the variables are denoted Yj with the subscript j indexing a variable number. However, at the analysis stage, one of the Yj variables is the dependent variable and the remaining Y–j are the independent variables. If the dependent variable is the p-th column in D, then the dependent variable is simply denoted Y and the independent variables are denoted X1, …, Xp–1.

Let R be a response indicator matrix that has the same dimension as D. Whenever D is observed, R = 1; otherwise, R = 0. Note, however, that non-italicized R refers to the R statistical environment. In the multiple imputation context, M refers to the number of imputations and T refers to the number of between-imputation iterations. In general, θ is an unknown parameter vector.

3 Motivating Example: Missing Economic Data

Social scientists have long debated the determinants of economic development across countries (; ; ). Using the data from the Central Intelligence Agency () and Freedom House (), we may estimate a multiple regression model in which the dependent variable is GDP per capita and the independent variables include social, economic, and political variables. The problem is that the data are incomplete (Table 1): the median missing rate across the nine independent variables is 22.4%, and the total missing rate (the proportion of rows with at least one missing value) is 62.3%.

Table 1

Variables and Missing Rates.

Variables                                      Missing Rates

GDP per capita (purchasing power parity)       0.0%
Freedom House index                            15.4%
Central bank discount rate                     32.9%
Life expectancy at birth                       2.6%
Unemployment rate                              10.5%
Distribution of family income: Gini index      37.3%
Public debt                                    22.4%
Education expenditures                         24.6%
Taxes and other revenues                       6.1%
Military expenditures                          43.0%

Data sources: CIA () and Freedom House ().

Table 2 presents multiple regression models; however, the conclusions are sensitive to how we deal with missing data. The coefficients for the central bank discount rate and public debt are statistically significant at the 5% error level using incomplete data, while they are not significant using multiply-imputed data. Conversely, the coefficients for education expenditures and military expenditures are not significant using incomplete data, while they are significant using multiply-imputed data. Therefore, the issue of missing data is of grave concern in applied empirical research.

Table 2

Multiple Regression Analyses on GDP Per Capita.

                    Incomplete Data          Multiply-Imputed Data
Variables           Coef.      Std. Err.     Coef.       Std. Err.

Intercept           –7.323     3.953         –11.545*    3.495
Freedom             –0.321*    0.127         –0.362*     0.127
Central Bank        0.118*     0.041         –0.107      0.049
Life Expectancy     3.922*     0.794         4.908*      0.655
Unemployment        –0.205*    0.087         –0.214*     0.070
Gini                0.114      0.253         –0.018      0.363
Public Debt         0.198*     0.092         –0.002      0.093
Education           0.035      0.164         0.488*      0.154
Tax                 0.357*     0.174         0.471*      0.151
Military            0.123      0.085         0.299*      0.109
Number of obs.      86                       228

Note: *significant at the 5% error level. Coef. stands for coefficient. Std. Err. stands for standard error. Since the distributions of these variables are skewed to the right (log-normal), the variables are log-transformed to normalize the distributions.

4 Assumptions of Imputation Methods

Missing data analyses always involve assumptions (). In order to judge the appropriateness of missing data methods, it is vital to understand the assumptions behind them. Imputation involves the following four assumptions, which will play important roles in the simulation studies (Section 8).

4.1 Assumptions of Missing Data Mechanisms

There are three common assumptions of missing data mechanisms in the literature (; ; ). The first assumption is Missing Completely At Random (MCAR), which is Pr(R|D) = Pr(R). If respondents are selected to answer their income values by throwing dice, this is an example of MCAR. The second assumption is Missing At Random (MAR), which is Pr(R|D) = Pr(R|Yobs). If older respondents are more likely to refuse to answer their income values and if the ages of the respondents are available in the data, this is an example of MAR. The third assumption is Not Missing At Random (NMAR), which is Pr(R|D) ≠ Pr(R|Yobs). If respondents with higher values of incomes are more likely to refuse to answer their income values and if the other variables in the data cannot be used to predict which respondents have high amounts of income, this is an example of NMAR.
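To make these mechanisms concrete, the following minimal R sketch (with hypothetical variables age and income) imposes each of the three mechanisms on an income variable; it is an illustration only, not part of the simulation design in Section 8.

# Hypothetical illustration of MCAR, MAR, and NMAR for an income variable
set.seed(1)
n      <- 1000
age    <- rnorm(n, mean = 45, sd = 10)
income <- 2 + 0.05 * age + rnorm(n)             # income is related to age

# MCAR: missingness is unrelated to the data, Pr(R|D) = Pr(R)
income_mcar <- income
income_mcar[runif(n) < 0.3] <- NA

# MAR: missingness depends only on the observed variable age, Pr(R|D) = Pr(R|Yobs)
income_mar <- income
income_mar[age > quantile(age, 0.7) & runif(n) < 0.8] <- NA

# NMAR: missingness depends on the unobserved income values themselves
income_nmar <- income
income_nmar[income > quantile(income, 0.7) & runif(n) < 0.8] <- NA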

4.2 Assumption of Ignorability

Strictly speaking, the missing data mechanism is ignorable if both of the following conditions are satisfied: (1) the MAR condition; and (2) the distinctness condition, which stipulates that the parameters of the missing data mechanism are independent of the parameters of the data model ().

However, the MAR condition is said to be the more important of the two in real data applications (; ). Thus, for all practical purposes, MAR is treated as ignorable and NMAR as Non-Ignorable (NI). The current study assumes that the missing data mechanism is MAR and thus ignorable.

4.3 Assumption of Proper Imputation

Imputation is said to be Bayesianly proper if imputed values are independent realizations of Pr(Ymis|Yobs), which means that successive iterates of Ymis cannot be used because of the correlation between them (). Between-imputation convergence depends on a number of factors, but the fractions of missing information are among the most influential ones (; ).

van Buuren () introduces a slightly simplified version of proper imputation, which he calls confidence proper. Let θ̄ be the multiple imputation estimate, θ̂ be the estimate based on the hypothetically complete data, V̄ be the multiple imputation estimate of the sampling variance, and V̂ be the sampling variance estimate based on the hypothetically complete data. An imputation procedure is said to be confidence proper if all of the following three conditions are satisfied: (1) θ̄ is equal to θ̂ when averaged over the response indicators sampled under the assumed response model; (2) V̄ is equal to V̂ when averaged over the response indicators sampled under the assumed response model; and (3) the extra inferential uncertainty due to missingness is correctly reflected. In order to check whether an imputation method is confidence proper, van Buuren () recommends using bias, coverage, and confidence interval length as the evaluation criteria (see Section 8.2).

4.4 Assumption of Congeniality

Congeniality means that the imputation model is equal to the substantive analysis model. It is widely known that the imputation model can be larger than the substantive analysis model, but the imputation model cannot be smaller than the substantive analysis model (; ; ).

5 Traditional Methods of Handling Missing Data

This section introduces listwise deletion, deterministic single imputation, and stochastic single imputation, which are used as baseline methods for comparisons in Section 8.

Listwise deletion (LD), also known as complete-case analysis, throws away any rows that have at least one missing value (; ). Although it is simple and convenient, LD is less efficient due to the reduced sample size and may be biased if the assumption of MCAR does not hold ().

Deterministic single imputation (D-SI) replaces a missing value with a reasonable guess. The most straightforward version calculates predicted scores for missing values based on a regression model (; ). If the goal of analysis is to estimate the mean of an incomplete variable, D-SI produces an unbiased estimate under the assumptions of MCAR and MAR. However, D-SI tends to underestimate the variation in imputed data (). D-SI is available as R-function norm.predict in MICE (), where MICE stands for Multivariate Imputation by Chained Equations.

Stochastic single imputation (S-SI) also utilizes a regression model to predict missing values, but it adds random components drawn from the residual distribution to the imputed values (). S-SI is likely to recover the variation of an incomplete variable under the assumptions of MCAR and MAR, thus compensating for the disadvantage of D-SI (). S-SI is available as R-function norm.nob in MICE ().

However, both D-SI and S-SI tend to underestimate the standard error in imputed data because imputed values are treated as if they were real ().
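As an illustration, both single imputation methods can be obtained from MICE as follows; this is a minimal sketch assuming an incomplete data frame dat of continuous variables.

library(mice)

# D-SI: deterministic regression predictions for the missing values
imp_dsi <- mice(dat, method = "norm.predict", m = 1, maxit = 1, printFlag = FALSE)
dat_dsi <- complete(imp_dsi)

# S-SI: regression predictions plus random draws from the residual distribution
imp_ssi <- mice(dat, method = "norm.nob", m = 1, maxit = 1, printFlag = FALSE)
dat_ssi <- complete(imp_ssi)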

6 Competing Multiple Imputation Algorithms

Multiple imputation was made widely known by Rubin (), and a concise history can be found in Scheuren (). In theory, multiple imputation replaces a missing value by M simulated values (M > 1) independently and randomly drawn from the distribution of missing data. The variation among the M simulated values reflects the uncertainty about missing data, thus making the standard error valid. In practice, missing data are by definition unobserved; therefore, the distribution of missing data is also unobserved. Instead, under the assumption of MAR (or MCAR), multiple imputation constructs the posterior predictive distribution of missing data, conditional on observed data. Then, a random draw is independently made from this posterior distribution (; ; ).
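For reference, the M completed-data estimates are combined by the standard pooling rules; writing $\hat{\theta}_m$ and $\hat{V}_m$ for the estimate and its sampling variance from the m-th imputed data set (symbols used only in this display), the pooled estimate and its total variance are

$\bar{\theta} = \frac{1}{M}\sum_{m=1}^{M}\hat{\theta}_m, \qquad \bar{W} = \frac{1}{M}\sum_{m=1}^{M}\hat{V}_m, \qquad B = \frac{1}{M-1}\sum_{m=1}^{M}(\hat{\theta}_m - \bar{\theta})^2, \qquad V_{\mathrm{MI}} = \bar{W} + \Bigl(1 + \frac{1}{M}\Bigr)B,$

where the between-imputation variance B is how the uncertainty about missing data enters the multiple imputation standard error $\sqrt{V_{\mathrm{MI}}}$.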

However, it is not easy to make random draws of sufficient statistics from the posterior distribution analytically (; ). In order to solve this problem, three computational algorithms have been proposed in the literature.

6.1 Data Augmentation

The traditional algorithm of multiple imputation is the Data Augmentation (DA) algorithm, which is a Markov chain Monte Carlo (MCMC) technique (). DA improves parameter estimates by repeated substitution conditional on the preceding value, forming a stochastic process called a Markov chain ().

The DA algorithm works as follows (). Equation (1) is the imputation step that generates imputed values from the predictive distribution of missing values, given the observed values and the parameter values at iteration t. Equation (2) is the posterior step that generates parameter values from the posterior distribution, given the observed values and the imputed values at iteration t + 1.

(1)
$Y_{\mathrm{mis}}^{(t+1)} \sim \Pr(Y_{\mathrm{mis}} \mid Y_{\mathrm{obs}}, \theta^{(t)})$
(2)
$\theta^{(t+1)} \sim \Pr(\theta \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}^{(t+1)})$

These two steps are repeated T times until convergence is attained. The convergence of MCMC is stochastic because it converges to a probability distribution rather than to a point (). Therefore, it is hard to judge convergence in MCMC.

There are two ways of generating multiple imputations by DA (; ). In the first method, a single chain is run for M × T iterations, and the values of Ymis are taken from every T-th iteration. In the second method, M parallel chains of length T are run, and the final values of Ymis from the M chains are taken as the imputations. The current study adopts the second method.
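As a simplified hand-rolled illustration of the second (parallel-chain) method, rather than the actual NORM2 implementation, the following R sketch runs M independent DA chains for a pair of variables in which y1 is complete and y2 is partially missing; the I-step draws the missing y2 from the regression of y2 on y1 at the current parameter values, and the P-step draws new regression parameters from their posterior given the completed data.

# Illustrative parallel-chain DA for bivariate data (y1 complete, y2 partially missing)
da_imputations <- function(y1, y2, M = 20, n_iter = 100) {   # n_iter plays the role of T
  mis <- is.na(y2)
  n   <- length(y1)
  X   <- cbind(1, y1)
  imputations <- vector("list", M)
  for (m in 1:M) {
    fit   <- lm(y2 ~ y1, subset = !mis)       # starting values from complete cases
    beta  <- coef(fit)
    sigma <- summary(fit)$sigma
    y2m   <- y2
    for (t in 1:n_iter) {
      # I-step: draw the missing y2 given the current parameter values
      y2m[mis] <- rnorm(sum(mis), beta[1] + beta[2] * y1[mis], sigma)
      # P-step: draw sigma and beta from their posterior given the completed data
      XtXi  <- solve(t(X) %*% X)
      bhat  <- drop(XtXi %*% t(X) %*% y2m)
      sigma <- sqrt(sum((y2m - X %*% bhat)^2) / rchisq(1, df = n - 2))
      beta  <- MASS::mvrnorm(1, bhat, sigma^2 * XtXi)
    }
    imputations[[m]] <- y2m                   # final draw of the m-th chain
  }
  imputations
}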

The software using this algorithm is R-Package NORM2, which was originally developed by Schafer () and is currently maintained by Schafer ().

6.2 Fully Conditional Specification

An alternative algorithm to DA is the Fully Conditional Specification (FCS) algorithm, which specifies the multivariate distribution by way of a series of conditional densities, through which missing values are imputed given the other variables ().

The FCS algorithm works as follows (; ; ). Equation (3) draws the unknown parameters of the imputation model for variable Yj, given the observed values of Yj and the most recent imputations of the other variables, Ỹ–j(t) = (Ỹ1(t), …, Ỹj–1(t), Ỹj+1(t–1), …, Ỹp(t–1)), where the tilde denotes a random draw. Equation (4) draws imputations for Yj, given the observed values of Yj, the current imputations of the other variables, and the drawn parameter values. These two steps are repeated for j = 1, …, p.

(3)
$\tilde{\theta}_j^{(t)} \sim \Pr(\theta_j \mid Y_{j,\mathrm{obs}}, \tilde{Y}_{-j}^{(t)})$
(4)
$\tilde{Y}_j^{(t)} \sim \Pr(Y_{j,\mathrm{mis}} \mid Y_{j,\mathrm{obs}}, \tilde{Y}_{-j}^{(t)}, \tilde{\theta}_j^{(t)})$

The entire process is repeated for t = 1, …, T until convergence is attained. FCS can be considered an MCMC method because it is a Gibbs sampler when the conditionals are compatible (; ). This means that the convergence of FCS is stochastic. Therefore, it is hard to judge convergence in FCS.

The software using this algorithm is R-Package MICE (), which stands for Multivariate Imputation by Chained Equations and is currently maintained by van Buuren et al. (). The FCS algorithm is also known as Sequential Regression Multivariate Imputation ().
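In practice, FCS multiple imputation can be run as in the following sketch (assuming an incomplete data frame dat and a substantive model with hypothetical variables y, x1, and x2), where the argument m plays the role of M and maxit plays the role of T.

library(mice)

# FCS multiple imputation: M = 20 imputations, T = 10 iterations per chain
imp <- mice(dat, m = 20, maxit = 10, method = "norm", printFlag = FALSE)

# fit the substantive model to each completed data set and pool the results
fit    <- with(imp, lm(y ~ x1 + x2))
pooled <- pool(fit)
summary(pooled)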

6.3 Expectation-Maximization with Bootstrapping

Another emerging algorithm is the Expectation-Maximization with Bootstrapping (EMB) algorithm, which combines the Expectation-Maximization (EM) algorithm with the nonparametric bootstrap to create multiple imputation ().

The EMB algorithm works as follows (; ). Suppose that a random sample of size n is drawn from a population, where some values are missing in the sample. Bootstrap resamples of size n are randomly drawn from the sample data with replacement M times (; ). The variation among the M resamples represents the estimation uncertainty. The EM algorithm is applied to each of these M bootstrap resamples to obtain M point estimates of the parameter θ. Equation (5) is the expectation step, which calculates the Q-function by averaging the complete-data log-likelihood over the predictive distribution of missing data. Equation (6) is the maximization step, which finds the parameter values at iteration t + 1 by maximizing the Q-function.

(5)
$Q(\theta \mid \theta^{(t)}) = \int l(\theta \mid Y) \Pr(Y_{\mathrm{mis}} \mid Y_{\mathrm{obs}}, \theta^{(t)}) \, dY_{\mathrm{mis}}$
(6)
$\theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)})$

These two steps are repeated until convergence is attained, where the converged value is a Maximum Likelihood Estimate (MLE) under well-behaved conditions (; ). The convergence of EM is deterministic because it converges to a point in the parameter space (). Therefore, it is straightforward to judge convergence in EM. Substituting the MLEs from the bootstrap resamples is asymptotically equivalent to sampling from the posterior distribution ().

The software using this algorithm is R-Package AMELIA II (), which was originally developed by King et al. () and is currently maintained by Honaker, King, and Blackwell ().
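A minimal usage sketch with AMELIA II (assuming an incomplete data frame dat of continuous variables) is shown below; no between-imputation iterations are specified because the EMB algorithm does not require them.

library(Amelia)

# EMB multiple imputation: M = 20 bootstrap-plus-EM draws
a.out <- amelia(dat, m = 20)

# a.out$imputations is a list of 20 completed data frames,
# each of which is analyzed separately and then pooled by the usual combining rules
str(a.out$imputations, max.level = 1)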

6.4 Relationships among the Three Algorithms

The three algorithms share certain characteristics, but they are not exactly the same, as summarized in Table 3.

Table 3

Relations among DA, EMB, and FCS.

            Joint Modeling    Conditional Modeling

MCMC        DA                FCS
Non-MCMC    EMB

DA and EMB are joint modeling while FCS is conditional modeling (). Joint modeling specifies a multivariate distribution of missing data while conditional modeling specifies a univariate distribution on a variable-by-variable basis (). Conditional modeling is more flexible and joint modeling is computationally more efficient (; ).

DA and FCS are different versions of MCMC techniques, whereas EMB is not an MCMC technique. It is said that DA and FCS require between-imputation iterations to be confidence proper (; ), while EMB does not need iterations to be confidence proper (). However, as will become clear in Section 7, whether EMB is indeed confidence proper in situations where DA and FCS are improper is an open question that has not been tested in the literature.

7 Comparative Studies on Multiple Imputation in the Literature

Table 4 presents the literature that compared imputation methods. Nine studies compared multiple imputation with other missing data methods, such as listwise deletion, single imputation, and maximum likelihood. Among these nine studies, four studies focused on DA (; ; ; ), four studies on FCS (; ; ; ), and one study on an unknown algorithm ().

Table 4

Summary of the 20 Studies on Multiple Imputation.

Authors | MI Algorithms | Sample Size | Number of Variables | Number of Imputations | Number of Iterations | Missing Rate

Barnard and Rubin () | DA | 10, 20, 30 | 2 | 3, 5, 10 | Unknown | 10%, 20%, 30%
Horton and Lipsitz () | DA, FCS | 10000 | 3 | 10 | 200 | 50%
Schafer and Graham () | DA | 50 | 2 | 20 | Unknown | 73%
Donders et al. () | FCS | 500 | 2 | 10 | Unknown | 40%
Abe and Iwasaki () | DA | 100 | 4 | 5 | 100 | 20%, 30%
Horton and Kleinman () | DA, EMB, FCS | 13377 | 4 | 10 | 105 | 41%
Stuart et al. () | FCS | 9186 | 400 | 10 | 10 | 18%
Lee and Carlin () | DA, FCS | 1000 | 8 | 20 | 10 | 33%
Leite and Beretvas () | DA | 400 | 10 | 10 | Unknown | 10%, 30%, 50%
Hardt, Herke, and Leonhart () | DA, EMB, FCS | 50, 100, 200 | 3, 13, 23, 43, 83 | 20 | Unknown | 20%, 50%
Lee and Carlin () | DA | 1000 | 8 | 20 | Unknown | 10%, 25%, 50%, 75%, 90%
Cranmer and Gill () | EMB, MHD | 500 | 5 | Unknown | NA | 20%, 50%, 80%
Cheema () | FCS | 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000 | 4 | Unknown | Unknown | 1%, 2%, 5%, 10%, 20%
Kropko et al. () | DA, EMB, FCS | 1000 | 8 | 5 | 30 | 25%
Shara et al. () | Unknown | 2246 | 8 | Unknown | Unknown | 20%, 30%, 40%
Deng et al. () | FCS | 100 | 200, 1000 | 10 | 20 | 40%
von Hippel () | DA | 25, 100 | 2 | 5 | Unknown | 50%
Hughes, Sterne, and Tilling () | Unknown | 100, 1000 | 5 | 50 | Unknown | 40%, 60%
McNeish () | DA, FCS | 20, 50, 100, 250 | 4 | 5, 25, 100 | Unknown | 10%, 20%, 30%, 50%

Note: DA stands for Data Augmentation, EMis for Expectation-Maximization with Importance Sampling, FCS for Fully Conditional Specification, EMB for Expectation-Maximization with Bootstrapping, and MHD for Multiple Hot Deck. Unknown means that information is unavailable. NA means Not-Applicable.

Four studies investigated specialized situations for multiple imputation, such as small-sample degrees of freedom in DA (), Likert-scale data in DA (), non-parametric multiple imputation (), and variance estimators ().

Seven studies compared different multiple imputation algorithms (; Horton and Lipsitz 2002; ; ; ; ; ). The comparative perspective in most of the seven studies, except King et al. (), is based on the difference between joint modeling and conditional modeling. Thus, the perspective from MCMC vs. non-MCMC is generally lacking in the literature.

Ten studies did not explicitly state the number of iterations T. Furthermore, Horton and Kleinman () used the default setting in software for T, and the information for Kropko et al. () can only be found in their computer code, not in the article.

Thus, no studies in Table 4 have systematically investigated the effects of convergence on the three multiple imputation algorithms.

8 Monte Carlo Simulation

Section 4 introduced MAR, proper imputation, and congeniality as crucial assumptions. To make the assumptions of MAR and congeniality realistic, an inclusive analysis strategy is recommended in the literature (; ), which includes any auxiliary variables that can increase the predictive power of the imputation model or that may be related to the missing data mechanism. What complicates the matter, however, is that auxiliary variables themselves are often incomplete. This creates a dilemma in multiple imputation. Including many auxiliary variables makes it more likely that MAR and congeniality are satisfied, but including many incomplete variables leads to a higher total missing rate, which in turn makes convergence in MCMC more difficult to attain.

When assumptions do not hold in statistical methods, analytical mathematics often does not provide answers about the properties of the methods (). Monte Carlo simulation converts the computer into an experimental laboratory, where the researcher can control various conditions in the environment and observe the outcomes (). Thus, Monte Carlo simulation is a powerful method of assessing the performance of statistical methods under various settings, especially when assumptions are violated.

8.1 Monte Carlo Simulation Designs

The current study prepares two versions of simulation data, (1) theoretical and (2) realistic. Auxiliary variables X are generated by R-Function mvrnorm. All of the computations are done in R version 3.2.4. The computer used in the current study is HP Z440 Workstation (Windows 7 Professional, processor: Intel Xeon CPU E5-1603 v3), with the processor speed of 2.80 GHz and the memory (RAM) of 32.0 GB under the 64 bit operating system. The number of Monte Carlo simulation runs is set to 1000.

The first setting is theoretical. The number of observations is 1000, which is equivalent to the 75th percentile of the sample sizes found in the studies listed in Table 4. The number of variables p is changed from 2, 3, 4, 5, 6, 7, 8, 9, to 10, which is equivalent to the 70th percentile of the number of variables found in the studies listed in Table 4. Note that in another simulation run, not reported here, p was changed to 20, and the conclusions were similar. As was assumed in Section 2, auxiliary variables xj are multivariate-normal with the mean of 0 and the standard deviation of 1, i.e., X ~ Np–1(0, 1), where the number of auxiliary variables is p – 1. The correlation among xj is randomly generated in R as follows: r <- matrix(runif(9^2, -1, 1), ncol = 9) and Cor <- cov2cor(r %*% t(r)). The generated correlation matrix is shown in equation (7). The p-th variable yi is a linear combination of xj such that yi = β0 + β1x1i + … + βp–1x(p–1)i + εi, where βj ~ U(–2.0, 2.0) and εi ~ N(0, σ). Note that βj includes β0 and σ ~ U(0.5, 2.0).

(7)
Cor1: the randomly generated 9 × 9 correlation matrix among the auxiliary variables x1, …, x9.
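A minimal sketch of this data-generating process, shown here for p = 10 (nine auxiliary variables), is as follows.

library(MASS)
set.seed(1)

n <- 1000
p <- 10                                  # total number of variables
k <- p - 1                               # number of auxiliary variables

# randomly generated correlation matrix for the auxiliary variables
r   <- matrix(runif(k^2, -1, 1), ncol = k)
Cor <- cov2cor(r %*% t(r))

# auxiliary variables: multivariate normal with mean 0 and standard deviation 1
X <- mvrnorm(n, mu = rep(0, k), Sigma = Cor)

# y is a linear combination of the auxiliary variables plus normal noise
beta  <- runif(k + 1, -2, 2)             # includes the intercept beta_0
sigma <- runif(1, 0.5, 2)
y     <- drop(beta[1] + X %*% beta[-1] + rnorm(n, 0, sigma))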

The second setting is realistic. The number of observations is 228, which is the full sample size of the real data in Table 2. The number of variables p is again changed from 2, 3, 4, 5, 6, 7, 8, 9, to 10. Auxiliary variables xj are multivariate-normal with the means and standard deviations based on the empirical data (log-transformed), where xj consist of the nine independent variables in Table 2 (; ). Note that, as was explained in Table 2, the raw empirical data are log-normal; therefore, the input data are log-transformed. Furthermore, the correlation matrix is based on the empirical data (log-transformed) as in equation (8). The p-th variable yi is a linear combination of xj such that yi = β0 + β1x1i + … + βp–1x(p–1)i + εi, where βj (including β0) reflects the coefficients in multiple regression models using the empirical data and εi ~ N(0, σresid), where σresid is the residual standard deviation from the empirical regression model.

(8)
Cor2: the 9 × 9 correlation matrix among the nine log-transformed independent variables in Table 2, computed from the empirical data.

In both settings, xj are incomplete variables for imputation, yi is completely observed in all of the situations, and uij are a set of p – 1 continuous uniform random numbers ranging from 0 to 1 for the missing data mechanism. As was introduced in Section 4.1, under the assumption of MAR, the missingness of xji depends on the values of yi and uij, i.e., xji is missing if yi < median(yi) and uij < 0.5, and xji is missing if yi > median(yi) and uij > 0.9. This creates approximately 30% missing values in each xj. This is realistic, because the average missing rates of income and earnings are 30% on a variable basis in the National Health Interview Survey () and the median missing rate is 30.0% in Table 4. Note that the above setting may be translated into the following statement. Variable yi is age and x1i is income. The missingness of income depends on age and some random components. Income is missing if age is less than the median of age and uniform random numbers are less than 0.5. Also, income is missing if age is larger than the median of age and uniform random numbers are larger than 0.9.
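Continuing from the X (an n × k matrix of auxiliary variables) and y generated in the theoretical sketch above, the MAR mechanism can be imposed as follows, yielding roughly 30% missing values in each column.

# impose MAR missingness: whether x_ji is missing depends on y_i and u_ij
X_mis <- X
u     <- matrix(runif(n * k), nrow = n, ncol = k)
below <- y < median(y)
for (j in 1:k) {
  miss <- (below & u[, j] < 0.5) | (!below & u[, j] > 0.9)
  X_mis[miss, j] <- NA
}
colMeans(is.na(X_mis))    # each column is missing in roughly 30% of the rows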

Although the literature (; ; ) recommends using a relatively large M, the simulation studies in Table 4 use a relatively small M. This is due to the computational burden of Monte Carlo simulation for multiple imputation. Considering this practical issue, the current study sets M to 20, which is equivalent to the 75th percentile of the number of imputations found in the studies listed in Table 4.

As for T, there is no consensus in the literature (Table 4). There are no clear-cut rules for determining whether MCMC algorithms have attained convergence (; ; ). Though not perfect, doubling the number of EM iterations is a rule of thumb for a conservative estimate of the number of iterations needed for MCMC to converge (; ). Since it is not possible to check convergence in each of the 1000 simulation runs, the current study relies on this rule of thumb to set T.

8.2 Criteria for Judging Simulation Results

The estimand in all of the simulation runs is β1 in yi = β0 + β1x1i + … + βp–1x(p–1)i + εi. The purpose of multiple imputation is to find an unbiased estimate of the population parameter that is confidence valid ().

Unbiasedness can be assessed by equation (9), because an estimator θ^ is an unbiased estimator of θ if the expected value of θ^ is equal to the true θ (; ).

(9)
$\mathrm{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta$

Unbiasedness and efficiency can be simultaneously assessed by the Root Mean Square Error (RMSE), defined as equation (10). The RMSE measures the spread around the true value of the parameter, placing slightly more emphasis on efficiency than bias (; ).

(10)
$\mathrm{RMSE}(\hat{\theta}) = \sqrt{E\bigl[(\hat{\theta} - \theta)^2\bigr]}$

Confidence validity can be assessed by the coverage probability of the nominal 95% confidence interval (CI), which ‘is the proportion of simulated samples for which the estimated confidence interval includes the true parameter’ (). The formula of the standard error for proportions is equation (11), where π is the proportion and s is the number of simulation runs.

(11)
$\mathrm{SE}(\pi) = \sqrt{\frac{\pi(1 - \pi)}{s}}$

The standard error of the 95% CI coverage over 1000 simulation runs is $\sqrt{0.95 \times 0.05 / 1000} \approx 0.007$, which is 0.7%. Therefore, with 95% confidence, the estimated coverage probability should be between 93.6% and 96.4% (; ; ; ).
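Given the output of the simulation, these criteria reduce to a few lines of R; the sketch below assumes hypothetical vectors est, lower, and upper of length 1000 holding the estimates and the bounds of their 95% CIs across the simulation runs, and a scalar true holding the true value of β1.

# evaluation criteria for the simulation (hypothetical objects est, lower, upper, true)
bias     <- mean(est) - true                       # equation (9)
rmse     <- sqrt(mean((est - true)^2))             # equation (10)
coverage <- mean(lower <= true & true <= upper)    # coverage of the nominal 95% CI
s        <- length(est)
se_cov   <- sqrt(coverage * (1 - coverage) / s)    # equation (11)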

8.3 Results of the Simulation

Abbreviations in this section are explained in Table 5, where MI stands for multiple imputation and SI for single imputation.

Table 5

Abbreviations and the Missing Data Methods.

Abbreviations   Missing Data Methods

CD              Complete data without missing values
LD              Listwise deletion
EMB             MI by AMELIA II
DA1             MI by NORM2 with no iterations
DA2             MI by NORM2 with 2*EM iterations
FCS1            MI by MICE with no iterations
FCS2            MI by MICE with 2*EM iterations
D-SI            Deterministic SI by norm.predict in MICE
S-SI            Stochastic SI by norm.nob in MICE

8.3.1 Theoretical Case

This section presents the results of the Monte Carlo simulation for the theoretical case, where the correlation matrix and the regression coefficients are randomly generated.

Table 6 shows the Bias and RMSE values for the regression coefficient β1. The Bias and RMSE values for listwise deletion and single imputation methods indicate that these methods are not recommended at all. All of the Bias and RMSE values from EMB, DA1, DA2, and FCS2 are almost identical, showing that they are generally unbiased. However, FCS1 is rather biased, quite similar to S-SI. Therefore, when between-imputation iterations are ignored, there are no discernible effects on bias and efficiency in EMB and DA, but FCS may suffer from some bias.

Table 6

Bias and RMSE (Theoretical Data).

                    Number of Variables
             2      3      4      5      6      7      8      9      10

CD    Bias   0.001  0.003  0.001  0.002  0.001  0.001  0.001  0.002  0.001
      RMSE   0.040  0.047  0.038  0.039  0.058  0.026  0.046  0.039  0.047

LD    Bias   0.032  0.135  0.105  0.104  0.332  0.085  0.129  0.210  0.116
      RMSE   0.059  0.153  0.122  0.121  0.349  0.103  0.160  0.228  0.155

EMB   Bias   0.000  0.004  0.002  0.000  0.005  0.001  0.005  0.005  0.002
      RMSE   0.046  0.053  0.050  0.051  0.075  0.041  0.069  0.059  0.072

DA1   Bias   0.001  0.002  0.003  0.001  0.001  0.000  0.003  0.003  0.002
      RMSE   0.046  0.053  0.050  0.051  0.074  0.041  0.069  0.058  0.072

DA2   Bias   0.002  0.001  0.005  0.002  0.001  0.000  0.001  0.003  0.000
      RMSE   0.046  0.053  0.050  0.051  0.074  0.041  0.069  0.058  0.072

FCS1  Bias   0.002  0.001  0.082  0.040  0.090  0.047  0.093  0.027  0.233
      RMSE   0.047  0.053  0.097  0.062  0.116  0.065  0.109  0.052  0.239

FCS2  Bias   0.001  0.002  0.004  0.002  0.001  0.000  0.001  0.002  0.001
      RMSE   0.046  0.053  0.050  0.051  0.075  0.041  0.069  0.058  0.071

D-SI  Bias   0.186  0.242  0.174  0.093  0.187  0.098  0.231  0.070  0.163
      RMSE   0.192  0.248  0.182  0.110  0.207  0.109  0.248  0.099  0.189

S-SI  Bias   0.002  0.000  0.081  0.038  0.090  0.047  0.091  0.029  0.230
      RMSE   0.050  0.057  0.102  0.066  0.124  0.076  0.119  0.062  0.241

Note: Biased results are in boldface, i.e., Bias > 0.010.

Table 7 gives the coverage probability of the 95% CI for β1. The CIs for listwise deletion and the single imputation methods are not confidence valid. When the number of auxiliary variables is small (and hence the overall missing rate is small), the between-imputation iterations may be ignored: all of the multiple imputation CIs are confidence valid. However, as the number of auxiliary variables becomes large, DA1 and FCS1 drift away from confidence validity. EMB, DA2, and FCS2 are confidence valid regardless of the number of variables and the missing rate. This shows that EMB is confidence proper even though it does not iterate. This is an important finding in the current study.

Table 7

Coverage of the 95% CI (Theoretical Data).

        Number of Variables
        2      3      4      5      6      7      8      9      10

CD      95.3   94.9   94.2   94.0   96.0   96.0   95.3   94.9   94.6
LD      88.5   47.9   54.6   56.7   10.8   65.1   69.2   32.1   78.1
EMB     95.0   95.1   94.2   95.5   94.9   94.4   94.3   94.1   95.0
DA1     94.6   94.9   93.2   93.1   94.1   91.8   92.9   92.4   92.9
DA2     94.3   95.8   95.1   94.1   94.8   94.3   94.2   93.2   94.9
FCS1    94.2   95.0   75.0   91.6   84.4   95.5   84.5   96.8   6.8
FCS2    94.7   95.6   94.4   93.9   95.4   94.5   94.2   95.0   95.0
D-SI    0.8    0.2    2.2    37.8   22.2   16.9   8.3    51.0   22.5
S-SI    88.9   89.6   47.8   75.0   62.3   64.4   48.9   76.0   3.7

Note: Confidence invalid results are in boldface, i.e., outside of 93.6 and 96.4.

Table 8 shows the CI lengths. The CI length by listwise deletion is generally too long, reflecting inefficiency due to the reduced sample size. The CI lengths by single imputation methods are ‘correct’ in the sense that they are quite similar to those of complete data analysis; however, this means that single imputation methods ignore estimation uncertainty associated with imputation. This is the cause of confidence invalidity of single imputation methods in Table 7. The CI length by DA1 is too short and the CI length by FCS1 is too long. The CI lengths by EMB, DA2, and FCS2 are essentially equal, reflecting the correct level of estimation uncertainty associated with imputation.

Table 8

Lengths of the 95% CI (Theoretical Data).

        Number of Variables
        2      3      4      5      6      7      8      9      10

CD      0.157  0.184  0.144  0.148  0.236  0.102  0.184  0.151  0.180
LD      0.189  0.259  0.226  0.235  0.384  0.213  0.358  0.339  0.390
EMB     0.178  0.209  0.196  0.200  0.301  0.160  0.275  0.229  0.281
DA1     0.176  0.207  0.187  0.192  0.293  0.145  0.256  0.208  0.253
DA2     0.177  0.208  0.194  0.198  0.298  0.158  0.271  0.223  0.274
FCS1    0.178  0.209  0.237  0.211  0.324  0.248  0.306  0.223  0.299
FCS2    0.178  0.209  0.197  0.201  0.302  0.161  0.275  0.228  0.281
D-SI    0.143  0.174  0.133  0.149  0.244  0.103  0.205  0.150  0.188
S-SI    0.157  0.184  0.161  0.155  0.238  0.145  0.188  0.149  0.186

Table 9 displays the computational time required to generate multiple imputations. When the number of auxiliary variables is small (and hence the overall missing rate is small), DA2 is fastest among the three confidence proper multiple imputation algorithms. On the other hand, as the number of auxiliary variables becomes large, EMB becomes fastest. As is known in the literature (; ), FCS2 is at least 5 times slower and can be more than 50 times slower than EMB and DA2. However, the difference in computational time is not substantial, given that all of the computations can be done within a few minutes.

Table 9

Computational Time (Theoretical Data).

        Number of Variables
        2      3      4      5      6      7      8      9      10

EMB     0.46   0.53   0.53   0.59   0.71   0.78   0.97   1.27   1.69
DA2     0.10   0.16   0.29   0.42   0.55   1.09   1.39   2.22   3.63
FCS2    2.47   5.98   14.48  21.33  25.40  54.71  59.14  85.69  133.17

Note: Reported values are the time in seconds to perform multiple imputation, which is averaged over 1,000 simulation runs. The fastest results are in boldface.

8.3.2 Realistic Case

This section presents the results of the Monte Carlo simulation for the realistic case, where the correlation matrix and the regression coefficients are based on the real data (; ). The results in this section reinforce the findings in Section 8.3.1.

Table 10 shows the Bias and RMSE values for the regression coefficient β1. The overall conclusions are similar to Table 6. When between-imputation iterations are ignored, there are no discernible effects on bias and efficiency in EMB and DA, but FCS may occasionally suffer from small bias.

Table 10

Bias and RMSE (Realistic Data).

                    Number of Variables
             2      3      4      5      6      7      8      9      10

CD    Bias   0.003  0.002  0.002  0.002  0.001  0.002  0.000  0.002  0.002
      RMSE   0.074  0.086  0.068  0.067  0.066  0.065  0.070  0.069  0.075

LD    Bias   0.034  0.047  0.037  0.054  0.082  0.099  0.083  0.072  0.085
      RMSE   0.095  0.128  0.104  0.118  0.141  0.154  0.157  0.159  0.188

EMB   Bias   0.001  0.002  0.002  0.005  0.001  0.000  0.000  0.002  0.006
      RMSE   0.084  0.113  0.091  0.090  0.089  0.092  0.102  0.099  0.110

DA1   Bias   0.006  0.001  0.003  0.003  0.001  0.001  0.001  0.001  0.002
      RMSE   0.084  0.112  0.090  0.089  0.087  0.091  0.100  0.096  0.105

DA2   Bias   0.009  0.000  0.002  0.004  0.002  0.004  0.000  0.001  0.001
      RMSE   0.084  0.111  0.089  0.088  0.086  0.090  0.098  0.094  0.102

FCS1  Bias   0.007  0.013  0.006  0.005  0.002  0.008  0.006  0.012  0.000
      RMSE   0.084  0.106  0.081  0.081  0.080  0.081  0.086  0.083  0.088

FCS2  Bias   0.007  0.001  0.002  0.002  0.003  0.005  0.002  0.003  0.005
      RMSE   0.084  0.112  0.088  0.088  0.086  0.090  0.097  0.093  0.100

D-SI  Bias   0.188  0.075  0.011  0.035  0.037  0.047  0.023  0.034  0.059
      RMSE   0.207  0.163  0.115  0.118  0.118  0.123  0.130  0.127  0.151

S-SI  Bias   0.005  0.014  0.007  0.006  0.002  0.006  0.005  0.009  0.006
      RMSE   0.089  0.116  0.096  0.095  0.091  0.094  0.100  0.102  0.105

Note: Biased results are in boldface, i.e., Bias > 0.010.

Table 11 gives the coverage probability of the 95% CI for β1. The overall conclusions are similar to Table 7, except that DA1 is confidence invalid even when p = 3. This implies that we cannot ignore between-imputation iterations in MCMC-based approaches even when the number of variables is small. On the other hand, EMB is confidence valid and we can safely ignore between-imputation iterations in EMB. Again, this is an important finding in the current study.

Table 11

Coverage of the 95% CI (Realistic Data).

        Number of Variables
        2      3      4      5      6      7      8      9      10

CD      94.6   95.3   95.8   94.7   95.2   96.4   94.6   95.3   94.8
LD      92.2   91.6   92.8   91.5   86.8   85.0   89.8   90.0   90.8
EMB     94.3   94.1   94.7   93.9   96.1   94.2   94.0   94.4   94.7
DA1     94.1   92.2   94.4   93.4   95.7   92.2   93.1   92.9   93.1
DA2     94.0   94.0   94.8   94.4   95.9   94.5   93.8   95.0   95.0
FCS1    94.6   94.7   96.3   96.7   97.0   97.0   96.7   96.9   97.7
FCS2    94.7   93.8   95.5   95.7   96.4   94.3   94.8   95.2   96.1
D-SI    32.7   74.5   79.2   77.6   77.7   74.1   75.3   75.1   68.8
S-SI    87.9   83.2   82.3   82.5   84.2   82.1   81.0   80.3   81.2

Note: Confidence invalid results are in boldface, i.e., outside of 93.6 and 96.4.

Table 12 shows the CI lengths. The overall conclusions are similar to Table 8. One difference is that the CI length by FCS1 is slightly short.

Table 12

Lengths of the 95% CI (Realistic Data).

        Number of Variables
        2      3      4      5      6      7      8      9      10

CD      0.279  0.334  0.268  0.266  0.267  0.261  0.278  0.274  0.289
LD      0.333  0.441  0.389  0.412  0.436  0.457  0.516  0.543  0.631
EMB     0.314  0.429  0.364  0.356  0.362  0.359  0.397  0.396  0.432
DA1     0.313  0.414  0.348  0.342  0.343  0.337  0.370  0.364  0.390
DA2     0.315  0.423  0.356  0.351  0.353  0.351  0.383  0.380  0.410
FCS1    0.315  0.416  0.353  0.348  0.350  0.350  0.382  0.380  0.406
FCS2    0.316  0.429  0.359  0.355  0.358  0.352  0.389  0.386  0.413
D-SI    0.288  0.380  0.292  0.289  0.291  0.278  0.302  0.294  0.315
S-SI    0.281  0.325  0.262  0.257  0.259  0.255  0.269  0.267  0.277

Table 13 displays the computational time required to generate multiple imputations. The overall conclusions are similar to Table 9.

Table 13

Computational Time (Realistic Data).

        Number of Variables
        2      3      4      5      6      7      8      9      10

EMB     0.14   0.15   0.16   0.20   0.23   0.28   0.36   0.44   0.53
DA2     0.04   0.05   0.06   0.10   0.15   0.22   0.33   0.47   0.67
FCS2    1.05   2.55   4.22   8.92   12.02  15.59  20.82  26.78  35.95

Note: Reported values are the time in seconds to perform multiple imputation, which is averaged over 1,000 simulation runs. The fastest results are in boldface.

9 Conclusions

This article assessed the relative performance of the three multiple imputation algorithms (DA, FCS, and EMB). In both the theoretical and realistic settings (Table 7 and Table 11), if between-imputation iterations were ignored, the MCMC algorithms (DA and FCS) did not attain confidence validity. The nominal 95% CIs by DA and FCS without iterations deviated from 95% coverage beyond the margin of error in 1,000 simulation runs. This is because the CI lengths by DA without iterations were generally too short, and the CI lengths by FCS without iterations were generally too long (Table 8 and Table 12). Based on Schafer (), this can be explained by the choice of starting values. DA uses EM as a single starting value for the M chains, which understates missing data uncertainty (), while FCS uses random draws as M over-dispersed starting values, which overstates missing data uncertainty (). Without iterations, imputed values depend on the choice of starting values.

DA and FCS can both be confidence valid given a sufficiently large number of iterations; however, the assessment of convergence in MCMC is notoriously difficult. Furthermore, the convergence properties of FCS are currently under debate due to possible incompatibility (; ). On the other hand, the current study found that EMB was confidence valid regardless of the situation. Therefore, EMB is a confidence proper imputation algorithm without iterations, which allows us to avoid the difficult decision of how to judge convergence in order to generate confidence proper multiple imputations. This finding is useful in the missing data literature. For example, while ratio imputation is often used in official statistics (), multiple ratio imputation did not previously exist in the literature; the EMB algorithm was applied to ratio imputation to create multiple ratio imputation (; ).

No simulation studies can include all the patterns of relevant data (). Therefore, the current study focused on two types of data, (1) theoretical and (2) realistic. Although the author believes that the two data generation processes cover data types relevant to many social research situations, the results in any simulation studies must be read with caution (Hardt, Herke, and Leonhart 2014: 11). Future research should delve into other data types, such as small-n data, large-p data, categorical data, and non-normal data, to name a few.

Additional File

The additional file for this article can be found as follows:

Data for Tables 1 and 2

Political and Economic Data from CIA () and Freedom House (). DOI: https://doi.org/10.5334/dsj-2017-037.s1