Homogeneity of variance (HOV) is a critical assumption for ANOVA; its violation can distort Type I error rates, yet there is minimal consensus on how to select an appropriate test. This SAS macro implements 14 different HOV approaches for one-way ANOVA. Examples are given and practical issues discussed.
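The abstract does not list which 14 approaches the macro implements, but one widely used HOV test is the Brown-Forsythe variant of Levene's test. A minimal Python sketch (illustrative data and function name are assumptions, not part of the macro):

```python
import statistics

def brown_forsythe_W(groups):
    """Brown-Forsythe HOV statistic: a one-way ANOVA F statistic
    computed on absolute deviations from each group's median."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    # Absolute deviations from each group's median.
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    zbars = [statistics.fmean(zg) for zg in z]
    zbar = sum(sum(zg) for zg in z) / N
    between = sum(len(zg) * (zb - zbar) ** 2
                  for zg, zb in zip(z, zbars)) / (k - 1)
    within = sum((x - zb) ** 2
                 for zg, zb in zip(z, zbars) for x in zg) / (N - k)
    return between / within

g1 = [4.1, 5.2, 3.9, 4.8, 5.0]   # tight spread
g2 = [3.0, 9.5, 1.2, 8.8, 2.4]   # visibly wider spread
W = brown_forsythe_W([g1, g2])   # compared against F(k-1, N-k)
```

Using medians rather than means makes the test robust to non-normal groups, which is one reason the choice among HOV tests matters in practice.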

The most important ingredient in Bayesian analysis is the prior, or prior distribution. A new method for determining the prior was developed within the framework of parametric empirical Bayes using the bootstrap technique. By way of example, Bayesian estimation of the parameters of a normal distribution with unknown mean and unknown variance was considered, as well as its application ...

Using a simulation study, the performance of complete case analysis, full information maximum likelihood, multivariate normal imputation, multiple imputation by chained equations, and two-fold fully conditional specification in handling missing data was compared in longitudinal surveys with continuous and binary outcomes, missing covariates, and an interaction term.

The quadratic form in non-central normal variables is represented as a sum of weighted independent non-central chi-square variables. This representation yields the moments of the quadratic form. Because these moments are known, the maximum entropy method is used to estimate the density function. A Euclidean distance is proposed to select an appropriate maximum ...
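The representation Q = Σᵢ wᵢ χ²(kᵢ, λᵢ) gives the first moment in closed form, E[Q] = Σᵢ wᵢ(kᵢ + λᵢ). A Monte Carlo sketch checking that identity under assumed weights, degrees of freedom, and non-centrality parameters (all values are illustrative, not from the paper):

```python
import random

random.seed(1)
w   = [2.0, 0.5]   # assumed weights (eigenvalues of the quadratic form)
k   = [1, 3]       # assumed degrees of freedom
lam = [0.8, 0.2]   # assumed non-centrality parameters

def noncentral_chi2(k_i, lam_i):
    # Sum of k_i squared normals, one with mean sqrt(lam_i).
    s = random.gauss(lam_i ** 0.5, 1.0) ** 2
    s += sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k_i - 1))
    return s

n = 100_000
mc_mean = sum(
    sum(wi * noncentral_chi2(ki, li) for wi, ki, li in zip(w, k, lam))
    for _ in range(n)
) / n
exact_mean = sum(wi * (ki + li) for wi, ki, li in zip(w, k, lam))
# mc_mean should agree with exact_mean up to Monte Carlo error
```

The second moment has a similar closed form, Var[Q] = Σᵢ wᵢ²(2kᵢ + 4λᵢ), which is what makes moment-based density estimation feasible here.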

Monte Carlo simulations are used to investigate the effect of two factors, the amount of variability and an outlier, on the size of the Pearson correlation coefficient. Some simulation algorithms are developed, and two theorems for increasing or decreasing the amount of variability are suggested.
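A single aberrant point can dominate the cross-products that define r. A minimal sketch of that effect (toy data chosen for illustration, not the paper's simulation design):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy data with a clear linear trend.
x = [1, 2, 3, 4, 5, 6]
y = [2.0, 2.9, 4.2, 4.0, 5.1, 6.2]
r_clean = pearson_r(x, y)                      # strongly positive

# One outlier far below the trend sharply deflates r.
r_outlier = pearson_r(x + [7], y + [0.0])
```

The same mechanism runs in the other direction: an outlier placed along the trend line inflates r, which is why both variability and outliers are studied as factors.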

Composite endpoints are a popular outcome in controlled studies. However, the required sample size is not easily obtained because of the assortment of outcomes, the correlations between them, and the way in which the composite is constructed, so data simulation is required. A macro is developed that enables sample size and power estimation.
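The macro itself is in SAS; the simulation idea can be sketched in Python for the simplest case of an "any-event" composite of two independent binary components (a simplification — handling correlated components is precisely what makes the macro useful; all rates and sample sizes below are assumed illustration values):

```python
import random

random.seed(7)

def simulate_power(p_ctrl, p_trt, n_per_arm, n_sims=2000, z_crit=1.96):
    """Estimate power for an any-event composite of two independent
    binary components, compared across arms with a two-sample z test."""
    def composite_rate(p1, p2, n):
        events = sum(1 for _ in range(n)
                     if random.random() < p1 or random.random() < p2)
        return events / n

    hits = 0
    for _ in range(n_sims):
        rc = composite_rate(*p_ctrl, n_per_arm)
        rt = composite_rate(*p_trt, n_per_arm)
        pbar = (rc + rt) / 2
        se = (2 * pbar * (1 - pbar) / n_per_arm) ** 0.5
        if se > 0 and abs(rt - rc) / se > z_crit:
            hits += 1
    return hits / n_sims

power = simulate_power(p_ctrl=(0.20, 0.15), p_trt=(0.10, 0.08),
                       n_per_arm=300)
```

Rerunning the simulation over a grid of `n_per_arm` values and reading off where the estimated power crosses the target (e.g., 0.80) is the basic sample-size search the abstract describes.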

Of the three kinds of two-mean comparisons which judge a test statistic against a critical value taken from a Student t-distribution, one – the repeated measures or dependent-means application – is distinctive because it is meant to assess the value of a parameter which is not part of the natural order. This absence forces a choice between two interpretations of a significant test ...
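The dependent-means statistic in question divides the mean within-pair difference by its standard error. A minimal sketch with assumed illustration data:

```python
import math
import statistics

def paired_t(before, after):
    """t statistic for the dependent-means (repeated measures) comparison:
    mean of the paired differences over its standard error, with n-1 df."""
    d = [b - a for a, b in zip(before, after)]
    n = len(d)
    t = statistics.fmean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

before = [12.1, 14.3, 11.8, 13.5, 12.9, 14.0]
after  = [13.0, 15.1, 12.4, 14.6, 13.2, 15.3]
t, df = paired_t(before, after)   # compared against t with df = 5
```

The parameter being tested is the mean of the difference scores, a constructed quantity rather than a directly observable population mean, which is the interpretive issue the article takes up.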

When running a confirmatory factor analysis (CFA), users specify and interpret the pattern (loading) matrix. It has been recommended that the structure coefficients, which give the factors’ correlations with the observed indicators, should also be reported when the factors are correlated (Graham, Guthrie, & Thompson, 1997). The aims of this article are: (1) to note the structure ...

A series of simulation studies is reported that investigated the impact of skewed predictors on the Type I error rate and power of the Wald test in a logistic regression model. Five simulations were conducted for three different regression models. A detailed description of the impact of skewed cell predictor probabilities and sample size provides guidelines for practitioners ...

A rebuttal to Frane's letter to the Editor in this issue.

Multivariate Statistical Methods, A Primer, 4th Ed. Bryan F. J. Manly and Jorge A. Navarro Alberto. NY: Chapman & Hall / CRC Press. 2016. 264 p. ISBN 10: 1498728960 / ISBN 13: 978-1498728966

The greatest lower bound to the reliability of a test, based on a single administration, is known as the Greatest Lower Bound (GLB). However, its sample estimate is seriously biased. An algorithm is described that corrects this bias.

Although single ratio imputation is often used to deal with missing values in practice, there is a paucity of discussion regarding multiple ratio imputation. Code in the R statistical environment is presented to execute multiple ratio imputation by the Expectation-Maximization with Bootstrapping (EMB) algorithm.
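The article's code is in R; for orientation, the single ratio imputation it builds on can be sketched in a few lines of Python (the multiple-imputation extension repeats this over bootstrap resamples; variable names and data are assumptions):

```python
def ratio_impute(x, y):
    """Single ratio imputation: each missing y (marked None) is replaced
    by R_hat * x, where R_hat = sum(y_obs) / sum(x_obs) over the pairs
    with y observed. Assumes x is fully observed and positive."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    r_hat = sum(yi for _, yi in obs) / sum(xi for xi, _ in obs)
    return [yi if yi is not None else r_hat * xi for xi, yi in zip(x, y)]

x = [10, 20, 30, 40]       # fully observed auxiliary variable
y = [21, 39, None, 82]     # None marks a missing value
y_imp = ratio_impute(x, y)
```

Single imputation understates uncertainty because every missing value gets one deterministic fill-in; drawing the ratio from bootstrap resamples, as in the EMB approach, yields multiple completed data sets whose variation reflects that uncertainty.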

Missing data may be a concern for data analysis. When the data have a hierarchical or nested structure, the SUDAAN package can be used for multiple imputation. This is illustrated with birth certificate data that were linked to the Centers for Disease Control and Prevention’s National Assisted Reproductive Technology Surveillance System database. The Cox-Iannacchione weighted sequential hot ...

A stochastic model for cancer cell growth in any organ is presented, based on a single forward mutation. Cell growth is explained in a one-dimensional stochastic model, and statistical measures for the variable representing the number of malignant cells are derived. A numerical study is conducted to observe the behavior of the model.

The performance of several models under different conditions of zero-inflation and dispersion is evaluated. Results from simulated and real data showed that the zero-altered and zero-inflated negative binomial models were preferred over others (e.g., ordinary least-squares regression with a log-transformed outcome, the Poisson model) when the data have excessive zeros and over-dispersion.

The purpose of this study is to re-analyze the atmospheric science component of the Florida Public Hurricane Loss Model v. 5.0, in order to investigate if the distributional fits used for the model parameters could be improved upon. We consider alternate fits for annual hurricane occurrence, radius of maximum winds and the pressure profile parameter.

Traditionally, quality control methodology is based on the assumption that serially-generated data are independent and normally distributed. On the basis of these assumptions, the operating characteristic (OC) function of the control chart is derived after setting the control limits. In practice, however, many of the basic industrial variables do not satisfy both the assumptions and ...
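Under those two assumptions the OC function of a standard X-bar chart has a simple closed form: the probability that a subgroup mean stays inside the L-sigma limits when the process mean has shifted by delta process standard deviations. A sketch (subgroup size and shift are assumed illustration values):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def oc_xbar(delta, n, L=3.0):
    """OC function of an X-bar chart with L-sigma limits:
    P(subgroup mean inside the limits | mean shifted by delta sigmas)."""
    root_n = math.sqrt(n)
    return normal_cdf(L - delta * root_n) - normal_cdf(-L - delta * root_n)

beta_in_control = oc_xbar(0.0, n=5)   # no shift: close to 0.9973
beta_shifted    = oc_xbar(1.0, n=5)   # 1-sigma shift: noticeably smaller
```

When the independence or normality assumption fails, this derivation no longer holds, which is the departure point of the article.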

A comparison of double informative priors assumed for the parameter of the exponential lifetime model is considered. Three different sets of double priors are included, and the results are compared with a fourth, single prior. The data are Type II censored, and Bayes estimators for the parameter and reliability are derived under a squared error loss function in the cases of the four ...

The Pickands dependence function characterizes an extreme value copula, a useful tool in the modeling of multivariate extremes. A new estimator is presented along with its convergence properties and performance through simulation.

Confidence interval construction for the scale parameter of the half-logistic distribution is considered using four different methods. The first two are based on the asymptotic distribution of the maximum likelihood estimator (MLE) and the log-transformed MLE. The last two are based on a pivotal quantity and a generalized pivotal quantity, respectively. The MLE for the scale parameter is ...

A new method is proposed based on the construction of perceptual maps using correspondence analysis and interval algebra, which allows the measurement error expected in panel choices to be specified when the evaluation form uses an unstructured 9-point hedonic scale.

In most empirical studies (clinical, network modeling, survey-based, and aeronautical studies, etc.), sample observations are drawn from a population in order to analyze and draw inferences about that population. Such analysis is done with reference to a measurable quality characteristic of a product or process of interest. However, fixing a sample size is an important task that has to be ...
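One standard way to fix n, when estimating a mean, is to require that the confidence interval's half-width not exceed a target margin of error, giving n ≥ (zσ/E)². A sketch (the σ and margin values are assumed for illustration):

```python
import math

def sample_size_mean(sigma, margin, z=1.96):
    """Smallest n such that a z-based confidence interval for a mean,
    given process sigma, has half-width at most `margin`:
    n >= (z * sigma / margin) ** 2, rounded up."""
    return math.ceil((z * sigma / margin) ** 2)

# Assumed illustration: sigma = 15 units, desired margin of error = 2.
n = sample_size_mean(sigma=15.0, margin=2.0)
```

Analogous formulas exist for proportions and for hypothesis tests with specified power; which one applies depends on the quality characteristic being measured, which is the choice the article examines.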