Journal of Modern Applied Statistical Methods

http://digitalcommons.wayne.edu/jmasm/

List of Papers (Total 526)

JMASM 47: ANOVA_HOV: A SAS Macro for Testing Homogeneity of Variance in One-Factor ANOVA Models (SAS)

Homogeneity of variance (HOV) is a critical assumption for ANOVA, and its violation may perturb Type I error rates. There is minimal consensus on which test to select. This SAS macro implements 14 different HOV tests for one-way ANOVA models. Examples are given and practical issues are discussed.
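
As a rough R analogue of what such a macro automates (the macro itself is SAS; the three tests below are common HOV checks run on simulated data, an illustration rather than the macro's output):

    ## Three standard HOV tests in R, applied to a toy one-way layout.
    set.seed(1)
    df <- data.frame(
      y     = c(rnorm(20, sd = 1), rnorm(20, sd = 2), rnorm(20, sd = 3)),
      group = factor(rep(c("A", "B", "C"), each = 20))
    )

    bartlett.test(y ~ group, data = df)    # Bartlett's test (normal-theory)
    fligner.test(y ~ group, data = df)     # Fligner-Killeen (rank-based, robust)
    car::leveneTest(y ~ group, data = df)  # Levene/Brown-Forsythe (car package)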

Bayesian Hypothesis Testing of Two Normal Samples using Bootstrap Prior Technique

The most important ingredient in Bayesian analysis is the prior, or prior distribution. A new method for determining the prior was developed within the framework of parametric empirical Bayes, using the bootstrap technique. By way of example, Bayesian estimation of the parameters of a normal distribution with both mean and variance unknown was considered, as well as its application ...
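
A minimal sketch of the bootstrap-prior idea for a normal sample with unknown mean and variance; the resampling scheme and summary below are illustrative assumptions, not the authors' exact procedure:

    ## Nonparametric bootstrap estimates of (mu, sigma^2) serve as an
    ## empirical prior sample for subsequent Bayesian analysis.
    set.seed(2)
    x <- rnorm(30, mean = 5, sd = 2)
    B <- 2000
    boot_mu  <- numeric(B)
    boot_var <- numeric(B)
    for (b in seq_len(B)) {
      xb <- sample(x, replace = TRUE)   # bootstrap resample of the data
      boot_mu[b]  <- mean(xb)
      boot_var[b] <- var(xb)
    }
    ## The bootstrap draws can parameterize a conjugate prior (e.g.
    ## normal-inverse-gamma) or be used directly as a prior sample.
    c(prior_mean = mean(boot_mu), prior_mean_sd = sd(boot_mu))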

Missing Data in Longitudinal Surveys: A Comparison of Performance of Modern Techniques

Using a simulation study, the performance of complete case analysis, full information maximum likelihood, multivariate normal imputation, multiple imputation by chained equations, and two-fold fully conditional specification for handling missing data was compared in longitudinal surveys with continuous and binary outcomes, missing covariates, and an interaction term.
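
One of the compared techniques, multiple imputation by chained equations, is available in the R package mice; the toy data and call pattern below are assumptions about standard usage, not the study's simulation design:

    ## Chained-equation imputation of a missing covariate, then pooled analysis.
    library(mice)
    set.seed(3)
    dat <- data.frame(x = rnorm(100), z = rbinom(100, 1, 0.5))
    dat$y <- 1 + 2 * dat$x - dat$z + rnorm(100)
    dat$x[sample(100, 20)] <- NA                 # impose missing covariate values

    imp  <- mice(dat, m = 5, printFlag = FALSE)  # 5 chained-equation imputations
    fits <- with(imp, lm(y ~ x + z))
    summary(pool(fits))                          # pooling by Rubin's rules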

Approximating the Distribution of Indefinite Quadratic Forms in Normal Variables by Maximum Entropy Density Estimation

The quadratic form in non-central normal variables is represented as a sum of weighted independent non-central chi-square variables. This representation yields the moments of the quadratic form. Because these moments are known, the maximum entropy method is used to estimate the density function. A Euclidean distance is proposed to select an appropriate maximum ...
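
The representation makes the first moments available in closed form, which a quick simulation can verify (weights and noncentralities below are illustrative):

    ## Q = sum_j lambda_j * chisq_1(delta_j^2); mixed-sign weights make it indefinite.
    set.seed(4)
    lambda <- c(2, -1, 0.5)    # weights (indefinite: mixed signs)
    delta2 <- c(1, 0.25, 4)    # noncentrality parameters
    n <- 1e5
    Q <- rowSums(sapply(seq_along(lambda),
         function(j) lambda[j] * rchisq(n, df = 1, ncp = delta2[j])))

    ## E[Q] = sum lambda*(1+delta^2); Var[Q] = sum lambda^2 * 2*(1+2*delta^2)
    c(mean_theory = sum(lambda * (1 + delta2)), mean_sim = mean(Q))
    c(var_theory  = sum(lambda^2 * 2 * (1 + 2 * delta2)), var_sim = var(Q))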

A Monte Carlo Study of the Effects of Variability and Outliers on the Linear Correlation Coefficient

Monte Carlo simulations are used to investigate the effect of two factors, the amount of variability and an outlier, on the size of the Pearson correlation coefficient. Some simulation algorithms are developed, and two theorems for increasing or decreasing the amount of variability are suggested.
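
A minimal version of the outlier experiment in R (values are illustrative): a single point can inflate or deflate the Pearson coefficient depending on where it falls:

    ## Effect of one outlier on the Pearson correlation.
    set.seed(5)
    x <- rnorm(50); y <- 0.5 * x + rnorm(50, sd = 0.9)
    r0        <- cor(x, y)                     # baseline correlation
    r_inflate <- cor(c(x, 10), c(y, 10))       # outlier on the trend line
    r_deflate <- cor(c(x, 10), c(y, -10))      # outlier off the trend line
    round(c(baseline = r0, consistent_outlier = r_inflate,
            discordant_outlier = r_deflate), 3)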

Power and Sample Size Estimation for Nonparametric Composite Endpoints: Practical Implementation using Data Simulations

Composite endpoints are a popular outcome in controlled studies. However, the required sample size is not easily obtained because of the assortment of outcomes, the correlations between them, and the way in which the composite is constructed; data simulations are required. A macro is developed that enables sample size and power estimation.
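
The flavor of such a data simulation, sketched in R for a binary "any component event" composite built from correlated latent-normal components; effect sizes, correlation, and sample size are assumptions, and the published macro is not reproduced here:

    ## Empirical power for a two-component binary composite endpoint.
    set.seed(6)
    power_sim <- function(n, p_ctrl = c(0.20, 0.15), or = 0.6, rho = 0.3,
                          nsim = 1000) {
      qc  <- qnorm(p_ctrl)
      pt  <- plogis(qlogis(p_ctrl) + log(or))   # treatment-arm component risks
      qt  <- qnorm(pt)
      sig <- matrix(c(1, rho, rho, 1), 2)
      hits <- replicate(nsim, {
        zc <- MASS::mvrnorm(n, c(0, 0), sig)    # control-arm latent variables
        zt <- MASS::mvrnorm(n, c(0, 0), sig)    # treatment-arm latent variables
        yc <- (zc[, 1] < qc[1]) | (zc[, 2] < qc[2])   # composite: any event
        yt <- (zt[, 1] < qt[1]) | (zt[, 2] < qt[2])
        prop.test(c(sum(yc), sum(yt)), c(n, n))$p.value < 0.05
      })
      mean(hits)                                # empirical power
    }
    power_sim(n = 300)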

'Parallel Universe'

Of the three kinds of two-mean comparisons which judge a test statistic against a critical value taken from a Student t-distribution, one – the repeated measures or dependent-means application – is distinctive because it is meant to assess the value of a parameter which is not part of the natural order. This absence forces a choice between two interpretations of a significant test ...

Using Pratt's Importance Measures in Confirmatory Factor Analyses

When running a confirmatory factor analysis (CFA), users specify and interpret the pattern (loading) matrix. It has been recommended that the structure coefficients, which indicate the factors' correlations with the observed indicators, also be reported when the factors are correlated (Graham, Guthrie, & Thompson, 1997). The aims of this article are: (1) to note the structure ...
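
Structure coefficients can be recovered from a fitted CFA as the loading matrix postmultiplied by the factor correlation matrix; a sketch using the lavaan package and its example data (an illustration, not the authors' analysis):

    ## Structure coefficients = Lambda %*% Phi from a standardized CFA solution.
    library(lavaan)
    model <- ' visual  =~ x1 + x2 + x3
               textual =~ x4 + x5 + x6 '
    fit <- cfa(model, data = HolzingerSwineford1939, std.lv = TRUE)

    std    <- lavInspect(fit, "std")   # standardized solution matrices
    Lambda <- std$lambda               # pattern (loading) matrix
    Phi    <- std$psi                  # factor correlation matrix
    Lambda %*% Phi                     # structure coefficients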

The Impact of Predictor Variable(s) with Skewed Cell Probabilities on Wald Tests in Binary Logistic Regression

A series of simulation studies is reported that investigated the impact of one or more skewed predictors on the Type I error rate and power of the Wald test in a logistic regression model. Five simulations were conducted for three different regression models. A detailed description of the impact of skewed cell predictor probabilities and sample size provides guidelines for practitioners ...
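
One cell of such a simulation might look like the following R sketch, which estimates the empirical Type I error rate of the Wald test when a binary predictor has skewed cell probabilities (all settings are assumptions):

    ## Type I error of the Wald test under a skewed binary predictor.
    set.seed(8)
    sim_one <- function(n = 200, p_skew = 0.05) {
      x <- rbinom(n, 1, p_skew)        # skewed cell probabilities
      if (var(x) == 0) return(NA)      # guard: degenerate resample
      y <- rbinom(n, 1, 0.3)           # null model: y independent of x
      fit <- glm(y ~ x, family = binomial)
      summary(fit)$coefficients["x", "Pr(>|z|)"] < 0.05
    }
    mean(replicate(2000, sim_one()), na.rm = TRUE)  # empirical Type I error rate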

In Response to Frane, "Errors in a Program for Approximating Confidence Intervals"

A rebuttal to Frane's letter to the Editor in this issue.

Book Review: Multivariate Statistical Methods, A Primer

Multivariate Statistical Methods, A Primer, 4th Ed. Bryan F. J. Manly and Jorge A. Navarro Alberto. NY: Chapman & Hall / CRC Press. 2016. 264 p. ISBN 10: 1498728960 / ISBN 13: 978-1498728966

An Unbiased Estimator of the Greatest Lower Bound

The Greatest Lower Bound (GLB) is the sharpest lower bound to the reliability of a test based on a single administration. However, its sample estimate is seriously biased. An algorithm is described that corrects this bias.
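
The GLB can be computed from a sample covariance matrix, e.g. with psych::glb.algebraic; one simple correction is the bootstrap bias adjustment sketched below, which illustrates the idea rather than the article's algorithm:

    ## Bootstrap bias correction of the sample GLB (illustrative only).
    library(psych)                     # glb.algebraic (also needs Rcsdp installed)
    set.seed(9)
    n <- 200; k <- 6
    f <- rnorm(n)                      # common factor for toy test data
    X <- sapply(1:k, function(j) 0.7 * f + rnorm(n, sd = sqrt(1 - 0.7^2)))
    glb_hat <- glb.algebraic(cov(X))$glb    # sample GLB (biased upward)

    B <- 200
    boot_glb <- replicate(B, {
      Xb <- X[sample(n, replace = TRUE), ]
      glb.algebraic(cov(Xb))$glb
    })
    glb_hat - (mean(boot_glb) - glb_hat)    # bias-corrected GLB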

JMASM 44: Implementing Multiple Ratio Imputation by the EMB Algorithm (R)

Although single ratio imputation is often used to deal with missing values in practice, there is a paucity of discussion regarding multiple ratio imputation. Code in the R statistical environment is presented to execute multiple ratio imputation by the Expectation-Maximization with Bootstrapping (EMB) algorithm.
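
The EMB algorithm is the engine of the Amelia package, on which such code builds; a plain Amelia call on ratio-structured toy data (not the authors' multiple ratio imputation code) looks like:

    ## Multiple imputation via Expectation-Maximization with Bootstrapping.
    library(Amelia)
    set.seed(10)
    dat <- data.frame(x = rlnorm(100))
    dat$y <- 2 * dat$x * exp(rnorm(100, sd = 0.1))  # y roughly proportional to x
    dat$y[sample(100, 25)] <- NA                    # impose missing values in y

    imp <- amelia(dat, m = 5, p2s = 0)              # m imputed datasets via EMB
    sapply(imp$imputations, function(d) mean(d$y / d$x))  # per-dataset ratio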

Using Multiple Imputation to Address Missing Values of Hierarchical Data

Missing data are a common concern in data analysis. When the data have a hierarchical or nested structure, the SUDAAN package can be used for multiple imputation. This is illustrated with birth certificate data linked to the Centers for Disease Control and Prevention’s National Assisted Reproductive Technology Surveillance System database. The Cox-Iannacchione weighted sequential hot ...
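
SUDAAN is proprietary; for readers working in R, a loosely analogous multilevel imputation can be sketched with mice's two-level method. The call below is an assumption about tooling, not the weighted sequential hot deck procedure used in the article:

    ## Two-level imputation of a clustered outcome with mice (uses the pan package).
    library(mice)
    set.seed(18)
    dat <- data.frame(cluster = rep(1:20, each = 10), x = rnorm(200))
    dat$y <- 1 + 0.5 * dat$x + rnorm(20)[dat$cluster] + rnorm(200)
    dat$y[sample(200, 40)] <- NA

    meth <- make.method(dat); meth["y"] <- "2l.pan"   # multilevel imputation
    pred <- make.predictorMatrix(dat)
    pred["y", "cluster"] <- -2                        # -2 marks the class variable
    imp <- mice(dat, method = meth, predictorMatrix = pred,
                m = 5, printFlag = FALSE)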

Stochastic Model for Cancer Cell Growth through Single Forward Mutation

A stochastic model for cancer cell growth in any organ is presented, based on a single forward mutation. Cell growth is explained in a one-dimensional stochastic model, and statistical measures for the variable representing the number of malignant cells are derived. A numerical study is conducted to observe the behavior of the model.
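
A minimal discrete-time sketch of the single-forward-mutation mechanism (rates and sizes are illustrative assumptions; the article's model is formulated differently):

    ## Normal cells occasionally mutate into malignant cells, which then divide.
    set.seed(19)
    simulate_growth <- function(steps = 100, n0 = 1000, mu = 0.001,
                                birth_malig = 0.05) {
      normal <- n0; malignant <- 0
      path <- numeric(steps)
      for (t in seq_len(steps)) {
        new_mut   <- rbinom(1, normal, mu)             # forward mutations
        normal    <- normal - new_mut
        malignant <- malignant + new_mut +
                     rbinom(1, malignant, birth_malig) # malignant divisions
        path[t] <- malignant
      }
      path
    }
    paths <- replicate(500, simulate_growth())
    c(mean_final = mean(paths[100, ]), var_final = var(paths[100, ]))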

A Comparison of Different Methods of Zero-Inflated Data Analysis and an Application in Health Surveys

The performance of several models under different conditions of zero-inflation and dispersion is evaluated. Results from simulated and real data showed that the zero-altered or zero-inflated negative binomial models were preferred over others (e.g., ordinary least-squares regression with a log-transformed outcome, the Poisson model) when data have excessive zeros and over-dispersion.
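
Both preferred models are available in the R package pscl; the call pattern below is standard usage on simulated zero-inflated data, not the survey analysis itself:

    ## Zero-inflated vs. hurdle (zero-altered) negative binomial fits.
    library(pscl)
    set.seed(11)
    n <- 500
    x <- rnorm(n)
    zero <- rbinom(n, 1, 0.4)                   # structural zeros
    y <- ifelse(zero == 1, 0,
                rnbinom(n, mu = exp(0.5 + 0.8 * x), size = 1.2))

    fit_zinb   <- zeroinfl(y ~ x | 1, dist = "negbin")  # zero-inflated NB
    fit_hurdle <- hurdle(y ~ x | 1, dist = "negbin")    # zero-altered (hurdle) NB
    AIC(fit_zinb, fit_hurdle)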

Distribution Fits for Various Parameters in the Florida Public Hurricane Loss Model

The purpose of this study is to re-analyze the atmospheric science component of the Florida Public Hurricane Loss Model v. 5.0 in order to investigate whether the distributional fits used for the model parameters could be improved upon. We consider alternative fits for annual hurricane occurrence, radius of maximum winds, and the pressure profile parameter.
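
The kind of re-fitting involved can be sketched in R with MASS::fitdistr: fit candidate distributions to a parameter (here a simulated stand-in for radius of maximum winds) and compare by AIC; data and candidates are illustrative assumptions:

    ## Compare candidate distributional fits by maximum likelihood and AIC.
    library(MASS)
    set.seed(12)
    rmw <- rgamma(500, shape = 4, rate = 0.1)   # stand-in observations

    fit_gamma <- fitdistr(rmw, "gamma")
    fit_logn  <- fitdistr(rmw, "lognormal")
    fit_weib  <- fitdistr(rmw, "weibull")
    sapply(list(gamma = fit_gamma, lognormal = fit_logn,
                weibull = fit_weib), AIC)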

Control Charts for Mean for Non-Normally Correlated Data

Traditionally, quality control methodology is based on the assumption that serially generated data are independent and normally distributed. On the basis of these assumptions, the operating characteristic (OC) function of the control chart is derived after setting the control limits. In practice, however, many basic industrial variables do not satisfy both assumptions, and ...
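
A small R illustration of the problem (settings are assumptions): when the data follow a positively autocorrelated AR(1) process, an X-bar chart with limits derived from within-subgroup variability signals far too often even while in control:

    ## In-control false alarm rate of a 3-sigma X-bar chart under AR(1) data.
    set.seed(13)
    alarm_rate <- function(phi, nsim = 1000, m = 40, n = 5) {
      mean(replicate(nsim, {
        x <- numeric(m * n); x[1] <- rnorm(1)
        for (t in 2:(m * n)) x[t] <- phi * x[t - 1] + rnorm(1)  # AR(1)
        xm   <- matrix(x, nrow = n)                 # m subgroups of size n
        xbar <- colMeans(xm)
        sbar <- mean(apply(xm, 2, sd)) / 0.94       # c4 correction for n = 5
        any(abs(xbar - mean(x)) > 3 * sbar / sqrt(n))  # any (false) signal
      }))
    }
    c(independent = alarm_rate(0), ar1_0.8 = alarm_rate(0.8))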

The Double Prior Selection for the Parameter of Exponential Life Time Model under Type II Censoring

A comparison of double informative priors assumed for the parameter of the exponential lifetime model is considered. Three different sets of double priors are included, and the results are compared with a fourth, single prior. The data are Type II censored, and Bayes estimators for the parameter and the reliability are obtained under a squared error loss function in the cases of the four ...
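
For orientation, the single-prior case admits a closed form: with a Gamma(a, b) prior on the exponential rate and Type II censoring at the r-th failure, the posterior is Gamma(a + r, b + T), where T is the total time on test. The prior values below are illustrative:

    ## Bayes estimators under squared error loss for Type II censored data.
    set.seed(14)
    n <- 20; r <- 15
    x <- sort(rexp(n, rate = 0.5))[1:r]      # first r order statistics
    T_stat <- sum(x) + (n - r) * x[r]        # total time on test

    a <- 2; b <- 4                           # illustrative gamma prior
    rate_bayes <- (a + r) / (b + T_stat)     # posterior mean of the rate
    t0 <- 1
    rel_bayes <- (1 + t0 / (b + T_stat))^(-(a + r))  # E[exp(-theta*t0) | data]
    c(rate_bayes = rate_bayes, reliability_at_1 = rel_bayes)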

A New Estimator for the Pickands Dependence Function

The Pickands dependence function characterizes an extreme value copula, a useful tool in the modeling of multivariate extremes. A new estimator is presented along with its convergence properties and performance through simulation.
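
The classical Pickands (1981) estimator is a natural benchmark for any new proposal: with standard exponential margins (xi, eta), 1/A(t) is estimated by the mean of pmin(xi/(1-t), eta/t). A quick R check under independence, where A(t) = 1:

    ## Classical Pickands estimator of the dependence function A(t).
    set.seed(15)
    n <- 1000
    xi <- rexp(n); eta <- rexp(n)            # independence copula: A(t) = 1

    pickands <- function(t) 1 / mean(pmin(xi / (1 - t), eta / t))
    tgrid <- seq(0.01, 0.99, by = 0.01)
    A_hat <- sapply(tgrid, pickands)
    summary(A_hat)                           # should hover near 1 here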

Confidence Intervals for the Scaled Half-Logistic Distribution under Progressive Type-II Censoring

Confidence interval construction for the scale parameter of the half-logistic distribution is considered using four different methods. The first two are based on the asymptotic distribution of the maximum likelihood estimator (MLE) and log-transformed MLE. The last two are based on pivotal quantity and generalized pivotal quantity, respectively. The MLE for the scale parameter is ...
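
The flavor of the first method, shown for an uncensored half-logistic sample for simplicity (the article's progressive Type-II scheme is more involved): maximize the likelihood numerically and form a Wald interval from the observed information:

    ## Asymptotic (Wald) interval for the half-logistic scale parameter.
    set.seed(16)
    sigma_true <- 2
    u <- runif(100)
    x <- sigma_true * log((1 + u) / (1 - u))   # half-logistic quantile transform

    nll <- function(s) -sum(log(2 / s) - x / s - 2 * log(1 + exp(-x / s)))
    opt <- optim(1, nll, method = "Brent", lower = 1e-3, upper = 50,
                 hessian = TRUE)
    se  <- sqrt(1 / opt$hessian[1, 1])
    opt$par + c(-1, 1) * qnorm(0.975) * se     # 95% Wald interval for sigma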

Methodology for Constructing Perceptual Maps Incorporating Measurement Error in Sensory Acceptance Tests

A new method is proposed, based on the construction of perceptual maps using correspondence analysis and interval algebra, that allows the measurement error expected in panel choices to be specified on an evaluation form using an unstructured 9-point hedonic scale.
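
The perceptual-map backbone is a correspondence analysis of a product-by-hedonic-category table; MASS::corresp gives the coordinates, while the interval-algebra error bands are the article's contribution and are not shown. The counts below are invented for illustration:

    ## Correspondence analysis of a toy product-by-hedonic-category table.
    library(MASS)
    tab <- matrix(c(30, 50, 20,
                    10, 40, 50,
                    45, 35, 20,
                    25, 25, 50), nrow = 4, byrow = TRUE,
                  dimnames = list(paste0("product", 1:4),
                                  c("dislike", "neutral", "like")))
    ca_fit <- corresp(tab, nf = 2)   # two-dimensional solution
    ca_fit$rscore                    # product coordinates (perceptual map)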

A Note on Determination of Sample Size from the Perspective of Six Sigma Quality

In most empirical studies (clinical, network modeling, survey-based, and aeronautical studies, etc.), sample observations are drawn from a population to analyze and draw inferences about that population. Such analysis is done with reference to a measurable quality characteristic of a product or process of interest. However, fixing a sample size is an important task that has to be ...
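
The precision-based calculation such a note builds on: the sample size needed to estimate a mean to within margin E at confidence 1 - alpha is n = (z * sigma / E)^2. A direct R transcription (the Six Sigma refinement is the article's subject and is not shown):

    ## Sample size for estimating a mean to within margin E.
    sample_size <- function(sigma, E, alpha = 0.05) {
      z <- qnorm(1 - alpha / 2)        # standard normal critical value
      ceiling((z * sigma / E)^2)
    }
    sample_size(sigma = 10, E = 2)     # e.g., sigma = 10, E = 2 gives n = 97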