Journal of Modern Applied Statistical Methods

List of Papers (Total 916)

JMASM 50: A Web-based Shiny Application for Conducting a Two Dependent Samples Maximum Test (R)

A web-based Shiny application, written in the R statistical language, was developed and deployed online to calculate a new two dependent samples maximum test as presented in Maggio and Sawilowsky (2014b). The maximum test allows researchers to conduct both the dependent samples t-test and the Wilcoxon signed-ranks test on the same data without raising the concerns associated with Type I error ...
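As a rough illustration of the idea (a simplified Python sketch using SciPy, not the published Shiny application or the exact Maggio and Sawilowsky procedure; the 1.96 critical value is a placeholder, since the published test uses its own adjusted critical values):

```python
# Simplified sketch of a two dependent samples maximum test: run the
# paired t-test and the Wilcoxon signed-ranks test on the same data,
# standardize both, and keep the larger absolute statistic.  The
# critical value (1.96 here) is illustrative only.
import numpy as np
from scipy import stats

def max_test(before, after, crit=1.96):
    t_stat, _ = stats.ttest_rel(before, after)
    w_stat, _ = stats.wilcoxon(before, after)
    # standardize the Wilcoxon statistic via its normal approximation
    n = len(before)
    mu_w = n * (n + 1) / 4
    sd_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z_w = (w_stat - mu_w) / sd_w
    stat = max(abs(t_stat), abs(z_w))
    return {"t": t_stat, "z_wilcoxon": z_w,
            "max_stat": stat, "reject": bool(stat > crit)}

rng = np.random.default_rng(1)
before = rng.normal(10.0, 2.0, 30)
after = before + rng.normal(1.0, 1.0, 30)   # clear treatment effect
print(max_test(before, after))
```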

JMASM 49: A Compilation of Some Popular Goodness of Fit Tests for Normal Distribution: Their Algorithms and MATLAB Codes (MATLAB)

The main purpose of this study is to review calculation algorithms for some of the most common non-parametric and omnibus tests for normality, and to provide them as a compiled MATLAB function. All tests are coded to return p-values, and the proposed function reports the results as an output table.
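A Python analogue of such a compiled function (a sketch using SciPy rather than the paper's MATLAB code; note that the KS p-value with parameters estimated from the sample is only a rough Lilliefors-type approximation):

```python
# Run several common normality tests on one sample and collect the
# p-values in a single results table.
import numpy as np
from scipy import stats

def normality_table(x):
    x = np.asarray(x, dtype=float)
    _, sw_p = stats.shapiro(x)
    _, dp_p = stats.normaltest(x)
    # Estimating mean/sd from the data makes this a Lilliefors-type
    # test, so the tabulated KS p-value is only approximate.
    _, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    _, jb_p = stats.jarque_bera(x)
    return {"Shapiro-Wilk": sw_p,
            "D'Agostino-Pearson": dp_p,
            "Kolmogorov-Smirnov": ks_p,
            "Jarque-Bera": jb_p}

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
for name, p in normality_table(sample).items():
    print(f"{name:20s} p = {p:.3f}")
```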

JMASM 48: The Pearson Product-Moment Correlation Coefficient and Adjustment Indices: The Fisher Approximate Unbiased Estimator and the Olkin-Pratt Adjustment (SPSS)

This syntax program is intended to provide an application, not readily available in SPSS, for users interested in the Pearson product-moment correlation coefficient (r) and its bias adjustment indices, such as the Fisher approximate unbiased estimator and the Olkin and Pratt adjustment.
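For readers outside SPSS, a minimal Python sketch of the Olkin-Pratt adjustment (the exact form via the Gauss hypergeometric function alongside its commonly cited approximation; function names are illustrative, and the Fisher estimator is not reproduced here):

```python
# Olkin-Pratt adjustment of the sample correlation r: the exact
# estimator is r * 2F1(1/2, 1/2; (n-2)/2; 1 - r^2), often approximated
# by r * (1 + (1 - r^2) / (2 * (n - 3))).
from scipy.special import hyp2f1

def olkin_pratt(r, n):
    """Approximately unbiased estimate of the population correlation."""
    return r * hyp2f1(0.5, 0.5, (n - 2) / 2, 1 - r**2)

def olkin_pratt_approx(r, n):
    return r * (1 + (1 - r**2) / (2 * (n - 3)))

print(olkin_pratt(0.5, 20))         # slightly larger than raw r = 0.5
print(olkin_pratt_approx(0.5, 20))
```

The raw r underestimates the population correlation in small samples, which is why the adjusted value is pulled upward.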

JMASM 47: ANOVA_HOV: A SAS Macro for Testing Homogeneity of Variance in One-Factor ANOVA Models (SAS)

Variance homogeneity (HOV) is a critical assumption for ANOVA whose violation may lead to perturbations in Type I error rates. Minimal consensus exists on selecting an appropriate test. This SAS macro implements 14 different HOV approaches in one-way ANOVA. Examples are given and practical issues discussed.
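SciPy exposes Python counterparts of a few of these approaches (a sketch, not the SAS macro's 14 tests): Bartlett's test and three Levene-type tests, applied to simulated groups, one with an inflated variance:

```python
# Compare several homogeneity-of-variance tests on three groups; the
# third group has three times the standard deviation of the others, so
# all tests should reject HOV.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
g1 = rng.normal(0, 1.0, 40)
g2 = rng.normal(0, 1.0, 40)
g3 = rng.normal(0, 3.0, 40)   # inflated variance

hov = {
    "Bartlett": stats.bartlett(g1, g2, g3)[1],
    "Levene (mean)": stats.levene(g1, g2, g3, center="mean")[1],
    "Brown-Forsythe (median)": stats.levene(g1, g2, g3, center="median")[1],
    "Levene (trimmed)": stats.levene(g1, g2, g3, center="trimmed",
                                     proportiontocut=0.1)[1],
}
for name, p in hov.items():
    print(f"{name:25s} p = {p:.4f}")
```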

JMASM 46: Algorithm for Comparison of Robust Regression Methods In Multiple Linear Regression By Weighting Least Square Regression (SAS)

The aim of this study is to compare different robust regression methods in three main models of multiple linear regression and weighted multiple linear regression. An algorithm for weighting multiple linear regression by standard deviation and variance to combine different robust methods is given in SAS, along with an application.

Bayesian Hypothesis Testing of Two Normal Samples using Bootstrap Prior Technique

The most important ingredient in Bayesian analysis is the prior distribution. A new method for determining the prior was developed within the framework of parametric empirical Bayes using the bootstrap technique. By way of example, Bayesian estimation of the parameters of a normal distribution with unknown mean and unknown variance was considered, as well as its application ...

Study Evaluating the Alterations Caused in an Exploratory Factor Analysis when Multivariate Normal Data is Dichotomized

The relationships that result when multivariate normal data are dichotomized are a source of concern when using exploratory factor analysis. The relationships in an exploratory factor analysis are examined when multivariate normal data, generated by Monte Carlo methods, are dichotomized.

The Impact of Inappropriate Modeling of Cross-Classified Data Structures on Random-Slope Models

Previous studies that explored the impact of misspecification of cross-classified data structure as strictly hierarchical are limited to random intercept models. This study examined the effects of misspecification of a two-level, cross-classified, random effect model (CCREM) where both the level-1 intercept and slope were allowed to vary randomly. Results suggest that ignoring one ...

A Double EWMA Control Chart for the Individuals Based on a Linear Prediction

Industrial processes use single and double Exponentially Weighted Moving Average (EWMA) control charts to detect small shifts. Occasionally there is a need to detect small trends rather than shifts, but the effectiveness of these charts at detecting small trends is limited. A new control chart is proposed to detect a small drift.
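A minimal sketch of the underlying smoothing (control limits are omitted, and λ = 0.2 is an arbitrary choice; this illustrates single versus double EWMA, not the proposed chart itself):

```python
# Single EWMA: z[t] = lam * x[t] + (1 - lam) * z[t-1].  The double
# EWMA applies the same recursion to the output of the first, which
# makes it more responsive to slow drifts (trends) than to noise.
import numpy as np

def ewma(x, lam=0.2):
    z = np.empty_like(x, dtype=float)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    return z

def double_ewma(x, lam=0.2):
    return ewma(ewma(x, lam), lam)

rng = np.random.default_rng(7)
n = 100
drift = 0.05 * np.arange(n)            # small linear trend
x = rng.normal(0, 1, n) + drift
print(double_ewma(x)[-1])              # tracks the drift, heavily smoothed
```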

Around Gamma Lindley Distribution

Some remarks on, and corrections to, a new distribution, the Gamma Lindley, of which the Lindley distribution is a particular case, are given pertaining to its parameter space.

Detection of Outliers in Univariate Circular Data using Robust Circular Distance

A robust statistic to detect single and multiple outliers in univariate circular data is proposed. The performance of the proposed statistic was tested in a simulation study and on three real data sets, and it was demonstrated to be robust.
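A generic sketch of the idea, not the authors' exact statistic: score each observation by its mean circular distance to all the others, so a point far from the main concentration of angles stands out:

```python
# Circular distance between angles a and b: pi - |pi - |a - b||, which
# lies in [0, pi].  An observation with a large mean distance to the
# rest of the sample is a candidate outlier.
import numpy as np

def circ_dist(a, b):
    return np.pi - np.abs(np.pi - np.abs(a - b))

def outlier_scores(theta):
    theta = np.asarray(theta, dtype=float)
    d = circ_dist(theta[:, None], theta[None, :])
    return d.mean(axis=1)

rng = np.random.default_rng(5)
theta = np.concatenate([rng.vonmises(0.0, 8.0, 30),   # tight cluster near 0
                        [np.pi]])                     # one opposite point
scores = outlier_scores(theta)
print(scores.argmax())   # index of the planted outlier (30)
```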

Missing Data in Longitudinal Surveys: A Comparison of Performance of Modern Techniques

Using a simulation study, the performance of complete case analysis, full information maximum likelihood, multivariate normal imputation, multiple imputation by chained equations, and two-fold fully conditional specification for handling missing data was compared in longitudinal surveys with continuous and binary outcomes, missing covariates, and an interaction term.

Approximating the Distribution of Indefinite Quadratic Forms in Normal Variables by Maximum Entropy Density Estimation

The quadratic form in non-central normal variables is represented as a sum of weighted independent non-central chi-square variables. This representation provides the moments of the quadratic form. Because the distribution moments of quadratic forms are known, the maximum entropy method is used to estimate the density function. A Euclidean distance is proposed to select an appropriate maximum ...

Prediction of Percent Change in Linear Regression by Correlated Variables

Multiple linear regression can be applied to predict an individual value of the dependent variable y from given values of the independent variables x. But it is not immediately clear how to estimate the percent change in y due to changes in the predictors, especially when those are correlated. This work considers several approaches to this problem, including its formulation via predictors ...

Semi-Parametric Method to Estimate the Time-to-Failure Distribution and its Percentiles for Simple Linear Degradation Model

Most reliability studies obtain reliability information from degradation measurements over time, which contain useful data about product reliability. Parametric methods such as the maximum likelihood (ML) estimator and the ordinary least squares (OLS) estimator are widely used to estimate the time-to-failure distribution and its percentiles. In this article, we estimate the ...

Characterizations of Distributions by Expected Values of Lower Record Statistics with Spacing

The characterizations of a certain class of probability distributions are established through conditional expectation of lower record values when the conditioned record value may not be the adjacent one. Some of its important deductions are also discussed.

Parameter Estimation In Weighted Rayleigh Distribution

A weighted model based on the Rayleigh distribution is proposed, and the statistical and reliability properties of this model are presented. Some non-Bayesian and Bayesian methods are used to estimate the β parameter of the proposed model. The Bayes estimators are obtained under the symmetric (squared error) and asymmetric (linear exponential) loss functions using non-informative ...

A Monte Carlo Study of the Effects of Variability and Outliers on the Linear Correlation Coefficient

Monte Carlo simulations are used to investigate the effect of two factors, the amount of variability and an outlier, on the size of the Pearson correlation coefficient. Some simulation algorithms are developed, and two theorems for increasing or decreasing the amount of variability are suggested.
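A single-replication illustration of the outlier effect (the study's full simulation design and theorems are not reproduced; sample size, correlation, and the outlier's coordinates are arbitrary choices):

```python
# Generate correlated bivariate normal data, then replace one point
# with a discordant extreme value and compare Pearson correlations.
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.normal(0, 1, n)
y = 0.7 * x + rng.normal(0, np.sqrt(1 - 0.7**2), n)  # true rho = 0.7

r_clean = np.corrcoef(x, y)[0, 1]

x_out, y_out = x.copy(), y.copy()
x_out[0], y_out[0] = 10.0, -10.0      # a single discordant outlier
r_outlier = np.corrcoef(x_out, y_out)[0, 1]

print(f"r without outlier: {r_clean:.3f}")
print(f"r with outlier:    {r_outlier:.3f}")
```

A single discordant point is enough to flip the sign of the coefficient, which is the kind of sensitivity the simulations quantify.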

Power and Sample Size Estimation for Nonparametric Composite Endpoints: Practical Implementation using Data Simulations

Composite endpoints are a popular outcome in controlled studies. However, the required sample size is not easily obtained due to the assortment of outcomes, correlations between them and the way in which the composite is constructed. Data simulations are required. A macro is developed that enables sample size and power estimation.
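The macro itself is written in SAS; as a language-neutral sketch of simulation-based power estimation, here with a single (non-composite) two-sample endpoint, where a composite would replace the data-generation step with correlated outcomes and a combination rule:

```python
# Estimate power by simulation: repeatedly generate two arms, run a
# two-sample t-test, and record the rejection rate at level alpha.
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, effect, n_sims=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        _, p = stats.ttest_ind(control, treated)
        rejections += p < alpha
    return rejections / n_sims

print(simulated_power(64, 0.5))   # roughly 0.80 for this medium effect
```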

Parallel Universe

Of the three kinds of two-mean comparisons which judge a test statistic against a critical value taken from a Student t-distribution, one – the repeated measures or dependent-means application – is distinctive because it is meant to assess the value of a parameter which is not part of the natural order. This absence forces a choice between two interpretations of a significant test ...

Effectively Comparing Differences in Proportions

A single framework of developing and implementing tests about proportions is outlined. It avoids some of the pitfalls of methods commonly put forward in an introductory data analysis course.

Performance Evaluation of Confidence Intervals for Ordinal Coefficient Alpha

The aim of this study was to investigate the performance of the Fisher, Feldt, Bonner, and Hakstian and Whalen (HW) confidence interval methods for the non-parametric reliability estimate, ordinal alpha. All methods yielded unacceptably low coverage rates and potentially increased Type I error rates.

Unit Root Test for Panel Data AR(1) Time Series Model With Linear Time Trend and Augmentation Term: A Bayesian Approach

Univariate time series models, in the case of the unit root hypothesis, tend to be biased towards accepting the unit root hypothesis, especially over a short time span. The panel data time series model is more appropriate in such situations. The Bayesian analysis of unit root testing for a panel data time series model is considered. An autoregressive panel data AR(1) ...

On Variance Balanced Designs

Balanced incomplete block designs are not always possible to construct because of their parametric relations. In such situations another balanced design, the variance balanced design, is required. The construction of binary, equally replicated variance balanced designs is discussed using the half fraction of 2^n factorial designs with smaller block sizes. This method was also ...