A web-based Shiny application, written in the R statistical language, was developed and deployed online to calculate the new two dependent samples maximum test presented in Maggio and Sawilowsky (2014b). The maximum test allows researchers to conduct both the dependent samples t-test and the Wilcoxon signed-ranks test on the same data without raising concerns associated with Type I error...
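The general idea — run both dependent-samples tests on the same paired data and act on the more extreme result — can be sketched as follows. This is a minimal Python illustration, not the Shiny application itself; the published maximum test uses adjusted critical values to control Type I error, which this sketch does not reproduce.

```python
import numpy as np
from scipy import stats

def max_test_sketch(x, y):
    """Run both dependent-samples tests on the same paired data and
    report the more extreme (smaller) p-value. NOTE: the published
    maximum test uses adjusted critical values to control Type I
    error; this sketch does not reproduce them."""
    _, t_p = stats.ttest_rel(x, y)   # dependent samples t-test
    _, w_p = stats.wilcoxon(x, y)    # Wilcoxon signed-ranks test
    return {"t_p": t_p, "wilcoxon_p": w_p, "min_p": min(t_p, w_p)}

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 30)
y = x + rng.normal(0.5, 1.0, 30)   # paired data with a shift
res = max_test_sketch(x, y)
print(res)
```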

The main purpose of this study is to review calculation algorithms for some of the most common non-parametric and omnibus tests for normality, and to provide them as a compiled MATLAB function. All tests are coded to return p-values, and the proposed function presents the results in an output table.
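The "one call, one table of p-values" design can be sketched in Python. The test selection below is an illustrative assumption, not the compiled MATLAB function's actual battery, and the KS line uses standardized data, so its p-value is only approximate.

```python
import numpy as np
from scipy import stats

def normality_table(x):
    """Sketch of a one-call normality-testing wrapper that returns a
    table of p-values (illustrative; the choice of tests here is an
    assumption, not the MATLAB function's)."""
    x = np.asarray(x)
    z = (x - x.mean()) / x.std(ddof=1)
    tests = {
        "Shapiro-Wilk": stats.shapiro(x)[1],
        "D'Agostino-Pearson": stats.normaltest(x)[1],
        "Jarque-Bera": stats.jarque_bera(x)[1],
        "KS (standardized)": stats.kstest(z, "norm")[1],  # approximate only
    }
    for name, p in tests.items():
        print(f"{name:20s} p = {p:.4f}")
    return tests

rng = np.random.default_rng(1)
table = normality_table(rng.normal(size=200))
```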

This syntax program provides an application, not readily available in SPSS, for users interested in the Pearson product–moment correlation coefficient (r) and r bias adjustment indices such as the Fisher Approximate Unbiased estimator and the Olkin and Pratt adjustment.
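The Olkin and Pratt adjustment mentioned above has a known form involving the Gauss hypergeometric function, together with a common closed-form approximation. The sketch below is an illustrative Python version, not the SPSS syntax program itself.

```python
import numpy as np
from scipy import stats
from scipy.special import hyp2f1

def olkin_pratt(x, y):
    """Bias-adjusted Pearson r: the Olkin-Pratt estimator, as usually
    stated, via the Gauss hypergeometric function, plus its common
    closed-form approximation. (Illustrative sketch only.)"""
    n = len(x)
    r, _ = stats.pearsonr(x, y)
    exact = r * hyp2f1(0.5, 0.5, (n - 2) / 2.0, 1.0 - r**2)
    approx = r * (1.0 + (1.0 - r**2) / (2.0 * (n - 3)))
    return r, exact, approx

rng = np.random.default_rng(7)
x = rng.normal(size=25)
y = 0.5 * x + rng.normal(size=25)
r, exact, approx = olkin_pratt(x, y)
print(f"r = {r:.4f}, Olkin-Pratt = {exact:.4f}, approx = {approx:.4f}")
```

Because r is biased toward zero in small samples, the adjustment always moves the estimate away from zero; the approximation agrees with the hypergeometric form to a few decimal places at moderate n.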

Homogeneity of variance (HOV) is a critical assumption of ANOVA, and its violation may lead to perturbations in Type I error rates. Minimal consensus exists on selecting an appropriate test. This SAS macro implements 14 different HOV approaches for one-way ANOVA. Examples are given and practical issues discussed.
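For readers without SAS, a few of the better-known HOV tests can be run side by side in Python; scipy covers only a handful of the 14 approaches the macro implements, so this is a partial illustration only.

```python
import numpy as np
from scipy import stats

# Three groups with unequal spread (the third has twice the SD).
rng = np.random.default_rng(2)
groups = [rng.normal(0.0, s, 40) for s in (1.0, 1.0, 2.0)]

# Three of the better-known HOV tests, applied to the same groups.
# (Levene with median centering is the Brown-Forsythe variant.)
hov = {
    "Bartlett": stats.bartlett(*groups)[1],
    "Brown-Forsythe (Levene, median)": stats.levene(*groups, center="median")[1],
    "Fligner-Killeen": stats.fligner(*groups)[1],
}
for name, p in hov.items():
    print(f"{name:32s} p = {p:.4f}")
```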

The aim of this study is to compare different robust regression methods in three main models of multiple linear regression and weighted multiple linear regression. An algorithm for weighting multiple linear regression by standard deviation and variance for combining different robust methods is given in SAS, along with an application.

The most important ingredient in Bayesian analysis is the prior distribution. A new prior determination method was developed within the framework of parametric empirical Bayes using the bootstrap technique. By way of example, Bayesian estimation of the parameters of a normal distribution under unknown-mean and unknown-variance conditions was considered, as well as its...
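One way the bootstrap can inform a parametric prior is sketched below: resample the data, collect bootstrap estimates of the parameter, and fit a normal prior to their distribution. This is an illustrative, parametric-empirical-Bayes-flavoured sketch, not the article's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(10.0, 2.0, 50)   # observed sample (toy data)

# Bootstrap the sample mean, then fit a normal prior for mu to the
# bootstrap distribution. (Illustrative only; the article's method
# may differ in how the prior family and hyperparameters are chosen.)
boot_means = np.array([rng.choice(data, data.size, replace=True).mean()
                       for _ in range(2000)])
prior_mu, prior_sd = boot_means.mean(), boot_means.std(ddof=1)
print(f"bootstrap-based normal prior for mu: N({prior_mu:.2f}, {prior_sd:.3f}^2)")
```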

The relationships that result when multivariate normal data are dichotomized are a source of concern when using exploratory factor analysis. These relationships are examined in an exploratory factor analysis when multivariate normal data, generated by Monte Carlo methods, are dichotomized.
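The core of the concern is attenuation: a median split of bivariate normal data shrinks the correlation from ρ toward the phi coefficient, which for a median split is approximately (2/π)·arcsin(ρ), and attenuated correlations distort factor loadings. A toy two-variable demonstration (not the article's full Monte Carlo design):

```python
import numpy as np

rng = np.random.default_rng(6)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])       # true rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], cov, 5000)

r_cont = np.corrcoef(z[:, 0], z[:, 1])[0, 1]   # correlation before splitting
d = (z > 0).astype(float)                      # dichotomize at the median
r_phi = np.corrcoef(d[:, 0], d[:, 1])[0, 1]    # phi coefficient after
print(f"r before dichotomization = {r_cont:.3f}, phi after = {r_phi:.3f}")
```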

The log-logistic model with doubly interval-censored data is examined. Three methods of constructing confidence interval estimates for the parameter of the model were compared and discussed. The results of the coverage probability study indicated that the Wald method outperformed the likelihood ratio and jackknife inferential procedures.

Previous studies that explored the impact of misspecifying a cross-classified data structure as strictly hierarchical are limited to random intercept models. This study examined the effects of misspecifying a two-level cross-classified random effects model (CCREM) in which both the level-1 intercept and slope were allowed to vary randomly. Results suggest that ignoring...

Industrial processes use single and double Exponentially Weighted Moving Average (EWMA) control charts to detect small shifts. Occasionally there is a need to detect small trends rather than shifts, but these charts are less effective at detecting small trends. A new control chart is proposed to detect a small drift.
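The textbook single-EWMA chart that serves as the baseline here smooths the series as z_t = λ·x_t + (1−λ)·z_{t−1} and signals when z_t leaves time-varying control limits. The sketch below implements that standard form (with in-control parameters estimated from an initial window, an assumption of this sketch); the article's proposed chart modifies this baseline to target drifts.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, m=20):
    """Standard single-EWMA control chart with time-varying limits.
    The first m points are treated as in-control for estimating mu0
    and sigma (an assumption of this sketch)."""
    x = np.asarray(x, dtype=float)
    mu0, sigma = x[:m].mean(), x[:m].std(ddof=1)
    z = np.empty_like(x)
    prev = mu0
    signals = []
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev   # z_t = lam*x_t + (1-lam)*z_{t-1}
        z[t] = prev
        half = L * sigma * np.sqrt(lam / (2.0 - lam)
                                   * (1.0 - (1.0 - lam) ** (2 * (t + 1))))
        if abs(prev - mu0) > half:
            signals.append(t)
    return z, signals

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0, 1, 20),
                    rng.normal(0, 1, 60) + 0.1 * np.arange(1, 61)])  # slow drift
z, signals = ewma_chart(x)
print("first signal at index:", signals[0] if signals else None)
```

Against a slow linear drift the EWMA lags the current mean by roughly (1−λ)/λ times the slope, which is the weakness motivating a dedicated drift-detection chart.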

Some remarks and corrections on a new distribution, the Gamma Lindley, of which the Lindley distribution is a particular case, are given pertaining to its parameter space.

A robust statistic to detect single and multiple outliers in univariate circular data is proposed. The performance of the proposed statistic was assessed in a simulation study and on three real data sets, and it was demonstrated to be robust.

This paper proposes a new member of the family of compound Poisson distributions, the Poisson quasi-Lindley (PQL) distribution, obtained by compounding the Poisson and quasi-Lindley distributions. Some properties of the distribution are given, along with estimation and illustrative examples.

Using a simulation study, the performance of complete case analysis, full information maximum likelihood, multivariate normal imputation, multiple imputation by chained equations, and two-fold fully conditional specification in handling missing data was compared in longitudinal surveys with continuous and binary outcomes, missing covariates, and an interaction term.

The quadratic form of non-central normal variables is represented as a sum of weighted independent non-central chi-square variables. This representation provides the moments of the quadratic form. The maximum entropy method is used to estimate the density function, because the moments of the quadratic form are known. A Euclidean distance is proposed to select an appropriate...
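The representation described above can be written out in standard notation (for x ~ N_n(μ, Σ) with A symmetric; the symbols below are the conventional ones, not necessarily the article's):

```latex
Q = x^{\top} A x = \sum_{j=1}^{n} \lambda_j \, \chi^2_{1}(\delta_j^2),
```

where the λ_j are the eigenvalues of Σ^{1/2} A Σ^{1/2} with orthonormal eigenvector matrix P, and δ_j = (P^{\top} Σ^{-1/2} μ)_j. The first moment follows immediately:

```latex
\mathbb{E}[Q] = \sum_{j=1}^{n} \lambda_j \left(1 + \delta_j^2\right)
             = \operatorname{tr}(A\Sigma) + \mu^{\top} A \mu ,
```

which agrees with the well-known identity for the mean of a quadratic form; higher moments follow from the non-central chi-square moments in the same way.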

Multiple linear regression can be applied to predict an individual value of a dependent variable y from given values of the independent variables x. But it is not immediately clear how to estimate the percent change in y due to changes in the predictors, especially when those predictors are correlated. This work considers several approaches to this problem, including its formulation via predictors...
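The naive calculation — and why it is not the whole story — can be shown with toy numbers (the coefficients below are assumed for illustration, not taken from the article):

```python
import numpy as np

# Naive percent change in predicted y for a one-unit change in x1,
# holding the other predictor at its mean. When predictors are
# correlated, x1 rarely changes with x2 held fixed -- which is
# exactly the difficulty the article addresses.
beta = np.array([2.0, 0.5, -1.2])    # b0, b1, b2 (hypothetical fit)
x_mean = np.array([1.0, 3.0, 2.0])   # intercept term, mean of x1, mean of x2

y_hat = beta @ x_mean                # predicted y at the predictor means
pct_change = 100.0 * beta[1] * 1.0 / y_hat
print(f"y_hat = {y_hat:.2f}; a one-unit increase in x1 "
      f"changes y by about {pct_change:.1f}%")
```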

Most reliability studies obtain reliability information from degradation measurements over time, which contain useful information about product reliability. Parametric methods such as the maximum likelihood (ML) estimator and the ordinary least squares (OLS) estimator are widely used to estimate the time-to-failure distribution and its percentiles. In this article, we estimate...

Characterizations of a certain class of probability distributions are established through the conditional expectation of lower record values, where the conditioned record value need not be the adjacent one. Several important deductions are also discussed.

Cancer screening and diagnostic tests are often classified using a binary outcome such as diseased or not diseased. Recently, large-scale studies have been conducted to assess agreement among many raters. Measures of agreement based on the class of generalized linear mixed models were implemented efficiently in four recently introduced R and SAS packages in large-scale agreement...

A weighted model based on the Rayleigh distribution is proposed, and the statistical and reliability properties of this model are presented. Some non-Bayesian and Bayesian methods are used to estimate the β parameter of the proposed model. The Bayes estimators are obtained under the symmetric (squared error) and the asymmetric (linear exponential) loss functions using non-informative...

Monte Carlo simulations are used to investigate the effect of two factors, the amount of variability and the presence of an outlier, on the magnitude of the Pearson correlation coefficient. Simulation algorithms are developed, and two theorems for increasing or decreasing the amount of variability are suggested.
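The outlier effect is easy to demonstrate in a single Monte Carlo draw: one discordant point can sharply reduce r. This toy sketch illustrates the phenomenon only; it is not the article's simulation design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 50)
y = 0.8 * x + rng.normal(0.0, 0.6, 50)   # strongly correlated data

r_clean, _ = stats.pearsonr(x, y)
# Append a single discordant outlier and recompute.
r_out, _ = stats.pearsonr(np.append(x, 5.0), np.append(y, -5.0))
print(f"r without outlier = {r_clean:.3f}; with one outlier = {r_out:.3f}")
```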

Composite endpoints are a popular outcome in controlled studies. However, the required sample size is not easily obtained because of the assortment of outcomes, the correlations between them, and the way in which the composite is constructed; data simulations are required. A macro is developed that enables sample size and power estimation.
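The simulation-based approach can be sketched for the simplest case: a two-component binary composite ("event on either component"), with components correlated through a Gaussian copula. Every name and parameter below is an illustrative assumption, not the macro's interface.

```python
import numpy as np
from scipy import stats

def composite_power(n, p_ctrl, p_trt, rho=0.3, nsim=200, seed=4):
    """Monte Carlo power sketch for a two-component binary composite
    endpoint, comparing two arms of size n with a Fisher exact test.
    (Illustrative assumptions throughout, not the macro's method.)"""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    thr_c = stats.norm.ppf(p_ctrl)      # latent thresholds giving the
    thr_t = stats.norm.ppf(p_trt)       # per-component event rates
    rejections = 0
    for _ in range(nsim):
        zc = rng.multivariate_normal([0, 0], cov, n)
        zt = rng.multivariate_normal([0, 0], cov, n)
        ec = (zc < thr_c).any(axis=1)   # composite event, control arm
        et = (zt < thr_t).any(axis=1)   # composite event, treatment arm
        table = [[int(ec.sum()), n - int(ec.sum())],
                 [int(et.sum()), n - int(et.sum())]]
        _, p = stats.fisher_exact(table)
        rejections += p < 0.05
    return rejections / nsim

print("estimated power:", composite_power(80, 0.2, 0.4))
```

Note how the component correlation matters: the higher rho is, the less the composite event rate exceeds the individual rates, which is one reason analytic sample-size formulas are hard to come by.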

Of the three kinds of two-mean comparisons that judge a test statistic against a critical value taken from a Student t-distribution, one – the repeated-measures or dependent-means application – is distinctive because it is meant to assess the value of a parameter that is not part of the natural order. This absence forces a choice between two interpretations of a significant...