Evaluating the risk of nonresponse bias in educational large-scale assessments with school nonresponse questionnaires: a theoretical study

Large-scale Assessments in Education, Feb 2017

Survey participation rates can have a direct impact on the validity of the data collected, since nonresponse always carries the risk of bias. Therefore, the International Association for the Evaluation of Educational Achievement (IEA) has set very high standards for minimum survey participation rates. Nonresponse in IEA studies varies between studies and cycles. School participation is at higher risk than within-school participation; school students are more likely to cooperate than adults (i.e., university students or school teachers). Across all studies conducted by the IEA during the last decade, between 7 and 33% of participating countries failed to meet the minimum participation rates at the school level. Quantifying the bias introduced by nonresponse is practically impossible with the currently implemented design. During the last decade, social researchers have introduced and developed the concept of nonresponse questionnaires: shortened instruments administered to nonrespondents that aim to capture information correlating with both the survey's main outcome variable(s) and the respondent's propensity to participate. In this paper we suggest a method for developing such questionnaires for nonresponding schools in IEA studies. To this end, we investigated school characteristics associated with students' average achievement scores using correlational and multivariate regression analysis in three recent IEA studies. We developed regression models that explain, with 11 or fewer school questionnaire variables, up to 77% of the variance of the school mean achievement score. On average across all countries, the R² of these models was 0.24 (PIRLS), 0.34 (TIMSS, grade 4) and 0.36 (TIMSS, grade 8), using 6–11 variables. We suggest that data from such questionnaires can help to evaluate bias risks in an effective way.
Further, we argue that for countries with low participation rates, a change in the approach to computing nonresponse adjustment factors should be considered, toward a system where a school's participation propensity determines its nonresponse adjustment factor.

https://link.springer.com/content/pdf/10.1186%2Fs40536-017-0038-6.pdf


Sabine Meinck (1), Diego Cortes (2), Sabine Tieck (1)

(1) IEA Data Processing and Research Center, Hamburg, Germany
(2) Johannes-Gutenberg-University, Mainz, Germany
Keywords: Nonresponse bias; multistage sample surveys; weighting adjustments

IEA standards must be met before the results of a study are reported and the data is made publicly available; they are meant to ensure high quality and validity of the survey results. Among other measures, the IEA outlines minimum participation rates. This is because usually no or very little information is available about nonresponding units or individuals, which is why nonresponse always carries the risk of bias. Therefore, the general goal of any survey researcher is to achieve a 100% response rate. However, IEA studies acknowledge the difficulties in achieving this goal. Instead, they determine specific minimum participation rates to reduce the risk of bias due to nonresponse. As a standard rule, 85% of the sampled schools within a country as well as 85% of the sampled individuals must participate in the survey in order for the data and results to be accepted for final release. Participation rates in IEA studies vary among educational systems (further referred to as "countries"), target populations and surveys.
Notably, highly developed Western economies face increasing difficulty complying with the IEA's response rate standards. As a general rule, data from participating countries that fail to meet these standards are annotated in the international reports or are even reported in separate report sections, highlighting the possibly reduced validity of the results to readers. Interested readers are referred to the TIMSS International Report, Appendix C.8 (Mullis et al. 2012a), for details on participation rates and guidelines for annotations. A common approach to mitigating the risk of nonresponse bias in survey estimates is adjustment cell reweighting, where participating units (schools, students, teachers, etc.) carry the weight of nonresponding units. This technique is based on the assumption of a non-informative response model, that is, that nonresponse occurs completely at random within each adjustment cell. This weighting adjustment method is used in all IEA studies, as no, or very limited, information is available about nonresponding units. Explicit strata constitute, in most cases, the adjustment cells for school- and class-level nonresponse, while schools or classes usually constitute the adjustment cells for individual-level nonresponse (Martin and Mullis 2013; Schulz et al. 2011; Meinck 2015). Since there is no way to prove that unit nonresponse is completely at random within an adjustment cell, the IEA standards are very strict on response rate thresholds, as pointed out above. This paper proposes a novel approach to evaluating the risk of bias due to nonresponse at the school level. IEA surveys usually implement a two-stage stratified cluster sampling design: schools are selected first, and then individuals (or classes) are randomly selected within the sampled schools (hence, nonresponse can occur at both sampling stages).
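Adjustment cell reweighting can be sketched as follows. This is a minimal illustration on invented data (the cell labels, base weights and participation flags are all assumptions), not the operational weighting procedure of any IEA study:

```python
# Minimal sketch of adjustment-cell reweighting with invented data.
from collections import defaultdict

# Each sampled school: (adjustment cell = explicit stratum, base weight, participated?)
schools = [
    ("urban", 10.0, True), ("urban", 10.0, True), ("urban", 10.0, False),
    ("rural", 25.0, True), ("rural", 25.0, False), ("rural", 25.0, True),
]

# Adjustment factor per cell: summed base weights of all eligible sampled
# schools divided by the summed base weights of the participating schools.
sampled, responded = defaultdict(float), defaultdict(float)
for cell, weight, participated in schools:
    sampled[cell] += weight
    if participated:
        responded[cell] += weight
factors = {cell: sampled[cell] / responded[cell] for cell in sampled}

# Participating schools carry the weight of the nonrespondents in their cell,
# so the total weight within each cell is preserved.
adjusted = [(cell, weight * factors[cell])
            for cell, weight, participated in schools if participated]

print(factors)  # {'urban': 1.5, 'rural': 1.5}
```

Here each cell loses one of three (equally weighted) schools, so both factors are 1.5; the adjustment is unbiased only if nonresponse is completely at random within each cell, which is exactly the assumption discussed above.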
In order to validate our approach, we first provide evidence that school-level participation is at higher risk relative to within-school participation. This implies that the highest burden for survey administrators is to convince schools to participate in these assessments, while high rates of within-school participation are usually easy to achieve. Hence, understanding nonresponse at the school level is of great importance, and adjusting for the bias introduced by any systematic nonresponse pattern is recommended.

The current state of nonresponse bias analysis in LSA

Encouraging participating countries to achieve the highest response rate possible in order to maximize data quality is not unique to the IEA, but is rather a common feature of all international comparative large-scale assessments in education. The minimum thresholds set for participation, though, vary substantially among studies, as there is no universal consensus on the minimum acceptable participation rate. Increasing nonresponse rates motivate study centers to develop further strategies to ensure high data quality besides setting minimum requirements. However, no general standards exist that help countries facing low participation rates to analyze their data and verify the bias risk due to poor response rates. To our knowledge, there are three international comparative surveys in education which have systematically conducted nonresponse bias analyses to evaluate the risk of bias due to poor participation. In what follows, we briefly summarize the approaches implemented by these studies. All participating countries in the Programme for the International Assessment of Adult Competencies (PIAAC) (OECD 2013) were required to carry out a "basic nonresponse bias analysis". This consisted of comparing survey respondents and nonrespondents on individual characteristics assumed to be associated with the main outcome variable of the survey.
All countries had to include in this analysis at least the following variables: age, gender, education, employment and region. When participating countries were not able to achieve an overall participation rate of 70%, they were required to perform a more in-depth nonresponse bias analysis (Mohadjer et al. 2013). Examples of such analyses are: comparing survey total estimates with census totals, comparing response rates by demographic characteristics, and correlating weighting adjustment variables with proficiency measures (the outcome variables). As one example of the outcome of such an analysis, Helmschrott and Martin (2014) found for Germany that age, citizenship, level of education, the type of house the sampled persons live in, and municipality size were the main factors influencing response to PIAAC. The Teaching and Learning International Survey (TALIS) (OECD 2014) is a comparative international large-scale survey on teacher competences. The international survey and sampling design of TALIS coincides, to a large extent, with the design of most other IEA studies. The primary sampling units are schools, and responses are at risk at both sampling stages (in the case of TALIS, schools and the teachers within sampled schools). The TALIS International Consortium invited countries facing participation problems at any sampling stage to conduct a nonresponse bias analysis to evaluate the risk of bias. The first step proposed was to compare the weighted estimates of characteristics from the school and teacher surveys with official statistics. This was done to show that (non)response propensity is independent of teacher or school characteristics. As a second step, the impact of response propensities on teachers' characteristics was analyzed. This analysis consisted of comparing teachers' and/or schools' characteristics between participating schools having different within-school participation rates.
The aim was to show that survey results from schools with high participation rates can be compared with those from schools with low participation rates. Analysis results of affected countries are not publicly available. The International Computer and Information Literacy Study (ICILS) was the first IEA study to systematically conduct a nonresponse analysis in order to evaluate the risk of bias due to systematic non-participation (Meinck and Cortes 2015). ICILS aims to draw inferences about two populations, students and teachers, and the nonresponse analysis was performed at the student and teacher levels (i.e., within participating schools). At the student level, associations between response propensities, gender and students' computer and information literacy (ICILS' main outcome variable) were explored. At the teacher level, distributions of respondents and nonrespondents were compared with respect to age, subject domain and gender. These were the only individual characteristics that ICILS collected for both respondents and nonrespondents. The analysis showed that differences in response patterns between boys and girls were negligible, but that teachers' response patterns differed significantly by gender, age and main subject domain (Meinck and Cortes 2015). The approaches presented above vary significantly in the way the common goal of evaluating potential bias introduced by nonresponse was addressed. The common feature of PIAAC and TALIS is that they use auxiliary variables for the nonresponse analysis which may not have been present on the sampling frame, thereby allowing country-specific variables in the analysis. ICILS, on the other hand, exploited the very limited information available for respondents and nonrespondents in all countries in a standardized way. The limitations of both approaches are obvious: (1) the availability and reliability of auxiliary statistical information varies substantially across countries, and (2) restrictions in the array of available information on nonresponding units limit the explanatory power of the analyses.
From the authors' point of view, the approach followed by ICILS is more consistent in a cross-country comparative framework, but very limited in terms of available information. Another approach to evaluating bias was developed in non-educational social surveys. So-called nonresponse or basic questionnaires are handed out to individuals who refuse participation, or who could not be contacted in the main data collection (e.g., Bethlehem and Kersten 1985; Lynn 2003; Stoop 2004; Matsuo et al. 2010). These questionnaires contain a significantly reduced number of survey questions. The items in the questionnaires are assumed to be highly associated with the survey's main outcome variables and with the unit's participation propensity. This allows researchers to evaluate the risk of bias arising from nonresponse, determine methods of nonresponse adjustment (e.g., weight adjustments related to the features of nonrespondents), or identify missing data imputation models. Recent research has provided evidence that it is possible to achieve high participation rates in nonresponse questionnaires, which is the precondition for a meaningful use of the collected data (Lynn 2003; Stoop 2004; Matsuo et al. 2010). To our knowledge, nonresponse questionnaires have yet to be used in any large cross-national comparative assessment in education.

Research focus, methods and data sources

There is extensive evidence in the literature that the main outcome variables in IEA assessments (usually achievement scores in specific subject domains) are highly associated with background characteristics of the participants (Caldas and Bankston 1997; Fuller 1987; Grace and Thompson 2003), suggesting that school context explains an important portion of the variability of student achievement scores (e.g., Koretz et al. 2001; Lamb and Fullarton 2001; Baker et al. 2002; Mullis et al. 2012a, b). In a first step, this paper will evaluate the scope of nonresponse in IEA surveys.
All IEA studies conducted within the last ten years will be reviewed with respect to nonresponse levels at the different sampling stages. We will then focus on the methodological feasibility of developing a school-level nonresponse questionnaire by identifying items that serve as good predictors of school average achievement. We will thereby also address operational constraints by trying to keep the number of items to a minimum. Note that since the practical implementation of such questionnaires is pending, we cannot yet evaluate whether the items also correlate with response propensities. The potential content of these questionnaires will be determined by analyzing the association of school-level variables with student-level results using data from TIMSS and PIRLS 2011. Regression analysis, using only school-level characteristics, will be applied to identify the best-fitting model for predicting averaged student achievement scores. We will compare cross-country standardized models with country-specific models. We accounted for the complex sample design (i.e., stratification and unequal selection probabilities of schools) by applying sampling weights for the estimation of population parameters and jackknife repeated replication for the estimation of standard errors.

Between and within-school nonresponse rates across IEA studies

Table 1 summarizes nonresponse rates of all IEA studies within the last decade. It can be seen that the amount of nonresponse varies between studies and cycles. Overall, about 17% of countries failed to meet the minimum participation standards at the school level when the target population was school students. In ICCS 2009 and ICILS 2013, however, every third country could not convince at least 85% of the sampled schools to participate in the study. In contrast, countries hardly ever struggle to reach the minimum participation rates for the sampled students within participating schools.
Looking through the technical documentation of IEA studies, one finds that in the majority of countries, student participation rates are well above 90%. Hence, even if non-participants deviate systematically from participants, the risk of bias is very low. When adults comprise the target population, achieving high participation rates at both sampling stages becomes even more challenging, as shown in the lower part of Table 1. On average, 40% of countries failed to meet the minimum participation requirements for the sampled schools, and more than 30% failed to meet these requirements within participating schools. Replacing sampled schools that refuse to participate with predefined (replacement) schools is a common strategy to support countries facing school participation problems. In most student surveys, the use of replacement schools has helped countries to achieve the survey's minimum participation rates. However, there may be a risk of bias due to the use of replacement schools. Specific methods are used to determine replacement schools in all IEA studies in order to keep this risk as low as possible: replacements are assigned in a way that ensures they share similar features with the originally sampled school (i.e., they belong to the same stratum and have a similar size). However, since information on the originally sampled schools is very limited, one cannot be certain that there are no systematic differences between the sampled schools and their replacements that could cause nonresponse on one side but not on the other. Therefore, the bias risk is not quantifiable; this is why the use of replacement schools is strictly limited in IEA studies. Countries that meet the minimum participation requirements only after including replacement schools get annotated in the international reports.
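The idea of assigning replacements that share a stratum and have a similar size can be illustrated with a simple neighbor rule on a sorted frame. This is a hypothetical sketch on invented data (school IDs, strata and sizes are made up), not the actual replacement procedure of any IEA study:

```python
# Toy sampling frame: (school id, explicit stratum, enrolment size); invented data.
frame = [
    ("S01", "urban", 420), ("S02", "urban", 450), ("S03", "urban", 480),
    ("S04", "urban", 510), ("S05", "rural", 120), ("S06", "rural", 140),
    ("S07", "rural", 160), ("S08", "rural", 190),
]

def replacements(frame, sampled_id):
    """Return candidate replacements for a sampled school: its neighbors on
    the frame sorted by stratum and size, restricted to the same stratum,
    so that candidates share the stratum and have a similar size."""
    ordered = sorted(frame, key=lambda s: (s[1], s[2]))
    idx = next(i for i, s in enumerate(ordered) if s[0] == sampled_id)
    stratum = ordered[idx][1]
    candidates = []
    for j in (idx + 1, idx - 1):          # next neighbor first, then previous
        if 0 <= j < len(ordered) and ordered[j][1] == stratum:
            candidates.append(ordered[j][0])
    return candidates

print(replacements(frame, "S06"))  # ['S07', 'S05']
```

Even under such a rule, the similarity is only in frame characteristics; as noted above, unobserved differences between a refusing school and its replacement remain possible, which is why the bias risk stays unquantifiable.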
Table 1 Percentages of countries failing the participation rate requirements in IEA studies (last 10 years); columns: number of participating countries, and countries failing the participation rate requirements at school level before and after replacement.

Studies with school students comprising the target population (number of participating countries): ICCS 2009, grade 8 students (42); ICILS 2013, grade 8 students (21); PIRLS 2006, grade 4 students (47); PIRLS 2011, grade 4 students (58); TIMSS 2007, grade 4 students (43); TIMSS 2007, grade 8 students (56); TIMSS 2011, grade 4 students (50); TIMSS 2011, grade 8 students (59); TIMSS Advanced 2008, advanced mathematics students (10); TIMSS Advanced 2008, physics students (9); overall (395).

Studies with adults comprising the target population (number of participating countries): ICCS 2009, grade 8 teachers (37); ICILS 2013, grade 8 teachers (21); SITES 2006, math teachers (22); SITES 2006, science teachers (22); TEDS-M 2008, future primary math teachers (16); TEDS-M 2008, future lower-secondary math teachers (16); TEDS-M 2008, university educators (16); overall (150).

In conclusion, IEA studies face a non-negligible amount of nonresponse, which occurs especially at the school level in student surveys and at both sampling stages when adults are the target population. Therefore, enhancing methods of analyzing and addressing nonresponse is of general importance in order to obtain evidence that study results remain unaffected by nonresponse.

Results

Association of school-level variables with student-level results using selected IEA survey data

The analyses and procedural steps explicated in this section were carried out with the goal of developing a shortened school questionnaire. This questionnaire would contain variables comprising a regression model with high explanatory power for the school's average achievement score. The analysis was conducted first with data from TIMSS 2011, grade 4, and repeated with data from TIMSS 2011, grade 8, and PIRLS 2011.
As the first step, we calculated mathematics or reading score averages by school (across students and plausible values) and merged these with the school-level data. Then, we determined the relationship of each variable from the school questionnaire with average student achievement by running a correlation analysis for each participating country, weighted by the school-level weight (SCHWGT).

Standardized questionnaire

In an effort to develop a questionnaire that could work in a standardized format for any participating country, we considered all variables with cross-country average correlation coefficients |r| ≥ 0.2 for further analysis. Table 2 shows which variables fulfilled this condition in the considered studies. As can be seen, some variables fulfill the criterion in all studies, others in only one or two. In TIMSS grade 4, only six variables fulfilled the criterion, while ten and eleven variables, respectively, were kept for TIMSS grade 8 and PIRLS. Then, we ran regression models separately for each country and study as

y = α + β1x1 + β2x2 + … + βnxn

with y being the students' achievement score averaged at school level, α the intercept of the regression equation, β the regression coefficients (assuming linear effects on the school mean scores), x the relevant school questionnaire variables, and the subscript n denoting the number of variables included in the model. We estimated and report the adjusted R² of each model, which is the portion of the variance of the average achievement scores explained by the model. For any given country and study, we started with a model with only one variable and then added the considered variables one by one in order to monitor the increase in R². As expected, the explained variance varied significantly between countries, as shown in Tables 3, 4 and 5.
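The screening and model-building steps above can be sketched as follows. This is a toy illustration on simulated data rather than the authors' code: the weight vector merely stands in for SCHWGT, the three questionnaire variables are simulated, and plain OLS is used (the study additionally applies jackknife repeated replication for standard errors):

```python
# Toy sketch: (1) weighted correlation of each school-questionnaire variable
# with the school mean score, (2) adding kept variables one by one to a
# regression and tracking adjusted R^2. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # number of schools (toy data)
w = rng.uniform(1, 5, n)                  # school weights (stand-in for SCHWGT)
X = rng.normal(size=(n, 3))               # three simulated questionnaire variables
y = 500 + 30 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 20, n)  # school mean scores

def wcorr(x, y, w):
    """Weighted Pearson correlation coefficient."""
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    vx = np.average((x - mx) ** 2, weights=w)
    vy = np.average((y - my) ** 2, weights=w)
    return cov / np.sqrt(vx * vy)

# Step 1: keep variables whose weighted correlation with y satisfies |r| >= 0.2.
kept = [j for j in range(X.shape[1]) if abs(wcorr(X[:, j], y, w)) >= 0.2]

# Step 2: fit y = a + b1*x1 + ... + bk*xk for growing k, tracking adjusted R^2.
adj_r2 = []
for k in range(1, len(kept) + 1):
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in kept[:k]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - resid.var() / y.var()
    adj_r2.append(1 - (1 - r2) * (n - 1) / (n - Z.shape[1]))

print(kept, [round(r, 2) for r in adj_r2])
```

On this toy data the two informative variables should pass the |r| ≥ 0.2 screen and the unrelated third one should typically be dropped, with adjusted R² rising as each kept variable enters the model.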
The standard model explained as much as 77% of the variance of the achievement scores in Chinese Taipei (PIRLS), 67% in Korea (TIMSS grade 8) and 66%, again in Chinese Taipei (TIMSS grade 4). To get an overview of the effectiveness of the models across countries, we computed the cross-country average of R² for each model and study (Table 6). On average across countries, the explained variance was 34% for PIRLS (model with 11 variables), 24% for TIMSS grade 4 (model with 6 variables) and 36% for TIMSS grade 8 (model with 10 variables).

Country-specific questionnaires

Oftentimes, the standardized models were able to explain a relatively high share of the variation between schools' student achievement averages in some countries, but not in others. Therefore, we considered applying tailored models for specific countries instead. We conducted the respective analyses exemplarily for the five countries with the lowest participation rates in PIRLS 2011: Belgium (French), England, the Netherlands, Northern Ireland and Norway. In order to determine the best-fitting model for each country, we fitted regression models with stepwise inclusion/exclusion of the variables according to specific model parameters (probability of F of 0.05 for entry and of 0.1 for removal). We selected the model solution with 11 variables in order to be able to compare the results with those of the standardized model (Table 7).

Table 2 School questionnaire variables with cross-country average correlation coefficients |r| ≥ 0.2 with the students' achievement scores averaged at school level:
- Approximately what percentages of students in your school have the following backgrounds? (Come from economically disadvantaged homes)
- Approximately what percentages of students in your school have the following backgrounds? (Come from economically affluent homes)
- How many people live in the city, town, or area where your school is located?
- Which best describes the immediate area in which your school is located?
- Which best characterizes the average income level of the school's immediate area?
- How would you characterize each of the following within your school? (Teachers' expectations for student achievement; BCBG11D)
- How would you characterize each of the following within your school? (Parental support for student achievement; BCBG11E)
- How would you characterize each of the following within your school? (Parental involvement in school activities; BCBG11F)
- How would you characterize each of the following within your school? (Students' desire to do well at school; BCBG11H)
- To what degree is each of the following a problem among <fourth/eighth-grade> students in your school? (Unjustified absenteeism)
- About how many of the students in your school can do the following when they begin primary/elementary school? (Read some words)
- About how many of the students in your school can do the following when they begin primary/elementary school? (Recognize most of the letters of the alphabet)

[Table 3 TIMSS grade 4: explained variance (adjusted R²) of the school-averaged mathematics score, by model and country; standard errors in parentheses. Variables included in the models: Model 1: ACBG03A; Model 2: ACBG03A, ACBG03B; Model 3: ACBG03A, ACBG03B, ACBG05C; Model 4: ACBG03A, ACBG03B, ACBG05C, ACBG12E; Model 5: ACBG03A, ACBG03B, ACBG05C, ACBG12E, ACBG12F; Model 6: ACBG03A, ACBG03B, ACBG05C, ACBG12E, ACBG12F, ACBG12H]

[Tables 4 and 5: corresponding explained variance (adjusted R²) by model and country for TIMSS grade 8 and PIRLS; standard errors in parentheses]

[Table 6 Descriptive statistics of R² (explained variance of achievement score) across countries, by model and study (TIMSS grade 4, TIMSS grade 8, PIRLS)]

[Table 7 R² (explained variance of achievement score) by country (PIRLS): participation rate of schools before replacement, adjusted R² of the standard model, and adjusted R² of the country-specific model; standard errors in parentheses]

Discussion and conclusions

… administer these questionnaires, ensuring that participation in the actual survey is not jeopardized. Methodological and financial considerations will determine whether a standard approach (one standardized questionnaire for all affected countries) or a tailored approach (country-specific questionnaires) is more efficient. Further investigations are needed to show whether the presented approach of developing nonresponse questionnaires is also applicable to other large-scale assessments, and whether nonresponse questionnaires for individuals could be developed in similar ways. Moreover, a study on the feasibility of the practical application is pending. Careful consideration is needed to optimally integrate the administration of such questionnaires into the tight schedule of large-scale assessments. High participation rates would be needed to ensure the usability of this instrument. In this sense, short questionnaires might be favorable, while another option would be to administer full school questionnaires. The latter would simplify data processing and operations, and also be beneficial regarding the quality of the nonresponse bias analysis.
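How data from such school nonresponse questionnaires might feed a bias evaluation can be sketched as follows: fit the regression model on participating schools and predict the mean achievement of nonresponding schools from their questionnaire answers. All data, names and the informative-nonresponse scenario below are invented for illustration; this follows the paper's proposal only in spirit, not as actual study code:

```python
# Toy sketch: gauge nonresponse bias by predicting school mean achievement for
# nonresponding schools from nonresponse-questionnaire variables, using a
# regression model fitted on participating schools. All data are invented.
import numpy as np

rng = np.random.default_rng(42)

# Participating schools: questionnaire variables X and observed mean scores y.
Xp = rng.normal(size=(150, 3))
yp = 500 + 25 * Xp[:, 0] + 10 * Xp[:, 1] + rng.normal(0, 15, 150)

# Fit y = a + b'x on participants (plain OLS for illustration).
Zp = np.column_stack([np.ones(len(Xp)), Xp])
beta, *_ = np.linalg.lstsq(Zp, yp, rcond=None)

# Nonresponding schools answered only the short questionnaire; here they are
# simulated as systematically disadvantaged (lower x1), i.e. informative nonresponse.
Xn = rng.normal(size=(30, 3))
Xn[:, 0] -= 1.0
Zn = np.column_stack([np.ones(len(Xn)), Xn])
pred_nonresp = Zn @ beta

# A large gap between the participant mean and the predicted nonrespondent
# mean flags a risk of nonresponse bias in the achievement estimate.
gap = yp.mean() - pred_nonresp.mean()
print(round(gap, 1))
```

Such a predicted gap could also motivate the propensity-based nonresponse adjustment factors suggested in the abstract, rather than the uniform within-cell factors used today.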
[Appendix table: proposed school nonresponse questionnaire — item stems (e.g., "How would you characterize each of the following within your school?", "To what extent…", "Does your school…") with associated school questionnaire variable codes and response categories; the item list is not recoverable from this extraction.]
Authors' contributions
SM developed the research questions and design, supervised data compilation, conducted major parts of the statistical analysis and interpretation of results, and drafted major parts of the manuscript. DC conducted parts of the statistical analysis, drafted minor parts of the manuscript, and critically revised all other parts of the manuscript. ST was responsible for data compilation and merging, preparation of data analysis, drafting all tables, and manuscript revision. All authors have given final approval of the manuscript version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Acknowledgements
The authors are thankful to Marc Joncas, Plamen Mirazchiyski, and two peer reviewers for their very useful comments.

Competing interests
The authors declare that they have no competing interests.



Sabine Meinck, Diego Cortes, Sabine Tieck. Evaluating the risk of nonresponse bias in educational large-scale assessments with school nonresponse questionnaires: a theoretical study, Large-scale Assessments in Education, 2017, 3, DOI: 10.1186/s40536-017-0038-6