
Early Childhood Education

Quantitative Analyses/Experimental Designs

Quantitative (parametric) statistics are used to analyze data collected in group research designs, including nonexperimental designs as well as natural experiments, quasi-experiments, and randomized trial experimental designs (Shadish et al., 2002). These are all multisubject designs in which characteristics of interest are measured systematically across a sample of participants. Quantitative and qualitative research methodologies differ in important ways in how they address education research questions.

In a recent monograph focused on scientific research in education, the National Research Council (2002) suggested that many education research questions can be characterized as addressing questions of “description—what is happening? cause—is there a systematic effect? and process or mechanisms—why or how is it happening?” (p. 99). Both quantitative and qualitative research methods are used in analyses of data collected to answer questions related to description. In general, experimental research methods and quantitative analyses are used to answer questions related to (a) relations among variables and (b) differences between groups.

Correlational studies are “quantitative, multi-subjects designs in which participants have not been randomly assigned to treatment conditions” (Thompson et al., 2005, p. 182). The analytic models applied with these designs evaluate the relations among two or more variables of interest. These methods include multiple regression analysis, canonical correlation analysis, hierarchical linear modeling, and structural equation modeling (Thompson et al., 2005). Although they do not provide definitive causal evidence, results from correlational studies can suggest directions for future experimental research. The use of sophisticated causal modeling or exclusion methods in correlational designs provides some basis through which “correlational evidence can at least tentatively inform evidence-based practice” (Thompson et al., 2005, p. 190).
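As a minimal illustration of the first of these methods, the following Python sketch fits an ordinary least-squares multiple regression to simulated data. The predictor names, sample size, and values are hypothetical and are not drawn from any study cited here.

import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: weekly minutes of shared book reading
# and a composite home-environment score (simulated values).
book_reading = rng.normal(60, 15, n)
home_env = rng.normal(100, 10, n)

# Simulated outcome: a vocabulary score related to both predictors.
vocab = 20 + 0.3 * book_reading + 0.5 * home_env + rng.normal(0, 8, n)

# Ordinary least squares via the design matrix [1, x1, x2].
X = np.column_stack([np.ones(n), book_reading, home_env])
coef, *_ = np.linalg.lstsq(X, vocab, rcond=None)

# R^2: proportion of outcome variance explained by the predictors.
resid = vocab - X @ coef
r2 = 1 - resid.var() / vocab.var()
print(f"intercept={coef[0]:.2f}, b_reading={coef[1]:.2f}, "
      f"b_home={coef[2]:.2f}, R^2={r2:.2f}")

In an actual correlational study the predictors would of course be measured rather than simulated, and the coefficients would be interpreted as associations, not causal effects.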

In contrast to correlational designs, analyses of data collected using natural experiments, quasi-experimental designs, or experimental designs typically focus on outcome differences between groups (e.g., analysis of variance, analysis of covariance, multivariate analysis of variance) or on different rates of growth (e.g., growth curve analysis with multiple data points; multivariate repeated-measures analysis of variance). A brief description of each of these types of experimental designs appears below, followed by general comments on analytic strategies.
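To make the group-comparison logic concrete before those descriptions, here is a minimal Python sketch of a one-way analysis of variance, assuming simulated outcome scores for three hypothetical conditions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated outcome scores for three hypothetical conditions.
control = rng.normal(50, 10, 40)
curriculum_a = rng.normal(55, 10, 40)
curriculum_b = rng.normal(53, 10, 40)

# One-way ANOVA: do mean outcomes differ across the three groups?
f_stat, p_value = stats.f_oneway(control, curriculum_a, curriculum_b)
print(f"F={f_stat:.2f}, p={p_value:.4f}")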

Natural experiments are group designs in which a “naturally occurring contrast between a treatment and a comparison condition” (Shadish et al., 2002, p. 17) is the focus of the research question. For example, the Swedish Adoption/Twin Study of Aging is a natural experiment in which data collected on pairs of twins separated at a young age and reared apart (46 identical, 100 fraternal pairs) are compared with data from matched pairs of twins reared together (67 identical and 89 fraternal pairs). Data from this study have been used to understand genetic and environmental influences on cognitive and social behaviors (cf. Bergeman et al., 2001; Kato and Pedersen, 2005). Because it would be unethical to experimentally assign infants to be separated from their parents and siblings, twin studies such as this one rely on naturally occurring events in children’s lives. Other examples of natural experiments include studies of children living in orphanages (e.g., Morison and Elwood, 2000) and of children with mental retardation (Skeels and Dye, 2002). Natural experiments such as these fit the definition of quasi-experimental research designs (described below) when there is a comparison group against which children in the intervention are compared; the Swedish Twin Study is an example of a natural quasi-experimental design.

Of particular interest in education and intervention research are answers to questions about the effects of interventions and the mechanisms through which those effects might occur. These are often referred to as “What works?” and “How does it happen?” questions (National Research Council, 2002). Quasi-experimental and experimental research designs are typically used to address these types of research questions. In both, researchers are interested in understanding different treatment or intervention effects across two or more groups. In a true experiment (described below as a randomized controlled trial design), participants are randomly assigned to an intervention or control (nontreatment, placebo) group. In a quasi-experimental design, by contrast, assignment to groups occurs through self-selection. In this case, unknown preexisting differences may be systematically associated with group selection, making it difficult to exclude all possible alternative explanations if different intervention outcomes are found across groups (Shadish et al., 2002). Many important and policy-relevant research questions, including questions about the effectiveness of intervention programs such as Head Start and the contributions of different types of early care to children’s development, are addressed using quasi-experimental methods (cf. NICHD ECCRN, 2004; U.S. Department of Health and Human Services, 2005).

Randomized controlled trial designs are the best approach for understanding how specific intervention components are related to outcomes for children or families (Feuer et al., 2002). The unique strength of randomized experimental designs “is in describing the consequences attributable to deliberately varying a treatment” (Shadish et al., 2002, p. 9). In randomized designs, participants are assigned to experimental groups by chance. If done correctly, random assignment creates two or more groups that are probabilistically similar on average. When an intervention is applied to one group (the experimental group) but not to the other (the control or placebo group), or when different types of interventions are applied across groups, any differences in outcomes that are detected can be attributed to the intervention (Gersten et al., 2005; Shadish et al., 2002).
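A minimal sketch of random assignment itself, assuming a simple two-group trial and a hypothetical participant roster, might look as follows in Python.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical participant IDs; in practice these would come from
# the study roster.
participants = [f"child_{i:03d}" for i in range(1, 41)]

# Shuffle, then split in half: each child has the same chance of
# landing in either condition.
shuffled = rng.permutation(participants)
treatment, control = shuffled[:20], shuffled[20:]
print("treatment:", list(treatment[:5]), "...")
print("control:  ", list(control[:5]), "...")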

Because there are at least two groups (treatment and comparison), analyses of data collected using natural experiments, quasi-experimental designs, or randomized controlled trials use general linear modeling techniques (including variations of analysis of variance, growth curve modeling, and hierarchical linear modeling) to compare group outcomes (Tabachnick and Fidell, 2001). Because there are often several potential units of analysis (e.g., data collected on children, teachers, and schools provide three different units of analysis), multilevel analyses (such as hierarchical linear modeling or growth curve modeling) are often most appropriate (Gersten et al., 2005). Current recommendations also call for researchers to show that the research design has sufficient statistical power to detect group differences and to report the size of the intervention effect in addition to tests of statistical significance (Gersten et al., 2005; National Research Council, 2002).
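As one way to carry out the power and effect-size reporting just described, the sketch below computes Cohen’s d from its standard pooled-variance formula for two simulated groups and then, using the statsmodels power module, estimates the per-group sample size needed to detect an effect of that size. All scores are simulated for illustration.

import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(2)
treated = rng.normal(55, 10, 60)   # simulated treatment-group scores
control = rng.normal(50, 10, 60)   # simulated control-group scores

# Cohen's d: mean difference scaled by the pooled standard deviation.
n1, n2 = len(treated), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treated.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treated.mean() - control.mean()) / pooled_sd

# Per-group sample size needed to detect d with 80% power at alpha = .05.
n_needed = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
print(f"Cohen's d={d:.2f}, n per group for 80% power={n_needed:.0f}")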

In addition to the correlational and group designs described above, analyses designed to provide descriptive information, and analyses of the psychometric characteristics of assessment instruments, also fit within the broad category of quantitative analyses. Descriptive education research methods include those designed to allow statements about the characteristics of a population, descriptions of simple relationships between variables, or descriptions of special groups or populations (National Research Council, 2002). For example, the average level and variability of characteristics of interest are typically reported using measures of central tendency, such as the mean or median, and of variability, such as the standard deviation. Nonexperimental research designs also include procedures used in scale development. Analytic approaches may include factor analysis (including principal-components analysis and confirmatory factor analysis) and assessments of internal consistency reliability (calculation of Cronbach’s alpha).
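The following sketch illustrates these descriptive and psychometric computations, reporting central tendency and variability for a simulated score and computing Cronbach’s alpha from its standard formula; the item responses are invented for illustration.

import numpy as np

rng = np.random.default_rng(3)

# Simulated total scores: central tendency and variability.
scores = rng.normal(100, 15, 250)
print(f"mean={scores.mean():.1f}, median={np.median(scores):.1f}, "
      f"sd={scores.std(ddof=1):.1f}")

# Simulated responses to a 5-item scale (rows = children, cols = items),
# built so the items share a common factor.
latent = rng.normal(0, 1, (250, 1))
items = latent + rng.normal(0, 0.8, (250, 5))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha={alpha:.2f}")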

Further Readings: Bergeman, C. S., J. M. Neiderhiser, N. L. Pedersen, and R. Plomin (2001). Genetic and environmental influences on social support in later life: A longitudinal analysis. International Journal of Aging and Human Development 53, 107-135; Feuer, M. J., L. Towne, and R. J. Shavelson (2002). Scientific culture and educational research. Educational Researcher 31, 4-14; Gersten, R., L. S. Fuchs, D. Compton, M. Coyne, C. Greenwood, and M. S. Innocenti (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children 71, 149-164; Kato, K., and N. L. Pedersen (2005). Personality and coping: A study of twins reared apart and twins reared together. Behavior Genetics 35, 147-158; National Research Council (2002). Scientific research in education. R. J. Shavelson and L. Towne, eds. Committee on Scientific Principles for Education Research, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press; NICHD Early Child Care Research Network (2004). Type of child care and children’s development at 54 months. Early Childhood Research Quarterly 19, 203-230; Shadish, W. R., T. D. Cook, and D. T. Campbell (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; Skeels, H. M., and H. B. Dye (2002). A study of the effects of differential stimulation on mentally retarded children. In J. Blacher and B. L. Baker, eds., The best of AAMR: Families and mental retardation: A collection of notable AAMR journal articles across the 20th century. Washington, DC: American Association on Mental Retardation, pp. 19-33; Tabachnick, B. G., and L. S. Fidell (2001). Using multivariate statistics. 4th ed. Boston: Allyn and Bacon; Thompson, B., K. E. Diamond, R. McWilliam, P. Snyder, and S. Snyder (2005). Evaluating the quality of evidence from correlational research for evidence-based practice. Exceptional Children 71, 181-194; U.S. Department of Health and Human Services, Administration for Children and Families (2005). Head Start impact study: First year findings. Washington, DC.

Karen Diamond