Research and Analysis Terminology

Baseline Equivalence

The extent to which two or more groups (e.g., treatment/intervention and comparison groups) are similar in characteristics at the start of a study, for the purpose of comparing the effect of the intervention on those groups. When comparing the effects of an intervention on two or more groups, it is assumed that the groups are equivalent in terms of performance, demographics, or other target characteristics prior to the intervention (i.e., at baseline) so that any subsequent differences can be attributed to the intervention. Establishing baseline equivalence enables one to isolate the effect of the intervention on the treatment group, while ruling out confounding factors and extraneous influences. An example of a factor that could undermine baseline equivalence would be the treatment group having a higher proportion of high-performing students than the comparison group at the start of the study. In such a situation, the effect of the intervention cannot be isolated and the results would be considered biased.
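
As a rough illustration, the sketch below shows one common way to check baseline equivalence: comparing the standardized difference of pretest means between groups. The pretest scores are hypothetical, and the 0.25 threshold mentioned in the comment is a commonly used rule of thumb, not a requirement of any particular study.

```python
import numpy as np

# Hypothetical pretest scores for each group (assumed data)
treatment_pre = np.array([72, 68, 75, 80, 66, 71, 77, 69])
comparison_pre = np.array([70, 74, 65, 78, 67, 73, 72, 68])

# Standardized difference of pretest means, using the pooled standard deviation
pooled_sd = np.sqrt((treatment_pre.var(ddof=1) + comparison_pre.var(ddof=1)) / 2)
std_diff = (treatment_pre.mean() - comparison_pre.mean()) / pooled_sd

# A small standardized difference (e.g., below about 0.25) is often taken as
# evidence that the groups are roughly equivalent at baseline.
print(f"Standardized baseline difference: {std_diff:.2f}")
```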

Cluster Analysis

A statistical method of dividing participants (e.g., students, schools) into meaningful groups based on the degree to which they share common characteristics (e.g., product usage). Characteristics of participants in one cluster are significantly different from those in the other clusters.
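
As a rough sketch of how such a grouping might be computed, the example below applies scikit-learn's KMeans to hypothetical product-usage data (minutes per week and modules completed); the choice of three clusters is an assumption made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical usage data: [minutes per week, modules completed]
usage = np.array([
    [30, 1], [35, 2], [40, 2],       # light users
    [120, 5], [110, 6], [130, 5],    # moderate users
    [300, 12], [280, 11], [310, 13]  # heavy users
])

# Partition participants into three clusters based on usage patterns
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(usage)
print(kmeans.labels_)  # cluster assignment for each participant
```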

Comparison Group

The group of participants in a study who either do not receive a treatment/intervention or who receive a different treatment/intervention, and are thus compared to the group of participants who did receive the treatment/intervention (i.e., the treatment group).

Confidence Interval

A range of values, estimated from a sample, that is likely to contain the true value for the population. For example, if a 90% confidence interval for a mean runs from 0.40 to 0.60, one can conclude that if multiple samples were drawn from the population and an interval computed from each, about 90% of those intervals would contain the true population mean. In other words, the procedure used to construct the interval captures the true value 90% of the time.
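
A minimal sketch of how a 90% confidence interval for a mean might be computed, assuming a small hypothetical sample of scores and using scipy's t-distribution helper.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of scores (assumed data)
scores = np.array([0.52, 0.47, 0.55, 0.43, 0.58, 0.49, 0.51, 0.46])

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean

# 90% confidence interval based on the t distribution
low, high = stats.t.interval(0.90, df=len(scores) - 1, loc=mean, scale=sem)
print(f"90% CI: ({low:.2f}, {high:.2f})")
```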

Confidence Level

The probability that the observed confidence intervals contain the true value for the population. The confidence level is set by the researcher and reported as a percentage. For example, a 90% confidence level indicates that if we take multiple samples from a population, 90% of the intervals would include the true value for the population.

Confounding Factor

A factor or variable in a study that influences at least some of the relationship between the independent (i.e., treatment/intervention) and dependent (i.e., learning outcome) variables. Failure to include or eliminate the influence of potential confounding variables (e.g., as covariates) limits the reliability and validity of the study. For example, when examining whether students who use a product (i.e., the treatment/intervention group) demonstrate greater gains in achievement than students who do not use the product (i.e., the comparison group), a confounding factor might be the average student achievement of each group before the treatment. Any difference in average student achievement between groups before the treatment is introduced can affect their achievement gains after the treatment. Thus, failing to account for that difference in the analysis, or to eliminate it before the treatment, will confound the results.

Correlation

A statistical measure of the relationship (magnitude and direction) between two variables — the extent to which one variable changes in relation to another variable. 

Correlation Coefficient

The correlation coefficient indexes the direction and the magnitude of the statistical relationship between two variables. The index can range from -1.00 (perfect negative relationship) to +1.00 (perfect positive relationship), with 0 indicating no relationship. For example, if the correlation coefficient between the frequency of usage of a product and achievement is 0.80, then one can conclude that these variables are highly (magnitude) and positively (direction) correlated with each other. In other words, as frequency of product usage increases, the level of achievement also increases.
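
The example above can be reproduced in miniature with scipy; the usage and achievement values below are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: weekly product usage (sessions) and achievement scores
usage = np.array([2, 4, 5, 7, 8, 10, 12, 14])
achievement = np.array([55, 60, 62, 68, 70, 75, 78, 85])

r, p_value = pearsonr(usage, achievement)
print(f"Correlation coefficient: {r:.2f}")  # close to +1: strong positive relationship
```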

Cost-Effectiveness

The relative effectiveness of a treatment or intervention compared to its cost. Specifically, cost-effectiveness analysis consists of a set of techniques that allows one to compare the costs (direct and indirect) of a treatment or intervention to its effectiveness. Typically the cost-effectiveness analysis will result in a value that can be placed somewhere within quadrants ranging from “high cost, low effectiveness” to “low cost, high effectiveness.”

Covariate

Covariates are factors/variables that have the potential to influence the outcomes of a study. A common assumption is that covariate levels are identical for participants (e.g., students) in all study groups (e.g., treatment/intervention and comparison groups), so that any differences found between groups can be attributed to the treatment or intervention. Covariates can be included in the analysis for two different purposes. A covariate can be of primary interest; in such cases, subgroup analyses can be conducted to determine its effect on study outcomes. For example, grade level can be of primary interest and treated as a covariate when exploring the impact of an intervention on a student outcome. On the other hand, covariates can be included in an analysis to avoid confounded results. When covariates are extraneous factors, statistically controlling for them allows one to hold constant (or remove) their influence and rule out possible confounding factors. Common covariates include socioeconomic status, gender, grade, prior achievement, and school locale.
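
As an illustration of statistically controlling for a covariate, the sketch below fits a regression of a posttest score on a treatment indicator while holding prior achievement constant. The column names (posttest, treatment, pretest) and the data values are hypothetical, and statsmodels' formula API is used.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical study data (assumed values for illustration)
df = pd.DataFrame({
    "posttest":  [78, 82, 75, 88, 70, 74, 69, 80],
    "treatment": [1, 1, 1, 1, 0, 0, 0, 0],          # 1 = intervention, 0 = comparison
    "pretest":   [70, 75, 68, 80, 69, 72, 66, 78],  # covariate: prior achievement
})

# Including the pretest as a covariate holds prior achievement constant,
# so the treatment coefficient reflects the adjusted group difference.
model = smf.ols("posttest ~ treatment + pretest", data=df).fit()
print(model.params)
```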

Dependent Variable

A factor that represents the outcome of interest. Dependent variables are also referred to as response variables, outcome variables, or explained variables. Common dependent variables include different types of educational outcomes such as academic achievement. 

Educational Evaluation

The investigation of an ongoing or completed educational intervention, with the aim of determining the extent to which the objectives of the intervention were accomplished. Specific methodologies can help determine the effectiveness and impact of an intervention, as well as its utility and cost-effectiveness.

Effect Size

A quantitative measure of the strength of the relationship between variables or the magnitude of the difference between groups in a population. The effect size provides evidence about the impact of a given intervention by indexing the magnitude of the intervention's effect in a standardized way, which allows results to be compared across numerous contexts. Effect sizes can take multiple forms, with a common form being the standardized mean difference (e.g., Hedges' g) between groups. When examining a single group, a different form of effect size (e.g., a correlation or regression coefficient) is used, which indexes the relationship between the independent variable (e.g., edtech usage) and the dependent variable (e.g., academic achievement).

Hedges' g

A specific measure of effect size based on the standardized mean difference between two or more groups (e.g., treatment/intervention and comparison groups). It provides evidence about the impact of a given intervention by showing the magnitude of the intervention's effect. Hedges' g includes a correction that makes it more robust when sample sizes are small. Positive values indicate a positive effect of the intervention and negative values indicate a negative effect; unlike a correlation coefficient, the value is not bounded by +1 or -1.
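
A minimal sketch of computing Hedges' g from two hypothetical groups of posttest scores, using the pooled standard deviation and the usual small-sample correction factor.

```python
import numpy as np

# Hypothetical posttest scores (assumed data)
treatment = np.array([82, 88, 75, 90, 79, 85, 80, 86])
comparison = np.array([78, 74, 72, 81, 70, 77, 73, 76])

n1, n2 = len(treatment), len(comparison)

# Pooled standard deviation
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * comparison.var(ddof=1)) / (n1 + n2 - 2))

# Standardized mean difference, then the small-sample correction factor
d = (treatment.mean() - comparison.mean()) / pooled_sd
correction = 1 - 3 / (4 * (n1 + n2) - 9)
g = d * correction
print(f"Hedges' g: {g:.2f}")
```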

Independent Variable

A factor that is expected to have influence on another factor (i.e., the dependent variable). Typically, the independent variable is manipulated to examine the extent to which varying levels of the independent variable (e.g., educational technology usage) predict or relate to changes in the dependent variable (e.g., academic achievement). Independent variables are also referred to as predictor variables or explanatory variables.

Intervention

The process of applying a treatment with users to examine whether the treatment has an effect. Interventions may include educational technologies, classroom activities, digital learning tools, pedagogical approaches, and teaching practices.

Locale

The geographic location of the school or district, particularly with regard to whether it is classified as city, suburban, town, or rural.

Margin of Error

A statistical approximation of the amount of sampling error in a study's estimate, which indicates how closely the estimated value from a sample is likely to represent the true value for the entire population. The larger the margin of error, the less confidence one has that the study's estimated value is close to the population's true value.
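
For example, the margin of error for a sample proportion at a 95% confidence level might be sketched as below; the sample size and proportion are hypothetical, and 1.96 is the critical value for 95% confidence.

```python
import math

# Hypothetical survey result: 55% of a sample of 400 respondents
p = 0.55
n = 400
z = 1.96  # critical value for a 95% confidence level

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin_of_error:.3f}")  # roughly +/- 0.049
```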

Matching

A set of statistical procedures that allows one to identify matched sets of participants from the study groups (e.g., treatment/intervention and comparison groups). Participants are matched (and subsequently compared) when they have roughly equal characteristics or attributes measured by the covariates (e.g., gender, ethnicity, previous performance). Theoretically, the matching procedure will result in study groups that are approximately equivalent, which lessens the likelihood that extraneous or confounding factors are causing the treatment effects.
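
A rough sketch of one simple matching approach: pairing each treatment participant with the comparison participant whose pretest score is closest. Real matching procedures typically use several covariates (often combined into a single score), so this single-covariate example with hypothetical data is only illustrative.

```python
import numpy as np

# Hypothetical pretest scores (assumed data)
treatment_pre = np.array([72, 85, 60, 78])
comparison_pre = np.array([59, 70, 84, 77, 90, 65])

# For each treatment participant, find the comparison participant
# with the closest pretest score (nearest-neighbor matching, with replacement)
matches = [int(np.argmin(np.abs(comparison_pre - t))) for t in treatment_pre]

for t_idx, c_idx in enumerate(matches):
    print(f"Treatment {t_idx} (pretest {treatment_pre[t_idx]}) "
          f"matched to comparison {c_idx} (pretest {comparison_pre[c_idx]})")
```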

Outcome

Any criterion or response variable used to measure an educational result of interest in a rapid cycle evaluation. Outcomes may involve typical cognitive measures (e.g., test scores, gradebook data), noncognitive measures (e.g., self-esteem, critical thinking, 21st century skills, persistence), or alternative educational outcomes such as attendance, course retention, and graduation rate. The outcome should be what the intervention (e.g., edtech product) is supposed to improve.

Posttest

A quantitative measure of the outcome variable (e.g., achievement scores) that is taken after the intervention is implemented.

Power (Statistical)

The ability of a study to detect the true value (e.g., a difference between groups or the relationship between variables) of the population from the sample. When statistical power is high, the study is more likely to detect the impacts of an intervention. Alternatively, when statistical power is low, the study may be unable to detect the effects of an intervention. Statistical power is influenced by (a) the level of confidence (e.g., 95% confidence level) one has in their estimate, (b) the magnitude of the effect (e.g., effect size) one is trying to detect (larger effects are easier to detect), and (c) the sample size. Power ranges from 0 to 1.00, with experts suggesting .80 as a standard for sufficient power.
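
As an illustration, statsmodels can solve for the sample size needed to reach a given power; the effect size of 0.5 and the 0.05 significance level below are assumptions chosen for the example.

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a medium effect (0.5) with 80% power
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```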

Pretest

A quantitative measure of the outcome variable (e.g., achievement scores) that is taken before the intervention is implemented.

Random Assignment

A technique for assigning participants to study conditions (e.g., treatment/intervention and comparison groups) using randomization, so that each participant has an equal chance of being in a given study condition. Random assignment is a necessary condition for a true experimental design (e.g., a randomized controlled trial), and it increases the internal validity of the study by making it likely that the study groups are equivalent prior to the intervention (i.e., baseline equivalence). When random assignment is not feasible, specific measures can be taken to test for baseline equivalence and help remove the influence of group differences.
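
A minimal sketch of random assignment using numpy; the participant list is hypothetical, and the fixed seed is included only so the example is reproducible.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical participant IDs
participants = np.array([f"student_{i}" for i in range(1, 11)])

# Shuffle, then split in half so each participant has an equal chance
# of landing in either condition
shuffled = rng.permutation(participants)
treatment_group = shuffled[:len(shuffled) // 2]
comparison_group = shuffled[len(shuffled) // 2:]

print("Treatment:", treatment_group)
print("Comparison:", comparison_group)
```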

Randomized Controlled Trial

A study design where participants (e.g., students, teachers) are randomly assigned to a treatment/intervention group or comparison group, which allows one to assume groups are equivalent on all variables except for the treatment. The treatment is administered to the treatment/intervention group and the treatment is withheld from the comparison group. 

Recommended Dosage

The expected amount or frequency of exposure to a treatment/intervention that is suggested in order for it to be efficacious. For example, EdTech Product A may recommend 10 modules completed per week, or EdTech Product B may recommend 50 minutes of product engagement per day. Recommended dosage is also sometimes referred to as recommended usage, prescribed dosage/usage, or dosage/usage recommendation.

Quantile

A group that results from dividing a sample into roughly equal subgroups after the data is ordered from the smallest to largest. Any number of quantiles can be determined for a set of values, with common quantiles being terciles (three groups), quartiles (four groups), and quintiles (five groups). For example, partitioning a set of values into quintiles would result in five roughly equal groups, where approximately 20% of the full set of values falls within each quintile.
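
The quintile example can be sketched with pandas' qcut, which orders the hypothetical scores below and divides them into five roughly equal groups.

```python
import pandas as pd

# Hypothetical scores (assumed data)
scores = pd.Series([55, 61, 64, 67, 70, 72, 75, 78, 81, 84, 88, 90, 92, 95, 98])

# Divide the ordered scores into quintiles (five roughly equal groups)
quintiles = pd.qcut(scores, q=5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
print(quintiles.value_counts().sort_index())  # each quintile holds ~20% of the values
```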

Quasi-Experimental Design

A research design that satisfies most conditions of a true experimental design but lacks random assignment of participants to study conditions. Because random assignment is often impractical and sometimes impossible, quasi-experimental designs are implemented frequently in educational research. With the appropriate methodology, quasi-experimental designs can be highly effective at addressing research questions.

Sample Size

The number of participants (e.g., students, educators, schools) that are included in your study. In the event that you have multiple subsamples, the total sample size is the sum of the subsamples — for example, if you have 250 in your comparison group (n = 250) and 250 in your treatment/intervention group (n = 250), then your total sample size is 500 (N = 500). 

Sampling

The process of selecting participants (e.g., students, educators, schools) for a sample from a population of interest. The degree to which one can generalize results from a study depends on how representative the sample is of the population. 

Study Condition

The study group (e.g., treatment/intervention group, comparison group) to which the participant belongs.

Treatment Group

The group of participants in a study who receive a treatment or intervention. The treatment group is compared to the group (or groups) of participants who do not receive the treatment or who receive a different intervention (i.e., the comparison group). The treatment group is also referred to as the experimental group or intervention group.

Usage Metric

A measure of the extent to which the participant (e.g., student) used or was exposed to the treatment or intervention. For example, the usage metric for an educational technology product could refer to the number of times the student logged in, the time spent using the product, the number of modules completed, or the percentage of the syllabus completed.

Variable

Any measurable factor such as a characteristic, usage, or educational outcome. A variable can have multiple values representing unique attributes of the variable. For example, the variable "gender" can take on multiple values (e.g., male, female, other). There are many types of variables, including independent variables, dependent variables, mediating variables, moderating variables, and covariates.

