IMPACT Analysis FAQs (Intermediate)

Updated by Mary Styers

What are usage clusters and how are they formed?

Usage clusters are subsets of students grouped together based on how much they use an edtech product (e.g., low use, moderate use, high use). IMPACT™ Analysis generates these clusters statistically, using an algorithm that finds natural usage patterns and identifies the optimal number of clusters based on similarities in total product usage (e.g., total minutes using the edtech product). IMPACT™ then compares usage patterns and product effectiveness across these usage clusters.
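
LearnPlatform does not publish IMPACT's exact clustering algorithm, but the general idea can be illustrated with a common approach: k-means clustering on total usage minutes, with the number of clusters chosen by silhouette score. This is a minimal sketch with hypothetical data and variable names, not IMPACT's actual implementation.

    # Cluster students by total product usage; pick the number of clusters
    # (k) that yields the most well-separated groups (highest silhouette).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    # Hypothetical data: total minutes each student spent in the product.
    rng = np.random.default_rng(0)
    total_minutes = np.concatenate([
        rng.normal(30, 10, 200),   # low-use students
        rng.normal(120, 20, 200),  # moderate-use students
        rng.normal(300, 40, 100),  # high-use students
    ]).reshape(-1, 1)

    best_k, best_score = None, -1.0
    for k in range(2, 7):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(total_minutes)
        score = silhouette_score(total_minutes, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(f"Optimal number of usage clusters: {best_k} (silhouette = {best_score:.2f})")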

What is a trial (or pilot) and how is it integrated into the IMPACT™ Analysis?

A trial (or pilot) uses a research-backed survey to help users gather feedback and insight from educators regarding perceptions of edtech effectiveness. It allows stakeholders to generate qualitative and quantitative data (i.e., product grades on the eight core criteria, open-ended comments) from educators across an entire school, district, or state. In addition to product feedback sourced from verified educators in LearnPlatform, trial results are integrated into the Feedback section of the IMPACT™ Analysis report, allowing users to better understand how both their own educators and those in the LearnCommunity evaluate the product on the core criteria deemed most important when trying, buying, or using an edtech product.

How does the IMPACT™ Analysis divide the sample into treatment and control groups?

Control study design. In a Control study design, the school or district determines the treatment and control (or comparison) groups. Whether students are assigned randomly or not, these pre-defined groups are used directly in the IMPACT™ Analysis.

Comparative study design. Many schools and districts choose to run widespread edtech implementations rather than conduct a trial (or pilot) via experimental design. As another alternative, schools and districts may provide historical data to evaluate edtech usage and impact without having previously employed a research design. In cases like these, treatment groups consist of students who used the edtech product, and control groups consist of students who did not use the product.
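
As a minimal sketch of this group assignment (hypothetical column names, assuming any recorded usage counts as treatment):

    # Comparative design: students who used the product form the treatment
    # group; students with no recorded usage form the control group.
    import pandas as pd

    students = pd.DataFrame({
        "student_id": [1, 2, 3, 4],
        "total_minutes": [0, 45, 310, 0],  # hypothetical usage logs
    })
    students["group"] = (students["total_minutes"] > 0).map(
        {True: "treatment", False: "control"}
    )
    print(students)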

Correlative study design. Correlative studies do not include a control group; they include only a treatment group, i.e., students who received the intervention. In these study designs, IMPACT™ examines the relationship between product usage and an educational outcome while statistically controlling for covariates.
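
A minimal sketch of this kind of analysis, assuming an ordinary least-squares regression (via statsmodels) with hypothetical column names and input file; the coefficient on usage estimates the usage-outcome relationship after adjusting for the covariates:

    # Correlative design: regress the outcome on usage plus covariates.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("treatment_students.csv")  # hypothetical input file

    model = smf.ols(
        "posttest ~ total_minutes + pretest + C(grade_level)", data=df
    ).fit()
    # Expected change in posttest score per additional minute of use,
    # holding pretest score and grade level constant.
    print(model.params["total_minutes"])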

In addition to the effects of edtech on student achievement, there are other factors that impact the effectiveness of any given intervention, such as quality of instruction and student demographic or achievement differences. How do you account for these additional variables?

IMPACT™ Analysis can account for student-, class-, and school-level variables such as grade level, previous performance, and student demographics, among many other factors. IMPACT™ accounts for all covariates included in the data and statistically adjusts the effect size accordingly.
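
A minimal sketch of covariate adjustment, assuming a regression-adjusted (ANCOVA-style) treatment/control comparison rather than IMPACT's actual implementation; column names and input file are hypothetical:

    # Adjust the treatment effect for covariates, then standardize it.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("all_students.csv")  # hypothetical input file

    model = smf.ols(
        "posttest ~ treated + pretest + C(grade_level) + C(demographic_group)",
        data=df,
    ).fit()
    adjusted_diff = model.params["treated"]  # covariate-adjusted group difference

    # Rough effect size: adjusted difference over the average group SD.
    effect_size = adjusted_diff / df.groupby("treated")["posttest"].std().mean()
    print(f"Covariate-adjusted effect size: {effect_size:.2f}")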

How does effect size within performance quintiles inform decisions on closing the achievement gap?

An examination of performance quintiles allows IMPACT™ to determine whether an edtech product demonstrates the ability to close the achievement gap.

Comparative/Control study designs. First, within the treatment and control groups, students are grouped into quintiles based on their prior performance (e.g., GPA prior to the intervention, or a previous test score). Then, an effect size is computed within each quintile, demonstrating how well an edtech product works for students at different achievement levels (i.e., the standardized mean difference between the posttest scores of treatment and control students at each achievement level). Edtech products that show a large, positive effect size for historically low-performing students may help close the achievement gap. For example, within a Control or Comparative design, if IMPACT™ finds that effect sizes for an edtech product are positive and higher for students in the “low achievement” quintiles, then this product demonstrates potential effectiveness at closing the achievement gap.
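
A minimal sketch of the within-quintile computation, assuming Cohen's d as the standardized mean difference; column names and input file are hypothetical:

    # Compute a standardized mean difference within each prior-performance
    # quintile of a treatment/control sample.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("all_students.csv")  # hypothetical input file
    df["quintile"] = pd.qcut(df["pretest"], 5, labels=[1, 2, 3, 4, 5])  # 1 = lowest

    def cohens_d(treat, control):
        """Standardized mean difference using a pooled standard deviation."""
        n1, n2 = len(treat), len(control)
        pooled_sd = np.sqrt(
            ((n1 - 1) * treat.std(ddof=1) ** 2 + (n2 - 1) * control.std(ddof=1) ** 2)
            / (n1 + n2 - 2)
        )
        return (treat.mean() - control.mean()) / pooled_sd

    for q, group in df.groupby("quintile", observed=True):
        d = cohens_d(
            group.loc[group["treated"] == 1, "posttest"],
            group.loc[group["treated"] == 0, "posttest"],
        )
        print(f"Quintile {q}: d = {d:.2f}")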

Correlative study designs. IMPACT™ groups treatment students into quintiles based on their prior performance. Then, an effect size is computed within each quintile, demonstrating the relationship between product usage and posttest achievement. Positive effect sizes indicate that as usage increases, achievement generally increases, whereas negative effect sizes indicate that as usage increases, achievement generally decreases. For example, within a Correlative design, if IMPACT™ finds a negative effect size for a product among students in the "lowest achievement" quintile, greater product usage was associated with lower posttest scores for this specific student group.
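
A minimal sketch of the correlative version, assuming a Pearson correlation between usage and posttest scores as the within-quintile effect-size metric; column names and input file are hypothetical:

    # Correlate product usage with posttest scores within each quintile.
    import pandas as pd

    df = pd.read_csv("treatment_students.csv")  # hypothetical input file
    df["quintile"] = pd.qcut(df["pretest"], 5, labels=[1, 2, 3, 4, 5])  # 1 = lowest

    for q, group in df.groupby("quintile", observed=True):
        r = group["total_minutes"].corr(group["posttest"])  # Pearson r
        print(f"Quintile {q}: r = {r:.2f}")  # negative r: more use, lower scores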

How are the results shared with stakeholders?

Administrators have complete flexibility and control to share results across their organizations and with key stakeholders. LearnPlatform lets administrators share IMPACT™ Analysis reports, teacher feedback results, and usage dashboards via a unique URL to the report and/or in printed format. Administrators can set login permissions so that each type of user can access the results relevant to their role. In addition, all graphics and visual displays in the IMPACT™ Analysis can be exported as image files (e.g., PNG, JPEG, or SVG).

