Faculty Advisor or Committee Member

Janice Gobert, Advisor

Wouter van Joolingen, Committee Member

Neil Heffernan, Committee Member

Ryan S.J.d. Baker, Committee Member

Identifier

etd-042513-062949

Abstract

Despite widespread recognition by science educators, researchers, and K-12 frameworks that scientific inquiry should be an essential part of science education, typical classrooms and assessments still emphasize rote vocabulary, facts, and formulas. One of several reasons for this is that the rigorous assessment of complex inquiry skills is still in its infancy. Though progress has been made, many challenges still hinder inquiry from being assessed in a meaningful, scalable, reliable, and timely manner. To address some of these challenges and to realize the possibility of formative assessment of inquiry, we describe a novel approach for evaluating, tracking, and scaffolding inquiry process skills as students experiment with computer-based simulations. In this work, we focus on two skills related to data collection: designing controlled experiments and testing stated hypotheses. Central to this approach is the use and extension of techniques developed in the Intelligent Tutoring Systems and Educational Data Mining communities to handle the variety of ways in which students can demonstrate skills. To evaluate students' skills, we iteratively developed data-mined models (detectors) that discern when students test their articulated hypotheses and design controlled experiments. To aggregate and track students' developing latent skill across activities, we use and extend the Bayesian Knowledge-Tracing framework (Corbett & Anderson, 1995). We directly address the scalability and reliability of these models' predictions by testing how well they predict for student data not used to build them; the models correctly evaluated and tracked students' inquiry skills on these held-out data, demonstrating their potential to scale. The ability to evaluate students' inquiry also enables the system to provide automated, individualized feedback to students as they experiment.
As part of this work, we also describe an approach for providing such scaffolding to students. We tested the efficacy of these scaffolds by conducting a study of how scaffolding impacts the acquisition and transfer of skill across science topics. Students who received scaffolding, compared with those who did not, were better able to acquire skills in the topic in which they practiced, and to transfer those skills to a second topic when scaffolding was removed. Our overall findings suggest that computer-based simulations augmented with real-time feedback can reliably measure the inquiry skills of interest and can help students learn how to demonstrate these skills. As such, our assessment approach and system as a whole show promise as a way to formatively assess students' inquiry.
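For readers unfamiliar with the knowledge-tracing framework the abstract cites, the standard Bayesian Knowledge-Tracing update (Corbett & Anderson, 1995) can be sketched as below. The parameter values (slip, guess, and learn probabilities) are illustrative assumptions for the sketch, not the fitted values from this dissertation:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """Return P(skill known) after observing one practice attempt.

    p_know  -- current estimate that the latent skill is known
    correct -- whether the student demonstrated the skill this attempt
    slip/guess/learn -- illustrative BKT parameters (assumed, not fitted)
    """
    if correct:
        # Posterior given a correct response: known-and-no-slip vs. unknown-and-guess.
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # Posterior given an incorrect response: known-but-slipped vs. unknown-and-no-guess.
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # The student may also learn the skill at this practice opportunity.
    return posterior + (1 - posterior) * learn

# Track a skill estimate across a sequence of (hypothetical) practice attempts.
p = 0.3  # assumed prior probability the skill is already known
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
```

Each correct attempt raises the running estimate and each incorrect attempt lowers it, which is how the system aggregates detector output into a per-skill latent estimate across activities.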

Publisher

Worcester Polytechnic Institute

Degree Name

PhD

Department

Social Science & Policy Studies

Project Type

Dissertation

Date Accepted

2013-04-25

Accessibility

Unrestricted

Subjects

Behavior Detection, Skill Prediction, User Modeling, Validation, Inquiry Learning Environment, Science Education, Computer-Based Assessment, Inquiry Assessment, Performance Assessment, Science Simulations, Science Microworlds, Educational Data Mining, Formative Assessment, Exploratory Learning Environment, Open-Ended Learning Environment, Science Inquiry, Science Assessment, Designing and Conducting Experiments, Construct Validity, Generalizability, Text Replay Tagging, J48 Decision Trees, Bayesian Knowledge Tracing
