• Funded by: National Science Foundation
  • Focus: Assessment
  • Project type: Tablets

Overview

In collaboration with an interdisciplinary team of assessment specialists, learning scientists, K-12 educators, educational technologists, and informal science educators, GameDesk is undertaking a research project to develop and implement the Learning Lens, a mobile assessment tool that enables educators to capture observations and design assessments around unique and emergent 21st century learning experiences such as project-based learning, games, apps in class, and DIY/Maker activities. Although these non-traditional learning activities have been hailed for their potential to foster STEM learning and engagement, the field lacks tools for assessing the kinds of learning that these activities afford. The Learning Lens will make it possible for educators to iteratively and collaboratively create assessments grounded in activity types and in valued STEM learning outcomes, and to track students’ real-time growth in situ. Through formative and summative research, the project will contribute new knowledge regarding the kinds of learning outcomes that ensue from emergent learning experiences, and will generate processes for linking robust evidence to support valid claims about valued outcomes. The technological innovation includes a crowd-sourced strategy for responding to foundational research questions about the nature of learning in emergent activities.

Learning Lens Objectives

  • Support educators in properly assessing what students are learning when they engage with apps, games, digital classroom activities, and hands-on curricula
  • Accommodate a variety of perspectives and approaches to 21st century assessment
  • Be an efficient, practical, and effective tool that makes reasoning from observed evidence accessible to educators and practitioners

Broader Impacts

The Learning Lens project has the potential to make a variety of broader impacts. The project involves formal and informal institutions who serve underrepresented students in STEM. We will distribute the Learning Lens to interested educators to support rigorous assessment practices in a variety of settings. The Learning Lens will serve as a next-generation instrument for researchers and evaluators interested in understanding learning in new types of activities, as well as in understanding more deeply how educators reason about assessment of student learning. The Learning Lens could also be used in professional development programs to grow educators’ fluency in robust assessment practices.

Initial Conceptualization

In this section we present a series of design and process mock-ups showing how the Learning Lens might function and be used by educators. These mock-ups illustrate our initial conceptualization of the exploration and offer insight into the experience of users interacting with the Learning Lens. We also describe a research-based rationale for how the Learning Lens’s features and capabilities could generate a repository of data from which we can extract common assessment themes and processes. Insights gained from analyses of these crowd-sourced data sets could then be shared with a broader audience to support assessment innovation efforts nationwide.


Defining the Activity and Capturing Data

The educator/researcher begins by describing the design/make/play learning activity, along with anticipated learning outcomes such as specific content knowledge and skills. The tool then prompts the user to reflect on features of the activity that are potentially relevant for assessment, such as whether the activity is new to the learners and how much support is provided. Users then begin to capture and annotate rich media data around the activity. Our concept mockups illustrate the ability to leverage the mobile device to capture video, images, and textual documentation around a central activity. This allows educators/researchers in both formal and informal contexts to link observable practice, learner articulation, and resulting artifacts, and to develop a working description of those observations that in turn lets them build and iteratively design an assessment approach around the activity. The tool will encourage the user to document observations and reflections in relation to the activity, data, and learning outcomes, and to form tentative linkages from the data and annotations to specific learning outcomes. These reflections might target the nature and patterns observed, and may also capture emergent formative assessment and facilitation ideas around the activity.
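
To make this capture model concrete, the sketch below shows one way the underlying records might be structured. It is a minimal illustration only; all type and field names are hypothetical assumptions, not an actual Learning Lens schema.

```typescript
// Hypothetical data model for activity definition and observation capture.
// All names and fields are illustrative, not the actual Learning Lens schema.

type MediaType = "video" | "image" | "text";

interface LearningOutcome {
  id: string;
  description: string; // e.g. a content standard or a skill statement
}

interface ActivityDefinition {
  id: string;
  title: string;
  anticipatedOutcomes: LearningOutcome[];
  newToLearners: boolean;                             // assessment-relevant feature
  supportLevel: "minimal" | "moderate" | "extensive"; // amount of support provided
}

interface Observation {
  id: string;
  activityId: string;
  capturedAt: Date;
  media: { type: MediaType; uri: string }[];
  annotation: string;              // educator's documented reflection
  tentativeOutcomeLinks: string[]; // LearningOutcome ids this may evidence
}
```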

Creating Categories of Assessed Learning Outcomes

From those observations, claims about knowledge, skills, and abilities (KSAs) will emerge, and the tool will offer the educator/researcher an opportunity to define KSA types and categories and to link content from the observation media and collected data to those KSA types. For example, if a knowledge outcome type were a science standard, the teacher would make initial claims linked to that standard around the activity and link observations that appear aligned to that standards-based knowledge outcome. One goal of exploratory development is to design features that make the process of integrating activity observations as effortless as possible. As another example, if a 21st century skill such as problem solving were observed, the researcher/educator would define that skill and attribute previous observations to it.
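
The linking step described above might look something like the following sketch, again using hypothetical names; the KSACategory type and linkObservation helper are illustrative assumptions, not the tool’s actual design.

```typescript
// Hypothetical sketch of defining a KSA category and attributing
// previously captured observations to it.

interface KSACategory {
  id: string;
  kind: "knowledge" | "skill" | "ability";
  label: string; // e.g. a science standard, or "problem solving"
  linkedObservationIds: string[];
}

// Attribute an existing observation to a category, e.g. when an educator
// recognizes evidence of problem solving in an earlier capture.
function linkObservation(category: KSACategory, observationId: string): KSACategory {
  if (category.linkedObservationIds.includes(observationId)) {
    return category; // already linked; nothing to do
  }
  return {
    ...category,
    linkedObservationIds: [...category.linkedObservationIds, observationId],
  };
}
```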


Generating Scoring Rubrics

Once the activity data capture process ends, the user may choose to generate scoring rubrics based on the gathered data. Scoring rubrics are guides for translating qualitative data into quantitative scores on the relevant learning outcomes. These rubrics become resources that can subsequently guide the user on what kinds of evidence to collect during certain kinds of activities, and what to look for in that evidence to determine how proficient a learner is on the linked learning outcomes.
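
As a rough illustration, a scoring rubric in this sense could be represented as a set of criteria that map evidence descriptions onto score levels for each linked outcome. The structure below is a hypothetical sketch under that assumption.

```typescript
// Hypothetical rubric structure: each criterion describes, per score level,
// what evidence of proficiency looks like for one linked learning outcome.

interface RubricLevel {
  score: number;               // quantitative score for this level
  evidenceDescription: string; // what to look for in captured evidence
}

interface RubricCriterion {
  outcomeId: string;     // the learning outcome being assessed
  levels: RubricLevel[]; // ordered from lowest to highest proficiency
}

interface ScoringRubric {
  activityId: string;
  criteria: RubricCriterion[];
}
```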

Creating Activity Assessment Reports

The teacher/researcher would continue to build categories under a particular activity, resulting in a consolidated Activity Report attributed to individuals and groups of learners. At this stage, the researcher/educator could continue to make observations under that Activity Report and refine and iterate the categories and criteria as needed. The teacher could also begin to make qualitative judgments based on these emerging criteria and map them to quantitative measures such as a scale or percentage. One concept mockup illustrates a design where educators populate a template with assessment category types (in this case, 21st century skills) attributed to one activity and one individual’s performance within that activity. This design would help the educator link quantitative values (a percentage meter) to a qualitative rationale that describes their observations and reasoning for the quantitative value. In this version, the data is linked to an individual learner, a specific learning activity, and the date of implementation. The tool would thus inform the educator’s day-to-day teaching and assessment practices by collecting data around their observations.
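
One way to model such a report is as a set of entries that pair a quantitative value with its qualitative rationale, scoped to a learner, an activity, and a date. The sketch below is purely illustrative; the names are assumptions, not the tool’s actual data model.

```typescript
// Hypothetical activity report: quantitative values paired with the
// qualitative rationale behind them, for one learner and one activity.

interface AssessmentEntry {
  categoryLabel: string; // e.g. a 21st century skill such as "collaboration"
  percentage: number;    // the 0-100 meter shown in the mockup
  rationale: string;     // the educator's reasoning for the value
}

interface ActivityReport {
  learnerId: string;
  activityId: string;
  implementedOn: Date;
  entries: AssessmentEntry[];
}
```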

Sharing Data, Promoting Cross-talk, and Crowd-Sourcing the Iteration of Assessment Practice

It is important to examine how groups of people use the tool: how they share data, examine each other’s entries, and how that sharing helps them do their respective work. The tool will leverage the sharing of data across multiple activities, individuals, and assessment templates (described above), and will look to promote cross-talk among implementers. Crowd-sourcing and knowledge sharing would allow researchers/educators to extract all observations, definitions, and claims around a particular activity. The tool could also allow the researcher/educator to examine and compare all observations of students who scored at a particular level of proficiency. For example, the tool could extract all data on students who scored above 90. This would allow an examination of various rationales, across a diverse group of assessment lenses and approaches, on what constitutes high performance, proficiency, or mastery. The tool could therefore later inform shared consensus on what constitutes high-quality work. Likewise, educators could examine students who scored low on a given skill for a given activity and identify patterns and commonalities, again building shared consensus while also informing facilitation and instruction for students struggling with a particular ability or knowledge area.
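
Building on the hypothetical ActivityReport type sketched earlier, a crowd-sourced query such as "all rationales behind scores above 90" might be expressed as a simple filter over shared reports. The function below is an illustrative sketch, not the tool’s actual API.

```typescript
// Hypothetical query over shared reports: collect the rationales behind all
// entries scoring above a threshold for one activity, so different
// assessors' notions of high performance can be compared side by side.
// (Uses the illustrative ActivityReport type from the earlier sketch.)

function highPerformanceRationales(
  reports: ActivityReport[],
  activityId: string,
  threshold = 90,
): string[] {
  return reports
    .filter((report) => report.activityId === activityId)
    .flatMap((report) => report.entries)
    .filter((entry) => entry.percentage > threshold)
    .map((entry) => entry.rationale);
}
```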

