
6.5 Assessment of KidSat: An Education Experiment

Shawn Sakamoto
Since the primary aim of the KidSat project was to improve education in middle schools, the success of the  project was ultimately determined by whether there was a measure improvement in student learning. The evaluation encompassed two main forms of assessment: embedded and standardized. Teachers from the pilot program schools were instructed on the use of the measure during the KidSat Summer Teacher Training Institute.

The first form of assessment consisted of activities that were incorporated into the curriculum and that carried prescribed skill expectations, in order to evaluate students' understanding and progress. These activities were unique to each lesson and were based on the lesson design. Teachers developed pre- and post-tests to determine qualitative changes in students' abilities to read, analyze, and interpret images. These skills were essential to the KidSat curriculum and to successful learning outcomes. One of the opening activities of the KidSat lessons was a questionnaire that was given to all students. The teachers were given multiple copies of the images that were to be used in this assessment exercise. The rationale for the embedded assessment was that it provided a diagnostic opportunity. Specifically, it documented student skill mastery, created a student portfolio, offered a formal evaluation of student performance, provided avenues for individual student choices and accomplishments, provided feedback to gauge teaching effectiveness, and allowed for qualitative assessment of a hands-on curriculum.

The second form of assessment used by KidSat was the Educational Record Bureau's (ERB) Comprehensive Testing Program (CTP) III. This nationally normed assessment was presented as a paper-and-pencil test requiring answers in a multiple-choice format. The CTP III tested skills in writing, mathematics, and verbal and quantitative reasoning. The test was administered before the implementation of the curriculum and at the end of the program instruction, which coincided respectively with the beginning and the end of the school term. This assessment was also administered to a matched group of students. Gains on these sub-tests of the ERB were then compared to those of the students who did not participate in the program intervention.

All data from the pre-tests in the pilot phase were analyzed and reported for both the participating KidSat students and the matched group. Students from both groups took the ERB post-test at the end of the 1996-1997 school year. Because studies on technology-based programs in schools have focused almost exclusively on software evaluation, results from our standardized testing provided interesting information on the impact of KidSat on measurable ability and aptitude. It is wise to note an important caveat in the interpretation of standardized tests, or of any other testing instrument: they measure only a single facet of learning outcomes, a pixel in the picture, if you will.
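The gain comparison described above can be illustrated with a simple computation. The sketch below uses entirely hypothetical scores (the report does not publish individual results) to show how mean pre-to-post gains for the KidSat group and the matched group would be compared:

```python
from statistics import mean

# Hypothetical pre- and post-test scores for illustration only;
# these are NOT the actual KidSat or ERB CTP III results.
kidsat_pre = [52, 48, 61, 55, 50]
kidsat_post = [60, 55, 66, 63, 58]
matched_pre = [53, 47, 60, 56, 51]
matched_post = [56, 50, 62, 59, 53]

def mean_gain(pre, post):
    """Average post-minus-pre gain across students."""
    return mean(b - a for a, b in zip(pre, post))

kidsat_gain = mean_gain(kidsat_pre, kidsat_post)
matched_gain = mean_gain(matched_pre, matched_post)

# The estimated program effect is the difference in mean gains
# between the intervention group and the matched comparison group.
effect = kidsat_gain - matched_gain
print(round(kidsat_gain, 1), round(matched_gain, 1), round(effect, 1))
```

A difference-in-gains comparison of this kind controls for students' starting levels, which is why the design pairs each administration with a matched group rather than reporting raw post-test scores alone.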
