Learning objects are interactive web-based tools that support the learning of specific concepts by enhancing, amplifying, and/or guiding the cognitive processes of learners. Research on the impact, effectiveness, and usefulness of learning objects is limited, partly because comprehensive, theoretically based, reliable, and valid evaluation tools are scarce, particularly in the K-12 environment. The purpose of this study was to investigate a Learning Object Evaluation Scale for Students (LOES-S) based on three key constructs gleaned from 10 years of learning object research: learning, quality or instructional design, and engagement. Tested on over 1,100 middle and secondary school students, the data generated using the LOES-S showed acceptable internal reliability, face validity, construct validity, convergent validity, and predictive validity.

Learning objects are at the heart of the Sharable Content Object Reference Model (SCORM) criterion required for every training course used in the Department of Defense (DoD) under Advanced Distributed Learning (ADL). Although this study deals with middle and secondary school students, the principles of learning objects remain constant; their application to andragogy rather than pedagogy may require some adjustment for learning style. As summarized in the abstract, learning objects act on the cognitive processes of learners, often through visual aids such as 2D and 3D graphics, photos, and animations or videos. This is especially important when training a technical task. Similarly, visual learning objects can have great impact on the behavioral and affective domains, such as by demonstrating a procedure or depicting the potential results of a safety violation. Significantly, learning objects offer the capability of reuse in a variety of situations, minimizing redundancy and duplication of effort.
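As a side note on what "internal reliability" means in practice: items that measure the same construct should vary together across respondents. The snippet below is only an illustrative sketch with made-up Likert data, assuming Cronbach's alpha as the consistency estimate; the study's actual statistics appear in the article itself.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate internal consistency of a set of scale items.

    items: 2-D array, rows = respondents, columns = scale items.
    """
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from six students on a four-item subscale
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```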
This research concerns the evaluation of learning objects, which takes place mostly in the design and development phases as formative analysis. Little research can be found, however, that incorporates the user's input as part of a summative analysis of learning objects, so most evaluations fall into the quick, easy-to-use category. Some repositories use content experts to evaluate the quality of objects after they have been developed, but "the number of evaluators is usually limited, the assessors have limited background in instructional design, and the end user does not enter the feedback loop in a significant way" (p. 148). Most evaluation has been done at the level of higher education; little has been done in the K-12 arena, and the article does not deal at all with technical education or military training.
The definition of a learning object is key to this study. Original definitions focused on characteristics such as accessibility, adaptability, use of metadata, reusability, and standardization. Contemporary definitions emphasize qualities such as interaction and the degree to which the learner actively constructs knowledge. For this study, these technically based and learning-based definitions are replaced with a pedagogically based definition that is a composite of both, encompassing interactivity, accessibility, a specific conceptual focus, reusability, meaningful scaffolding, and learning.
Three aspects of each learning object were assessed through student feedback: (1) how much the students learned; (2) the quality of the learning object; and (3) how much the students were engaged with the learning object. Students were asked in an open-ended format to comment on what they liked and disliked about the learning object. Responses totaled 1,922 comments, which were categorized into the three main constructs and analyzed using a coding scheme. Each comment was also rated on a 5-point Likert scale. Two raters were used and their ratings compared; differences were discussed and the ratings revised as necessary.
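The summary above does not state how agreement between the two raters was quantified; a weighted Cohen's kappa is one common measure for ordinal ratings such as a 5-point scale. The following sketch uses hypothetical ratings purely to illustrate the idea:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point ratings of the same ten comments by two raters
rater_a = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
rater_b = [5, 4, 3, 3, 5, 2, 4, 4, 5, 4]

# Quadratic weighting penalizes large disagreements more than near misses,
# which suits an ordinal scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```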
Next, student performance was assessed after exposure to the learning objects, using several different tests that surveyed learning at different levels. Finally, the teachers who selected the learning objects were surveyed with an instrument similar to the one used with the students, to determine their perspective on (1) how much their students learned; (2) the quality of the learning object; and (3) how much their students were engaged with the learning object. The teacher results were not as statistically reliable as those for the student evaluation; however, the focus of the research was to investigate an approach for evaluating learning objects that related most directly to students.
The key features of learning objects most supported by the responses included interactivity, clear feedback, and graphics or animations that support learning. The design qualities most supported included effective help, clear instructions, transparency of use, and organization. With reference to engagement, the overall theme of a learning object can affect learning either positively or negatively. There was also a low but significant correlation between student evaluations of learning, quality, and engagement and actual learning performance. Ultimately, however, "[l]earning objects are simply tools used in a complex educational environment where decisions on how to use these tools may have considerably more import than the actual tools themselves" (p. 161).
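As an illustration of what a "low but significant correlation" looks like in practice, the sketch below correlates per-student construct scores with test performance using Pearson's r. The numbers are invented and are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-student mean construct scores (1-5) and test performance (percent)
learning    = np.array([4.2, 3.1, 4.8, 2.9, 3.6, 4.0, 3.3, 4.5])
quality     = np.array([4.0, 3.4, 4.6, 3.0, 3.8, 4.1, 3.2, 4.4])
engagement  = np.array([4.5, 2.8, 4.9, 3.1, 3.5, 4.2, 3.0, 4.6])
performance = np.array([78, 62, 90, 58, 70, 80, 64, 85])

for name, scores in [("learning", learning), ("quality", quality),
                     ("engagement", engagement)]:
    r, p = pearsonr(scores, performance)
    print(f"{name:10s} vs performance: r = {r:.2f}, p = {p:.3f}")
```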
REFERENCES
Kay, R. H., & Knaack, L. (2009). Assessing learning, quality and engagement in learning objects: The Learning Object Evaluation Scale for Students (LOES-S). Educational Technology Research and Development, 57(2), 147-168. https://doi.org/10.1007/s11423-008-9094-5