Tuesday, December 6, 2011

SWTng 13: Team-Based Peer Review

     This will be the last article for now. It is by Lavy and Yadin (2010) in the Journal of Information Systems Education and is titled "Team-Based Peer Review as a Form of Formative Assessment--The Case of a Systems Analysis and Design Workshop." It has 37 references and the following keywords: peer review, team-based peer review, formative assessment, SOLO taxonomy, and systems analysis and design. Here is the abstract:
The present study was carried out within a systems analysis and design workshop. In addition to the standard analysis and design tasks, this workshop included practices designed to enhance student capabilities related to non-technical knowledge areas, such as critical thinking, interpersonal and team skills, and business understanding. Each task was reviewed and assessed by both the students and the instructor. The main research study objective was to examine the effect of team-based peer-review on the students' learning process in an information systems workshop. What is presented is data referring to the grading process, to students' enhanced learning reflected in the narrowing gap between the instructor's and the students' grading, as well as the students' reflections demonstrating their perception of the workshop's components.
     The relevance of this article to my study lies in the peer review component. It is the non-technical aspects mentioned in the abstract that I am interested in promoting: the demonstration and augmentation of the learners' understanding of the ways that technology can develop new organizational processes and achieve organizational goals. After all, technology will be an essential tool in their work. The problem is that in many team-based exercises, either the team descends to the level of the least capable member, or one or a few really sharp people carry the load while everyone else watches. Control of the team effort is essential, even if it means designating a project manager. The television program The Apprentice, where a group of people vie for one spot on Donald Trump's staff as his personal apprentice, often demonstrates this principle: one person on the team serves as project manager, and the project literally sinks or swims based on that person's force of personality in getting the others to do what is necessary to complete the task. Enter the "team-based peer review," or "TBPR," a form of evaluation in which students review and evaluate their fellow students' projects. Something similar happens in Donald Trump's "board meetings," where the team members evaluate each other's contributions to the project both before and after the winner is announced. It can sometimes get pretty "bloody." In this case, the evaluation happened in the context of a workshop based on the "SOLO (Structure of the Observed Learning Outcomes) taxonomy (Biggs and Collis, 1982)," which "elevated students' overall understanding of the processes to a higher level of abstraction" (p. 85).
     "The SOLO taxonomy is a hierarchical model suitable for measuring learning outcomes of different subjects, levels, and for assignments of various lengths (Biggs and Collis, 1982)" (p. 87). It encompasses five levels: pre-structural, uni-structural, multi-structural, relational, and extended abstract. At the pre-structural level, the student lacks the ability to perform the task; there is insufficient understanding. At the uni-structural level, only one or a few aspects of the task to be performed are taken into account; there is some understanding. At the multi-structural level, more aspects of the task are taken into account; however, the student still lacks the "full picture." At the relational level, all aspects are understood and integrated as a "whole": the student exhibits understanding of the parts, as well as the relationships between them. At the extended abstract level, the whole derived at the previous level is conceptualized at a higher level of abstraction so that it can now be used in different settings.
     As applied to the workshop, at the first level the student lacks the understanding required for the task: either the "story" is not clear, or many of the principles of analysis are still missing. At the second level, the student understands some aspects of the process principles (gathering requirements, analysis, design, programming, testing), but s/he still lacks understanding of the business situation expressed by the "story." At the third level, the principles are clear and the student has started to implement them in designing a suitable solution for the customer. At the fourth level, all aspects of the solution are clear and were used for preparing the third and fourth documents. The last level allows the student to understand the solution concept and provide proper feedback on her/his fellow students' solutions; the student develops an abstract understanding of the steps and procedures required for designing a useful and complete solution.
     With respect to my study, level one is where the developers walk in. They may never have used authoring software before, let alone ours. They are domain specialists (subject matter experts, SMEs) who are being asked to put their knowledge into an online instructional artifact. At this point they are unable to perform the task. At level two, the developers have begun to understand some of the process principles. They should know what the major components are, though they may not understand how those components interact at the program level. They require frequent to constant supervision to ensure successful development, and they are not yet able to create at the concrete or abstract level. At level three, they clearly understand the basic principles of how data is input into the authoring tool and can follow the basic steps to build a frame. However, they still lack the "full picture" and probably don't understand how branching works, or the finer points of why and how interaction happens among the main components of the instructional software they are developing. They still require some supervision and assistance in putting major blocks of the puzzle together, but they can be trusted to complete discrete components with minimal oversight. At level four (the ideal target level for successful training), the learner understands both the parts and the relationships between them and can successfully create instructional software with minimal supervision and rework. Level five allows the student to understand the overall concept of instructional software development and provide proper feedback on his/her fellow developers' courseware. The student at this level develops an abstract understanding of the steps and procedures required for designing a useful and complete course and should be a supervisor.
     Back to the confines of this article: in the workshop, the students' evaluations of each other initially differed greatly from those of the instructor; however, with practice and experience, their evaluations soon came to resemble the instructor's, albeit without the instructor's advanced experience in the field.
REFERENCES
Biggs, J. B., and Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press.
Lavy, I., and Yadin, A. (2010). Team-based peer review as a form of formative assessment--The case of a systems analysis and design workshop. Journal of Information Systems Education, 21(1), 85-98.