Tuesday, December 6, 2011

SWTng 13: Team-Based Peer Review

     This will be the last article for now. It is by Lavy and Yadin (2010) in the Journal of Information Systems Education and is titled "Team-Based Peer Review as a Form of Formative Assessment--The Case of a Systems Analysis and Design Workshop." It has 37 references and the following keywords: peer review, team-based peer review, formative assessment, SOLO taxonomy, and systems analysis and design. Here is the abstract:
The present study was carried out within a systems analysis and design workshop. In addition to the standard analysis and design tasks, this workshop included practices designed to enhance student capabilities related to non-technical knowledge areas, such as critical thinking, interpersonal and team skills, and business understanding. Each task was reviewed and assessed by both the students and the instructor. The main research study objective was to examine the effect of team-based peer-review on the students' learning process in an information systems workshop. What is presented is data referring to the grading process, to students' enhanced learning reflected in the narrowing gap between the instructor's and the students' grading, as well as the students' reflections demonstrating their perception of the workshop's components.
     The relevance of this article to my study lies in the peer review component. The non-technical aspects mentioned in the abstract are what I am interested in promoting: the demonstration and augmentation of the learners' understanding of the ways that the use of technology can develop new organizational processes and achieve organizational goals. After all, technology will be an essential tool in their work. The problem is that in many team-based exercises, either the team descends to the level of the least capable member or one or a few really sharp people carry the load while everyone else watches. Control of the team effort is essential, even if it means identifying a project manager. The television program The Apprentice, in which a group of people vie for one spot on Donald Trump's staff as his personal apprentice, often demonstrates this principle: one person on the team serves as project manager, and the project sinks or swims based on that person's force of personality in getting the others to do what is necessary to complete the task. Enter the "team-based peer review" or "TBPR," a form of evaluation in which the students engage in reviewing and evaluating their fellow students' projects. Something similar happens in Donald Trump's "board meetings," where the different team members evaluate each other's contributions to the project both before and after the winner is announced. It can sometimes get pretty "bloody." In this case, the evaluation happened in the context of a workshop based on the "SOLO (Structure of the Observed Learning Outcomes) taxonomy (Biggs and Collis, 1982)," which "elevated students' overall understanding of the processes to a higher level of abstraction" (p. 85).
     "The SOLO taxonomy is a hierarchical model suitable for measuring learning outcomes of different subjects, levels, and for assignments of various lengths (Biggs and Collis, 1982)" (p. 87). It encompasses five levels: Pre-structural, Uni-structural, Multi-structural, Relational, and Extended abstract. At the pre-structural level, the student lacks the ability to perform the task; there is insufficient understanding. At the uni-structural level, only one aspect of the task to be performed is taken into account. There is some understanding. At the multi-structural level, more aspects of the task are taken into account; however, the student still lacks the "full picture." At the relational level, all aspects are understood and integrated as a "whole." The student exhibits understanding of the parts, as well as the relationships between them. At the extended abstract level, the whole derived at the previous level is conceptualized at a higher level of abstraction so that it can now be used in different settings.
     As applied to the workshop, at the first level the student lacks the understanding required for the task. Either the "story" is not clear or many of the principles of analysis are still missing. At the second level, the student understands some aspects of the process principles (gathering requirements, analysis, design, programming, testing), but s/he still lacks understanding of the business situation expressed by the "story." At level three, the principles are clear and the student has started to implement them in designing a suitable solution for the customer. At level four, all aspects of the solution are clear and were used for preparing the third and fourth documents. The last level allows the student to understand the solution concept and provide proper feedback on her/his fellow students' solutions. The student develops an abstract understanding of the steps and procedures required for designing a useful and complete solution.
     With respect to my study, level one is when the developers walk in. They may have never used authoring software before, let alone ours. They are domain specialists (subject matter experts, SMEs) who are being asked to put their knowledge into an online instructional artifact. At this point they are unable to perform the task. At level two, the developers have begun to understand some of the process principles. They should know what the major components are, though they may not understand how they interact at the program level. They require frequent to constant supervision to ensure successful development. They are not able to create at the concrete or abstract level. At level three, they clearly understand the basic principles of how data are input into the authoring tool. They can follow the basic steps to build a frame. However, they still lack the "full picture" and probably don't understand how branching works or the finer points of why and how interaction happens among the main components of the instructional software they are developing. They still require some supervision and assistance in putting major blocks of the puzzle together but can be trusted to complete discrete components with minimal oversight. At level four (the ideal target level for successful training), the learner understands both the parts and the relationships between them. They can successfully create instructional software with minimal supervision and rework. Level five allows the student to understand the overall concept of instructional software development and provide proper feedback on his/her fellow developers' courseware. The student at this level develops an abstract understanding of the steps and procedures required for designing a useful and complete course and should be a supervisor.
     Back to the confines of this article, in the workshop the students' evaluations of each other were initially very different from those of the instructor; however, with practice and experience, they soon came to resemble those of the instructor, minus the instructor's advanced experience in the field.
REFERENCES
Biggs, J. B., and Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press.
Lavy, I., and Yadin, A. (2010). Team-based peer review as a form of formative assessment - The case of a systems analysis and design workshop. Journal of Information Systems Education, 21(1), 85-98.

Wednesday, November 30, 2011

SWTng 12: Learning Object Evaluation Scale

     Article number twelve by Kay and Knaack (2009) is titled "Assessing Learning, Quality and Engagement in Learning Objects: The Learning Object Evaluation Scale for Students (LOES-S)," published in Educational Technology, Research and Development. With 92 references, the keywords include assess, evaluate, learning object, middle school, quality, scale, and secondary school. Here's the abstract.
Learning objects are interactive web-based tools that support the learning of specific concepts by enhancing, amplifying, and/or guiding the cognitive processes of learners. Research on the impact, effectiveness, and usefulness of learning objects is limited, partially because comprehensive, theoretically based, reliable, and valid evaluation tools are scarce, particularly in the K-12 environment. The purpose of the following study was to investigate a Learning Object Evaluation Scale for Students (LOES-S) based on three key constructs gleaned from 10 years of learning object research: learning, quality or instructional design, and engagement. Tested on over 1100 middle and secondary school students, the data generated using the LOES-S showed acceptable internal reliability, face validity, construct validity, convergent validity and predictive validity.
     Learning objects are at the heart of the Sharable Content Object Reference Model (SCORM) criterion that is required for every training course used in the Department of Defense (DoD) under Advanced Distributed Learning (ADL). Although this study deals with middle and secondary school students, the principles of learning objects remain constant. Their application to andragogy vice pedagogy may require some adjustment for learning style. As summarized in the abstract, learning objects act on the cognitive processes of learners, often through visual aids such as 2D and 3D graphics, photos, and animations or videos. This is especially important when training a technical task. Similarly, visual learning objects can have great impact on the behavioral and affective domains, such as through demonstration of a procedure or depicting the potential results of a safety violation. Significantly, learning objects offer the capability of reuse in a variety of situations, minimizing redundancy and duplication of effort.
      This research is about the evaluation of learning objects, which takes place mostly in the design and development phases as a formative analysis. However, little research can be found that incorporates the user's input as part of a summative analysis with regard to learning objects. Thus, most often they fall into the "easy to use" category. There are some repositories that use content experts to evaluate the quality of the objects after they have been developed, but "the number of evaluators is usually limited, the assessors have limited background in instructional design, and the end user does not enter the feedback loop in a significant way" (p. 148). Most evaluation has been done at the level of higher education. Little has been done in the K-12 arena, and the article does not deal at all with technical education in corporate or military training.
     The definition of a learning object is key to this study. Original definitions focused on characteristics such as accessibility, adaptability, use of metadata, reusability, and standardization. Contemporary definitions emphasize qualities such as interaction and the degree to which the learner actively constructs knowledge. These technically based and learning-based definitions have been replaced for this study with a pedagogically based definition that is a composite of both. It includes interactivity, accessibility, a specific conceptual focus, reusability, meaningful scaffolding, and learning.
     Three aspects of each learning object were assessed in the study through student feedback: (1) how much they learned; (2) the quality of the learning object; and (3) how much they were engaged with the learning object. Students were asked in an open-ended format to comment on what they liked and disliked about the learning object. The total response included 1,922 comments, which were categorized into the three main constructs and analyzed using a coding scheme. Each comment was also rated on a 5-point Likert scale. Two raters were used and their ratings compared; differences between their ratings were discussed and revised as necessary.
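The two-rater comparison described above can be sketched as a short calculation. The article reports only that disagreements were discussed and revised, not a specific agreement statistic, so the simple percent-agreement measure and the sample ratings below are hypothetical, for illustration only:

```python
# Sketch of the two-rater check described above: each comment gets a
# 5-point Likert rating from two raters, and disagreements are flagged
# for discussion. Sample ratings are hypothetical.

rater_a = [5, 3, 4, 2, 5, 1]
rater_b = [5, 2, 4, 2, 3, 1]

def percent_agreement(a, b):
    """Share of comments on which the two raters gave the same rating."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

# Indices of comments the raters must discuss and revise.
to_discuss = [i for i, (x, y) in enumerate(zip(rater_a, rater_b)) if x != y]

print(round(percent_agreement(rater_a, rater_b), 2))  # → 0.67
print(to_discuss)                                     # → [1, 4]
```

With real data one would likely use a chance-corrected statistic such as Cohen's kappa, but the flag-and-discuss loop is the process the article describes.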
     Next, student performance was assessed based on their exposure to the learning objects using several different tests that surveyed learning at different levels. Finally, the teachers who selected the learning objects were surveyed with a similar instrument to that used with the students to determine their perspective on (1) how much their students learned; (2) the quality of the learning object; and (3) how much their students were engaged with the learning object. The results were not as reliable statistically as those for the student evaluation. However, the focus of the research was to investigate an approach for evaluating learning objects that related most to students.
     The key features of learning objects most supported by the responses to the study included interactivity, clear feedback, and graphics or animations that support learning. The design qualities most supported included effective help, clear instructions, transparency of use, and organization. With reference to engagement, the overall theme can affect learning positively or negatively. There was also a low but significant correlation between student evaluations of learning, quality, and engagement and learning performance. Ultimately, however, "[l]earning objects are simply tools used in a complex educational environment where decisions on how to use these tools may have considerably more import than the actual tools themselves" (p. 161).
REFERENCES
Kay, R. H. & Knaack, L. (2009). Assessing learning, quality and engagement in learning objects: The Learning Object Evaluation Scale for Students (LOES-S). Educational Technology, Research and Development, 57(2), 147-168. DOI: 10.1007/s11423-008-9094-5

Thursday, November 17, 2011

SWTng 11: Change and Learning at Work

     Article number 11, by Hetzner, Gartmeier, Heid, and Gruber (2009), is titled "The Interplay between Change and Learning at the Workplace: A Qualitative Study from Retail Banking." It is published in the Journal of Workplace Learning with 41 references and the author-supplied keywords professional education, performance management, and workplace learning. Here's the abstract (p. 398):
Purpose - The purpose of this paper is to analyse employees' perception of a change at their workplaces and requirements for learning and factors supporting or inhibiting learning in the context of this change.
Design/methodology/approach - Data collection included personal face-to-face semi-structured interviews with ten client advisors in the retail-banking department of a German bank. The interviews took place during a time when the participants' workplaces were affected by a drastic change, namely the implementation of an integrated consulting concept. The data were analysed by a qualitative content analysis approach, adapting Billett's framework for analysing workplace changes.
Findings - Challenges and requirements for learning as a consequence of the workplace change were analysed. The results show that the employees realised many affordances of the modification of work routines, especially concerning work performance, professional knowledge, and professional role. Thus, employees recognised the change as an opportunity for the acquisition of knowledge and competence development.
Originality/value - This paper contributes to the understanding of workplace change's effect on employees' knowledge, work routines and professional development.
     The tenuousness of the organizational structure in most workplaces, combined with fluctuation in the very nature of the job or product, has created a situation today where workers must constantly adapt their workplace knowledge to new conditions, procedures, and peers. "Learning to cope with new requirements means employees must modify existing work routines or establish new ones (Becker, 2004; Becker et al., 2005; Hoeve and Nieuwenhuis, 2006)" (p. 398). "However, effective learning in change situations does not occur automatically, mainly due to the tension between needing to keep up the pace and ensure job performance efficiency on one hand, and time-consuming learning activities on the other (Eraut, 2004)" (p. 399). This is most commonly seen in supervisors' unwillingness to release employees to training for fear their absence will derail the production schedule. It frequently results in a search for a "quick fix" to most problems that arise, which is often both a symptom and a cause of superficial learning. Trainers are turning more and more to informal learning processes. For the workplace to function as the learning environment, the employees' participation must be active. One of the major stumbling blocks to employee acceptance of change is their perceived inability to influence even the change process, let alone the change itself, which is impressed upon them from on high. This article looks at two factors: (1) "how employees perceive a change at their workplaces and the requirements for learning"; and (2) "which factors support or inhibit learning in the context of this change" (p. 399).
     First, the authors look at a workplace learning perspective on workplace changes from the individual perspective, the individual from a contextual perspective, and then formulate conclusions for a study on workplace changes and workplace learning. The result is a qualitative study that investigates the interrelation between change in the workplace and workplace learning (p. 401).
     The change context of the study involves a new concept for client advising in the retail-banking department of a German bank. Advisors went from specializing in a small number of products to working with a larger number of products. Also, their interaction with the client was scripted, with little room for adjustment or modification.
     The researchers formulated two questions to guide their inquiry: (1) How did the employees perceive the change and what were the resulting requirements for learning? and (2) Which factors were perceived as supportive or inhibitive of learning in the context of the change? They conducted semi-structured interviews with ten client advisors, all of whom had the same exposure to the change, worked in the same functional area, and had at least five years' experience in retail banking.
     Responses to question 1 were categorized according to routineness, intensity, multiplicity, complexity, and artifacts and external tools. Responses to question 2 were categorized under discretion, accessibility, homogeneity, working with others, and status of employment. While the employees found the challenge invigorating, the tangible rewards for their efforts, aside from keeping their jobs, were not forthcoming. There was, however, evidence that each employee adapted the learning process to their learning style and the new requirements. The most important conclusion was that "As learning at work is based on negotiations between the individual and the social context (Billett, 2008), a communication strategy is recommended that explains to the employees the learning requirements involved and the resulting individual benefits, such as professional development, rather than just the necessity and reasons for change" (p. 411).

REFERENCES

Billett, S. (2008). Emerging perspectives on workplace learning. In S. Billett, C. Harteis, and
     A. Etelapelto (Eds.), Emerging perspectives on learning through work (pp. 1-15). 
     Rotterdam: Sense.
Hetzner, S., Gartmeier, M., Heid, H., & Gruber, H. (2009). The interplay between change and 
     learning at the workplace: A qualitative study from retail banking. Journal of Workplace 
     Learning, 21(5), 398-415. DOI: 10.1108/13665620910966802

Wednesday, November 16, 2011

SWTng 10: Assessing Learning with Concept Mapping

     Article ten by Gregoriades, Pampaka, and Michail (2009) is titled "Assessing Students' Learning in MIS using Concept Mapping," published in the Journal of Information Systems Education with 47 references and the author-supplied keywords MIS internationalization, concept mapping, learning assessment, and knowledge gaps. Here is the abstract:
The work described here draws on the emerging need to internationalize the curriculum in higher education. The focus of the study is on the evaluation of a Management Information Systems (MIS) Module, and the specification of appropriate course of action that would support its internationalization. To realize this goal it is essential to identify the possible learning needs of the two dominant cultural groups that compose the university student population in Britain, specifically European and Asian (UUK, 2005). Identification of knowledge patterns among these cultural groups is achieved through the application of a concept mapping technique. The main research questions addressed are: (1) How to internationalize the MIS module's content and teaching methods to provide for students from different cultural backgrounds? (2) What are the main gaps in knowledge of students in MIS? The paper presents the results of this study and proposes actions needed to streamline the current teaching methods towards improving the quality of the students' learning experience.
     The driving focus of this research is how to internationalize curriculum to match the ethnic diversity of today's student body. However, the use of concept mapping to accomplish this is what I am interested in. A form of concept mapping has been used with domain specialists who develop instructional software but have little or no experience in instructional design. Frequently, their efforts become a tribute to everything they know about the topic, but lack any organization that would benefit long-term learning, real-world application, or even logical branching of the material. Thus, I am interested in the possibilities of application for use of concept mapping techniques in software training. In this research, it is used to evaluate students' level of learning and identify commonalities and gaps in knowledge between the two target groups, British students (representing Western European culture) and Chinese students (representing Asian culture).
     The two groups are identified as having different learning styles based on their approach to memorization. The Chinese style is based primarily on rote learning, perhaps because of the need to memorize the multitude of characters that make up their written language (pictograms). The British style, on the other hand, tends to be more reflective, with less passive memorization. Thus, the intent is to identify knowledge gaps or misunderstandings among both groups with regard to the subject matter (MIS).
     The research followed student participation in an MIS module of instruction and used four steps: (1) introduction of concept mapping to students; (2) assessment of student understanding of concept mapping; (3) student preparation of a concept map of their understanding of the MIS module in 30 minutes on paper; and (4) analysis of the models based on students' origin and level of prior Information Systems (IS)/Information Technology (IT) experience. A master concept map key was previously developed based on the same MIS module. Student concept maps were compared to this key. The students' maps were scored from three perspectives: (1) a holistic approach was used to assess the students' overall understanding of the module; (2) a relational approach was used to assess the quality and number of propositions specified in each model; and (3) an existential approach was used to assess the existence of concepts in the map compared to the master key. In the holistic method, each map was assigned a score between 1 and 10. In the relational method, some relationships between concepts were coded with low importance, some medium, and some high; each was weighted with a value of 1, 2, and 3, respectively. Color-coding was used to identify them on the key. The relations between concepts, or propositions, "were multiplied by their corresponding weighting factor and subsequently summed before reaching the final relational score of each map" (p. 423). The total score for the key was 282. In the existential method, if a correct concept from the master key was included in the student map, it was assigned a score of 1. If not, it received a 0. Again, a weighting score was applied, and a total score of 59 was arrived at for the existential method. These three possible scores (10, 282, and 59) were each given a percentage value and then averaged into an overall score with a range of 0-10.
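The three-part scoring just described can be sketched in a few lines. The maxima (10, 282, and 59) and the 1/2/3 importance weights come from the article; the sample student data below are hypothetical, for illustration only:

```python
# Sketch of the holistic + relational + existential concept-map scoring
# described above. Maxima and weights follow the article; the sample
# student map data are hypothetical.

HOLISTIC_MAX = 10      # expert rating, 1-10
RELATIONAL_MAX = 282   # weighted sum of all propositions in the master key
EXISTENTIAL_MAX = 59   # weighted presence score for all key concepts

WEIGHTS = {"low": 1, "medium": 2, "high": 3}

def relational_score(propositions):
    """propositions: importance labels of the correct propositions found."""
    return sum(WEIGHTS[p] for p in propositions)

def existential_score(concepts):
    """concepts: (found, weight) pairs, one per concept in the master key."""
    return sum(w for found, w in concepts if found)

def overall_score(holistic, relational, existential):
    """Convert each score to a fraction of its maximum, average the three
    fractions, and rescale to the 0-10 range."""
    pct = (holistic / HOLISTIC_MAX
           + relational / RELATIONAL_MAX
           + existential / EXISTENTIAL_MAX) / 3
    return round(pct * 10, 2)

# Hypothetical student map: 7/10 holistic, four scored propositions,
# and presence checks against three key concepts.
rel = relational_score(["high", "high", "medium", "low"])    # 3+3+2+1 = 9
exi = existential_score([(True, 2), (True, 1), (False, 3)])  # 2+1 = 3
print(overall_score(7, rel, exi))  # → 2.61
```

Note how the averaging keeps any one method from dominating: a perfect map (10, 282, 59) scores 10.0, while strong holistic impressions alone cannot mask the missing propositions that the relational score exposes.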
     The main finding of interest to me is the low scores on the relational assessment. "Low performance is attributed to the difficulty in identifying relevant relationships among concepts and specifying them with correct propositions, which is a first indication of surface learning (Biggs, 2003)" (p. 424). Many learners are content simply to learn the facts of a matter and not understand why they are facts. They often lack the logic and critical thinking skills to put two and two together, let alone come up with four as the sum. Thus any software training must embed logic and critical thinking in its content. Questioning strategies also must assess higher levels of cognition rather than simple knowledge recall, e.g. "What must you add to x to make z, and what will happen if you don't?" The rest of the discussion and conclusions pertained to internationalization of the curriculum, which is outside the scope of my study.
REFERENCES
Biggs, J. (2003). Teaching for quality learning at university. Buckingham: Society for Research into Higher Education and Open University Press.
Gregoriades, A., Pampaka, M., and Michail, H. (2009). Assessing students' learning in MIS using concept mapping. Journal of Information Systems Education, 20(4), 419-430.
UUK. (2005). Select committee evidence, Treasury Committee, Impact of China on the world and UK economy.

Monday, November 14, 2011

SWTng 9: Developing Team Competencies

     Article number nine by Kathrin Figl of the Vienna University of Economics and Business is titled "A Systematic Review of Developing Team Competencies in Information Systems Education," published in the Journal of Information Systems Education with 122 references. Author-supplied keywords include team competencies, team projects, curriculum development, and information systems education. The abstract follows (p. 323):
The ability to work effectively in teams has been a key competence for information systems engineers for a long time. Gradually, more attention is being paid to developing this generic competence as part of academic curricula, resulting in two questions: how to best promote team competencies and how to implement team projects successfully. These questions are closely interwoven and need to be looked at together. To address these questions, this paper identifies relevant studies and approaches, best practices, and key findings in the field of information systems education and related fields such as computer science and business, and examines them together to develop a systematic framework. The framework is intended to categorize existing research on teams and team competencies in information systems education and to guide information systems educators in supporting teamwork and promoting team competencies in students at the course and curricular level in the context of teaching in tertiary education.
     Working in teams has always been an essential element of development efforts in Information Systems (IS). It is equally important in instructional software development (ISD): the team often consists of instructional designers, graphic artists, subject matter experts, and software programmers, among others, so the correlation should be very close. In a team effort, the labor is divided among the members in a way that is complementary. Each team member works to their strength, not their weakness. Technical competence is insufficient in this environment; social competencies, such as communication skills and the ability to work together with others, are also important. Training in instructional software development should prepare the developers "to work effectively in teams and foster collaborative skills necessary in the workplace" (p. 323). This should be considered a critical skill and should drive the development of training curricula for instructional software development.
     Team competencies can be either specific or generic with respect to the team and with respect to the task. Team-generic team competencies are transportable to other teams; task-generic team competencies are transportable to other tasks. Team-specific and task-specific team competencies are applicable only to the corresponding team or task. "For IS curricula, team-generic, task-contingent and transportable team competencies are especially relevant, since graduates may apply for jobs in different companies and have to work within different teams in their job" (p. 324). This should also be true for ISD curricula. Team skill competencies can be broken down into major sub-skills, including group decision making/planning, adaptability/flexibility, and interpersonal relations.
     The purpose of training in team competencies is to enhance individual knowledge, skills, and attitudes that improve team effectiveness when applied in context. A review of the literature identified three levels at which instruction in team competencies could be enhanced: the course level, the instructor level, and the curriculum level. At the course level, the following background topics were culled from the literature (pp. 326-327):
  • The importance of team projects in IS education
  • Basics of team projects and their positive effects
  • Team projects as training for team competencies
     Specific activities for promoting teams and team competencies at the course level include the following (pp. 327-331):
  1. Building teams for team projects
    1. Team size
    2. Team composition
    3. Team roles
  2. Raising awareness
  3. Team building activities
  4. Dealing with social loafing and promoting positive interdependence
  5. Supporting the team process
  6. Reflection of teamwork
  7. Feedback on teamwork
  8. Assessment of teamwork
  9. Additional team competencies training
    1. Lecture-based input
    2. Exercises, e.g. icebreaker games, communicating requirements, active listening, role plays, and pair-programming
  10. Measuring the effect of interventions on teamwork competencies
  11. Evaluative studies on the effectiveness of team competencies training
     At the instructor level, the primary intervention should be training and supervision for course instructors (pp. 331-332).
     At the curriculum level, students should gain expertise in three basic types of interaction: competition with peers, working independently of peers, and working cooperatively with peers (pp. 332-333).
     Whatever the approach that is used, it should be systematic in supporting and promoting team competencies in the context of either IS or ISD.
REFERENCE
Figl, K. (2010). A systematic review of developing team competencies in information systems education. Journal of Information Systems Education, 21(3), 323-337.

Sunday, November 13, 2011

SWTng 8: Quality for new learning cultures

The next article is by Ulf Daniel Ehlers of the Institute for Computer Science and Business Information Systems at the University of Duisburg-Essen in Essen, Germany. It is titled "Web 2.0--e-learning 2.0--quality 2.0? Quality for new learning cultures," published in 2009 in Quality Assurance in Education. It has 33 references and the author supplies the keywords e-learning, quality management, quality, learning processes, and learning. Following is the abstract:
Purpose - The purpose of this paper is to analyse the changes taking place when learning moves from a transmissive learning model to a collaborative and reflective learning model and proposes consequences for quality development.
Design/methodology/approach - The paper summarises relevant research in the field of e-learning to outline the differences between e-learning 1.0 and e-learning 2.0 and amalgamates it with a series of previously published works. The characteristics of quality development are analysed in a next step and suitable methodologies for developing quality for e-learning 2.0 environments are selected, proposed and explained.
Findings - Even though the question of quality was already controversially discussed when e-learning 1.0 appeared on the market, e-learning 2.0 creates even more insecurity. This paper aims at answering the following questions: what constitutes the new, innovative element, which is described by Web 2.0 and e-learning 2.0? Does this development have consequences for how we assure, manage and develop quality in e-learning? In three steps, it is described what e-learning 2.0 constitutes, which basic elements of Web 2.0 it builds on, and what has changed. In a second step, the consequences this implies for quality development in e-learning are discussed. Third, a number of methods as examples and practical advice on how to further advance quality development are described.
Originality/value - The original value of the paper is to outline the changes which have to be taken into account in new and innovative learning environments which are built on Web 2.0 technologies and to draw consequences for quality development as well as suggest methodologies for educators and learners to improve the quality of such learning environments.
 The focus of this article is e-learning and quality management. While I am no longer looking at quality management and instructional design, the article also deals with constructivist issues concerning e-learning 2.0. E-learning 2.0 is characterized by the author with terms such as learner-centered, immersive learning, connected learning, game-based learning, workflow (informal) learning, and mobile learning (p. 297). E-learning 2.0 is further characterized as a "personalized learning environment." However, this is not individually prescribed; the nature and scope of the learning goals are identified by the learner, not the instructor. While some of that may be feasible within the scope of my topic, the goal in my situation is to make the learners more capable of doing their jobs. Thus, only those tasks which apply to the job will be apropos to the curriculum. Nevertheless, the point is well taken that this is different from e-learning 1.0, which resembled an online textbook. This type of e-learning must be more flexible to the wants as well as the needs of the learners. They must be able to make it their own; hence, the characterization "personalized."
From e-learning 1.0 to e-learning 2.0 - What needs to be stated right at the outset is that e-learning 2.0 is not a scientific term. It is not about further development, a new paradigm or a replacement in the sense of a new release. Strictly speaking, it is not even about a new technology, a new model of learning or a new, separate, innovative variety of e-learning. E-learning 2.0 rather refers to a number of developments, trends and points of view, which require a change of focus from teaching to learning. The new point of view essentially connects e-learning with five characteristics:
  1. Learning takes place always and everywhere (i.e., it is ubiquitous) and therefore in many different contexts, not only in the classroom.
  2. Learners take on the role of organizers.
  3. Learning is a lifelong process, has many episodes and is not (only) linked to educational institutions.
  4. Learning takes place in communities of learning (so-called communities of practice, Wenger, 1998): learners participate in formal as well as informal communities.
  5. Learning is informal and non-formal, takes place at home, at the work place and during leisure time and is no longer centered on teachers or institutions. (p. 297)
 In order to provide for quality in e-learning 2.0, the following transitions need to be made from 1.0:
  • From reception to participation
  • From inspection to reflection
  • From product orientation through process orientation to performance and competence orientation
  • From planning education for the learner to planning education by the learner
  • From receiver to developer of learning materials
  • From the "learning island" LMS to the internet as a learning environment
  • From tests to performance
Next, the author discusses concepts and methods of quality development for e-learning 2.0. Important aspects of methods for quality assessment include the following:
  • Self-evaluation
  • Quality assessment with e-portfolios (web-based portfolios)
  • Social recommendation and community participation
  • Evaluation processes aimed at a target group
The point the author is making is that the method of identifying quality must change because the nature of learning has changed in e-learning 2.0.

REFERENCE

Ehlers, U.-D. (2009). Web 2.0 - e-learning 2.0 - quality 2.0? Quality for new learning cultures. Quality Assurance in Education, 17(3), pp. 296-314. DOI: 10.1108/09684880910970687

Saturday, November 12, 2011

SWTng 7: Ill-Structured Problem Solving

The next article is by Choi and Lee (2009) titled “Designing and implementing a case-based learning environment for enhancing ill-structured problem solving: Classroom management problems for prospective teachers,” published in Educational Technology, Research and Development. It has 67 references, and the author-supplied keywords include case-based learning, constructivist learning environment design, design-based research, ill-structured problem solving, teacher education, and classroom management. This is the abstract:

“This design-based research study is aimed at two goals: (1) developing a feasible case-based instructional model that could enhance college students’ ill-structured problem solving abilities, while (2) implementing the model to improve teacher education students’ real-world problem solving abilities to deal with dilemmas faced by practicing teachers in elementary classrooms. To achieve these goals, an online case-based learning environment for classroom management problem solving (CBL-CMPS) was developed based on Jonassen’s (in: Reigeluth (ed.) Instructional-Design Theories and Models: A New Paradigm of Instructional Theory, 1999) constructivist learning environment model and the general process of ill-structured problem solving (1997). Two successive studies, in which the effectiveness of the CBL-CMPS was tested while the CBL-CMPS was revised, showed that the individual components of the CBL-CMPS promoted ill-structured problem solving abilities respectively, and that the CBL-CMPS as a whole learning environment was effective to a degree for the transfer of learning in ill-structured problem solving. The potential, challenge, and implications of the CBL-CMPS are discussed.”

[My rambling comments will appear thus, in brackets hereafter in this review.]

[Teachers, as a whole, are used to using well-structured problems in their classrooms. Simply put, they make it easier to grade. After all, no one likes it when a student points out there is more than one answer to the test question. However, life is not so simple. Life’s problems are generally ill-structured, and that’s part of the point of this article and part of why it is apropos to my topic, teaching adult domain specialists to use an authoring system to build instructional software. The problems they will face generally will be ill-structured. Though they may run into the same problem more than once, and some problems many times over, there may be more than one way to solve a problem, given the capabilities of the software. Thus, this research meets a need in my research. Though the foci of the article are higher education and teacher training, application to corporate training and adult learning should not be a stretch, and in fact should inform the structure and theoretical foundation of the online tutorial I intend to test. Thus, I need to include that in my problem statement as part of the theoretical foundation of my research. Constructivism! Gotta love it!]

The authors’ purpose is two-fold: (1) to fill the gap between classroom learning and real-world problem solving, and (2) to create “feasible design and implementation models for improving college students’ real-world problem solving abilities” (p. 100). To do that, the authors’ goals were first “to develop and refine a feasible case-based instructional model that could enhance college students’ ill-structured problem solving abilities” (p. 100) and second to apply the model to the development of training for prospective teachers to prepare them for the real world of teaching. They combine two concepts: case-based learning (CBL) as the environment and classroom management problem solving (CMPS) as the task. Together, they form the case-based learning environment for classroom management problem solving (CBL-CMPS). [So much gibberish.]

The characteristics of ill-structured problems include the following: complexity of the context; multiple, and sometimes conflicting, perspectives among the stakeholders; diverse or even no solution(s); and multiple criteria for evaluation of the solution. Domain knowledge is the first tool and key to solving not only well-structured problems but ill-structured problems as well. How the problem-solver believes humans process information and their own interpersonal communication skills may not affect how they solve well-structured problems but will serve a role in how they solve ill-structured ones. Four critical skills or factors figure heavily in ill-structured problem solving:
  • Respect for and incorporation of multiple perspectives on human information processing while questioning one’s own beliefs and knowledge (epistemological beliefs) [at face value, this seems contradictory; however, I think this may be an attempt at objectivity]
  • Planning and monitoring of solutions and the cognitive processes by which they were obtained (metacognition)
  • The reconciling of conflicting interpretations and solutions with sound arguments (justification/argumentation skills)
  • Domain knowledge
In Figure 1 below, the top half of the figure illustrates four ill-structured problem solving process models from the literature while the bottom half shows the Case-Based Learning for Classroom Management Problem Solving (CBL-CMPS) model by comparison.


Figure 1. Ill-structured problem solving models and the CBL-CMPS model (p. 102)

The researchers then applied this concept and their model to classroom management.

[While this is certainly not within the scope of my topic (in my case, if the students misbehave, they get fired!), I will attempt to pull some principles out of their research and see how they fit.]
First, they point out that traditional philosophy portrays teaching as a linear process preceded by classroom management and as discrete acts separated from each other.

[This relates to my topic in that instructional software development includes using the authoring software, which requires certain knowledges, skills, and abilities (KSAs); however, from a constructivist viewpoint, the KSAs need to be blended with the process in the training rather than separating them in pure cognitive blocks without integrating them into training the learner to do their job. Theoretically, the latter approach should be both more effective and more efficient. Perhaps this is the nature of the peer coaching model that is currently used. However, the KSAs don’t seem to be emphasized to a level that would help the learner to develop a greater proficiency. Instead, the domain specialists must often consult the instructional designers or the authoring software “experts” to find out how to accomplish a certain task. Repeatedly. This slows the development process for everyone. Yet, there is another element that must be recognized: the emotional state of the domain specialist being called upon to use their expertise in ways many of them have never experienced. The training must take this into account and attempt to alleviate some of it in order to increase its effectiveness and efficiency. Dumping a whole bunch of knowledge on the student at once will not suffice. The training must be structured carefully with a balance of raw information, application, and reinforcement through both drill and practice and appropriate feedback. In fact, in this case it is the learner who will be developing the ill-structured problem solving skills.]

This must be accomplished by making the instructional software development a “’contextual, local, and situated’ act that demands ‘subtle judgments and agonizing decisions’ (Shulman, 1992, p. 28)” (Choi and Lee, 2009, p. 103).

[Fortunately, the situation presents copious amounts of opportunity for contextualized domain knowledge integrated with authentic situations to facilitate the formation of ill-structured problem solving skills.]

These will include the ability to identify and interpret important situational cues as well as the ability to apply (or modify) appropriate principles to a particular situation (Choi and Lee, 2009, p. 104). The authors affirm that “ill-structured problem solving often relies on case-based reasoning by applying one’s prior experience (Hernandez-Serrano and Jonassen, 2003; Schank, 1999) because ill-structured problems are often context dependent (Voss, 1987)” (Choi and Lee, 2009, p. 104).

[While the domain specialist developers may not have sufficient experience in instructional software development, they do have experience in the domain in which they specialize. This then can be the key by using information with which they are already familiar, specifically correlated to their specialization which will also carry over to their actual job. While they may not be required to do actual production that will go to the customer, they may be presented with scenarios for practice that are drawn from past production.]

“To this end, case-based instruction seems to be one of the most effective pedagogical approaches to ill-structured problem solving skills because it provides richer contexts for framing problems and facilitates experience-based knowledge construction (Williams, 1992)” (Choi and Lee, 2009, p. 104).

The authors then delve deeply into classroom management issues and how they approached their topic using the CBL-CMPS model, “based on the adapted model of Jonassen’s (1997) ill-structured problem solving process…and a modified model of Jonassen’s (1999) constructive learning environment design. The former guided us to identify what kinds of problem solving activities need to be facilitated, whereas the latter identified what kinds of learning resources need to be arranged and in which ways” (Choi and Lee, 2009, p. 105).

[Clearly, these two resources by Jonassen are seminal and will be reviewed separately.]

REFERENCES
Choi, I., and Lee, K. (2009). Designing and implementing a case-based learning environment for enhancing ill-structured problem solving: Classroom management problems for prospective teachers. Educational Technology, Research and Development, 57(1), pp. 99-129. DOI: 10.1007/s11423-008-9089-2

Hernandez-Serrano, J., and Jonassen, D. H. (2003). The effects of case libraries on problem solving. Journal of Computer Assisted Learning, 19(1), pp. 103-114.

Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology, Research and Development, 45(1), pp. 65-94.

Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory (Vol. 2, pp. 215-239). Mahwah, NJ: Lawrence Erlbaum Associates.

Schank, R. C. (1999). Dynamic memory revisited. New York: Cambridge University Press.

Shulman, L. S. (1992). Toward a pedagogy of cases. In J. H. Shulman (Ed.), Case methods in teacher education (pp. 1-30). New York: Teachers College Press.

Voss, J. F. (1987). Learning and transfer in subject-matter learning: A problem-solving model. International Journal of Educational Research, 11(6), pp. 607-622.

Williams, S. M. (1992). Putting case-based instruction into context: Examples from legal and medical education. The Journal of the Learning Sciences, 2(4), pp. 367-427.