Sunday, October 30, 2011

SWTng Article No. 2

Here's article number two on software training. It's by S.C.J. Palvia and P.C. Palvia (2007) in the Journal of Information Systems Education titled "The effectiveness of using computers for software training: An exploratory study." Keywords are computer-based education, software training, learning methods, computer literacy, and instructional strategy. There are 42 references listed. This is the abstract:

"Both academic institutions and corporations have invested huge amounts of resources in computer-based training and education. The evidence in support of the effectiveness of computers and instructional technology in the classroom is mixed at best, and much of the practice is based on faith and ongoing trends in education. In this study, we conduct an exploratory experimental investigation into the effectiveness of four computer-based software training methods: traditional, delayed, asynchronous, and synchronous. We do not find any evidence to support the commonly held beliefs that there is an improvement in the computing literacy scores of students if the instructor has access to computers or if the students have access to computers during the software lesson. On the other hand, students find the practice of using computers both by themselves and by the instructors more satisfying than not being able to use them in the classroom. Our results have serious implications for instructors and decision-makers in both education and industry. While our results are directed at the lower levels of the Bloom's taxonomy of learning, we recommend research into higher levels in order to assess the full impact of computer-based education."

The authors begin by citing Alavi & Leidner (2001) for the "dramatic growth in computer-based teaching and learning in the last decade." (!) This article was published four years ago. The one cited was published 10 years ago. A common assumption is that knowledge doubles in ever-shorter intervals. However, this may be a misstatement of a concept called the "half-life of knowledge," attributed to Fritz Machlup (1962). It is defined as "the amount of time that has to elapse before half of the knowledge in a particular area is superseded or shown to be untrue" (from the article "Half-life of Knowledge" at http://en.wikipedia.org/wiki/Half-life_of_knowledge).

Gonzalez (2004) also refers to this concept, albeit somewhat differently:

"Technology is placing unique requirements on people in the workplace, compelling a sharp focus on training and education. One of the most persuasive factors is the shrinking half-life of knowledge. The “half-life of knowledge” is the time span from when knowledge is gained to when it becomes obsolete. Half of what is known today was not known 10 years ago. The amount of knowledge in the world has doubled in the past 10 years and is doubling every 18 months according to the American Society of Training and Documentation (ASTD). [Note: this is actually the American Society for Training and Development, of which I am a member. The information cited is in Meister and Willyerd (2010) which is drawn from an unnamed study at the University of California at Berkeley.] To combat the shrinking half-life of knowledge, organizations have been forced to develop new methods of deploying instruction."

In his IDOL doctoral dissertation, Dr. Kris Jamsa (2008) notes that in that year, society would "produce an estimated 10 exabytes of information--roughly double the amount of data required to store all the words ever spoken by man. Further, the Web now consists of over 300 exabytes of information (35 times the size of the Library of Congress)." (p. i)

The point of this little rabbit trail is that if computer-based teaching had grown by 2007, it has exploded in 2011. To use a word that has become ubiquitous, the use of computers has become ubiquitous. While there are still unserved populations, e.g., the homeless and those either too rural to have service or too poor to afford it, computers have become a way of life. My cellphone has a 1-GHz dual-core processor with 16 GB of onboard storage, plus an 8-GB SD card I had lying around and inserted. With Bluetooth and Wi-Fi capabilities, it doesn't take much to get online, since even most McDonald's restaurants offer free Wi-Fi connection to the Internet. Indeed, the latest craze is mobile learning via smartphones like mine.

So whatever their results were in 2007, when I began this doctoral excursion, the stakes have changed galactically (exponentially just doesn't cut it any more). Well, that was just the first sentence of the introduction. Let's see if I can wrap this up quickly. Their problem is that despite advances in technology, the effectiveness of computer-based learning remains a mystery. They claim research results are "mixed and conflicting" (p. 479). They recite a litany of studies that report conflicting results on the one hand and results that are so contextualized as to be ungeneralizable on the other. Therefore, the authors decided to investigate "alternative modes of computer-based education and their effectiveness" (p. 480), focusing on the formal training and learning phase of the training/learning process.

They used a 2 x 2 matrix to identify four different methods of training: traditional, delayed, asynchronous, and synchronous. The topic of all four methods was the same: training in the use of Microsoft Excel. The authors distinguished this as computing literacy (the skills needed for using computers) as opposed to computer literacy (knowledge about computer fundamentals). The two variables were whether the instructor had access to a computer during the instruction and whether the students did:

- Traditional: neither the instructor nor the students had access to computers; traditional instruction with delayed practice.
- Delayed: the instructor taught with a computer, but the students had no computers of their own; computer-based instruction with delayed practice.
- Asynchronous: the roles reversed; the instructor taught without a computer, but the students had access; traditional instruction with concurrent practice.
- Synchronous: both the instructor and the students used computers; computer-based instruction with concurrent practice.

While students showed significant improvement in computing skills under all four methods, there was no significant evidence that computing literacy improved when either the instructor or the students had access to computers in the classroom. However, there was evidence to support both hypotheses on student satisfaction: students were more satisfied when the instructor and/or the students themselves had classroom access to computers.

REFERENCES

Gonzalez, C. (2004). The role of blended learning in technology. Benchmarks Online, 7(9). Retrieved from http://www.unt.edu/benchmarks/archives/2004/september04/eis.htm

Jamsa, K. (2008). Implementing a distributed learning object registry and repository to measure Learning-Object Metadata (LOM) practices and use. (Doctoral dissertation, Capella University). Retrieved from ProQuest Dissertations and Theses.

Machlup, F. (1962). Knowledge production and distribution in the United States. Princeton, NJ: Princeton University Press.

Meister, J., and Willyerd, K. (2010). Looking ahead at social learning: 10 predictions. Learning Circuits. Retrieved from http://www.astd.org/LC/2010/0710_meister.htm

Palvia, S.C., and Palvia, P.C. (2007). The effectiveness of using computers for software training: An exploratory study. Journal of Information Systems Education, 18(4), pp. 479-489.

Saturday, October 29, 2011

SWTng Article No. 1

Okay, here we go again. This will be the first article of a literature review for the new topic of software training. It's by Boot, van Merrienboer, and Veerman (2007) in Educational Technology Research and Development titled "Novice and experienced instructional software developers: Effects on materials created with instructional software templates." Keywords are instructional software templates, instructional design, and authoring. There are 29 references listed. Here's the abstract:

"The development of instructional software is a complex process, posing high demands to the technical and didactical expertise of developers. Domain specialists rather than professional developers are often responsible for it, but authoring tools with pre-structured templates claim to compensate for this limited experience. This study compares instructional software products made by developers with low production experience (n = 6) and high production experience (n = 8), working with a template-based authoring tool. It is hypothesized that those with high production experience will be more productive and create software with a higher didactical quality than those with low production experience, whereas no differences with regard to technical and authoring quality are expected. The results show that the didactical quality was unsatisfactory and did not differ between groups. Nevertheless the templates compensated for differences in experience because the technical and authoring quality was equal for both groups, indicating that templates enable domain specialists to participate successfully in the production process."

The first question is "Why are the domain specialists (i.e., subject matter experts) the ones building the courseware?" The authors point out two reasons in the introduction: (1) many times very specific requirements for custom-made instructional software benefit from the use of domain specialists as the developers because "they already possess the necessary domain knowledge and have easy access to relevant--multimedia-- resources (Spector and Muraida, 1997)" (p. 648); and (2) "professional instructional designers and software producers are not easily available or too expensive to hire." (p. 648).

The second question is "What is didactical quality?" The authors define this as "the extent to which desired learning outcomes are attained in an efficient manner" (p. 648). Combined with technical quality ("...the extent to which the software takes care of the input, information processing, and output as intended" (p. 648)), these two elements together are necessary to "stimulate the desired learning processes" (p. 648). Assessing didactical quality can be accomplished by using a Kirkpatrick Level 2 evaluation of the learners' achievement of the learning objectives and/or an analysis of the use of specific instructional principles in the software. The authors reference Merrill's (2002) five learning principles: "(1) the use of real-life problems as the driving force for learning; (2) the proper activation of relevant prior knowledge; (3) the demonstration of useful problem-solving approaches and procedures by the learner; (4) the practical application of those approaches and procedures by the learner; and (5) the integration of what has been learned into real-world activities." (p. 648) However, the authors point out that it is not merely the presence of these five principles that determines the didactical quality of instruction but rather the way in which they are applied.

Why is this an issue? Because frequently domain specialists forget what it was like for them in the beginning, before they had 10, 15, 20, or even 30 years of experience in the field for which they are developing instructional software. Without an understanding of learning theory, let alone the learning traits of their target audience, or of instructional design theory, they frequently develop instructional materials that center around everything they know about the topic and end up "deep in the weeds," so to speak.

This study looked at the effects the templates had on the development process, the quality of the software, and the level of support perceived by the developers. As expected, the experienced group produced more software than the novice group and did so with more information and question elements, but the difference was small, possibly because of the shortened time span for the exercise. Also as expected, the templates helped to compensate for the novices' lack of experience with regard to authoring and technical quality. The novices were only inexperienced with regard to instructional software production, not their domain specialty. Finally, it was expected that the experienced group would incorporate a higher didactical quality than their counterparts, but that both groups' products would evidence sufficient didactical quality. However, although the developers' assessments of their own achievement in this area outstripped the experts' evaluations, neither group evidenced superiority in this regard. The only clear advantage the experienced group seemed to possess was in their questioning strategies, leading to a more active approach to learning with greater variety.

How does this apply to my topic? My population also consists of domain specialists engaged in the task of developing instructional software. Some characterizations of the population are thus transferable. My focus, however, moves back a step to the point before the novices begin development when they are just learning to use the authoring software. Still, the information in this research lays a strong foundation for description of the population.

REFERENCES
Boot, E.W., van Merrienboer, J.J.G., and Veerman, A.L. (2007). Novice and experienced instructional software developers: Effects on materials created with instructional software templates. Educational Technology Research and Development, 55, pp. 647-666. DOI 10.1007/s11423-006-9002-9

Merrill, M.D. (2002). First principles of instruction. Educational Technology Research and Development, 50, pp. 51-55.

Spector, J., and Muraida, D. (1997). Automating design instruction. In S. Dijkstra, N. Seel, F. Schott, and D. Tennyson (Eds.), Instructional design: International perspectives, Vol. 2 (pp. 59-81). Mahwah, NJ: Lawrence Erlbaum.

Change 3 of Change 4,972

Update (11/2/11): I received an email from my advisor this week informing me that there is a third extension for which I can apply. Since the Scientific Merit Review is only the precursor to the milestone I am striving for, namely my mentor's approval of my proposal (the first three chapters of my dissertation), it looks like I will at least apply for the extra extension.

With less than five weeks to my second extension deadline, I have changed my topic (!). I am no longer looking at the impact of ISO 9000 registration on instructional design. I just couldn't get it past the committee. Instead, I am now working on a Scientific Merit Review for software training methods. Here's my concept feasibility proposal:

With the explosion of online learning, an increasing number of instructional software developers are the domain specialists themselves (Boot, van Merrienboer, & Veerman, 2007), whether in higher education, corporate, or military training development. In many cases, companies use proprietary authoring-system software that may or may not come with tutorials, whether printed or online. In addition, the documentation may be written by the software engineers who designed the authoring system. Often, these scaffolding tools are unintelligible to the untrained user.

So what is the most common method of training new hires in the use of the tool of their new trade? Frequently in a corporate setting, it is a form of peer coaching in which an experienced user sits alongside the novice and guides them through the initial steps of learning their new job and its tool simultaneously. While behavior modeling is usually considered the most effective method of software training (citation), it may not always be the most efficient. One possible explanation for the use of this method is project managers' desire to get as much production out of the new hire as soon as possible. Thus, they set the novice to work on actual development under the tutelage of a developer-mentor. Yet one could easily argue that instead of one developer producing at full capacity while the other sits through "unproductive" training, two developers are now producing at less than the full capacity of even one developer while the novice "learns the ropes."

Managers frequently view training as "lost production time." However, it may be possible to supplement the peer coaching with online tutorials that are individually prescribed, problem-oriented, and authentic--in a word, constructivist in their theoretical foundation. Combined with a pre-test/post-test to ascertain the learner's level of knowledge and understanding both before and after the training and with an Electronic Performance Support System (EPSS) for scaffolding the learner's efforts, it may be possible to significantly increase the new hire's productivity in a shorter amount of time. In addition, levels of instruction can be scaled for basic, intermediate, and advanced instruction, which can also serve as in-service or refresher training for more experienced developers.

From a population of more than 500, volunteers will be given a pretest. Then a random sample of roughly half will receive the training while the rest go about their normal development duties. After the treatment group has completed the training, both groups will take a post-test, and the results will be compared to determine whether the training is associated with greater facility with the authoring system.
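That design can be sketched in a few lines of code. This is a minimal illustration with hypothetical scores and function names of my own invention; a real analysis would use an inferential test (e.g., an independent-samples t-test on gain scores) rather than a raw difference in means:

```python
import random
from statistics import mean

def assign_groups(volunteers, seed=42):
    """Randomly split volunteers into treatment (training) and control halves."""
    rng = random.Random(seed)
    shuffled = list(volunteers)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_gain(pretest, posttest):
    """Average pretest-to-posttest gain for one group."""
    return mean(post - pre for pre, post in zip(pretest, posttest))

# Hypothetical scores for illustration only:
treatment, control = assign_groups(range(10))
effect = mean_gain([50, 55, 60], [70, 72, 78]) - mean_gain([52, 54, 61], [56, 58, 63])
```

The random assignment is what licenses the comparison: since the two halves are drawn from the same volunteer pool, a reliably larger gain in the treatment group can be attributed to the training rather than to pre-existing differences.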