Sunday, November 6, 2011

SWTng 5: 3 Types of Interaction Treatments

Article #5 is by Bernard et al. (2009), titled "A meta-analysis of three types of interaction treatments in distance education," published in the Review of Educational Research. The article lists 135 references, 72 of which are identified specifically as studies in the meta-analysis. Author-supplied keywords include distance education, meta-analysis, student interaction, and interaction treatment. Following is the abstract:

"This meta-analysis of the experimental literature of distance education (DE) compares different types of interaction treatments (ITs) with other DE instructional treatments. ITs are the instructional and/or media conditions designed into DE courses, which are intended to facilitate student-student (SS), student-teacher (ST), or student-content (SC) interactions. Seventy-four DE versus DE studies that contained at least one IT are included in the meta-analysis, which yield 74 achievement effects. The effect size valences are structured so that the IT or the stronger IT (i.e., in the case of two ITs) serve as the experimental condition and the other treatment, the control condition. Effects are categorized as SS, ST, or SC. After adjustment for methodological quality, the overall weighted average effect size for achievement is 0.38 and is heterogeneous. Overall, the results support the importance of the three types of ITs and strength of ITs is found to be associated with increasing achievement outcomes. A strong association is found between strength and achievement for asynchronous DE courses compared to courses containing mediated synchronous or face-to-face interaction. The results are interpreted in terms of increased cognitive engagement that is presumed to be promoted by strengthening ITs in DE courses."

Many reports of research regarding distance education (DE), particularly online or technology-based learning, compare it to classroom instruction (CI). This review focuses instead on reports of DE compared to other types of DE. The authors first cite results of DE versus CI comparisons:
  1. Because of wide variability in effect sizes, DE can be both better and worse than CI "based on measured educational outcomes," and "some pedagogical features of DE design are related to increased student achievement" (p. 1245).
  2. The "research methodologies typically used to assess the phenomenon are woefully inadequate and poorly reported." (p. 1245) Comparison of "apples and oranges" led to casual inferences that lack any degree of certainty.
  3. Because the focus has been on comparing DE to CI, discerning the true nature and value of DE has been almost impossible, e.g., what distinguishes good DE from bad DE.
Obviously, the DE versus CI controversy relates to the early days of online learning. An earlier form of DE, the correspondence course, was not as much a threat to the brick-and-mortar establishment because it was not so prevalent. However, with the advent of the Internet and exploding access to technology, these same institutions began to sense that their livelihood was threatened and that they no longer held a monopoly on higher education. Indeed, many institutions have adopted a "get on board or get left behind" attitude in building their own divisions of online learning. The authors therefore felt it best to compare "different but compatible types of DE technologies" (Clark, 2000, p. 4).

The meta-analysis begins with the assignment of a valence to each effect size: positive (+) if the treatment performs better than the control, negative (-) if the control performs better than the treatment, or no difference (0). However, this becomes more difficult when the control condition is not obvious. Therefore, the first task of the researchers was to "establish a rational and revealing way of determining the +/0/- valence of each calculated effect size" (p. 1246). They applied this methodology only to empirical studies that compared and contrasted different instructional treatments.
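To make the valence coding concrete, here is a minimal sketch. The group means, standard deviations, and sample sizes are hypothetical, and this is a generic standardized mean difference rather than the authors' exact effect size computation:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def valence(d):
    """Assign the +/0/- valence described in the article: positive if the
    (stronger) interaction treatment outperforms the control, negative if
    the control outperforms it, zero otherwise."""
    if d > 0:
        return "+"
    if d < 0:
        return "-"
    return "0"

# Hypothetical study: stronger-IT condition versus control condition.
d = cohens_d(mean_t=78.2, mean_c=74.5, sd_t=9.1, sd_c=8.7, n_t=40, n_c=38)
print(round(d, 2), valence(d))  # 0.42 +
```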

The researchers settled on three perspectives that "emerged as potentially useful dimensions for enabling comparisons between treatment conditions" (p. 1246): student interaction, student autonomy, and technological functionality. Student interaction became "the basis for effect size coding" and "the structure within which analyses would be conducted and results interpreted" (p. 1246). Interaction occurs not just person-to-person but also between the learner and the content, such that "the goal of interaction is to increase understanding of the course content or mastery of the defined goals" (Thurmond and Wambach, 2004, p. 4). Some interaction is social in nature (e.g., student-student (SS) and student-teacher (ST)), though its impact is probably felt more in attitude and course satisfaction than in measures of achievement. There is also a distinction between asynchronous DE, mediated synchronous DE, and mixed, or blended, DE, as well as a distinction between asymmetrical and symmetrical DE. Asymmetrical DE refers to one-way communication, such as reading a book or watching a video; symmetrical DE refers to two-way communication that is equally balanced, such as a phone conversation or video chat.

The key distinction in this report is that the researchers are not measuring the students' actual interactions but rather the way those interactions are designed into the instruction; hence the term "interaction treatments," or ITs. In this report, ITs represent the levels of the independent variable. The researchers focused on six research questions (p. 1249):
  1. What are the effects of the three kinds of interaction (SS, ST, and SC) on achievement?
  2. Does more overall IT strength promote better achievement?
  3. Do increases in treatment strength of any of the three different forms of interaction result in better levels of achievement?
  4. Which combinations of SS, ST, and SC interaction most affect achievement?
  5. Are there differences among synchronous, asynchronous, and mixed forms of DE in terms of achievement?
  6. What is the relationship between treatment strength and effect size for achievement outcomes in asynchronous-only DE studies?
For the meta-analysis itself, there were 11 different criteria used to define the set of studies to be included (pp. 1250-1251); a toy sketch of the screening logic follows the list:
  1. A comparison between two DE conditions, either on the basis of pedagogical differences or technological differences, was required.
  2. DE applications with some face-to-face meetings were included.
  3. A reported measure of achievement outcomes was required in the experimental and the control condition.
  4. Sufficient data for effect size calculation or estimation, the reporting of sample sizes so that a standard error of the effect size could be calculated, and the explicit direction of the effect were required.
  5. Only whole courses were included.
  6. A report of the same or closely equivalent achievement measures for each condition was required.
  7. An identifiable grade or age level of learner was required.
  8. The studies could come from publicly available scholarly articles, book chapters, technical reports, dissertations, or presentations at scholarly meetings.
  9. The inclusive dates of the studies were from January 1985 to December 2006.
  10. Courses that were not institutionally based were excluded.
  11. Only interventions that lasted 15 or more hours were included.
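To make the screening step concrete, the following toy sketch expresses a subset of these criteria as a filter. The Study fields and the example records are hypothetical stand-ins, not the authors' actual coding scheme:

```python
from dataclasses import dataclass

@dataclass
class Study:
    # Hypothetical fields standing in for the coded study attributes.
    compares_two_de_conditions: bool   # criterion 1
    reports_achievement_outcome: bool  # criterion 3
    has_effect_size_data: bool         # criterion 4
    year: int                          # criterion 9
    institutionally_based: bool        # criterion 10
    intervention_hours: float          # criterion 11

def meets_criteria(s: Study) -> bool:
    """Apply a subset of the 11 inclusion criteria (items 1, 3, 4, 9, 10, 11)."""
    return (s.compares_two_de_conditions
            and s.reports_achievement_outcome
            and s.has_effect_size_data
            and 1985 <= s.year <= 2006
            and s.institutionally_based
            and s.intervention_hours >= 15)

candidates = [
    Study(True, True, True, 1999, True, 40.0),  # passes all checks
    Study(True, True, True, 2001, True, 8.0),   # excluded: under 15 hours
]
included = [s for s in candidates if meets_criteria(s)]
print(len(included))  # 1
```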
"The major conclusion from this review is that designing ITs into DE courses, whether to increase interaction with the material to be learned, with the course instructor, or with peers, positively affects student learning" (p. 1264). Though they could not specify the mental processes being fostered by the ITs, the results indicated a moderate and significant increase in cognitive engagement and meaningfulness which they attributed to the different types and amounts of interactivity produced by the presence of ITs.

REFERENCES

Bernard, R.M., Abrami, P.C., Borokhovski, E., Wade, C.A., Tamim, R.M., Surkes, M.A., and Bethel, E.C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), pp. 1243-1289. DOI: 10.3102/0034654309333844

Clark, R.E. (2000). Evaluating distance education: Strategies and cautions. Quarterly Review of Distance Education, 1, pp. 3-16.

Thurmond, V.A., and Wambach, K. (2004). Understanding interactions in distance education: A review of the literature. International Journal of Instructional Technology and Distance Learning, 1(1). Retrieved from http://itdl.org/journal/Jan_04/article02.htm
