I've already expressed my feelings to Capella University, so this should be nothing new to them. In my previous post, I talked about my new, third mentor and how much she helped me make progress. One of the members of my committee, whose class I had taken, took it upon themselves to block my every attempt at choosing a topic. Their objection was always "What's the problem?" even though I had expressly stated the problem I was focusing on. Their focus seemed myopic to me; mine, I'm sure, seemed too broad to them.
As an aside, this person was one of the old guard in the IDOL department. This group felt, in my opinion, that the rest of the University didn't know what they were doing, and instead went their own way in designing the process for reporting on research. They changed the topic approval process completely three times during the course of my program. All the training we received at the three Colloquia we were required to attend was of no use, as the principles we were taught were rejected by this group.
Back on topic: my second mentor had even sent one of my attempts to a former President of the University for his review. He found it acceptable. My committee member did not. At one point they even went so far as to impugn my character, suggesting that I was not fit to receive a PhD. After two and a half years of coursework with a 3.92 GPA and passing the Comprehensive Exam on the first try, their attitude more than puzzled me. It offended me.
And even with the new (third) mentor, we struggled with this committee member for another year trying to get their approval of the topic. Finally, my mentor called the committee member up and asked them point blank what they were looking for. What they came up with was, in my opinion, a milquetoast, pointless topic: "How Instructional Designers produce instructional materials for technical training." Yeah, that's right, they suckered me into a qualitative research project. This was one of the old guard's pet peeves: in their view, not enough qualitative research was being done in IDOL.
I had a year and a half left to complete the research, do the analysis, and write the dissertation. My mentor was a champ. I was able to interview seven IDs and transcribe their responses to my questions. Now, you have to understand, another of the old guard's pet peeves was ADDIE. Its five stages of the ID process are Analysis, Design, Development, Implementation, and Evaluation. All five of the military services use it, with some modification, but the trunk of the tree is the same in each. Imagine my surprise when all seven of my interviewees reported that they use ADDIE in their development.
I was able to complete the first three chapters of the five required for the dissertation by the end of the year. I just had to perform the analysis, report on it in the fourth chapter, and then write the conclusions in the fifth. Then my mentor got sick. Really sick. Cancer. They had to quit. On top of that, I missed a milestone. The University had put me on scholarship, but when I missed the milestone, they had to rescind it. I had reached my limit on student loans and didn't have the money to continue on my own (I had been unemployed twice during this time).
The University gave me a new mentor, but I had run out of time, money, and energy. I was burned out. So in July 2016, I admitted defeat (gosh, it hurts just to write this), and gave up my lifelong dream of getting a PhD. I accepted the consolation of an MS in Instructional Design. I have the paper to certify my ability and training in the field.
Not that it has made much difference. I spent one year as a volunteer in a homeless shelter, another as a door greeter at Sam's Club, 9 months as a freight dispatcher for a trucking company (I earned a whopping $440 in commissions), and finally 6 months as a Consultative Sales Associate in the Hardware department at Sears, before they closed the store. I've been waiting 5 weeks for my unemployment to kick in only to find out that one of my responses in the initial claim is preventing it from being processed properly. Just go ahead and TRY getting hold of them now. I spent four hours doing that today, but the phone system was so overloaded it kept cutting me off.
I'm not ... I was going to say I'm not worried, that God is my supplier. And He has been. We have food and clothing. Jesus said to be content with this. But I have to admit, with unemployment and the pandemic lockdown, I have serious trouble getting to sleep at night. Right now my wife is calling me to bed at 1 a.m. I guess I should get undressed and at least lie down. Good night, all.
Milton's IDOL Dissertation Blog
This is my mental chalkboard for me to collect and organize my sources, information, and thoughts with regard to my Ph.D. dissertation in Instructional Design for Online Learning. Any comments are appreciated.
Friday, May 29, 2020
Friday, March 27, 2015
Call for Research Study Participants
Sunday, February 16, 2014
Ah-wunnaful, Ah-wunnaful, ah-one-and-ah-two-and-ah...
Well, Chrome wouldn't give me the box to enter this text, so I switched to IE 11.
Here's the deal: on the second of January in the year of our Lord 2014, I received an email from the university saying "Surprise! You have a new mentor!" I thought, "Well, this could be interesting." Especially since the last one had been with me for 2 years and we hadn't been able to make this kind of progress. So I contacted her, and man is she being helpful!
The first thing she did was set up a weekly phone conference appointment, which I have kept faithfully. Next, we began setting up some goals for completing the new Research Plan form (this is the 3rd approval process I've experienced). I've been filling out the items on the form, she's been reviewing them, and then we talk them over. THIS IS THE WAY IT WAS SUPPOSED TO HAVE BEEN FROM THE BEGINNING! (Sorry, I didn't mean to yell).
Anywho, I feel like I've made more progress in the last 5 weeks than in the previous 3-1/2 years. I also have a new topic (effectiveness and efficiency of processes and procedures used by instructional designers and managers in the production of instruction) and a new methodology (phenomenography) thanks to my new mentor. Our mantra: "FOCUS! FOCUS! FOCUS!" This could turn out productive.
Oh, and my apologies to all you Lawrence Welk fans out there if my post title offended you.
Friday, November 15, 2013
Assumptions will bite you in the end...
Okay, so here's the deal... (I know, for writing on intellectual research, that's not very esoteric, but hey, it's just a blog!). I've been making some pretty broad assumptions. First, I've only very broadly defined my independent variables, i.e., the use of craft, mass, and lean production techniques in the development of instructional products. My definition of Craft production is simply "the most efficient and effective production of limited run items or items with great variance..." usually involving "...a master producer who is highly skilled in many areas and who creates products, one at a time and by hand, that are individually unique" (Womack, Jones, & Roos, 1990, p. 22). Well, that can be said about almost any instructional intervention, if you begin with a needs assessment, because every situation will have its own unique set of variables, sometimes even indicating a lack of need for training. Instead, a job aid might be the least intrusive, least expensive, and most effective approach. But my point is that nowhere do I stipulate what characteristics are manifested in Craft production vis-a-vis instructional design.
The same can be said of my other two definitions, for Mass production and Lean production. The former is defined in my proposal as "the most efficient and effective production of large quantities of identical or very similar items. It usually involves narrowly skilled workers creating identical iterations of a product in great volume from large stockpiles of raw materials and parts" (Womack, Jones, & Roos, 1990, pp. 22-33). The latter is defined similarly as having "emerged from and [using] the advantages of both Craft production and Mass production" (Womack, Jones, & Roos, 1990, p. 13), using broadly skilled workers performing several jobs in the creation of the product in response to the need for the production, with just-in-time (JIT) supplies of raw materials and parts. Yet nowhere do I identify what I'm really looking for.
At first, I thought I needed to identify a taxonomy of production, somewhat like Bloom's taxonomy of cognitive learning. But then I remembered that Bloom's taxonomy is hierarchical. I'm not looking at a hierarchy of development here; I'm looking for specific, individual characteristics by which the actions of instructional developers in the Development phase of ADDIE* can be recognized, categorized into one of the three production types, quantified, and then qualified via production metrics as most efficient and effective. That may be a stretch, given that my population has been initially defined as Defense contractors building courseware for the military, and my sample as one office comprising one prime contractor and three sub-contractors working on four separate tracks of training for a single aircraft platform. Additionally, the development was subject matter expert-driven, both by the customer and by the production shop. My sense was that most of them had little background in instruction, let alone instructional design or even development.
Some ideas I've been batting around in my mind include the following: stepping away from the mixed-method approach I've been pursuing (toward what, I'm not sure); stepping away from the case study and broadening both my population and my sample; or focusing on the classification scheme itself (if a taxonomy can be non-hierarchical, then perhaps this would still count as a taxonomy). This is where I seem to be stuck at the moment. If anyone is out there reading this, I'd sure appreciate some feedback.
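To make that last idea a bit more concrete, here is a minimal sketch of what a flat, non-hierarchical classification scheme might look like. Everything in it is hypothetical: the indicator lists and the coded observations are placeholders made up for illustration, not anything from the actual proposal. Each production type maps to a set of observable developer actions, and coded observations from a shop get tallied against those sets.

```python
# A minimal, hypothetical sketch of a non-hierarchical classification scheme:
# each production type maps to a flat set of observable indicators, and coded
# observations of developer actions are tallied against those sets.
# The indicator lists and observations below are illustrative placeholders,
# not codes taken from the actual study.
from collections import Counter

PRODUCTION_TYPES = {
    "craft": {"one-off storyboard", "hand-built media", "single multi-skilled developer"},
    "mass":  {"template reuse", "narrow task specialization", "large content backlog"},
    "lean":  {"just-in-time SME input", "small batch releases", "cross-trained team"},
}

def classify(observed_actions: set[str]) -> Counter:
    """Count how many of each production type's indicators appear in one shop's observed actions."""
    return Counter({
        ptype: len(indicators & observed_actions)
        for ptype, indicators in PRODUCTION_TYPES.items()
    })

# Hypothetical coded observations from one development shop:
observations = {"template reuse", "cross-trained team", "just-in-time SME input"}
print(classify(observations).most_common())
# -> [('lean', 2), ('mass', 1), ('craft', 0)]
```

Whether a flat scheme like that still deserves to be called a taxonomy is debatable, but it at least forces me to name the characteristics I keep saying I haven't stipulated.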
*ADDIE--the acronym for "Analyze-Design-Develop-Implement-Evaluate," the process most commonly used in the development of military and other training.
Reference:
Womack, J. P., Jones, D. T., & Roos, D. (1990). The machine that changed the world: The story of lean production. New York: Rawson Associates.
Errata...
Mea culpa. I erroneously stated in my previous post that a case study would not satisfy a qualitative study. Actually, the opposite is true: a case study will satisfy only a qualitative study. Thanks to Traici Sexton for sending me a message with helpful information.
Monday, August 19, 2013
Transmogrification is imminent.
Yes! Change is coming! Just wait a little while and it will get here. Once again, nothing was broken, so Capella decided to fix it. We have new forms and new ways of doing things. However, the clarification will help (I think!).
It appears that I have erroneously combined a mixed-method approach (quantitative and qualitative data) with a Case Study. According to the new parameters established by someone in the IT department, Case Studies will not satisfy a qualitative study. So, what I plan to do is remove references to my former office (I no longer work there, and that was always the main contention anyway) and make the population industry-wide. My former connections may provide links to the sample, but using them exclusively would not provide the randomness necessary to do bona fide research, nor would it pass the IRB.
Anyway, I gotta get on it.
Wednesday, January 25, 2012
SMR v1.0 Personal Review
The following are notes that I wrote down as I was reviewing my own SMR form.
There is a disconnect in the SMR. I keep slipping into teaching and learning because I’m still thinking like a practitioner and leaning toward Training and Performance Improvement (T&PI).
My Title is a good one – “Impact of a Constructivist Approach to Software Training Design.” I’m focused on the design of training for software and asking how a constructivist approach will affect that design.
My Research Topic shifts a little in that it says I will study the effect that results when constructivist design elements are applied to training for productivity software. This could be approached in three ways: (1) the impact on the process, (2) the impact on the product, and (3) the impact on the learner. The last one could again pull me away from design issues, but mostly only if it were the sole focus.
My Research Problem continues the slide into T&PI. Although I am comparing current design using behavior modeling to my proposed design using constructivist techniques, the focus is on the learner’s proficiency. Not gooder!
Research Purpose – More with the learner’s proficiency! Reigeluth (1999) says that instructional-design theory identifies methods of instruction and the situations in which those methods should and should not be used. Further, in all instructional-design theories, the methods of instruction can be broken down into more detailed component methods which provide more guidance to educators.
The Research Question is totally bogus. The focus should be “design,” not “user proficiency.”
The Literature Review seems less focused on user proficiency and more on design.
The Need for the study is again borderline. As long as I am using a descriptive approach to research and not actually testing learners, I should be okay.
The Methodology again needs to be cleaned up a little with regard to how much of a product will be developed. Rather than willy-nilly applying a whole plethora of constructivist techniques to the whole training system simultaneously to see if the students become more proficient (T&PI), it would be more design-oriented to break the various techniques up and apply them separately: to the whole lesson (a mammoth undertaking), to different sections of the lesson (still elephantine), to different parts of the same section of a lesson (now we're getting somewhere), or even as different approaches to the same frame of information from a section (now we're down to bite-sized!). Comparison could be made within the construction process, the appearance of the product, and/or Kirkpatrick Level 1 reviews by potential users of the training or by those familiar with the topic.
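To visualize that one-technique-per-small-unit idea, here is a minimal sketch. The technique names, lesson units, and reviewer ratings are all made up for illustration (nothing here comes from the actual SMR): each constructivist technique is applied to a single frame or section, and the Kirkpatrick Level 1 reaction ratings are averaged per technique.

```python
# A minimal, hypothetical sketch of the one-technique-per-small-unit comparison:
# each constructivist technique is applied to exactly one frame or section,
# and Kirkpatrick Level 1 (reaction) ratings from reviewers are averaged per
# technique. All names and ratings below are illustrative placeholders.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TreatedUnit:
    unit: str                     # a single frame or section of the lesson
    technique: str                # constructivist technique applied to that unit
    level1_ratings: list[float]   # reviewer reaction ratings on a 1-5 scale

units = [
    TreatedUnit("Section 2, Frame 4", "scaffolded problem scenario", [4.0, 4.5, 3.5]),
    TreatedUnit("Section 2, Frame 7", "learner-generated example",   [3.0, 3.5, 4.0]),
    TreatedUnit("Section 3, Frame 1", "authentic task simulation",   [4.5, 5.0, 4.0]),
]

for u in units:
    print(f"{u.technique:30} on {u.unit}: mean Level 1 = {mean(u.level1_ratings):.2f}")
```

The point of confining each technique to one small unit is that any difference in the Level 1 reactions can be traced back to that technique, rather than to an entire redesigned lesson.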
REFERENCE
Reigeluth, C. M. (1999). What is instructional-design theory and how is it changing? In C. M. Reigeluth (Ed.), Instructional-design theories and models, Vol. II (pp. 5-29). Mahwah, NJ: Erlbaum.