Artefact 2: MDDE617 Evaluation Plan

Evaluation is a critical component of instructional design and program management, both of which are fundamental to my role as a Training Program Manager. This artefact is a plan I developed to assess the effectiveness of the instructional design diploma program at a fictitious university. It illustrates my growth in the areas of problem-solving, analysis, evaluation, and decision-making.

Overview

At the time I enrolled in MDDE617, I was working as an instructional designer for Rogers Communications. I was accountable for designing courseware: small modules that fit into a larger training program. I had never been accountable for designing an entire program, and while I was familiar with the use of formative and summative evaluations for learners, I was not even aware that programs required evaluation. Within the first week of the course, it was as if the blinders were removed! Through repeated readings of the materials and interaction with the professor, I came to see that evaluation could be conducted at every level of a training program: from as granular as verifying that a learner has understood a concept, to as broad as validating the effectiveness of an entire program.

As I learned about the nature of programs (succinctly summarized as ongoing, planned interventions), I started applying a systems lens to consider how the modules I was developing fit into the broader context of a training program.

Analysis of the Learning Process

One of the core requirements of the Evaluation Plan was to select an appropriate form of evaluation. I learned about the evaluation forms through Assignment 2 of MDDE617, in which I had to deduce which form had been used in an existing plan. (That assignment was a great example of experiential learning!)

Project Work and Attainment of Competencies

While building the evaluation plan, I had to choose the correct form of evaluation, along with the approaches best suited to the audience and their goals. My biggest point of confusion in learning the evaluation forms was this: why are there both forms and approaches? It took significant review of the course text to recognize that a “form” is really an overarching category of evaluation, that is, a grouping of multiple approaches that serve a specific goal (5.4). I wrapped my head around this concept by creating scenarios in which each form would be useful in my organization (see the table below).

Form: Example
Proactive: We are opening a call centre to support a new product, and we need to prepare the staff.
Clarificative: We have inherited an Employee Engagement program from another team and need to understand it better.
Interactive: We are piloting a leadership training program and would like to assess it for opportunities to improve before the pilot finishes.
Monitoring: Our established New Hire program is in flight at multiple contact centres, and we want to ensure it’s being delivered correctly.
Impact: We want to verify that the New Hire program is providing staff with the skills required to succeed in their roles.

To choose the correct form of evaluation for the program, I contrasted the program requirements with the five forms (1.5). Because the evaluand was a completed program, and the purpose was to understand the program’s effectiveness, it was apparent that an impact evaluation was required.

Throughout the MDDE617 course, we studied a variety of evaluation approaches, some of which were better suited to the impact evaluation form than others. Before selecting approaches, I considered the audiences for the evaluation and their individual needs (1.6). I came to realize that each audience wanted something different from the program:

  • University managers and administrators want a viable, financially sustainable program of high integrity
  • Students seek to become effective instructional designers
  • Faculty presumably want a program that is straightforward to teach
  • Support staff presumably want a program that is easy to maintain

By taking a systems view of the program, I was able to identify hidden audiences and the systems involved in the success (or potential lack thereof) of the program (2.2).

Given the varied wants of the different audiences, I determined that a variety of approaches would be required to evaluate the program, including objectives-based, needs-based, and goal-free approaches (1.7).

To ensure that the results would be taken seriously and that changes would be put into practice, I realized that I would need to take a Utilization-Focused Evaluation approach, which holds that when intended users participate in the evaluation process, they are more likely to adopt its findings (Patton, 2002, p. 224). I first became aware of Utilization-Focused Evaluation through the course readings; however, it was in my workplace that I began to recognize that the more I engaged peers and leaders on content, the more likely any of my project work was to succeed (5.8).

Inquiry Standards, Criteria, and Tools

After identifying the audiences of the evaluation and their stated desires, I reflected on their perspectives to deduce these key issues and questions:

  1. Is the program profitable and sustainable?
  2. Is this an instructionally sound, quality program?
  3. Does the program meet the needs of participants (and other stakeholders)?
  4. What are the unintended outcomes of the program? (1.3)

After crafting these questions, it was a logical next step to tease out standards, criteria, and methods. For each question, I determined a number of standards, and then crafted criteria to validate that each standard was met. Here is an example of one standard and its corresponding criteria:

Standard 1: The IDD program is profitable for Ithika University

Criterion 1: Program revenues exceed expenses

Criterion 2: Program registrations remain stable or grow within thresholds

Criterion 2b: A recruitment program exists to drive registration

Criterion 3: Cost centres remain in line with other distance education programs at Ithika University

I then created a grid that compared the methods outlined in the online course text and in Fitzpatrick’s book against each of the standards and criteria (5.3, 5.6).

[Image: artefact2-measurements.PNG (grid of evaluation methods mapped to standards and criteria)]

Finally, I drafted the questions for the assessment instruments by following the guidelines in the course guide. 

[Image: artefact2-assessments.PNG (draft questions for the assessment instruments)]

Post-Activity Reflection

In the years since taking this course, I have moved into a role as a Training Program Manager, and program evaluation has consequently become a prominent function of my job. I regularly use proactive evaluation when planning new programs, interactive evaluation when piloting programs, and impact evaluation when programs are complete.

Every training program that I create now includes at least some component of summative evaluation, and I incorporate formative program evaluation for high-value programs and those that run most frequently. I have also come to recognize that evaluation plans scale with the perceived importance of a program: because organizations have limited resources, the extent of evaluation for a given program tends to match the program’s perceived value and risk.

Some observations I have made in the real world since taking this course include:

  • System inquiry is essential to understanding program evaluation. Without understanding the working system, it is difficult, if not impossible, to make meaningful program recommendations on a micro level.
  • Program evaluation is necessary in order to validate the effectiveness of learning initiatives.
  • Organizations are eager to implement objectives-based evaluation; that said, more qualitative, goal-free evaluation can often paint a richer picture of a program.
  • The scope of your project determines the extent of your program evaluation activities.
  • Both formative and summative evaluation can benefit programs for different reasons.
  • Companies like to evaluate based on objectives; however, objectives must be carefully designed in order to drive the correct behaviour.

[Image: dt951113dhc0.gif (Dilbert comic strip)]

Image copyright Scott Adams. Available: http://www.dilbert.com/strip/1995-11-13

Comments

Connie Berkshire
11 March 2018, 10:52 AM

Hi David,

Very interesting artefact - wish I had taken this course!! I like how you linked in the application of systems thinking into your artefact description and reflection; nice way to unite learning from different courses and show application of learning and achievement of competencies. I especially like the post reflection section and how you unite the learning and competencies from this artefact to your work life (the comic would be a hit at my workplace :) ).

Connie

Sara Markin
12 March 2018, 10:40 AM

Hi, David,

I am really enjoying exploring your e-portfolio. Your Introduction page was appealing both visually and contextually. For this artefact in particular, I felt like I really understood this assignment and appreciated your real-world examples in your post-activity reflection. It's great to see how your current job applies things you learned in this course as well.

Susan Moisey
14 March 2018, 1:37 PM

David,

This is a good example of a critical reflection, but I have one suggestion. In the first part, you discussed the difficulty you had initially with regard to learning about the different forms of evaluation, but that you recognized the importance of each and its application. It's great that you are now in a position where evaluation is a large part of your job. It would be helpful if you talked a bit about the form of evaluation that takes place in your workplace, so that your learning from the course is more tightly connected to the evaluations you conduct in your workplace.

Susan

David Manning
19 March 2018, 9:28 PM

Thank you for the feedback. I have attempted to integrate this in my post-activity reflection. 
