Topic 10: Developing a Project Management Plan
Instructor’s Notes

 

"Research is formalized curiosity. It is poking and prying with a purpose."
—Zora Neale Hurston, Dust Tracks on a Road

Prior to publishing a course, two basic types of testing are recommended:

1) Quality assurance

2) Formative evaluation, also known as usability studies

Quality assurance testing is done in the final stages of your project and is designed to catch technical problems and typos.

Formative evaluation helps you find out whether you are achieving your goals and objectives during the formative stages of your project, instead of waiting until you have finished.

Formative evaluation provides:

  • A chance to test your ideas and implement changes before completing the entire course (i.e., a way to avoid finding out it could have been done more effectively if only you had known...)
  • A way to test both the form and content of your course

The American Dental Association describes formative evaluation as:

Formative evaluation is an important method to assure that the developed software meets requirements on several levels, such as usability, functionality, and instructional effectiveness. Formative evaluation allows developers to assess the program during development. For instance, developers may elect to test an early version of the program with users to identify weaknesses in the user interface or other areas. http://www.ada.org/prof/prac/stands/index.html

An important part of formative evaluation for online courses is ‘usability studies.’ In "Don't Make Me Think: A Common Sense Approach to Web Usability," Steve Krug explains the importance of user testing:

"If you want a great site, you've got to test. After you've worked on a site for even a few weeks, you can't see it freshly anymore. You know too much. The only way to find out if it really works is to test it.

Testing reminds you that not everyone thinks the way you do, knows what you know, uses the Web the way you do."

Ease of Use

Three questions developed by Robert Mager in "Making Instruction Work" help define the key issues formative evaluation should address:

  • Does your course accomplish what it is supposed to?
  • Is it of value?
  • Does it impose minimum obstacles between the student and the learning?

 

Additional ease-of-use testing checks that the interface doesn’t get in the way of the learning:

  • Are students able to understand instructions?
  • Are students able to navigate through the course?
  • Is it obvious where they are supposed to click first?
  • Are the headings and buttons clearly labeled?
  • Is it visually appealing and easy to read?

Evaluating the Content

One of the most powerful reasons to conduct formative evaluation is to determine whether your message is clear.

In designing your plan for conducting formative evaluation, strive to develop test methods that will answer the following questions:

  • Did I achieve my goals and objectives?
  • What were the obstacles?
  • Did the students benefit in the way I intended?
  • What were the unintended consequences?

Unintended Consequences

"The true worth of a researcher lies in pursuing what he did not seek in his experiment as well as what he sought."

–Claude Bernard (1813–1878), French physiologist

During formative evaluation, stay alert for unintended consequences, which can be more powerful than the intended ones. Unintended consequences arise when you intend to provide one service or message and users interpret and use it in ways you didn’t anticipate.

Examples of unintended consequences

1) During the early days of anti-drug films, young people learned new and improved ways to use drugs by watching movies that were, ironically, designed to discourage drug use.

2) Under the Americans with Disabilities Act, television broadcasters are required to provide closed-captioned programming to make broadcasts accessible to people who are deaf or hearing impaired. As it turned out, the appeal of closed captioning goes far beyond its intended use: it is also enjoyed by people learning to read, people learning a second language, and people who want to watch TV without disturbing others in places like restaurants, bars, and bedrooms.

The unintended consequences often have more significance than one might think.

When to Test

Formative evaluation should be conducted at every stage of development to ensure the course is achieving your goals and objectives. The earlier you begin formative evaluation, the more likely you are to end up with an effective course. Dr. Gerald Lesser, of the Harvard Graduate School of Education, recommends allocating ten to twenty percent of your project budget to formative evaluation.

What a Test Plan Should Include

  • Define what you are testing for. Asking students to perform specific tasks can help you determine the effectiveness of the instruction. For example, ask the student to find a particular piece of information or respond to a particular question.
  • Define when and how long each testing session will be.
  • Define where you’ll test and on what equipment. What type of computers will you use? What type of Internet connection and browser will you select?
  • Define your target audience members, including how you plan to select them. Who you select as your group of testers is important; even within a narrow target audience, people differ greatly. Ideally, you should test people who offer diversity in terms of age, experience, geographical location, educational level, gender, income, race, and ethnicity.
  • Define your testing methods (a sketch of one way to record these decisions follows this list).
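
One way to make these planning decisions concrete is to record them in a single structured form before testing begins. The sketch below is written in Python purely as an illustration; the class name, field names, and sample values are assumptions made for this example and are not prescribed anywhere in these notes.

    # A sketch of a formative-evaluation test plan recorded as structured data.
    # The class name, field names, and sample values are assumptions made for
    # this illustration; they are not part of the course notes.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestPlan:
        purpose: str                 # what you are testing for
        tasks: List[str]             # tasks students will be asked to perform
        session_length_minutes: int  # how long each testing session will be
        schedule: List[str]          # when each round of testing happens
        location: str                # where you'll test
        equipment: str               # computers, connection, and browser
        participants: str            # target audience and how they are selected
        methods: List[str]           # testing methods you plan to use

    plan = TestPlan(
        purpose="Check that the unit's instructions are clear and navigable",
        tasks=["Find the grading policy", "Answer the question at the end of the unit"],
        session_length_minutes=45,
        schedule=["Week 2: storyboard walkthrough", "Week 6: pilot module online"],
        location="Campus computer lab",
        equipment="Lab PCs on both dial-up and broadband, two common browsers",
        participants="A mix of ages, experience levels, and backgrounds from the target audience",
        methods=["observation", "pre- and post-test", "survey"],
    )

    print(plan.purpose)

Writing the plan down in one place like this makes it easier to compare what was planned with what actually happened in each testing session.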

Examples of commonly used testing methods:

  • Asynchronous discussion–via listserv or bulletin boards.
  • Focus groups–Have several people test the product, then observe their conversation about it. Begin by giving them a set of general questions, then observe and write down their behavior and comments. If they get off track you can redirect the conversation, but your input should be minimal.
  • Observation–This is usually done by observing two or more people using the product. Record where they click, how long things take to finish, at what point they ask questions, etc. Ask questions upon completion.
  • Online chat interview–Conduct one-on-one or group discussions for direct feedback.
  • Pre- and post-tests–Test people before and after viewing a unit of instruction to measure strengths and weaknesses, as well as whether the instruction is necessary. For example: can the person write HTML tags before the class? Can they write them after the class? What were the problem areas? (A scoring sketch follows this list.)
  • Surveys–Users can complete a survey to rate the instruction. Surveys are useful only if you are going to do mass testing of the product and are looking for trends (e.g., in the appeal of the course, the graphic design, or first impressions). In general, people do not give serious thought to answering survey questions.
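
For pre- and post-tests, the comparison itself is simple arithmetic: score each person before and after the unit and look at the change. The Python sketch below uses invented student names and scores purely for illustration; it is one possible way to tabulate results, not a procedure taken from these notes.

    # Compare pre- and post-test scores to see who gained and by how much.
    # All names and numbers here are invented for the example.
    pre_scores = {"student_a": 40, "student_b": 55, "student_c": 70}
    post_scores = {"student_a": 75, "student_b": 80, "student_c": 72}

    gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
    average_gain = sum(gains.values()) / len(gains)

    print("Individual gains:", gains)
    print("Average gain:", round(average_gain, 1))

    # A high pre-test score (student_c here) suggests the instruction may not be
    # necessary for that learner; a small gain despite a low pre-test score
    # points to a problem area in the instruction itself.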

 

Resources

Designing & Conducting Formative Evaluation
A PowerPoint presentation summarizing the Dick and Carey model of formative evaluation.
http://www.courses.psu.edu/trdev/trdev518_bow100/D_C10present/

Examples of Formative Evaluation Reports

These reports are much broader in scope than the reports required for this course, but they provide examples of why people have conducted formative evaluation and how they write up their results.

A Formative Evaluation of Distance Education: Experiences of Learners and Instructors
http://node.on.ca/tfl/notes/herbeson.html

Formative Evaluation of a Multimedia Program (pdf file)
http://www.educ.sfu.ca/narstsite/conference/97conference/maor.pdf

Formative Evaluation for an Innovative Course in Social Work (pdf file)
http://www.arcaf.net/social_work_proceedings/ftp_files7/Hick.pdf

 

Developing an Effective Online Class
© Valerie Landau, 2001. All Rights Reserved