Friday, March 22, 2013

Evaluating Distance Learning Flashback flashback flashback.....
We tend to focus on the future of education without looking back, eagerly seeking predictions of growth and change without revisiting the accuracy of previous predictions.  Here is an excerpt from Wisher & Champagne (2000) with predictions and the state of the distance learning industry as gathered in late 1997.  The full article is "Distance learning and training: An evaluation perspective".  Many of the findings are still relevant to this day - see Embedded Assessment Series #10.

How far we have come in terms of programs, students, and technology in 15 years!

From Wisher & Champagne: ... On another front, Web-based training for the information technology workforce will grow from $92 million in 1996 to $1.7 billion in 2000, a growth of over 1,800%, with an emphasis on Intranet-based, asynchronous, self-paced instruction (Web Week, Sep. 8, 1997).  Internet tools emerging in the training marketplace... include internet relay chat, multi-user dimensions, and multi-user simulation environments (Kouki & Wright, 1996).  Phillips (1998) summarizes recent trends in the distance learning marketplace:
  • Number of students taking distance learning courses from higher-education institutions: 7,000,000
  • Number of accredited degree and certificate distance learning programs: 1,200
  • Number of accredited distance learning colleges: 900
  • Percentage of corporate training delivered online in 1997: 16%
  • Percentage of corporate training estimated to be delivered online in 2000: 28%


Friday, February 8, 2013

The 100% Response Rate Myth (part 1)


A question posed by an attendee of last week’s webinar prompted this response, and it is worth repeating:

The Chair of the Physics Department of a large research university was not convinced of the benefits of moving to a web-based course evaluation system.  He said, “We get a 100% response rate using paper and pencil and I don’t want to disturb that”.  I replied that I understood he did not want to make changes, but I asked for a favor: after the evaluations were collected, scanned, and reported at the end of the term, examine the final response rate and then deduct those student responses where the comment boxes were left blank or consisted of just a one-word or vague comment such as “none” or “all was ok”.

To his great credit, I received a call some months later.  “62%”, he said.  “Only 62% handed in the forms and wrote at least one useful comment about their instructor or the course.  That number would be even lower if I excluded those students who mostly filled in all the N/A choices or circled all 5s without apparently putting in much thought.”
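For readers who want to run the same exercise on their own data, here is a minimal sketch of the recount in Python. It assumes evaluations are exported as records with a free-text comment field; the field name and the list of throwaway phrases are illustrative assumptions, not part of the Chair's actual procedure.

# Recount of "useful" responses, in the spirit of the Chair's exercise.
# Assumes each submitted evaluation is a dict with a free-text "comment" field;
# the field name and the list of stock phrases below are illustrative only.

VAGUE = {"", "none", "n/a", "all was ok", "ok", "good"}

def useful_response_rate(evaluations, enrolled):
    """Share of enrolled students whose evaluation included a substantive comment."""
    useful = 0
    for ev in evaluations:
        comment = ev.get("comment", "").strip().lower()
        # Drop blank comments and one-word or stock-phrase comments.
        if comment not in VAGUE and len(comment.split()) > 1:
            useful += 1
    return useful / enrolled if enrolled else 0.0

# Example: 100 students enrolled, 100 forms returned, but only 62 carry a real
# comment -> useful_response_rate(...) returns 0.62, not the "mythical 100%".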

Nearly 15 years ago, my colleague Dr. Robert Wisher and I established that students provided 400% more comments about their instructor and course when using a computer than when using paper.  Blind raters also judged the online comments as more honest, specific, and informative than the paper-based comments.  This held across various learning environments and types of students, including military training, corporate training, and college classrooms (Champagne & Wisher, 1999, 2000, 2001).  This finding was replicated many times, with each comparison showing between 200% and 700% more comments in favor of web-based evaluation (as summarized by Donovan, 2007).

A more recent examination of 336,000 student responses across 80 campuses over a one-year period yielded similar results for mobile devices.  Students submitting course evaluations on a tablet provided nearly 300% more comments than those using paper, and students using mobile phones provided 250% more comments than paper (Champagne, 2012).  Allowing students to complete course evaluations by computer, cell phone, and tablet generated response rates of 72% to 86% per term across this sample.

Have you heard administrators at your institution speak of the mythical 100%?  Has your institution closely examined the ratings and comments submitted by students to determine if 100% were useful for improving the course or instructor delivery?  Do you have stories to tell from your experience?  Please share here or write me at matt@DocChampagne.com.

Tuesday, December 4, 2012

2012 National Survey discussion - Week #2



Today’s Question

From the National Survey: A Deputy Chief Academic Officer asks: “How do academic health science centers with widely diverse professional and undergraduate programs gain meaningful evaluation reports of faculty and courses, which often utilize 10-20 faculty to teach each course?”

One practice that has had success involved frequent (weekly) evaluations with a small common core of items, a picture of each instructor, and the average rating for each item listed in a table for comparison.  That is, students visited the same URL each week, typed in a unique code, and were shown the names and pictures of the participating faculty for that week.  Students answered seven instructor-related items and one comment box (of the "best/worst" variety).  Instructors who taught multiple weeks were evaluated each week.  Chairs and Deans viewed an at-a-glance web display of mean ratings for each item and instructor, chronologically and alphabetically, and could click on a "comments" icon to read the verbatims.  This helped them determine, while the course was still in session, which instructors were particularly effective or ineffective based on student perceptions, with the comments helping to interpret those ratings.  Most important to decision makers was whether individual instructors who received lower ratings in their early exposure to students then received higher ratings near the end of the term (i.e., were instructors using student feedback to address barriers to understanding?).  Results were shared with instructors each week, and conversations about the feedback were held between instructors and their Chair.
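As a rough illustration of the reporting logic behind that display, here is a short Python sketch: mean ratings per instructor per week, plus a simple early-versus-late comparison to flag whether an instructor's ratings improved over the term. The data layout (instructor, week, rating) and the function names are my assumptions for illustration, not the system described above.

# Weekly mean ratings per instructor, plus an early-vs-late improvement check.
from collections import defaultdict
from statistics import mean

def weekly_means(responses):
    """responses: iterable of (instructor, week, rating) tuples."""
    buckets = defaultdict(list)
    for instructor, week, rating in responses:
        buckets[(instructor, week)].append(rating)
    # One mean rating per (instructor, week) cell of the comparison table.
    return {key: mean(vals) for key, vals in buckets.items()}

def improvement(weekly, instructor):
    """Difference between an instructor's latest and earliest weekly mean rating."""
    weeks = sorted(w for (i, w) in weekly if i == instructor)
    if len(weeks) < 2:
        return None  # taught only one week; nothing to compare
    return weekly[(instructor, weeks[-1])] - weekly[(instructor, weeks[0])]

A positive improvement value would correspond to the pattern decision makers cared most about: lower early ratings followed by higher ratings near the end of the term.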

This approach may be useful for other programs.  What methods and processes has your institution put in place to address this question?


Today’s Concern

From the National Survey: A respondent writes: “Students are inundated with surveys and program evaluations. Students, especially in engineering and science, have near-overwhelming workloads. Requests to complete surveys are not welcome.”  Another wrote: "My main concern is evaluation fatigue for the students and how to keep them interested in completing the surveys." Other respondents wrote of similar concerns.

What do you do in your department, school, or institution to address this issue?  Combine surveys for several purposes into one instrument?  Coordinate among programs so as to spread out the delivery of surveys?  Illustrate to students how feedback is used to encourage their participation? Please share your answers this week. 

Monday, November 5, 2012

2012 National Survey discussion - Week #1


Today’s Answer

From the National Survey: A Planning & Assessment Coordinator from a Technical College asks: “What is the one best question to ask of all students?”

I am bending the rules a bit here, but since the respondent did not specify whether the question should be fixed-choice or open-ended, I’ll suggest one of each, both “best” for different purposes:

1. "Would you recommend this instructor to your friends or colleagues?” (Alternately: “Would you recommend this COURSE to your friends or colleagues?”)
I prefer a 4-point forced-choice scale (Definitely Yes, Probably Yes, Probably No, Definitely No), but you could also use the 11-point Net Promoter Score format.  If the latter, be sure to preserve the actual numeric responses so you can compute a mean rating in addition to the usual 3 categories (promoters, detractors, passives).  The mean rating will provide much richer information; see the short sketch after item 2 below.

    2. "What is the one question you did not get the opportunity to ask of your instructor?
This question never fails to yield interesting material.  We use it on the “one minute survey”, which is given multiple times each term.  It is most effective if instructors can respond to these questions while the class is still in session, to assure students that their feedback was heard.
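On the Net Promoter Score point in item 1, here is a small Python illustration of why preserving the raw 0-10 responses matters: from the same data you can report both the usual NPS categories and a mean rating. The function name is mine; the category cutoffs (9-10 promoter, 0-6 detractor) follow the standard NPS convention.

from statistics import mean

def nps_summary(scores):
    """scores: iterable of integer responses on the 0-10 NPS scale."""
    scores = list(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    passives = len(scores) - promoters - detractors
    nps = 100 * (promoters - detractors) / len(scores)
    return {
        "promoters": promoters,
        "passives": passives,
        "detractors": detractors,
        "nps": nps,
        "mean": mean(scores),  # the richer statistic, available only if raw responses are kept
    }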



Today’s Question

From the National Survey: A Chief Academic Officer from a private college in the Northeast asks: “Do you involve faculty in the ongoing administration/supervision/oversight of your course evaluation system?”

In discussions with colleges, it is usually assumed that this IS going on: that faculty distribute and promote the evaluations, remind students, and have some level of control over the process.  But is that the case at your institution?  Aside from the ubiquitous situation where a handful of instructors tune out and do not participate, are your faculty fully involved, or does the evaluation process occur outside of the classroom and outside of instructor oversight?  I have seen several unfortunate scenarios where schools drive students to the school portal, where they have to click through several layers to arrive at the evaluation form.  Faculty become removed from the process, resulting in lower response rates and less meaningful student comments.



Today’s Concern

From the National Survey: A respondent writes (paraphrased):  “Some faculty give class-wide incentives to complete the online evaluation forms (e.g., if 70% of students fill it out, all students get bonus class credit).  Other faculty see this as a “bribe”.  Our university policy is silent on this issue, although I know other institutions forbid it...”

What do you think of the practice of anonymously tracking student response rate and then rewarding the class as a whole for levels of participation in the evaluation process?  Is this practice welcomed or prohibited at your school?  Share your answers this week. 


Thursday, November 1, 2012

The 2012 Survey of Course Evaluation in Higher Education



The 2012 Survey of Course Evaluation in Higher Education has now ended and the results are enlightening and surprising.  Deans and Directors from 280 colleges and universities participated, answering questions about the content, costs, procedures, and reporting used in institution-wide course evaluation.  Open-ended questions yielded hundreds of ideas and concerns about response rates, mobile technology, incentives, and faculty participation.

When asked, “What one question about course evaluation practices would you pose to your colleagues?”, 39 of the 137 unique answers centered around response rate (e.g., “What are the best ways to increase response rate for electronic evaluations?”) and 13 of the answers involved incentives (e.g., “What incentives do you offer students to complete an online course evaluation?”).

When asked “What concerns do you and/or your colleagues have about the evaluation process in place at your institution?”, 92 of the 191 answers centered around response rate.

Clearly, participation in web-based course evaluation is the #1, #2, and #3 concern for college administrators.  I will continue to post tried-and-true solutions to this problem on both the blog and the Most FAQ page.  Each week I’ll also post both a Question and a Concern submitted by a College Dean or Director to facilitate discussion on these topics.  I hope you will join in with answers based on your own experience and practices at your institution.