
Friday, February 8, 2013

The 100% Response Rate Myth (part 1)


A question posed by an attendee of last week’s webinar prompted this response, and it is worth repeating:

The Chair of the Physics Department of a large research university was not convinced of the benefits of moving to a web-based course evaluation system. He said, “We get a 100% response rate using paper and pencil, and I don’t want to disturb that.” I replied that I understood he did not want to make changes, but I asked a favor: after the evaluations are collected, scanned, and reported at the end of the term, examine the final response rate and then deduct those student responses where the comment box was left blank or contained only a one-word or vague comment such as “none” or “all was ok”.

To his great credit, I received a call some months later. “62%”, he said. “Only 62% handed in the forms and wrote at least one useful comment about their instructor or the course. That number would be even lower if I excluded those students who mostly filled in all the N/A choices or circled all 5s without apparently putting much thought into it.”
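For readers who want to run the same check on their own data, here is a minimal sketch, in Python with hypothetical field names, of the calculation the Chair did by hand: count only submissions whose comment box contains something more than a blank or a vague one-word reply, then divide by enrollment.

# Minimal sketch (hypothetical field names), not a prescribed method:
# keep only submissions with a substantive comment, then divide by enrollment.

VAGUE_COMMENTS = {"", "none", "n/a", "ok", "all was ok"}

def useful_response_rate(responses, enrolled):
    """responses: list of dicts with a 'comment' field; enrolled: class size."""
    if not enrolled:
        return 0.0
    useful = sum(
        1 for r in responses
        if r.get("comment", "").strip().lower() not in VAGUE_COMMENTS
    )
    return useful / enrolled

# 100 students enrolled, 100 forms returned, but only 62 substantive comments
# gives 0.62 rather than the nominal 1.00 "response rate".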

Nearly 15 years ago, my colleague Dr. Robert Wisher and I established that students provided 400% more comments about their instructor and course when using a computer than when using paper. Blind raters also judged the online comments as more honest, specific, and informative than the paper-based comments. This held across a variety of learning environments and types of students, including military training, corporate training, and college classrooms (Champagne & Wisher, 1999, 2000, 2001). The finding has been replicated many times, with each comparison showing between 200% and 700% more comments in favor of web-based evaluation (as summarized by Donovan, 2007).

A more recent examination of 336,000 student responses across 80 campuses over a one-year period yielded similar results for mobile devices. Students submitting course evaluations on a tablet provided nearly 300% more comments than on paper, and students using mobile phones provided 250% more comments than on paper (Champagne, 2012). Allowing students to complete course evaluations by computer, cell phone, and tablet generated response rates of 72% to 86% per term across this sample.

Have you heard administrators at your institution speak of the mythical 100%?  Has your institution closely examined the ratings and comments submitted by students to determine if 100% were useful for improving the course or instructor delivery?  Do you have stories to tell from your experience?  Please share here or write me at matt@DocChampagne.com.

Tuesday, December 4, 2012

2012 National Survey discussion - Week #2



Today’s Question

From the National Survey: A Deputy Chief Academic Officer asks: “How do academic health science centers with widely diverse professional and undergraduate programs gain meaningful evaluation reports of faculty and courses, which often use 10-20 faculty to teach each course?”

One practice that has had success involved frequent (weekly) evaluations with a small common core of items, a picture of each instructor, and the average rating for each item listed in a table for comparison. That is, students visited the same URL each week, typed in a unique code, and saw the names and pictures of the faculty participating that week. Students rated seven instructor-related items and completed one comment box (of the “best/worst” variety). Instructors who taught in multiple weeks were evaluated each week.

Chairs and Deans viewed an at-a-glance web display of mean ratings for each item and instructor, chronologically and alphabetically, and could click a “comments” icon for the verbatims. This helped them determine, while the course was still in session, which instructors were particularly effective or ineffective based on student perceptions, with the comments helping to interpret those ratings. Most important to decision makers was whether individual instructors who received lower ratings in their early exposure to students then received higher ratings near the end of the term (i.e., whether instructors were using student feedback to address barriers to understanding). Results were shared with instructors each week, and conversations about the feedback were held between instructors and their Chair.
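As an illustration only, and not a description of the actual system, here is a rough sketch of the kind of weekly roll-up such a display could be built on: mean ratings per week, instructor, and item, plus an early-versus-late-term comparison. The data shapes and function names below are hypothetical.

from collections import defaultdict
from statistics import mean

# Rough sketch (hypothetical data shapes and names) of the weekly roll-up
# described above: mean rating per week, instructor, and item, plus an
# early-versus-late-term comparison for one instructor and item.

def weekly_means(responses):
    """responses: iterable of dicts such as
    {'week': 3, 'instructor': 'Dr. A', 'item': 'Clarity', 'rating': 4}."""
    buckets = defaultdict(list)
    for r in responses:
        buckets[(r["week"], r["instructor"], r["item"])].append(r["rating"])
    return {key: mean(vals) for key, vals in buckets.items()}

def early_vs_late(weekly, instructor, item, midpoint_week):
    """Average of the weekly means before and after midpoint_week."""
    early = [m for (wk, ins, it), m in weekly.items()
             if ins == instructor and it == item and wk <= midpoint_week]
    late = [m for (wk, ins, it), m in weekly.items()
            if ins == instructor and it == item and wk > midpoint_week]
    return (mean(early) if early else None, mean(late) if late else None)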

This approach may be useful for other programs.  What methods and processes has your institution put in place to address this question?


Today’s Concern

From the National Survey: A respondent writes: “Students are inundated with surveys and program evaluations. Students, especially in engineering and science, have near-overwhelming workloads. Requests to complete surveys are not welcome.” Another wrote: “My main concern is evaluation fatigue for the students and how to keep them interested in completing the surveys.” Other respondents wrote of similar concerns.

What do you do in your department, school, or institution to address this issue? Combine surveys that serve several purposes into one instrument? Coordinate among programs to spread out the delivery of surveys? Show students how their feedback is used, to encourage their participation? Please share your answers this week.