Tuesday, December 4, 2012

2012 National Survey discussion - Week #2



Today’s Question

From the National Survey: A Deputy Chief Academic Officer asks: “How do academic health science centers with widely diverse professional and undergraduate programs gain meaningful evaluation reports of faculty and courses, many of which utilize 10-20 faculty to teach each course?”

One practice that has had success involves frequent (weekly) evaluations built on a small common core of items, displaying a picture of each instructor and the average rating for each item in a table for comparison.  That is, students visited the same URL each week, typed in a unique code, and saw the names and pictures of the participating faculty for that week.  Students submitted ratings on seven instructor-related items plus one comment box (of the "best/worst" variety).  Instructors who taught multiple weeks were evaluated each week.

Chairs and Deans viewed an at-a-glance web display of mean ratings for each item and instructor, ordered chronologically and alphabetically, and could click a "comments" icon for verbatims.  This helped them determine, while the course was still in session, which instructors were particularly effective or ineffective based on student perceptions, with the comments helping to interpret those ratings.  Most important to decision makers was whether individual instructors who received lower ratings in their early exposure to students then received higher ratings near the end of the term (i.e., were instructors using student feedback to address barriers to understanding?).  Results were shared with instructors each week, and conversations about the feedback were held between instructors and their Chair.
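
For illustration only, here is a minimal sketch of how the mean ratings behind such an at-a-glance display could be tallied, assuming each submission is stored as a (week, instructor, item, score) record.  The field names, instructor names, and sample scores are invented; they are not details of the system described above.

    from collections import defaultdict

    # Each record is (week, instructor, item, score) on a numeric scale.
    # The rows below are invented sample data, not actual ratings.
    ratings = [
        (1, "Dr. Adams", "clarity", 4),
        (1, "Dr. Adams", "pacing", 3),
        (1, "Dr. Baker", "clarity", 5),
        (2, "Dr. Adams", "clarity", 5),
    ]

    # Accumulate (sum, count) per (week, instructor, item) so mean
    # ratings can be tabulated for side-by-side comparison.
    totals = defaultdict(lambda: [0, 0])
    for week, instructor, item, score in ratings:
        cell = totals[(week, instructor, item)]
        cell[0] += score
        cell[1] += 1

    # Sorting the keys yields the chronological, then alphabetical,
    # ordering used in the at-a-glance display described above.
    for (week, instructor, item), (total, n) in sorted(totals.items()):
        print(f"week {week}  {instructor:<10} {item:<8} mean={total / n:.2f}")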

This approach may be useful for other programs.  What methods and processes has your institution put in place to address this question?


Today’s Concern

From the National Survey: A respondent writes: "Students are inundated with surveys and program evaluations. Students, especially in engineering and science, have near-overwhelming workloads. Requests to complete surveys are not welcome."  Another wrote: "My main concern is evaluation fatigue for the students and how to keep them interested in completing the surveys."  Other respondents wrote of similar concerns.

What do you do in your department, school, or institution to address this issue?  Combine surveys for several purposes into one instrument?  Coordinate among programs to spread out the delivery of surveys?  Show students how their feedback is used, to encourage participation?  Please share your answers this week.

Monday, November 5, 2012

2012 National Survey discussion - Week #1


Today’s Answer

From the National Survey: A Planning & Assessment Coordinator from a Technical College asks: “What is the one best question to ask of all students?”

Bending the rules a bit here, but since the respondent did not specify whether the question should be fixed-choice or open-ended, I’ll suggest one of each, both “best” for different purposes:

1. "Would you recommend this instructor to your friends or colleagues?” (Alternately: “Would you recommend this COURSE to your friends or colleagues?”)
I prefer a 4-point forced choice scale (Definitely Yes, Probably Yes, Probably No, Definitely No) but could also use the 11-point Net Promoter Score format.  If the latter, be sure to preserve the actual numeric responses so can have a mean rating in addition to the usual 3 categories (promoters, detractors, passives).  The mean rating will provide much richer information.

    2. "What is the one question you did not get the opportunity to ask of your instructor?
This question never fails to yield interesting material.  We use it on the “one minute survey”, which is given multiple times each term.  This is most effective if instructors can respond to these questions while the class is still in session to ensure students that their feedback was heard.
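
As promised under item 1, here is a minimal sketch of the Net Promoter Score arithmetic, assuming the raw 0-10 responses are preserved.  The sample responses are invented for illustration.

    # Standard NPS bucketing on the 11-point (0-10) scale: promoters
    # score 9-10, detractors 0-6, passives 7-8.  Sample data is invented.
    responses = [10, 9, 7, 6, 8, 10, 4, 9]

    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    passives = len(responses) - promoters - detractors

    # Preserving the raw numbers lets us report a mean alongside the
    # usual three categories -- the richer information argued for above.
    nps = 100 * (promoters - detractors) / len(responses)
    mean = sum(responses) / len(responses)

    print(f"promoters={promoters} passives={passives} detractors={detractors}")
    print(f"NPS={nps:.0f}  mean={mean:.2f}")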



Today’s Question

From the National Survey: A Chief Academic Officer from a private college in the Northeast asks: “Do you involve faculty in the ongoing administration/supervision/oversight of your course evaluation system?”

In discussions with colleges, it is usually assumed that this IS going on: that faculty are distributing and promoting the evaluations, reminding students, and retaining some level of control over the process.  But is that actually the case at your institution?  Aside from the ubiquitous situation where a handful of instructors tune out and do not participate, are your faculty fully involved, or does the evaluation process occur outside the classroom and outside of instructor oversight?  I have seen several unfortunate scenarios where schools drive students to the school portal and require them to click through several layers to arrive at the evaluation form.  Faculty become removed from the process, resulting in lower response rates and less meaningful student comments.



Today’s Concern

From the National Survey: A respondent writes (paraphrased):  “Some faculty give class-wide incentives to complete the online evaluation forms (e.g., if 70% of students fill it out, all students get bonus class credit).  Other faculty see this as a ‘bribe’.  Our university policy is silent on this issue, although I know other institutions forbid it...”

What do you think of the practice of anonymously tracking student response rate and then rewarding the class as a whole for levels of participation in the evaluation process?  Is this practice welcomed or prohibited at your school?  Share your answers this week. 
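
For those weighing the mechanics as well as the ethics, here is a minimal sketch of what anonymous participation tracking might look like.  The enrollment figure and running count are invented, and the 70% threshold simply mirrors the respondent's example; none of this reflects an actual policy.

    # Count submissions against enrollment without recording which
    # students responded.  All numbers here are illustrative assumptions.
    enrolled = 120
    submissions = 88  # incremented once per anonymous submission

    rate = submissions / enrolled
    if rate >= 0.70:
        print(f"Class earns the bonus credit ({rate:.0%} participation).")
    else:
        print(f"Threshold not yet met ({rate:.0%} participation).")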


Thursday, November 1, 2012

The 2012 Survey of Course Evaluation in Higher Education



The 2012 Survey of Course Evaluation in Higher Education has now ended, and the results are enlightening and surprising.  Deans and Directors from 280 colleges and universities participated, answering questions about the content, costs, procedures, and reporting used in institution-wide course evaluation.  Open-ended questions yielded hundreds of ideas and concerns about response rates, mobile technology, incentives, and faculty participation.

When asked, “What one question about course evaluation practices would you pose to your colleagues?”, 39 of the 137 unique answers centered on response rate (e.g., “What are the best ways to increase response rate for electronic evaluations?”) and 13 involved incentives (e.g., “What incentives do you offer students to complete an online course evaluation?”).

When asked, “What concerns do you and/or your colleagues have about the evaluation process in place at your institution?”, 92 of the 191 answers centered on response rate.

Clearly, participation in web-based course evaluation is the #1, #2, and #3 concern for college administrators.  I will continue to post tried-and-true solutions to this problem on both the blog and the Most FAQ page.  Each week I’ll also post both a Question and a Concern submitted by a College Dean or Director to facilitate discussion on these topics.  I hope you will join in with answers based on your own experience and practices at your institution.