Evidence of Teaching Excellence (narrative)

I take tremendous pride in my work to this point as a “teacher.”  It should be noted that in many instances I taught classes that had never been taught before. That is, given my particular expertise, I was given opportunities to develop new, relevant, meaningful courses (see the Course and Program Development page). This is a tremendous opportunity for any scholar, but it also adds the burdens of building a course from scratch and working through pedagogical decisions for the first time. It would not be unexpected for a first-time course to be a bit bumpy and for student reviews to be mixed.

However, the end-of-semester course and teacher ratings I have received from my students have been consistently positive, and those data also show clear evidence of pedagogical improvement.  All of the data from course and teacher evaluations are displayed on the Teaching Evaluations page. A few notes about those data:

  • The ratings system used at Hofstra (see description that immediately follows) is different from the system used at VCU, so the data are hard to compare.
  • In the last few years, the course evaluations at VCU changed and moved online. As a result, more data are readily available and some have questioned whether students respond qualitatively differently to an online rating system.
  • For courses or modules that are part of the Ed.D. program in the VCU Educational Leadership Department, we use a more extensive evaluation system administered through SurveyMonkey. There are some overlapping questions, and where possible, those results have been presented on the Teaching Evaluations page.

Hofstra University

At Hofstra University, for each course in which six or more students are enrolled, the Course and Teacher Ratings (CTR) consist of multiple-choice items (quantitative data) and a separate form on which students write about the course and the instructor in a more open-ended format (qualitative data).  The 21 items on the quantitative portion of the CTR comprise four scales: Overall Evaluation of Instructor and Course, Workload/Difficulty, Grading/Feedback Quality, and Interaction/Encouragement.  All scales are scored on a range of one to five; for all but the Workload/Difficulty scale, lower scores (closer to one) represent better ratings.  By Hofstra’s standards, the optimal score on the Workload/Difficulty scale is a three.

The figure below displays the data for the courses for which CTR data were accumulated and analyzed during my time at Hofstra.  The scores represent the averages for the main scale, Overall Evaluation of Instructor and Course.  While the scores were never anything but above average, it is quite clear that the scores in my earliest semesters were not as “good” (close to one) as the more recent scores.  It is also worth noting that of the four scores over 1.5, three came from an undergraduate class.  I do not know whether undergraduates tend to evaluate instructors differently from graduate students, but I do know that the learning experiences I created for undergraduates felt qualitatively different to me as an instructor. The figure shows clear pedagogical improvement.  It is also interesting that the trajectory of my scores, for the most part, mirrors that of the program faculty as a whole.  However, since 2004, I have scored at least as well as my colleagues, if not better (my scores are shown in blue diamonds; the program faculty averages in red squares).

[Figure: Overall Evaluation of Instructor and Course, average CTR scores by semester; my scores in blue (diamonds), program faculty in red (squares)]

Virginia Commonwealth University

Course evaluations at VCU use a different set of items than Hofstra’s, and unlike Hofstra’s, they are not grouped into scales; instead, there are 11 separate items.  The full set of means and medians can be found on the Teaching Evaluations (data) page; the table of means for the courses I have taught at VCU is reproduced below.

Semesters: Fall 2007, Spring 2008, Fall 2008, Spring 2009, Fall 2010, Spring 2011, Fall 2011, Spring 2012.

| Course | ADMS 611 | ADMS 611 | ADMS 611 | EDUS 710 | EDUS 710 | EDUS 890 | Ed.D. Module | EDUS 717 | ADMS 707 | ADMS 647 | ADMS 647 | Weighted Mean (All) | Weighted Mean (F2F Only) |
| Enrollment | 13 | 13 | 15 | 20 | 13 | 7 | 20 | 13 | 10 | 5 | 9 |  |  |
| The instructor was well prepared for this course. | 4.5 | 4.8 | 4.3 | 4.6 | 4.8 | 5 | 4.7 | 4.8 | 4.6 | 3.8 | 4.3 | 4.60 | 4.66 |
| The instructor presented course material in an organized and informative manner. | 4.4 | 4.2 | 3.9 | 4.5 | 4.1 | 4.9 | 4.8 | 4.8 | 3.9 | 2.6 | 3.6 | 4.19 | 4.46 |
| The instructor's choice of instructional materials facilitated my learning in this course. | 3.8 | 3.5 | 3.6 | 4.3 | 3.6 | 4.9 | n/a | 4.9 | 3.9 | 2.8 | 3.6 | 3.56 | 3.57 |
| The instructor's teaching techniques helped me learn the material in this course. | 3.6 | 3.5 | 3.2 | 4.2 | 3.4 | 5 | n/a | 4.9 | 3.8 | 2.8 | 3.2 | 3.87 | 4.07 |
| The instructor was available outside of the classroom. | 4.3 | 3.6 | 3.3 | 3.9 | 3.5 | 5 | 4.7 | 5 | 4.6 | 4.2 | 3.9 | 4.21 | 4.19 |
| The instructor encouraged discussion, participation and questions in the course. | 4.6 | 4.5 | 4.2 | 4.7 | 4.5 | 5 | 4.8 | 4.9 | 4.8 | 4 | 4 | 4.58 | 4.68 |
| The instructor treated students with courtesy and respect. | 4.6 | 4.3 | 3.9 | 4.8 | 4.7 | 5 | 4.7 | 4.9 | 4.8 | 2.4 | 3.1 | 4.34 | 4.63 |
| The instructor clearly presented evaluative criteria for assessing my work. | 3.7 | 3.8 | 3.1 | 3.8 | 4.2 | 4.9 | n/a | 4.7 | 4.2 | 2.2 | 2.8 | 3.50 | 3.55 |
| The instructor graded and returned students' written work in a timely manner. | 3.2 | 2.9 | 2 | 2.9 | 3.3 | 4.8 | n/a | 5 | 4.2 | 2.4 | 2.7 | 3.51 | 3.58 |
| The course helped me understand what will be expected of me as a professional. | 3.9 | 3.5 | 3.4 | 4.4 | 4.4 | 4.9 | n/a | 4.8 | 3.8 | 3 | 3.8 | 4.12 | 4.32 |
| As a result of this course, my knowledge and skills were increased in the subject matter. | 4.1 | 4.2 | 3.8 | 4.5 | 4.3 | 5 | n/a | 4.8 | 4.2 | 3.6 | 4.2 | 4.33 | 4.43 |

(n/a indicates an item for which the separate Ed.D. module evaluation did not yield a corresponding rating; see the note above.)

There is plenty of meaning to be made of those data. I have always valued the open-ended responses (i.e., the qualitative data) from students as much as, if not more than, the Likert-scale items; those responses give me the information I need to improve my teaching. To that end, I encourage you to peruse the Testimonials page to see what students have written about my teaching. That said, what follows are some brief thoughts of my own on the quantitative data.

I have always questioned the use of standardized assessments of teaching and learning experiences. If the instructor has been thoughtful and deliberate in creating the learning experience, the course design should be attuned to the particular subject matter and student population, which likely renders certain “standardized” items on the course evaluations irrelevant. For example, one item reads, “[t]he instructor’s choice of instructional materials facilitated my learning in this course.” In the courses I taught most recently, I provided very few instructional materials; within the confines of each weekly session’s topic, students were empowered to find their own readings and learning artifacts and to share them with their classmates through various media. Another item reads, “[t]he instructor clearly presented evaluative criteria for assessing my work.” I have recently used both self- and peer-assessment. I offer regular feedback throughout the semester, but I do not formally assess in the ways students are most accustomed to. Thus, these items are not particularly meaningful for evaluations of my courses.

I also note that my “scores” tend to be higher in doctoral-level courses than in master’s or post-master’s level courses. It is not clear that those differences are statistically significant, and the scores are generally high across the board. But I do think I have developed a bit of a reputation as a demanding instructor. I expect my students to really commit to the learning experience, and I think that is harder for master’s students, who tend to have a more instrumentalist purpose for graduate study than doctoral students do.

Overlapping with both of those first two points, I believe the course evaluations as currently constructed are less applicable to fully online courses. Some of the items are not as meaningful when applied to an online course (e.g., “The instructor was available outside of the classroom.”). It has also been my experience that some students bring assumptions and negative attitudes toward online learning.  Despite my best efforts, some of those students never quite come around, and their attitudes negatively color the experience.  I feel confident that course evaluations for fully online courses are likely to include lower ratings than those for face-to-face courses, at least until students come to embrace the possibilities for and affordances of the Web for learning.  Thus, in the first table on the Teaching Evaluations (data) page, I included not only the weighted means across all of the courses, but also a weighted mean for the face-to-face courses only. You will notice that the overall weighted means are a tad lower than the means for just the face-to-face courses.
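For readers who want to see the arithmetic behind those two columns, here is a minimal sketch of how such a weighted mean can be computed. It assumes the weighting is by course enrollment and that the face-to-face figure simply excludes the online sections; both are my assumptions, and the ratings, enrollments, and online flags in the sketch are illustrative rather than the actual data.

```python
# A minimal sketch of enrollment-weighted evaluation means.
# Assumptions (mine): each course's item rating is weighted by its enrollment,
# and the "F2F only" figure excludes the fully online sections.
# All numbers below are illustrative, not the actual data.

def weighted_mean(ratings, weights):
    """Average the available ratings, weighting each by its course's weight."""
    pairs = [(r, w) for r, w in zip(ratings, weights) if r is not None]
    return sum(r * w for r, w in pairs) / sum(w for _, w in pairs)

# One evaluation item's mean rating per course (None = item not on that evaluation).
ratings = [4.5, 4.8, None, 3.8, 4.3]
enrollments = [13, 20, 20, 5, 9]
online = [False, False, False, True, True]  # which sections were fully online

overall = weighted_mean(ratings, enrollments)
f2f_only = weighted_mean(
    [r for r, o in zip(ratings, online) if not o],
    [w for w, o in zip(enrollments, online) if not o],
)

print(f"Weighted mean (all): {overall:.2f}")
print(f"Weighted mean (F2F only): {f2f_only:.2f}")
```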

Finally, I feel a need to point out what I believe are anomalous results. In the Fall of 2011 and the Spring of 2009, I taught sections of ADMS 647 that included 5 students and 9 students, respectively. In each of those semesters, I had one or two students who came to the course with an extremely negative attitude about the course content and about online learning. One student, in particular, made it very clear from the beginning that he was not open to even exploring the affordances of technology. He claimed to be adopting the stance of a critic, which would have been fine (even welcome), had his attitude not turned nasty towards the class and towards me. In the course evaluations for those two semesters, you see a few students who rated the course as low as possible. Despite my persistent efforts to meet the needs of these difficult students, they never bought in to the course and its design. [It is worth noting that other professors in the department have had similarly negative experiences with students in this particular cohort.] Those students are outnumbered by students on the other end of the spectrum, but those few low scores weigh heavily on the overall ratings, especially given the low enrollments in those sections.

Nevertheless, overall, I am pleased with my scores on the course evaluations. And, to repeat, there is more qualitative evidence of teaching effectiveness on the Testimonials page. Combined with my extensive work on dissertation advisement and on course and program development, I truly believe that I have become an excellent teacher. That additional teaching work is discussed in the next sections of the narrative.

 

NEXT: Dissertation Advisement (narrative) →
