ORIGINAL ARTICLE
Year : 2014  |  Volume : 2  |  Issue : 3  |  Page : 166-172

Students Evaluating Teaching Effectiveness Process in Saudi Arabian Medical Colleges: A Comparative Study of Students' and Faculty Members' Perception*


Deanship of Quality and Academic Accreditation, University of Dammam, Dammam, Saudi Arabia

Date of Web Publication: 11-Oct-2014

Correspondence Address:
Ahmed A Al-Kuwaiti
King Fahd Hospital of the University, University of Dammam, P.O. Box 40065, Al Khobar 31952
Saudi Arabia

DOI: 10.4103/1658-631X.142513

  Abstract 

Introduction: Students evaluating teaching effectiveness (SETE) is a highly topical subject worldwide, including in the Kingdom of Saudi Arabia (KSA). The literature review highlighted the focus of this study, namely, students' and instructors' perception of the SETE process, not SETE data as such.
Setting: Medical colleges in seven governmental universities in KSA. A group of randomly drawn final-year students and a group of their teaching faculty were studied.
Materials and Methods: A researcher-constructed 26-item questionnaire on a 5-point Likert-type scale was used to generate data. The proportion test and the Mann-Whitney U-test were used to compare the perceptions of the two groups.
Results: A total of 600 completed questionnaires were retrieved and analyzed. There were statistically significant differences between instructors' and students' perceptions of SETE. Students registered disapproval in three of the four areas studied, whereas the pattern of instructors' responses was a mirror image of the students': disapproval in only one of the four areas.
Conclusion: The sample size compared favorably with those of other articles with a similar research focus. Evidence of objectivity and data authenticity was demonstrated. The differences and similarities between the opinions of the two groups, as well as those in the literature, were identified. It can be safely concluded that the findings of this study agree broadly with others. Future research was also signposted.

  Abstract in Arabic 

[Translated from Arabic] This study concerns medical students' evaluation of teaching effectiveness in seven Saudi universities. It comprised a random sample of two groups, students and faculty members. A Likert scale was used to collect the data, and the statistical package SPSS was used to analyze them. There were clear statistical differences between faculty members' and students' perceptions of the evaluation of teaching effectiveness. It emerged that effective teaching methods improve evaluation results and that teaching improves when faculty members participate in faculty development workshops.




Keywords: Evaluation, instructors, medical college, perception, students, teaching effectiveness


How to cite this article:
Al-Kuwaiti AA. Students evaluating teaching effectiveness process in Saudi Arabian medical colleges: A comparative study of students' and faculty members' perception*. Saudi J Med Med Sci 2014;2:166-72


*Funded by the Deanship of Scientific Research at University of Dammam (Project #244).



  Introduction


The topic of students evaluating teaching effectiveness (SETE) has received much attention from researchers in terms of the reliability, validity, and potential uses of its results. [1] As such, the role of SETE and its variants in higher education quality management and accreditation is well recognized worldwide. This includes the Kingdom of Saudi Arabia (KSA), where the advent of the National Commission for Academic Accreditation and Assessment (NCAAA) in 2005 has provided a solid platform. [2],[3] The NCAAA is charged with ensuring that all programs offered in KSA universities attain international standards.

Nevertheless, the perception of SETE by faculty members and students remains a highly emotive issue. The current study explores the perception of university faculty members and students of the SETE process in KSA.

Literature review

Studies indicate that faculty members do not always make use of the evaluations they receive from students. Some faculty members believe that student ratings are invalid, fail to provide effective input for the appraisal of teachers' professional performance, and lead to "personality contests." [4] Another recent study revealed that, from the lecturers' perspective, SETE has more demerits than merits; the authors suggested that SETE results must be treated with extreme caution. [5]

Smith and Carney presented a seminal paper on the process of SETE [6] in which they reported that "students were uncertain regarding the uses made of teaching evaluations," but took "the opportunity to evaluate their professors seriously." The authors argued that the purposes of SETE "need to be made clear to students." They concluded that students' "apparent lack of knowledge may contribute to the cynicism, which they - and perhaps the public at large - feel toward the work university professors are doing as teachers." [6]

Chen and Hoshower, [7] using expectancy theory, reported that, from the students' standpoint, the two most attractive outcomes of SETE were improvements in teaching and in course content and format. Students were uninterested in the use of SETE for professors' promotion or in students' decisions on selecting courses or instructors. Students' motivation to participate in SETE was also affected by their expectation that SETE would provide them with adequate feedback.

Heine and Maddox [8] described a study, not of SETE data, but of student perceptions of the entire SETE process, and found significant differences related to students' gender. Female students took the evaluation process more seriously than their male counterparts, whereas male students were cynical and believed that leniency bias was common.

Kogan et al. [9] found differences based on faculty members' gender. Overall, female faculty members were more negatively affected by SETE than their male counterparts. The authors observed that these gender differences support previous research suggesting that males and females receive and react to personal evaluation differently. [9] Amr et al., [10] from the University of Dammam (UD, formerly King Faisal University), in a cross-sectional study of 110 medical students, reported that most students disagreed that teachers used their evaluations to revise assessment methods and to promote learner-centered teaching. Most students thought that valid criteria for evaluating teaching ability included promoting critical thinking and providing encouragement and motivation. Amr et al. concluded that students perceived the SETE process positively, although problematic areas remained. [10]

In a South African study, [5] the authors found that lecturers had a negative perception of SETE but were sometimes positive about the use of SETE results for formative purposes. Lecturers were strongly opposed to the use of such information for summative purposes. The authors [5] recommended that SETE always be triangulated with other multidimensional evaluation methods so as to increase validity and reliability in the overall evaluation of teaching effectiveness in higher education.

Research questions

This study was designed to investigate:

  1. How KSA university faculty members and students perceive the SETE process and
  2. Any significant differences in the perception of the two groups.



  Materials and methods


Study design

A cross-sectional design was adopted to study students' and faculty members' perception of the SETE process at selected medical colleges in KSA.

Study setting

Seven governmental universities located in four different geographical regions of KSA. Final-year students and their respective teaching faculty were selected using the convenience sampling method. The sample sizes ranged from 12 to 82 for students and from 18 to 181 for faculty [Table 1].
Table 1: Distribution of respondents from seven selected health sciences colleges in KSA universities




Data collection tool

A questionnaire (Academic Staff and Students opinion on Teaching Evaluation Process) was developed. It consisted of 26 items in four areas as follows:

  1. Students evaluation of teaching process (7 items);
  2. Impact of SETE (7 items);
  3. Potential action by instructors (6 items); and
  4. Changes suggested by instructors and students (5 items).

The questionnaire also contained three open-ended questions and a rating of overall satisfaction.


All items were Likert-type items. Each was rated on a five-point scale indicating the degree of agreement with a statement in ascending order: 1 = Strongly disagree; 2 = Disagree; 3 = True sometimes; 4 = Agree; 5 = Strongly agree.

The reliability and validity of this self-designed questionnaire were established. There was a high level of internal consistency (Cronbach's α = 0.87) for this scale, across its 26 items, with this specific sample.
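
The reported internal consistency could, in principle, be reproduced from the raw respondents × items score matrix. The following sketch computes Cronbach's α from such a matrix; the demo data are hypothetical, since the study's raw responses are not published here.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    k = scores.shape[1]                          # number of items (26 in this study)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 10 respondents x 4 items rated 1-5
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(10, 4)).astype(float)
print(round(cronbach_alpha(demo), 3))
```

A value of 0.87, as reported above, would indicate good internal consistency by the usual ≥0.7 rule of thumb.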

Methods

The researcher provided an orientation to the potential respondents about aspects of SETE in each of the seven medical schools. It took the form of PowerPoint presentations and included the rationale and components of SETE. The main aims were to overcome knowledge bias and to achieve homogeneity of awareness among respondents about the technical details of the questionnaire content.

Sets of questionnaires were distributed to participants with a request to return the completed sets during the same sitting. Separate sessions were arranged for faculty and students. Throughout the study, care was taken to protect the anonymity of all respondents (students were selected from different programs). Furthermore, faculty who were teaching a particular group of students were excluded at the time the surveys were administered to those students. Respondents were given sufficient time to respond without prompting or undue pressure.

Statistical analysis and interpretation

All categorical data are presented as percentages. Area-wise distribution was determined by the median because the scores were not normally distributed. Differences between students and faculty were tested using the Mann-Whitney U-test. The proportion test was used for item-wise comparison. All analyses were performed using SPSS version 19.0 (Illinois, USA). P < 0.05 was considered significant.
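
The two tests named above can be illustrated in Python with SciPy, as a sketch of the analysis rather than a reproduction of it: the response vectors and counts below are hypothetical stand-ins, since the study reports only aggregate results, and the two-proportion z-test shown is one common form of the "proportion test" for item-wise comparison.

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

# Hypothetical Likert responses (1-5) for one SETE area
students = np.array([2, 3, 2, 1, 3, 2, 4, 2, 3, 2])
faculty  = np.array([4, 4, 3, 5, 4, 3, 4, 5, 4, 3])

# Mann-Whitney U-test: are the two groups' score distributions different?
u, p = mannwhitneyu(students, faculty, alternative="two-sided")
print(f"Mann-Whitney U = {u}, P = {p:.4f}")

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test comparing two proportions (e.g. % choosing 4 or 5)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))  # two-sided P value

# e.g. 47% of 300 instructors vs. 21% of 300 students agreeing with an item
z, p = two_proportion_test(141, 300, 63, 300)
print(f"z = {z:.2f}, P = {p:.4g}")
```

With clearly separated groups, both tests return P values well below the 0.05 threshold used in the study.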

For the purposes of this study, median scores were interpreted as follows: ≥4 = Full agreement; 3-4 = Acceptable approval; <3 = Disapproval. Similarly, for individual items, the measure deployed was the cumulative percentage of respondents who selected 4 (Agree) or 5 (Strongly agree). The following key was applied: ≥80% = Full agreement; 60-80% = Acceptable approval; <60% = Disapproval. The degree of disapproval was further subdivided into three bands: 40-60% = Moderate; 20-40% = Severe; <20% = Critical disapproval.
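
The scoring keys above translate directly into a small lookup, sketched below. The handling of values that fall exactly on a band boundary (e.g. a median of exactly 3 or 4) follows the stated ranges and is an assumption where the text is ambiguous.

```python
def interpret_median(median: float) -> str:
    """Map an area's median score (1-5) to the study's verdict bands."""
    if median >= 4:
        return "Full agreement"
    if median >= 3:
        return "Acceptable approval"
    return "Disapproval"

def interpret_item(pct_agree: float) -> str:
    """Map the cumulative % choosing 4 (Agree) or 5 (Strongly agree) to a verdict."""
    if pct_agree >= 80:
        return "Full agreement"
    if pct_agree >= 60:
        return "Acceptable approval"
    if pct_agree >= 40:
        return "Moderate disapproval"
    if pct_agree >= 20:
        return "Severe disapproval"
    return "Critical disapproval"

print(interpret_median(3.12))  # Acceptable approval
print(interpret_item(47))      # Moderate disapproval
```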


  Results


A total of 600 completed questionnaires were received out of 650 distributed (a 92% response rate) and subjected to statistical analysis.

[Table 2] summarizes the comparison of medians for the four areas. It can be seen that students registered disapproval in three of the four areas. On the other hand, the pattern of response from instructors was a mirror image of the students': that is, disapproval in one of the four areas. Between the two groups, concurrence occurred in one area, viz., "Impact of SETE." Even here, the instructors approved more strongly than the students (4.01 vs. 3.12; P = 0.001). The students disapproved most strongly in Area 4 (2.13 vs. 3.02; P = 0.003), as well as in the overall assessment (2.17 vs. 3.76; P = 0.004).
Table 2: Comparison of the median (interquartile range) scores of the faculty and students' opinion on teaching evaluation process (by taking all items in each area separately) in seven selected health sciences colleges in KSA Universities



[Table 3] summarizes the item analysis. Attention is drawn to striking findings only. The two groups are reported in turn.
Table 3: Comparison of the observed agreement scores of faculty and students



Instructors' perception

Instructors agreed on four of the 7 items in Area 1, the SETE process. Thus, they take the SETE process seriously (92%) and agree that changes in SETE can be useful (81%). They reject the notion that teaching easier courses leads to higher SETE grading (81%), and they accept that active teaching techniques result in more positive evaluation (85%).

Concerning the impact of SETE, instructors agreed in four of the 7 items as follows. Neither students nor the instructors themselves take SETE seriously (85% each). SETE can severely jeopardize instructors' career prospects (84%). Instructors change course content based on SETE results (80%).

Potential action taken by instructors is the theme of Area 3. Instructors agreed on only two of the six items, both dealing with improvement of teaching skills: by attending faculty development workshops (93%) and by means of "friendly critique" (91%). Area 4 targeted changes suggested in the SETE process. Instructors agreed with only two of the five items: SETE should focus on their field of teaching rather than generic questions (91%), and the way of conducting SETE should be changed (83%). They gave acceptable approval to two other items: the addition of peer evaluation (69%) and of a teaching portfolio (78%).

Instructors disagreed strongly with three items: that SETE is based on instructors' character rather than teaching skills (47%); that instructors "grade easy" (26%); and that SETE does nothing to improve teaching (18%).

Students' perception

In Area 1, students agreed on three of the seven items: viz., that changes in SETE can be useful (90%), that faculty members take students' evaluation seriously (80%), and that active teaching techniques lead to positive evaluation (80%). In Area 2, students agreed with only three of the seven items, namely, that most instructors fail to take SETE seriously (92%), that the administration relies on SETE results for instructors' promotion (80%), and that leniency bias exists in that instructors demand less of students in order to get a favorable rating (80%). When it came to potential action taken, students agreed with only one of the six items, namely, that teaching skills can be improved when instructors attend faculty development workshops (83%). In the fourth and last area, students agreed with none of the five items.

Students disagreed strongly with four items: instructors make changes in teaching style based on SETE (10%); SETE is based on instructors' character (21%); SETE makes courses easier for students (39%); and SETE does nothing to improve teaching (39%).


  Discussion


In this study, the sample size of 600 was larger than those of the previous six studies identified in the literature review as having a similar research focus, where the range was 60-320. The authenticity of the data in this study was explored in two ways. It is a feature of data authenticity that instructors and students agreed on some items and disagreed on others. In three of the four items of agreement, agreement is intuitively expected. First, SETE can be useful. Second, active teaching techniques lead to positive SETE ratings. Third, participation in faculty development workshops can lead to improved teaching skills. The fourth item of agreement was a paradox but, at the same time, further evidence of robust objectivity: students and the instructors themselves stated that most instructors fail to take SETE seriously.

Both groups jointly rejected two items. First, SETE is based on instructors' character rather than their teaching ability; the degree of students' rejection was more severe than the instructors' (21% vs. 47%; P < 0.001). Second, SETE does nothing to improve teaching; here a reverse pattern was observed, the degree of instructors' rejection being more severe than the students' (18% vs. 39%; P < 0.001).

There was strong but intuitively understandable disagreement between the two groups on the issue of easy courses: students believe that instructors who teach easier courses receive favorable SETE ratings, whereas instructors reject this notion (P < 0.001).

In Area 2, as many as six of the seven items received a high approval rate from instructors and students, indicating the perceived importance of SETE. Instructors perceive that SETE can jeopardize their promotion prospects, but students did not see things that way (P < 0.001). Students implied the existence of leniency bias by agreeing that "instructors demand less from students to get favorable evaluation"; instructors disagreed (P < 0.001).

Whereas instructors accept that, as the result of SETE, they make changes in teaching style as well as course content, students disagreed on both counts (P < 0.001). In the opinion of instructors, students fail to take SETE seriously. Perhaps not unnaturally, students disagreed (P < 0.001).

Both instructors and students agreed that teaching skills can be improved by participation in faculty development workshops. However, the impact of "friendly critique" on teaching skills elicited different ratings: whereas instructors agreed that teaching skills can be improved by means of friendly critique, students did not (P < 0.001). This is further robust evidence of authenticity, because students could not credibly have rated the positive impact of "friendly critique" on teaching skills highly; that insight is beyond students' sphere of competence.

Finally, in Area 4, instructors were in favor of changing the way SETE is done and of evaluating instructors' field of teaching rather than asking generic questions. Students disagreed on both counts (P < 0.001).

In this study, instructors and students both rejected the notion that male and female instructors are equally competent in all fields of instruction.

Two items were considered to be beyond students' immediate domain of competence because they address fine points: changes in teaching style, and the impact of "friendly critique" on teaching skills.

Regarding the impact of SETE, the item stating that instructors change their teaching style was rejected by both students and faculty. However, in the next item, where the focus was on changes in course content, the perception of both instructors and students was positive. The issue of leniency bias is contentious. In this study, it was addressed by two items: one, that instructors who teach easier courses receive favorable SETE ratings; the other, that instructors demand less from students in order to extract favorable SETE. This issue remained unresolved.

Aspects of these findings may be compared with the references cited. This study agreed with Smith and Carney [6] that students took the opportunity to evaluate their professors seriously. Further, the results of this study also indicated that faculty fail to take SETE seriously; a previous study likewise indicated that students have little confidence that their faculty and administrators pay attention to their evaluations. [11] On the other hand, unlike the theoretical model of Chen and Hoshower, [7] in which students were uninterested in the use of SETE results in promotion decisions, the students here believed that the administration relies on SETE results for instructors' promotion (80%). Such findings are supported by previous studies that concluded that student ratings are useful for administrative purposes. [12],[13],[14]

In both this study and the previous work carried out at the University of Dammam (Amr et al. [10] ), students disagreed strongly that, as a result of SETE, instructors make changes in teaching style, including the promotion of learner-centered teaching and revision of assessment methods, yet agreed that SETE would be helpful.

Finally, instructors' opinion as elicited in this study is compared with others' findings. The results of this study indicated that most instructors failed to take SETE seriously, in agreement with the findings of a previous study, which indicated that lecturers did not accept student evaluation, particularly when used for summative purposes. [15] However, instructors were sometimes positive about the use of students' evaluation results for formative purposes. [5] Recently, Machingambi and Wadesango [5] recommended that students' evaluation of teaching always be triangulated with other multidimensional evaluation methods, such as peer evaluation and teaching portfolios.


  Conclusion


This study addressed the perception of students and faculty of the SETE process in KSA. The sample size was satisfactory when compared with those of six articles with a similar research focus. Robust evidence of objectivity and data authenticity was demonstrated. The differences and similarities between the opinions of the two groups, as well as those in the literature, were identified. It can be safely concluded that the findings of this study agree broadly with others. Future research on this issue is recommended.


  Acknowledgments


The author places on record his thanks and appreciation to the deans of the seven colleges for their permission to explore the environment for this study, and to the Deanship of Scientific Research, University of Dammam, for official support. He sincerely thanks the colleagues and students who participated for their time, effort, and objective opinions. He hopes that, since their candid responses have provoked a follow-up study, he can count on them in the future.

 
  References

1. Gravestock P, Gregor-Greenleaf E. Student Course Evaluations: Research, Models and Trends. Toronto: Higher Education Quality Council of Ontario; 2008.
2. Al Rubaish A. On the contribution of student experience survey regarding quality management in higher education: An institutional study in Saudi Arabia. J Service Science & Management 2010;3:464-9.
3. Al-Musallam A. Higher Education Accreditation and Quality Assurance in the Kingdom of Saudi Arabia. Paper presented at the First National Conference for Quality in Higher Education. Riyadh, Saudi Arabia; 2007 May 15-16.
4. Spooren P, Mortelmans D. Teacher professionalism and student evaluation of teaching: Will better teachers receive higher ratings and will better students give higher ratings? Educ Stud 2006;32:201-14.
5. Machingambi S, Wadesango N. University lecturers' perceptions of students evaluation of their instructional practices. Anthropologist 2011;13:167-74.
6. Smith MC, Carney RN. Students' Perceptions of the Teaching Evaluation Process. Boston: American Educational Research Association; 1990.
7. Chen Y, Hoshower LB. Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assess Eval High Educ 2003;28:71-88.
8. Heine P, Maddox N. Student perceptions of the faculty course evaluation process: An exploratory study of gender and class differences. Res High Educ J 2009;3:1-10.
9. Kogan LR, Schoenfeld TR, Hellyer PW. Student evaluations of teaching: Perceptions of faculty based on gender, position, and rank. Teach High Educ 2010;15:623-36.
10. Amr M, Al Saeed U, Shams T. Medical students' perceptions of teaching evaluation in psychiatry. Basic Res J Educ Res Rev 2012;1:81-4.
11. Spencer KJ, Schmelkin LP. Student perspectives on teaching and its evaluation. Assess Eval High Educ 2002;27:397-409.
12. Pinto M, Mansfield P. Thought processes college students use when evaluating faculty: A qualitative study. Am J Bus Educ 2010;3:55-60.
13. Carrell S, West J. Does professor quality matter? Evidence from random assignment of students to professors. J Polit Econ 2010;118:409-32.
14. Penn State University. Guidelines for Use and Administration of the SRTE Forms. Available from: http://www.psu.edu/dept/vprov/pdfs/srte_guidelines.pdf. [Last accessed on 2013 Nov 03].
15. Iyamu EO, Aduwa-Oglebaen SE. Lecturers' perception of student evaluation in Nigerian universities. Int Educ J 2005;6:619-25.



 
 