Whose responsibility? Saudi university EMI content teachers’ language-related assessment practices and beliefs

Education

Muhammad M. M. Abdel Latif and Abdul Aziz Mohamed Mohamed Ali El Deen

Discover how native-Arabic and non-native-Arabic faculty members approach English-related assessment practices in EMI scientific majors at Saudi universities. This study by Muhammad M. M. Abdel Latif and Abdul Aziz Mohamed Mohamed Ali El Deen reveals how content teachers prioritize subject matter over language quality and how they perceive responsibility for language assessment.
Introduction

The paper situates EMI within higher education internationalization and Englishization, noting rapid adoption in Saudi universities, especially in engineering, computer science, and sciences, amid challenges linked to students’ low English proficiency. Assessment in EMI remains under-researched, with ambiguity about the role of English in evaluating academic outcomes and the balance between content learning and language development. This study explores Saudi university EMI content teachers’ language-related assessment practices and beliefs, examining how cultural and linguistic factors shape assessment and addressing gaps regarding what to assess, how, and by whom.

Literature Review

The section on EMI assessment in higher education outlines concerns about accurately measuring EMI students’ performance, especially the role of English in assessment and whether language should be assessed alongside content. International evidence shows ambiguity over assessment language and practices, with suggestions to tolerate translanguaging depending on context, task type, and communication clarity. Prior studies indicate limited focus on language-related assessment by EMI content teachers and inconsistent views on L1 use during assessment. The review highlights the paucity of research on EMI assessment practices, particularly language-related aspects, and identifies gaps: common question types used in EMI, differences between teachers who can translanguage and those who rely solely on English, and the relationship between teachers’ language assessment self-efficacy and their practices. The study responds to these gaps by focusing on Saudi scientific majors and comparing native-Arabic and non-native-Arabic teachers.

Methodology

  • Design: Mixed methods, combining a questionnaire with semi-structured interviews.
  • Participants: 249 EMI content faculty from eight Saudi universities teaching computer science (n=47), engineering (n=66), and sciences (n=136); 191 native-Arabic and 58 non-native-Arabic faculty; 90 women and 159 men; all PhD holders. Interviews involved 20 faculty (10 native-Arabic, 10 non-native-Arabic) from two universities across the same majors (5 computer science, 8 engineering, 7 sciences).
  • Instruments: The questionnaire comprised (1) written exam/test question types used (a 5-item checklist plus an open-ended rationale), (2) language-related assessment practices (9 items, 5-point Likert scale: always–never), and (3) language assessment self-efficacy (3 items, 5-point Likert scale: strongly agree–strongly disagree). Average Cronbach's alpha for the Likert items was 0.82. Eight interview prompts targeted language considerations in oral/written evaluation, areas corrected, feedback practices, attitudes toward Arabic use in testing, and perceived responsibility for language assessment.
  • Data collection: The questionnaire was distributed online (in Arabic and English via Google Forms) over 10 weeks; interviews were conducted over 3 weeks in Arabic or English.
  • Analysis: Descriptive and inferential statistics (including one-way ANOVA) for the questionnaire; thematic analysis for the interviews (independent coding, iterative theme refinement, and external validation of two protocols by an expert).
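The reliability figure reported above (average Cronbach's alpha of 0.82 across the Likert items) summarizes how consistently respondents answered related items. As a rough illustration of how that statistic is computed, here is a minimal pure-Python sketch on invented 5-point responses; the data below are hypothetical, not the study's:

```python
from statistics import variance


def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of item-score lists, one list per questionnaire item,
    each aligned by respondent (same order across items).
    """
    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))


# Hypothetical 5-point Likert responses: 3 items x 5 respondents (invented data)
responses = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
alpha = cronbach_alpha(responses)  # roughly 0.81 on this toy sample
```

Alpha rises as items covary (respondents who score one item high tend to score the others high), which is why it is reported per scale rather than per item.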

Key Findings
  • Written exam/test question types: Teachers most frequently used technical questions (formulas/drawings) and multiple-choice questions (71.1% and 68.3% respectively), followed by essay (56.6%) and true–false (51.8%); completion was least used (39.4%). Rationales included course nature, time constraints, aligning with departmental regulations, and enhancing reliability/validity through varied formats.
  • Language-related assessment practices (questionnaire): Teachers prioritized content over English quality in both written and oral assessments. Non-native-Arabic teachers gave relatively more weight to English than native-Arabic peers. Item-level highlights (means unless noted): focus on written content regardless of English quality was high in both groups (Arab 4.1, non-Arab 4.4; p=0.009); focus on oral content regardless of English (Arab 3.0, non-Arab 3.3; p=0.080); taking accuracy of English into account in written tests (Arab 2.9, non-Arab 3.4; p=0.005) and oral tests (Arab 2.4, non-Arab 2.6; n.s.). Drawing attention to English errors: in class (Arab 3.2, non-Arab 3.6; p=0.023); when marking written tests (Arab 3.1, non-Arab 3.3; n.s.). English-only requirements: written tests (Arab 3.1 vs non-Arab 4.7; p<0.001), oral tests (Arab 2.7 vs non-Arab 4.0; p<0.001), classroom oral answers (Arab 3.5 vs non-Arab 3.8; p=0.083). Native-Arabic teachers were more tolerant of Arabic use; non-native-Arabic teachers largely required English, especially in written tests.
  • Interviews: Teachers generally prioritized technical content over English quality; some considered language only for tasks like report writing, not for mathematical problem-solving. Feedback on English was typically holistic/advisory (e.g., encourage correct terminology and simpler English) rather than detailed corrective feedback. Non-native-Arabic teachers insisted on English-only due to limited Arabic proficiency; several native-Arabic teachers tolerated Arabic, citing students’ difficulty with English prompts and a need to simplify test language.
  • Justifications: Teachers believed they were not responsible for English instruction/assessment (role of language instructors) and cited the prevalence of numerous language errors making penalization unfair and time-consuming.
  • Self-efficacy: High perceived ability to evaluate and give feedback on English (means 3.9–4.4). Non-native-Arabic teachers reported higher self-efficacy for evaluating English in written tests (p=0.026). However, interviews showed a limited conceptualization of language assessment (focus on spelling [n=19], technical term accuracy [n=16], grammar [n=13], punctuation [n=7], typos in reports [n=6]) with little attention to organization, word choice, fluency, or pronunciation. There was a mismatch between high self-efficacy and limited actual practices.
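The group comparisons reported above (e.g., p=0.009 for written-content focus) come from one-way ANOVA between the native-Arabic and non-native-Arabic groups. A minimal sketch of the F statistic that such a test computes, on invented Likert scores (the data and group sizes below are hypothetical, not the study's):

```python
from statistics import mean


def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across independent groups:
    between-group mean square divided by within-group mean square."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)
    n = sum(len(g) for g in groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))


# Two invented groups of 5-point Likert scores on one questionnaire item
native = [4, 4, 5, 4, 3]      # e.g., native-Arabic teachers (hypothetical)
non_native = [5, 5, 4, 5, 5]  # e.g., non-native-Arabic teachers (hypothetical)
f_stat = one_way_anova_f([native, non_native])
```

With two groups the F statistic is the square of the independent-samples t statistic; the reported p-values then follow from the F distribution with (k-1, n-k) degrees of freedom.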
Discussion

Findings address the research questions by showing that Saudi EMI content teachers in scientific majors predominantly use technical and multiple-choice items and prioritize content over language in assessment. Non-native-Arabic teachers, operating largely monolingually in English, assign slightly more weight to English and are less tolerant of Arabic in assessments than native-Arabic peers, who sometimes allow translanguaging. Teachers’ reluctance to assess English stems from a perceived lack of responsibility for language instruction and the ubiquity of students’ language errors, which they view as unfair to penalize and impractical to correct in depth. The discrepancy between high self-efficacy and limited, surface-level language assessment practices suggests insufficient language assessment literacy and unclear institutional expectations regarding the role of English in EMI assessment. These findings underscore the need to clarify EMI assessment policies and roles, and to develop collaborative practices between content and language specialists to better align beliefs, practices, and student needs.

Conclusion

The study contributes empirical evidence from Saudi universities showing that EMI content teachers in scientific majors rely primarily on technical and multiple-choice written question types, give limited weight to English language quality in assessment, and differ by language background in tolerance of Arabic use and attention to English. Despite high self-reported language assessment self-efficacy, actual practices focus on surface features and advisory feedback, revealing limited assessment literacy and unclear role conceptualizations. The authors argue for clearer institutional policies defining the place of English in EMI assessment, targeted assessment literacy development for content teachers, and structured collaboration between language specialists and content faculty. Future research should address role delineation between content and language teachers, effective methods for integrating language assessment in EMI courses without disadvantaging students, mechanisms to enhance content teachers’ assessment literacy, and frameworks for sustainable language–content collaboration across diverse EMI contexts.

Limitations