
Linguistics and Languages
The effectiveness of translation technology training: a mixed methods study
W. Su and D. Li
This study by Wenchao Su and Defeng Li explores the effectiveness of translation technology training among MTI students in China, revealing both satisfaction and calls for improvement. Many students appreciated their newfound knowledge of CAT tools but criticized practical limitations and outdated resources. Discover how enhancing training can bridge the gap between theory and practice!
Introduction
The study examines how effective translation technology training is within MTI programs in China amid the rapid growth of AI, big data and cloud-based tools in the language industry. Despite broad curricular adoption of translation technology courses, questions remain about their effectiveness, compounded by challenges such as shortages of qualified instructors, resources, and students’ uneven computer skills. The authors argue that evaluating training effectiveness is crucial to curriculum design and pedagogy, and adopt the Kirkpatrick model to assess outcomes. The study focuses on two research questions: (1) To what extent do students perceive the training as satisfactory (overall effectiveness, satisfaction with textbooks, teaching and assessment methods, and attitudes toward technology)? (2) To what extent has learning taken place (perceived growth areas, difficulties, and technological competences)?
Literature Review
Prior research has explored translation technology teaching through pedagogical approaches, curricula and competency frameworks, but few studies have systematically evaluated training effectiveness from students' perspectives. Studies often address single courses or specific aspects: Rodríguez-Castro (2018) reports gains in CAT skills but gaps in project work; Sycz-Opoń and Gałuskina (2016) report positive views of MT and propose an evaluation protocol for MTPE; Doherty and Kenny (2014) show SMT training improved self-efficacy. Samman (2022), using the Kirkpatrick model, found MTPE training fostered positive attitudes and efficiency. However, most work lacks a comprehensive, clearly defined construct of "effectiveness." The authors therefore adopt the Kirkpatrick four-level model (reaction, learning, behavior, results) to structure evaluation, focusing on reaction and learning. They operationalize reaction as satisfaction and attitudes, and learning as perceived growth, difficulties, and technological competences, triangulating open-ended growth reports with Likert-rated competences. This framework aims to support comparable, theory-grounded evaluations across contexts.
Methodology
Design: Mixed-methods with a cross-sectional survey and semi-structured interviews, guided by the Kirkpatrick model (reaction and learning levels). Data were collected in 2018–2019.
Participants: 385 MTI students in China (332 women, 86.23%; 53 men, 13.77%); mean age 23 (SD=2.87); from 52 universities across 23 municipalities/provinces. Cohorts: 56.1% first-year, 33.77% second-year, 10.13% third-year. Recruitment via convenience and snowball sampling through 60 translation technology instructors in the TTES/WITTA network. Ethics approval by University of Macau; informed consent obtained.
Instruments: A web-based questionnaire with three parts: demographics; Reaction (overall effectiveness; satisfaction with textbooks, teaching and assessment methods; attitudes; perceived importance of translation technology; Likert scales and multiple-choice); Learning (perceived growth areas via an open-ended question; perceived difficulties; self-rated technological competences across information technology, translation technology, and project management; 5-point Likert). Semi-structured interviews with 8 volunteers from 5 universities explored experiences and perceptions aligned to the two research questions.
Procedure: Interviews conducted in Chinese, recorded (20–40 minutes), transcribed verbatim and translated to English for reporting.
Reliability: Cronbach’s alpha for multi-item scales exceeded 0.70: textbooks (0.73), teaching methods (0.91), assessment methods (0.80), perceived difficulties (0.85), perceived competences (0.93).
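The scale reliabilities above are Cronbach's alpha values. As a quick reference, a minimal sketch of how alpha is computed from a respondents-by-items matrix of Likert scores (the data below are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 respondents rating a 3-item scale on 1-5 points
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
])
print(round(cronbach_alpha(scores), 2))  # → 0.91
```

Values above the conventional 0.70 threshold, as reported for all five scales in the study, indicate acceptable internal consistency.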
Analysis: Quantitative descriptive statistics for Likert items. Open-ended responses analyzed via Braun and Clarke’s six-phase thematic analysis by two researchers, with independent coding, theme generation, consensus meetings, and frequency counts. Interview excerpts were identified for salient comments on reaction and learning to triangulate survey findings. Pseudonyms used for anonymity.
Key Findings
Reaction (attitudes, satisfaction, perceptions):
- Attitudinal shifts: 73.76% agreed/strongly agreed that training changed attitudes toward translation technology. 90.01% agreed/strongly agreed technology is important to future translators; 89.1% agreed/strongly agreed that the best strategy is to harness technology. 79.74% disagreed/strongly disagreed that technology will replace professional translators. Interviews revealed concerns about overreliance on MT potentially reducing creativity.
- Overall effectiveness: 7.53% very effective; 38.96% effective; 43.12% neutral; 7.27% ineffective; 3.12% very ineffective (mean ≈3.41/5), indicating slightly positive but mixed perceptions.
- Textbooks: Neutral overall; perceived shortcomings included insufficient online resources/companions (43.37%), insufficient authentic cases (38.18%), and outdated content (28.05%). Positives: systematic coverage of knowledge (53.76%) and detailed operational procedures (61.3%).
- Teaching methods: Project-based teaching was rated most effective (69.61% effective/very effective), followed by instructor PPTs (68.83%). Student reports, brainstorming, and researching local LSPs were also viewed positively; project contests and flipped classrooms were less used and rated less favorably.
- Assessment methods: Most effective perceived were in-class tests on technology application (85.46%), class exercises/assignments (81.82%), online platforms monitoring learning (75.84%), and in-class presentations (67.8%). Open-/closed-book exams and essays were rated less effective.
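The reported "overall effectiveness" mean can be reconstructed from the response distribution above as a weighted average of the five scale points, a quick sanity check on the ≈3.41 figure:

```python
# Percentages per scale point for the "overall effectiveness" item,
# taken from the summary above (5 = very effective ... 1 = very ineffective)
levels = {5: 7.53, 4: 38.96, 3: 43.12, 2: 7.27, 1: 3.12}
mean = sum(score * pct for score, pct in levels.items()) / 100
print(round(mean, 2))  # → 3.41
```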
Learning (growth, difficulties, competences):
- Perceived growth areas (open-ended): Knowledge of CAT tools (mentioned by 66.8%), knowledge of translation memories and termbases (44.4%), use of translation technology (24.4%), translation speed (20.3%), translation competence (15.8%), searching (9.4%), computer skills (9.1%), QA (6.8%), text editing/DTP (6.2%), AVT/interpreting (4.2%), collaboration (3.4%). Interviews stressed the need for human decision-making and selection ability.
- Perceived difficulties: Insufficient training in application of technology to tasks (68.05%), too few training cases (66.49%), few industry experts involved (59.74%), insufficient training hours (57.15%); also inadequate teacher guidance, poor lab conditions, overly complex content. Interviews cited limited practice time, breadth-over-depth coverage, low motivation for extra practice due to “fake” projects.
- Perceived technological competences: Basic computer skills good/excellent 57.4% (average 39.74%); programming weaker (good/excellent 22.07%, average 42.6%, poor/inadequate 35.33%). CAT tools competence good/excellent 42.07% (average 50.39%); terminology management average 51.43%; translation memories average 49.35%; MT post-editing good/excellent 41.29% (average 47.53%). Project management competences were modest: translation PM good/excellent 31.68% (average 53.25%); localization PM good/excellent 30.39% (average 51.17%).
Discussion
Findings indicate students view translation technology training as only slightly effective overall: many report neutral effectiveness alongside positive attitudes toward technology. Students appear to be at a beginner stage, with foundational knowledge but limited applied proficiency. They value instructor guidance (e.g., PPTs), yet the literature supports discovery learning, suggesting the need to calibrate guidance versus autonomy to foster competence. Concerns about creativity loss with MT post-editing highlight the need to emphasize uniquely human skills and "doing what the machine can't do."
Learning gains centered on knowledge of CAT tools and core resources (TMs, termbases). However, many students report only average competences in using CAT, TMs, terminology, and MTPE, reflecting a gap between “knowing what” and “knowing how.” Insufficient practice, limited authentic cases, scarce industry involvement, and time constraints impede skill transfer to real tasks. Students recognize the importance of selection/critical evaluation skills when using CAT/MT outputs. Transferable skills (searching, collaboration, DTP) were less frequently reported as gains, despite their market relevance and the fact that nearly half of students do not plan to become translators.
Programming and project management competences lag, likely due to limited relevance or depth in curricula and the complexity of PM content without authentic contexts. Aligning programming content (e.g., Python) with automations for real translation workflows, and embedding authentic, team-based project experiences could strengthen these areas.
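As an illustration of the kind of practical automation the authors have in mind, here is a hypothetical QA helper (an invented example, not anything from the study): a short Python sketch that flags entries in a two-column termbase whose approved target term never appears in a translated text.

```python
import csv
import re

def missing_terms(termbase_csv: str, target_text: str) -> list:
    """Return 'source -> target' pairs from a two-column CSV termbase
    (source_term,target_term) whose target term does not occur in the
    translated text. A hypothetical QA step in a CAT workflow."""
    missing = []
    with open(termbase_csv, newline="", encoding="utf-8") as f:
        for source, target in csv.reader(f):
            # Case-insensitive literal search for the approved target term
            if not re.search(re.escape(target), target_text, re.IGNORECASE):
                missing.append(f"{source} -> {target}")
    return missing
```

For a termbase containing `memory,Speicher` and `file,Datei`, checking the text "Der Speicher ist voll." would flag only `file -> Datei`. Small, concrete scripts like this tie programming instruction directly to authentic translation workflow needs.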
Conclusion
This study provides a comprehensive, student-centered evaluation of translation technology training in Chinese MTI programs using the Kirkpatrick framework (reaction and learning). It shows that while knowledge transmission (CAT tools, TMs, termbases) is reaching students, experiential practice and applied competence remain limited. Students generally do not feel highly competent in using CAT/MT or managing translation/localization projects, and report insufficient authentic practice, limited industry engagement, and time constraints. The study recommends: (1) shifting course time toward hands-on application through strategies such as flipped learning and in-class problem solving; (2) integrating authentic, real-world translation projects and industry collaboration to develop project management and transferable skills; (3) emphasizing human-centered skills (e.g., selection, creativity); and (4) aligning programming content to practical automation of translation tasks. Future research could extend the evaluation to other stakeholders (e.g., instructors), explore optimal blends of guidance vs. discovery learning, and further test flipped learning and industry-integrated models.
Limitations
Nonprobabilistic sampling (convenience and snowball) may introduce bias and limit generalizability beyond the surveyed population. The study focuses on the Kirkpatrick reaction and learning levels rather than behavior and results, and relies on self-reports for many measures. Students were drawn from Chinese MTI programs, which may limit applicability to other educational and market contexts.