Introduction
The concept of interpreter visibility, introduced by Angelelli (2004), challenges the traditional ideal of the 'invisible' interpreter. Whereas previous research has focused on interpreter-initiated visibility, realized through interpreters' own contributions to the discourse, this study examines speaker-initiated visibility, realized through speakers' references to interpreters and their output. It focuses on consecutive interpreting at press conferences of the American Institute in Taiwan (AIT), a context that remains under-researched compared with simultaneous interpreting in settings such as the European Parliament. The study aims to establish the extent and circumstances of interpreter visibility in this context and to show how speaker-interpreter interactions contribute to both that visibility and the communicative environment. The research questions are: (1) To what extent, and in what circumstances, do the interpreters at AIT become visible? (2) How can speaker-interpreter interactions contribute to the interpreters' visibility and to the communicative environment?
Literature Review
Early research, such as Anderson (1976/2002) and Wadensjö (1998), hinted at interpreter visibility and thereby challenged the notion of invisibility. Angelelli (2004) formally defined interpreter visibility as the establishment of a separate speaking position, sparking debates about the ethics of various interpreter actions. Subsequent studies examined interpreter-initiated visibility using concepts such as 'non-renditions' (Wadensjö, 1998) and 'text ownership' (Angelelli, 2004), focusing on medical, legal, and educational settings. Research on speaker-initiated visibility, however, has been conducted primarily on simultaneous interpreting at the European Parliament (Bartłomiejczyk, 2017; Diriker, 2004; Duflou, 2012, 2016) and remains scarce for consecutive interpreting contexts in Asia. This gap motivates the present study.
Methodology
This study employed a multimodal conversation analysis (CA) approach to analyze 98 excerpts from eight AIT press conferences (2006-2012). The data comprised video recordings and transcriptions of speeches by three AIT Directors (Stephen Young, William Stanton, and Christopher Marut), consecutively interpreted from English into Chinese by three Taiwanese interpreters. The methodology involved four steps: (1) delimitation of the data to speakers' references to interpreters; (2) division of the data into 98 units of analysis, each comprising a speaker's reference and the interpreter's response; (3) multimodal annotation and transcription in ELAN, covering verbal and nonverbal cues (gaze, gestures, posture, etc.); and (4) identification of topics within the units of analysis according to the nature of the speaker-interpreter interaction. The study took a micro-analytical perspective, examining the interplay of verbal and nonverbal resources to identify overarching topics related to interpreter visibility.
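To make step (4) more concrete, the sketch below shows one way topic frequencies could be tallied once each unit of analysis has been exported from ELAN as a tab-delimited file. This is purely illustrative and not part of the original study: the file name (ait_units.tsv), the 'topic' column, and the label spellings are assumptions made for the example.

import csv
from collections import Counter

# Hypothetical topic labels; the actual coding scheme in the study may differ.
TOPIC_LABELS = {"appreciation", "reminder", "confirmation", "repair", "humor", "care"}

def tally_topics(path: str) -> Counter:
    """Count how often each topic label occurs across the annotated units."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")  # assumed tab-delimited ELAN export
        for row in reader:
            topic = row.get("topic", "").strip().lower()  # assumed column name
            if topic in TOPIC_LABELS:
                counts[topic] += 1
    return counts

if __name__ == "__main__":
    for topic, n in tally_topics("ait_units.tsv").most_common():
        print(f"{topic}: {n}")

Run on an export of the 98 units, such a tally would reproduce the frequency profile reported in the Key Findings (e.g., repairs appearing most often); the figures themselves, of course, come from the study's own annotation rather than from this sketch.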
Key Findings
The analysis revealed six distinct topics represented in speaker references to interpreters: (1) Appreciation: speakers expressed gratitude, often with nonverbal cues such as gaze and gestures, prompting interpreter responses such as non-renditions and shifts in footing. (2) Reminders: speakers reminded journalists to pause for the interpreter, demonstrating awareness of the constraints of consecutive interpreting and fostering collaboration. (3) Confirmation: speakers confirmed the interpreters' understanding and accuracy, using both verbal and nonverbal cues to monitor the interpretation. (4) Repair: speakers corrected interpreter errors, showing high linguistic competence and active engagement in ensuring accuracy; interpreter responses included apologies and corrections. (5) Humor: speakers used humor to create a relaxed atmosphere, drawing interpreters into the interaction. (6) Care: speakers showed consideration, for example by offering the interpreter water. Speaker-initiated repairs were the most frequent (31 instances), followed by confirmations (25) and humorous interactions (21); appreciation, reminders, and expressions of care occurred less often. Multimodal analysis demonstrated how nonverbal cues enhanced interpreter visibility and contributed to a more relaxed communicative environment.
Discussion
The findings challenge the traditional notion of the invisible interpreter, particularly in high-stakes contexts such as political press conferences. Despite the formal setting, the interpreters at AIT actively participated and collaborated with speakers, taking on roles such as 'ice-breaker' and 'rectifier.' The frequency of speaker references underscores the interpreters' active role in ensuring effective communication. The study also demonstrates the contextual nature of interpreter visibility and supports multimodal conversation analysis as a valuable method for exploring interpreter-mediated communication. The high frequency of mentions, especially repairs, suggests a collaborative environment focused on accuracy, while the informal humor and expressions of care point to the rapport that developed between interpreters and speakers. Together, these findings indicate a shared responsibility for accurate and effective communication in such settings.
Conclusion
This study offers a novel perspective on interpreter visibility by focusing on speaker-initiated references in the context of AIT press conferences. The findings demonstrate that the interpreters are active participants who collaboratively shape the communicative environment, and multimodal CA proved effective in revealing nuanced aspects of speaker-interpreter interaction. Future research could examine interpreters' perceptions of their own visibility, as well as audience perceptions, extending the scope beyond the speaker's actions alone.
Limitations
The study is limited by the scope of its data, which derive primarily from press conferences featuring Director Stephen Young; the limited data from the other Directors and the small number of conferences analyzed may restrict the generalizability of the findings. The study also lacks data on how the interpreters and the audience themselves perceived interpreter visibility. Future research incorporating such data would deepen understanding of the phenomenon.
Listen, Learn & Level Up
Over 10,000 hours of research content in 25+ fields, available in 12+ languages.
No more digging through PDFs—just hit play and absorb the world's latest research in your language, on your time.
listen to research audio papers with researchbunny