Silke Gutermuth & Silvia Hansen-Schirra, University of Mainz, Germany: Post-editing machine translation – a usability test for professional translation settings

Presentation transcript:

Silke Gutermuth & Silvia Hansen-Schirra, University of Mainz, Germany
Post-editing machine translation – a usability test for professional translation settings
EyeTrackBehavior 2012 | October 9-10 | Leuven

Post-editing?
“term used for the correction of machine translation output by human linguists/editors” (Veale & Way 1997)
“taking raw machine translated output and then editing it to produce a 'translation' which is suitable for the needs of the client” (one student explaining post-editing to another)
“is the process of improving a machine-generated translation with a minimum of manual labour” (TAUS Report 2010)

Degrees of Post-editing
Light or fast post-editing: essential corrections only; time factor: quick
Full post-editing: more corrections => higher quality; time factor: slow
(O’Brien et al. 2009)

Background
Motivation: evaluation of machine translation (MT), post-editing of MT, eye-enhanced CAT workbenches (e.g. O’Brien 2011, Doherty et al. 2010, Carl & Jakobsen 2010, Hyrskykari 2006)
Project: in cooperation with Copenhagen Business School (centre/Institutter/CRITT/Menu/Forskningsprojekter)
Experiment:
English-German
translation vs. post-editing vs. editing
6 source texts (ST) with different complexity levels (Hvelplund 2011)
12 professional translators, 12 semi-professional translators
eye-tracking (Tobii TX 300), key-logging (Translog), retrospective interviews, questionnaires
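
As a rough sketch only (not the authors’ actual pipeline; every column name and value below is a hypothetical placeholder), the process data from such a design could be organised as a trial-level table that combines the design factors with aggregated eye-tracking and key-logging measures, e.g. in Python/pandas:

import pandas as pd

# Hypothetical trial-level records: design factors plus aggregated
# eye-tracking and key-logging measures. All values are invented placeholders.
trials = pd.DataFrame(
    [
        ("P01", "professional",      "translation",  "T1", 1, 412.3, 1250, 221.4, 1830),
        ("P01", "professional",      "post-editing", "T2", 2, 268.9,  910, 198.7,  640),
        ("P13", "semi-professional", "editing",      "T3", 3, 301.5, 1005, 207.2,  580),
    ],
    columns=["participant", "group", "task", "text", "complexity",
             "total_time_s", "fixation_count", "mean_fix_dur_ms", "keystrokes"],
)

# Mean processing time per task and translator group -- the kind of
# aggregation behind the processing-time comparisons on the following slides
print(trials.groupby(["group", "task"])["total_time_s"].mean())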

Translators’ self-estimation

Translators’ self-estimation

Translators’ evaluation of MT quality

Translators’ evaluation of MT quality

Translators’ evaluation of MT quality
Professional translators’ conscious, subjective rating of machine-translated output is extremely negative. Can eye-tracking, which deals with objective and measurable facts, tell a different story?

Processing time

Processing time
Edited texts quite often suffer from a distortion of meaning => the source text is needed for a good-quality translation => post-editing

Processing of ST vs. TT: Translation

Processing of ST vs. TT: Translation vs. Post-editing

Processing metrics: Translation vs. Post-editing
Translation: correlation between increasing ST complexity and TT processing metrics
Post-editing: no significant influence of ST complexity on TT processing metrics
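
Purely as an illustration of how such a pattern could be checked (this is not the study’s analysis; the data frame and its column names are hypothetical, and the numbers are invented), a rank correlation between ST complexity level and a TT processing metric could be computed separately per task:

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical, invented data: ST complexity level and total fixation
# duration on the target text (in ms) for the two tasks.
trials = pd.DataFrame({
    "task":          ["translation"] * 6 + ["post-editing"] * 6,
    "complexity":    [1, 2, 3, 4, 5, 6] * 2,
    "tt_fix_dur_ms": [21000, 25000, 27000, 33000, 36000, 41000,
                      18000, 19000, 17000, 20000, 18000, 19000],
})

for task in ("translation", "post-editing"):
    sub = trials[trials["task"] == task]
    rho, p = spearmanr(sub["complexity"], sub["tt_fix_dur_ms"])
    print(f"{task}: Spearman rho = {rho:.2f}, p = {p:.3f}")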

Processing of ST: Translation vs. Post-editing
=> post-editing more efficient. WHY?

Fixation Duration of clauses
Average fixation duration (in milliseconds) per clause

Fixation Duration of clauses
Average fixation duration (in milliseconds) per clause
Good quality of MT for non-finite clauses:
ST: to end the suffering → TT-P: um das Leiden zu beenden
ST: Although emphasizing that → TT-P: Obwohl betont wird, dass
ST: to protest against → TT-P: um gegen … zu protestieren
ST: in the wake of fighting flaring up again in Darfur → TT-P: im Zuge des Kampfes gegen ein erneutes Aufflammen in Darfur
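
For readers interested in how the per-clause metric can be derived, a minimal sketch follows (not the study’s actual pipeline; it assumes fixations have already been mapped to clause-level areas of interest, and all names and numbers are invented for illustration):

import pandas as pd

# Invented placeholder fixation records, each already assigned to a clause type
fixations = pd.DataFrame({
    "participant": ["P01", "P01", "P01", "P02", "P02", "P02"],
    "clause_type": ["non-finite", "finite", "non-finite",
                    "non-finite", "finite", "finite"],
    "duration_ms": [180, 260, 205, 190, 240, 255],
})

# Average fixation duration (in milliseconds) per clause type,
# i.e. the metric reported on this slide
print(fixations.groupby("clause_type")["duration_ms"].mean())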

Preliminary Conclusions
Efficient post-editing is possible under the following conditions:
good machine translation quality
post-editors who are language experts, i.e. they need:
knowledge of the conventions of the source and target language
knowledge of the text type and register

What’s next?
Analysis of other contrastive differences and gaps
Analysis of ambiguities and processing problems
Comparison of complexity levels
Analysis of monitoring processes during TT production (with Translog)
Comparison of professionals vs. semi-professionals
Correlations between process data and quality of participants’ outputs
Comparison with other translation pairs

Bibliography
Carl, Michael and Jakobsen, Arnt Lykke (2010): Relating Production Units and Alignment Units in Translation Activity Data. In: Proceedings of the International Workshop on Natural Language Processing and Cognitive Science (NLPCS), Madeira, Portugal.
Doherty, Stephen, O’Brien, Sharon and Carl, Michael (2010): Eye tracking as an MT evaluation technique. Machine Translation, 24(1), pp. 1-13.
Hvelplund, Kristian Tangsgaard (2011): Allocation of cognitive resources in translation: an eye-tracking and key-logging study. PhD thesis, Department of International Language Studies and Computational Linguistics, Copenhagen Business School.
Hyrskykari, Aulikki (2006): Eyes in Attentive Interfaces: Experiences from Creating iDict, a Gaze-Aware Reading Aid. Dissertation, Tampere University Press.
O’Brien, Sharon, Roturier, Johann and De Almeida, Giselle (2009): Post-Editing MT Output: Views from the researcher, trainer, publisher and practitioner.
O’Brien, Sharon (2011): Towards Predicting Post-Editing Productivity. Machine Translation, 25(3).
Postediting in Practice. A TAUS Report, March 2010, p. 6.
Veale, T. and Way, A. (1997): Gaijin: A Bootstrapping Approach to Example-Based Machine Translation. In: Recent Advances in Natural Language Processing, International Conference.

Contact: Silke Gutermuth & Silvia Hansen-Schirra