1
Andrejs Vasiļjevs, Chairman of the Board, andrejs@tilde.com
data is core
LOCALIZATION WORLD, PARIS, JUNE 5, 2012
2
Language technology developer
Localization service provider
Leadership in smaller languages
Offices in Riga (Latvia), Tallinn (Estonia) and Vilnius (Lithuania)
135 employees
Strong R&D team: 9 PhDs and candidates
3
MT machine translation
4
INNOVATION disruptive
5
MT paradigms

Rule-based MT:
High-quality translation in specialized domains
Requires highly qualified linguists, researchers and software developers
Time- and resource-consuming
Difficult to evolve

Statistical MT:
Translation and linguistic knowledge is derived from data
Relatively easy and quick to develop
Requires huge amounts of parallel and monolingual data
Translation quality is inconsistent and can differ dramatically from domain to domain
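A minimal sketch (not Tilde's system or the Moses implementation) of the statistical idea: translation options and their scores are counted from parallel data rather than written as rules. The toy corpus, words and counts below are invented for illustration.

from collections import defaultdict

# Toy parallel corpus (invented examples) from which translation counts are taken.
parallel = [
    ("saglabāt failu", "save the file"),
    ("atvērt failu", "open the file"),
    ("saglabāt dokumentu", "save the document"),
]

# Estimate word translation candidates by simple co-occurrence counting.
counts = defaultdict(lambda: defaultdict(int))
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            counts[s][t] += 1

def best_translation(word):
    # Pick the most frequent co-occurring target word; copy the input if unseen.
    options = counts.get(word)
    return max(options, key=options.get) if options else word

# Real SMT works with phrases, reordering and language models; this word-by-word
# toy also shows why quality depends so heavily on the training data.
print(" ".join(best_translation(w) for w in "atvērt failu".split()))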
6
CHALLENGE
9
one size fits all?
10
DATA
12
JRC-Acquis: the total body of European Union law applicable in the EU Member States. http://langtech.jrc.it/JRC-Acquis.html
13
DGT-TM: the DGT Multilingual Translation Memory of the Acquis Communautaire. http://langtech.jrc.it/DGT-TM.html
14
OPUS: parallel data collected from the Web by Uppsala University. 90 languages, 3,800 language pairs, 2.7B parallel units. http://opus.lingfil.uu.se
15
open European language resource infrastructure http://www.meta-net.eu
16
Data for SMT training
17
PLATFORM
18
Moses toolkit

[ttable-file]
0 0 5 /.../unfactored/model/phrase-table.0-0.gz

% ls steps/1/LM_toy_tokenize.1* | cat
steps/1/LM_toy_tokenize.1
steps/1/LM_toy_tokenize.1.DONE
steps/1/LM_toy_tokenize.1.INFO
steps/1/LM_toy_tokenize.1.STDERR
steps/1/LM_toy_tokenize.1.STDERR.digest
steps/1/LM_toy_tokenize.1.STDOUT

% train-model.perl \
    --corpus factored-corpus/proj-syndicate \
    --root-dir unfactored \
    --f de --e en \
    --lm 0:3:factored-corpus/surface.lm:0

% moses -f moses.ini -lmodel-file "0 0 3 ../lm/europarl.srilm.gz"

use-berkeley = true
alignment-symmetrization-method = berkeley
berkeley-train = $moses-script-dir/ems/support/berkeley-train.sh
berkeley-process = $moses-script-dir/ems/support/berkeley-process.sh
berkeley-jar = /your/path/to/berkeleyaligner-2.1/berkeleyaligner.jar
berkeley-java-options = "-server -mx30000m -ea"
berkeley-training-options = "-Main.iters 5 5 -EMWordAligner.numThreads 8"
berkeley-process-options = "-EMWordAligner.numThreads 8"
berkeley-posterior = 0.5

tokenize
    in: raw-stem
    out: tokenized-stem
    default-name: corpus/tok
    pass-unless: input-tokenizer output-tokenizer
    template-if: input-tokenizer IN.$input-extension OUT.$input-extension
    template-if: output-tokenizer IN.$output-extension OUT.$output-extension
    parallelizable: yes

working-dir = /home/pkoehn/experiment
wmt10-data = $working-dir/data
19
build your own MT engine
20
Tilde (Coordinator), LATVIA
University of Edinburgh, UK
Uppsala University, SWEDEN
Copenhagen University, DENMARK
University of Zagreb, CROATIA
Moravia, CZECH REPUBLIC
SemLab, NETHERLANDS
21
Cloud-based self-service MT factory
Repository of parallel and monolingual corpora for MT generation
Automated training of SMT systems from specified collections of data
Users can specify particular training data collections and build customised MT engines from these collections
Users can also use the LetsMT! platform to tailor an MT system to their needs from their non-public data
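At API level, self-service engine training could look roughly like the sketch below. The endpoint, field names and collection names are invented placeholders for illustration, not the documented LetsMT! API.

import requests  # any HTTP client would do

API = "https://example-mt-platform/api"          # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-key>"}  # placeholder credential

# Ask the platform to train a custom engine from chosen corpus collections
# (collection names are invented for illustration).
payload = {
    "name": "en-lv-it-localization",
    "source": "en",
    "target": "lv",
    "corpora": ["localization-tm", "dgt-tm", "lv-web-news"],
    "private": True,   # train on non-public data without exposing it
}
resp = requests.post(f"{API}/engines", json=payload, headers=HEADERS, timeout=30)
print(resp.json())  # e.g. an engine id and its training status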
22
Resource Repository
Stores SMT training data
Supports different formats – TMX, XLIFF, PDF, DOC, plain text
Converts to a unified format
Performs format conversions and alignment
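As one example of that conversion step, the sketch below pulls aligned segment pairs out of a TMX translation memory into plain parallel text. It is deliberately simplified (a real converter also handles XLIFF, DOC, inline markup and encoding issues); the file name and language codes are just examples.

import xml.etree.ElementTree as ET

def tmx_to_parallel(path, src_lang="en", tgt_lang="lv"):
    """Yield (source, target) segment pairs from a TMX translation memory."""
    xml_lang = "{http://www.w3.org/XML/1998/namespace}lang"
    for tu in ET.parse(path).iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            lang = tuv.get(xml_lang) or tuv.get("lang")  # older TMX files use "lang"
            seg = tuv.find("seg")
            if lang and seg is not None and seg.text:
                segs[lang.lower()[:2]] = seg.text.strip()
        if src_lang in segs and tgt_lang in segs:
            yield segs[src_lang], segs[tgt_lang]

# Write a unified tab-separated corpus ready for SMT training:
# with open("corpus.en-lv.tsv", "w", encoding="utf-8") as out:
#     for src, tgt in tmx_to_parallel("memory.tmx"):
#         out.write(src + "\t" + tgt + "\n")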
23
user-driven machine translation
Put users in control of their data
Fully public or fully private should not be the only choice
Data can be used for MT generation without exposing it
Empower users to create custom MT engines from their data
24
integration
Integration with CAT tools
Integration in web pages
Integration in web browsers
API-level integration
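API-level integration is the building block under the other three; a client call typically has the shape sketched below. The URL, parameters and response field are invented placeholders, not the actual Tilde or LetsMT! interface.

import requests

def translate(text, system_id, api_key):
    # Hypothetical endpoint and field names, for illustration only.
    resp = requests.get(
        "https://example-mt-platform/api/translate",
        params={"systemID": system_id, "text": text, "appID": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("translation", "")

# A CAT tool, web page or browser plug-in would call this per segment and
# show the result as one more suggestion next to translation memory matches.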
25
Integration of MT in SDL Trados
29
use case FORTERA
30
EVALUATION
31
Previous Work
Keyboard monitoring of post-editing (O'Brien, 2005)
Productivity of MS Office localization (Schmidtke, 2008): 5-10% productivity gain for SP, FR, DE
Adobe (Flournoy and Duran, 2009): 22%-51% productivity increase for RU, SP, FR
Autodesk Moses SMT system (Plitt and Masselot, 2010): 74% average productivity increase for FR, IT, DE, SP
32
Evaluation at Tilde
Latvian: about 1.6 M native speakers
Highly inflectional – ~22 M possible word forms in total
Official EU language
Tilde English–Latvian MT system
IT Software Localization domain
Evaluation of translators' productivity
33
English-Latvian data

Bilingual corpus                  Parallel units
Localization TM                   1 290 K
DGT-TM                            1 060 K
OPUS EMEA                         970 K
Fiction                           660 K
Dictionary data                   510 K
Web corpus                        900 K
Total                             5 370 K

Monolingual corpus                Words
Latvian side of parallel corpus   60 M
News (web)                        250 M
Fiction                           9 M
Total, Latvian                    319 M
34
MT Integration into Localization Workflow
Evaluate original / assign Translator and Editor
Analyze against TMs
MT-translate new sentences
Translate using translation suggestions from TMs and MT
Evaluate translation quality / Edit
Fix errors
Ready translation
35
Evaluation of Productivity
The key interest of the localization industry is to increase the productivity of the translation process while maintaining the required quality level
Productivity was measured as the translation output of an average translator in words per hour
5 translators participated in the evaluation, including both experienced and new translators
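The measure itself is straightforward; a small sketch with invented figures shows how the productivity gain reported later is computed.

def words_per_hour(words_translated, hours_spent):
    return words_translated / hours_spent

# Invented example figures, only to illustrate the calculation.
baseline = words_per_hour(2500, 5.0)   # translating with TM suggestions only
with_mt  = words_per_hour(3300, 5.0)   # translating with TM + MT suggestions
gain = (with_mt - baseline) / baseline * 100
print(f"{baseline:.0f} vs {with_mt:.0f} words/hour: {gain:.1f}% productivity increase")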
36
Evaluation of Quality
Performed by human editors as part of their regular QA process
The result of the translation process was evaluated; editors did not know whether or not MT had been applied to assist the translator
Comparison to a reference translation is not part of this evaluation
Tilde's standard QA assessment form was used, covering the following text quality areas: Accuracy, Spelling and grammar, Style, Terminology
37
Tilde Localization QA assessment applied in the evaluation

QA Grades
Error Score (sum of weighted errors)   Resulting Quality Evaluation
0…9                                    Superior
10…29                                  Good
30…49                                  Mediocre
50…69                                  Poor
>70                                    Very poor
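A small sketch of that grading logic: per-category error counts are weighted, summed into an error score and mapped to the bands above. The weights are invented for illustration; the real assessment form defines its own.

# Hypothetical error weights per category; the actual QA form defines these.
WEIGHTS = {"accuracy": 3, "terminology": 2, "spelling_grammar": 1, "style": 1}

# Upper bound of each grade band, following the table above.
GRADES = [(9, "Superior"), (29, "Good"), (49, "Mediocre"), (69, "Poor")]

def quality_grade(error_counts):
    score = sum(WEIGHTS[cat] * n for cat, n in error_counts.items())
    for upper, grade in GRADES:
        if score <= upper:
            return score, grade
    return score, "Very poor"

print(quality_grade({"accuracy": 2, "terminology": 3, "spelling_grammar": 5, "style": 2}))
# -> (19, 'Good')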
38
Evaluation data
► 54 documents in IT domain
► 950-1050 adjusted words in each document
► Each document was split in half:
► the first part was translated using suggestions from TM only
► the second half was translated using suggestions from both TM and MT
39
Latvian: 32.9% productivity increase*

* Skadiņš R., Puriņš M., Skadiņa I., Vasiļjevs A., "Evaluation of SMT in localization to under-resourced inflected language", in Proceedings of the 15th International Conference of the European Association for Machine Translation (EAMT 2011), pp. 35-40, May 30-31, 2011, Leuven, Belgium
40
Evaluation at Moravia
► IT Localization domain
► Systems trained on the LetsMT! platform
► English – Czech translation: 25.1% productivity increase; error score increase from 19 to 27, still at the GOOD grade (<30)
► English – Polish translation: 28.5% productivity increase; error score increase from 16.8 to 23.6, still at the GOOD grade (<30)
41
Productivity increase: Czech 25.1%, Polish 28.5%, Slovak 25%*

* For Czech and Polish, the formal evaluation was done by Moravia; for Slovak, the productivity increase was estimated by Fortera
42
MORE DATA
43
ACCURAT TOOLKIT
corpora collection tools
comparability metrics
named entity recognition tools
terminology extraction tools
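Of these components, a comparability metric is the simplest to illustrate. The toy score below just measures how many source words have a dictionary translation that appears in the target-language document; it is far cruder than the actual ACCURAT metrics, and the dictionary and document fragments are invented.

def comparability(src_words, tgt_words, dictionary):
    """Toy document comparability score in [0, 1]: share of translatable
    source words whose dictionary translation occurs in the target document."""
    tgt = {w.lower() for w in tgt_words}
    translatable = [w.lower() for w in src_words if w.lower() in dictionary]
    if not translatable:
        return 0.0
    hits = sum(1 for w in translatable if dictionary[w] in tgt)
    return hits / len(translatable)

# Invented toy dictionary and document fragments, for illustration only.
dictionary = {"engine": "dzinējs", "oil": "eļļa", "filter": "filtrs"}
print(comparability(["Engine", "oil", "and", "filter"],
                    ["dzinēja", "eļļa", "un", "filtrs"], dictionary))  # ≈ 0.67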
44
use case AUTOMOTIVE MANUFACTURER
45
very small translation memories (just 3,500 sentences)
no in-domain corpora in target languages
no money for expensive developments
?
46
data collection workflow
Terminology extraction
Web crawling – parallel and monolingual data
Parallel data extraction from comparable corpora
47
Resulting data
TMs
Terminology glossary
Parallel phrases
Parallel Named Entities
Monolingual target language corpus
48
SMT Training
General domain data as a basis
Domain-specific language model
Impose domain-specific terminology and named entity translations
Add linguistic knowledge on top of the statistical components
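One common way to realise the "domain-specific language model" point is to interpolate an in-domain model with a general one. The sketch below does this with toy unigram probabilities; real systems interpolate full n-gram models (e.g. SRILM/KenLM) and tune the weight on held-out in-domain text. All numbers are invented.

import math

def interpolated_logprob(word, general_lm, domain_lm, lam=0.7, floor=1e-6):
    """Log-probability under a linear interpolation of two toy unigram models."""
    p = lam * domain_lm.get(word, floor) + (1 - lam) * general_lm.get(word, floor)
    return math.log(p)

# Invented toy distributions: the in-domain model prefers automotive terminology.
general_lm = {"the": 0.05, "car": 0.001, "torque": 0.00001}
domain_lm  = {"the": 0.04, "car": 0.01,  "torque": 0.002}

for w in ("car", "torque"):
    print(w, round(interpolated_logprob(w, general_lm, domain_lm), 2))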
49
right data & right tools
50
tilde.com – technologies for smaller languages

The research within the LetsMT! project leading to these results has received funding from the ICT Policy Support Programme (ICT PSP), Theme 5 – Multilingual Web, grant agreement no. 250456