Translation Quality: How to Deal with It?

KantanMT started the New Year on a high note with the addition of the Turkish Language Service Provider, Transistent, to the KantanMT Preferred MT Supplier partner program.

Selçuk Özcan, Transistent’s Co-founder, has given KantanMT permission to publish his blog post on Translation Quality. This post was originally published in Dragosfer and the Transistent Blog.


The word quality has several meanings; according to Merriam-Webster’s dictionary, one of them is “a high level of value or excellence.” How should one deal with this idea of “excellence” when the issue at hand is translation quality? What is required, it seems, is a more pragmatic and objective answer to that question.

This brings us to another question: how can an approach be objective? Certainly, the issue should be assessed through empirical findings. But how? We basically need an assessment procedure with standardized metrics. Here we encounter yet another issue: the standardization of translation quality. From here on, we need to relate these concepts to the context itself in order to make them clear.

[Image: monolingual issues vs. bilingual issues in the translation process]

As is widely known, three sets of factors affect the quality of the translation process in general: the source text’s monolingual issues, the target text’s monolingual issues, and bilingual issues. Analyzing these defines the quality of the work done. Nevertheless, the procedure should be based on the requirements of the domain, the audience and the linguistic structure of both languages (source and target); and at each step, this key question should be considered: ‘Does the TT serve the intended purpose?’

We still have not dealt with the standardization and quality of acceptable TTs. The concept of an “acceptable translation” has been discussed throughout the history of translation studies, and no one has been able to precisely define its requirements; a further study on dynamic QA models would need to go into more detail. There are various QA approaches and models. In most of them, an acceptable translation falls somewhere between bad and good quality, depending on the domain and target audience. The quality level is measured through translation error rates developed to assess MT output (BLEU, F-Measure and TER), and there are four commonly accepted quality levels: bad, acceptable, good and excellent.
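As a rough sketch of how one such error rate could map onto those four levels, here is a simplified word-level TER (edit distance divided by reference length, ignoring the shift operation of the full metric). The band thresholds are purely illustrative, not taken from any published model:

```python
def word_edit_distance(hyp, ref):
    """Levenshtein distance over word tokens."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def simple_ter(hypothesis, reference):
    """Simplified TER: edit distance / reference length (lower is better)."""
    hyp, ref = hypothesis.split(), reference.split()
    return word_edit_distance(hyp, ref) / max(len(ref), 1)

def quality_band(ter, thresholds=(0.7, 0.3, 0.1)):
    """Map a TER score onto the four levels; thresholds are illustrative."""
    if ter > thresholds[0]:
        return "bad"
    if ter > thresholds[1]:
        return "acceptable"
    if ter > thresholds[2]:
        return "good"
    return "excellent"
```

In practice one would use an established implementation of BLEU or TER rather than this toy version, and calibrate the thresholds per domain and audience.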

The formula is simple: a TT containing more errors is of worse quality. However, the errors should be correlated with the context and many other factors, such as their importance for the client and the expectations of the audience. These factors define an error’s severity as minor, major or critical. A robust QA model should be based upon accurate error categorization so that reliable results may be obtained.
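Severity-weighted scoring of this kind can be sketched in a few lines. The weights, the per-1000-words normalization and the pass threshold below are illustrative assumptions in the spirit of LISA-derived models, not values from any specific standard:

```python
# Illustrative severity weights; real QA models calibrate these per project.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def weighted_error_score(errors, word_count):
    """Weighted penalty points per 1,000 words.

    `errors` is a list of severity labels found during review,
    e.g. ["minor", "major", "minor"].
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for severity in errors)
    return penalty / word_count * 1000

def passes_threshold(errors, word_count, max_score=25):
    """Pass/fail check against a project-specific threshold (assumed value)."""
    return weighted_error_score(errors, word_count) <= max_score
```

The design point is that one critical error can outweigh many minor ones, which matches the idea above that severity, not raw error count, should drive the verdict.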

We have tried to briefly describe the concept of QA modeling. Now, let’s see what goes on in practice. There are three publicly available QA models which have inspired many software developers in their QA tool development. One of them is the LISA (Localization Industry Standards Association) QA Model. The LISA Model is very well known in the localization and translation industry, and many company-specific QA models have been derived from it.

The second is the J2450 standard created by SAE (Society of Automotive Engineers), and the last is the EN 15038 standard, approved by CEN (Comité Européen de Normalisation) in 2006. All of the above-mentioned models are static QA models; one has to create one’s own framework in compliance with the demands of each project. Nowadays, many institutions (such as the EU Commission and TAUS) have been working on dynamic QA models, which make it possible to create different metrics for different translation/localization projects.

About Selçuk Özcan

Selçuk Özcan has more than 5 years’ experience in the language industry and is a co-founder of Transistent Language Automation Services. He holds degrees in Mechanical Engineering and Translation Studies and has a keen interest in linguistics, NLP, language automation procedures, agile management and technology integration. Selçuk is mainly responsible for building high-quality production models, including Quality Estimation, and for deploying the ‘train the trainers’ model. He also teaches Computer-aided Translation and Total Quality Management at Istanbul Yeni Yuzyil University, Translation & Interpreting Department.

Read more about KantanMT’s partnership with Transistent in the official News Release, or if you are interested in joining the KantanMT Partner Program, contact Louise (info@kantanmt.com) for more details on how to get involved.
