KantanMT Post-editing Machine Translation
So far in this KantanMT blog series on Machine Translation post-editing we have looked at: automated post-editing, why it is becoming popular within the localization industry, how you can reduce your post-editing times, and the steps you can take to achieve both understandable (‘fit for purpose’) and close-to-human post-editing standards. In this post we are going to focus on perhaps one of the most difficult issues in providing a post-editing service, and that’s pricing.

What’s the problem?
The problem, put simply, is that there is no set way for Language Service Providers (LSPs) to price post-editing projects for their clients. That’s because LSPs must contend with a range of variables in the post-editing process, each of which can affect the final cost. Lorena Guerra, writing in 2003, sums up one of the main issues: “Whereas Human Translation is mainly based on the unit ‘word’ as a cost base, in the case of post-editing, as outlined by Spalink et al. the cost base ‘word’ is much harder to justify”. LSPs cannot charge for post-editing a “word” when their post-editors may have just corrected a letter or perhaps even a broader stylistic problem. There are also other items to consider; here are just a few:

  • The time it takes to complete the post-editing process
  • The post-editing standards required by the client
  • The number of segments requiring higher post-editing quality compared to those requiring a lower post-editing standard
  • Varying segment lengths
  • The quality of the raw Machine Translation output
  • Varying degrees of post-editing effort required for different language pairs

LSPs and their clients must not only set a price, but also agree upon how that price is reached. Establishing a pricing framework that considers all parties is imperative.

Pricing Machine Translation Post-Editing
So how can Localization Service Providers develop appropriate frameworks for pricing Machine Translation post-editing? TAUS has recently published a public consultation entitled “Best Practice Guidelines for Pricing MT Post Editing” that features guidelines to help solve this problem. Let’s take a look at the key points. Note: These TAUS guidelines are preliminary and are subject to review while the public consultation is ongoing.

1. Things to Always Remember
TAUS says that no matter what kind of framework you use for pricing Machine Translation post-editing, there are certain things to always keep in mind.

Set a price up-front
Ensure that your framework can provide an estimation of the cost of post-editing a text at the outset; re-evaluate prices when you evaluate or roll out a new version of an engine.

Involve all parties
When building your pricing framework, include all parties involved in your Machine Translation process. This is to ensure that everyone agrees “that the pricing model reflects the effort involved”.

Take the content to be post-edited into account
Consider the variables outlined earlier in this post such as post-editing different language pairs and post-editing to various quality standards. All of these factors need to be assessed as part of your pricing framework.


2. Building a Pricing Model
TAUS recommends combining a number of approaches to build your pricing framework. These are Automated Quality Score (e.g. TER, BLEU, F-Measure), Human Assessment, and Productivity Assessment. TAUS adds that “Productivity Assessment should always be used” regardless of what approach is taken.

Automated quality scores
There are a number of automated measurement tools that can be used in combination; KantanMT currently deploys BLEU, TER, and F-Measure.
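
To give a feel for what these automated metrics measure, here is a minimal, purely illustrative Python sketch that scores the word overlap between raw Machine Translation output and a post-edited reference as a simple F-Measure. It is a toy stand-in for the real BLEU, TER, and F-Measure implementations, not the scoring KantanMT uses.

```python
# Illustrative only: a toy word-level F-Measure between raw MT output and a
# post-edited reference. The production BLEU, TER, and F-Measure metrics are
# more sophisticated; this only shows the basic idea of scoring text overlap.
from collections import Counter

def word_fmeasure(mt_output: str, reference: str) -> float:
    mt_tokens = Counter(mt_output.lower().split())
    ref_tokens = Counter(reference.lower().split())
    overlap = sum((mt_tokens & ref_tokens).values())  # shared tokens, with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(mt_tokens.values())
    recall = overlap / sum(ref_tokens.values())
    return 2 * precision * recall / (precision + recall)

print(word_fmeasure("the cat sat on mat", "the cat sat on the mat"))  # ~0.91
```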

Human assessment
This involves steps such as human post-editors checking both the quality of raw Machine Translation output and post-edited content.

Post-editing productivity assessment
TAUS defines this as “calculating the difference in speed between translating from scratch and post-editing Machine Translation output”. Speeds may change if you deploy a new engine, so each time a “new ‘production’ ready engine” is rolled out make sure that you perform new productivity assessments.
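
As a rough illustration, the sketch below works through the arithmetic of such a productivity assessment: it compares words per hour when translating from scratch against words per hour when post-editing, and then derives a per-word post-editing rate from the speed ratio. The rate derivation and all of the figures are illustrative assumptions, not a formula prescribed by TAUS.

```python
# A minimal sketch of a productivity assessment calculation. The way the
# post-editing rate is derived from the productivity gain below is an
# illustrative assumption, not a formula prescribed by TAUS.

def words_per_hour(word_count: int, minutes_spent: float) -> float:
    return word_count / (minutes_spent / 60)

translation_speed = words_per_hour(2500, minutes_spent=600)   # translating from scratch
postediting_speed = words_per_hour(2500, minutes_spent=360)   # post-editing MT output

productivity_gain = (postediting_speed - translation_speed) / translation_speed
print(f"Productivity gain: {productivity_gain:.0%}")          # 67%

# One possible pricing heuristic: scale the full translation rate by the ratio
# of the two speeds, so faster post-editing means a lower per-word rate.
full_rate_per_word = 0.12                                     # hypothetical rate
pe_rate_per_word = full_rate_per_word * translation_speed / postediting_speed
print(f"Suggested post-editing rate: {pe_rate_per_word:.3f} per word")  # 0.072
```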

To find out more about developing a Machine Translation post-editing pricing framework, check out TAUS’s public consultation “Best Practice Guidelines for MT Post-Editing”. Note: The public consultation on these preliminary guidelines closes on Tuesday July 30th 2013, and the official guidelines will be published on Tuesday August 6th 2013.

KantanAnalytics
This week, KantanMT announced the forthcoming release of KantanAnalytics. This technology, which has been developed in partnership with the CNGL Centre for Global Intelligent Content, provides segment-level quality analysis for Machine Translation output.

By obtaining a quality score for each segment of a Machine Translated document, post-editors can accurately identify the segments that require the most post-editing time and those that already meet the client’s quality standards. This will help KantanMT members to calculate post-editing effort and price.
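
To illustrate how segment-level scores could feed into an effort and price estimate, here is a hypothetical Python sketch that buckets segments by quality score and applies a different per-word rate to each band. The score bands, rates, and data structure are invented for illustration; they do not represent KantanAnalytics itself.

```python
# A hypothetical sketch of how segment-level quality scores could feed into a
# post-editing estimate. The score bands, rates, and the quality_score field
# are illustrative assumptions; they are not the KantanAnalytics API.

segments = [
    {"words": 12, "quality_score": 92},
    {"words": 25, "quality_score": 74},
    {"words": 18, "quality_score": 41},
]

def per_word_rate(quality_score: float) -> float:
    if quality_score >= 85:
        return 0.00   # publishable as-is, no post-editing charged
    if quality_score >= 60:
        return 0.04   # light post-editing
    return 0.08       # full post-editing, closer to the human translation rate

total = sum(seg["words"] * per_word_rate(seg["quality_score"]) for seg in segments)
print(f"Estimated post-editing cost: {total:.2f}")  # 25*0.04 + 18*0.08 = 2.44
```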

That brings us to the end of our blog series on Machine Translation post-editing. We hope you have enjoyed taking this “post-editing adventure” with us and are able to put the advice in this series to good use. Please feel free to comment on this post or any of the previous posts; we’d love to hear from you.

If you want to find out more about KantanMT and KantanAnalytics, visit KantanMT.com or email info@kantanmt.com.