Using F-Measure in Kantan BuildAnalytics

What is F-Measure?

F-Measure is an automated measurement that determines the precision and recall capabilities of a KantanMT engine. It enables you to determine the quality and performance of your KantanMT engine.
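As a rough illustration of how precision and recall combine into the score (a generic token-level sketch, not KantanMT's exact implementation), F-Measure is the harmonic mean of the two:

```python
# Minimal token-level F-Measure sketch, for illustration only --
# KantanMT's own scoring may tokenize and weight segments differently.
from collections import Counter

def f_measure(mt_output: str, reference: str) -> float:
    mt_tokens = Counter(mt_output.lower().split())
    ref_tokens = Counter(reference.lower().split())
    overlap = sum((mt_tokens & ref_tokens).values())  # tokens shared by output and reference
    if overlap == 0:
        return 0.0
    precision = overlap / sum(mt_tokens.values())   # share of output tokens that match
    recall = overlap / sum(ref_tokens.values())     # share of reference tokens recovered
    return 2 * precision * recall / (precision + recall)

print(round(f_measure("the cat sat on the mat", "the cat is on the mat"), 3))  # 0.833
```

Precision rewards the engine for not producing spurious words, recall rewards it for covering the reference, and the harmonic mean penalizes an imbalance between the two.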

  • To see the accuracy and performance of your engine, click on the ‘F-measure Scores’ tab. You will now be directed to the ‘F-measure Scores’ page.


  • Place your cursor on the ‘F-measure Scores Chart’ to see the individual score of each segment. A pop-up will now appear on your screen with details of the segment under these headings: ‘Segment no.’, ‘Score’, ‘Source’, ‘Reference/Target’ and ‘KantanMT Output’.


  • To see the ‘F-measure Scores’ of each segment in a table format, scroll down. You will now see a table with the headings ‘No’, ‘Source’, ‘Reference/Target’, ‘KantanMT Output’ and ‘Score’.
  • To see an even more in-depth breakdown of a particular ‘Segment’, click on the triangle beside the number of the segment you wish to view.
  • To reuse the engine as Test Data, click on the ‘Reuse as Test Data’ button. When you do so, the ‘Reuse as Test Data’ button will change to ‘Delete Test Data’.
  • To download the ‘F-measure Scores’, ‘BLEU Score’ and ‘TER Scores’ of all segments, click on the ‘Download’ button on the ‘F-measure Scores’, ‘BLEU Score’ or ‘TER Scores’ page.

This is one of the features provided by Kantan BuildAnalytics to improve an engine’s quality after its initial training. To see other features used by Kantan BuildAnalytics, please click on the link below. To get more information about KantanMT and the services we provide, please contact our support team at info@kantanmt.com.

Understanding BLEU for Machine Translation


It can often be challenging to measure the fluency of your Machine Translation engine, and that’s where automatic metrics become a very useful tool for the localization engineer.

BLEU is one of the metrics used in KantanAnalytics for quality evaluation. BLEU Score is quick to use, inexpensive to operate, language independent, and correlates highly with human evaluation. It is the most widely used automated method of determining the quality of machine translation.
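To get a hands-on feel for the metric outside the platform, a sentence-level BLEU score can be computed with the NLTK library; this is a generic sketch for experimentation, not KantanMT's internal scorer:

```python
# Illustrative only: sentence-level BLEU with NLTK (pip install nltk).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the engine was trained on domain specific data".split()
hypothesis = "the engine was trained with domain specific data".split()

# Smoothing avoids a zero score when a higher-order n-gram has no match.
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```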

How to use BLEU?

  1. To check the fluency of your KantanMT engine, click on the ‘BLEU Scores’ tab. You will now be directed to the ‘BLEU Score’ page.
  2. Place your cursor on the ‘BLEU Scores Chart’ to see the individual fluency score of each segment. A pop-up will now appear on your screen with details of the segment under these headings: ‘Segment no.’, ‘Score’, ‘Source’, ‘Reference/Target’ and ‘KantanMT Output’.
  3. To see the ‘BLEU Scores’ of each segment in a table format, scroll down. You will now see a table with the headings ‘No’, ‘Source’, ‘Reference/Target’, ‘KantanMT Output’ and ‘Score’.
  4. To see an even more in-depth breakdown of a particular ‘Segment’, click on the triangle beside the number of the segment you wish to view.
  5. To download the ‘BLEU Score’ of all segments, click on the ‘Download’ button on the ‘BLEU Score’ page.

This is one of the features provided by Kantan BuildAnalytics to improve an engine’s quality after its initial training. To see other features used by Kantan BuildAnalytics, please click on the link below. To get more information about KantanMT and the services we provide, please contact our support team at info@kantanmt.com.

Translation Quality: How to Deal with It?

KantanMT started the New Year on a high note with the addition of the Turkish Language Service Provider, Transistent, to the KantanMT Preferred MT Supplier partner program.

Selçuk Özcan, Transistent’s Co-founder has given KantanMT permission to publish his blog post on Translation Quality. This post was originally published in Dragosfer and the Transistent Blog.

 

 

Literally, the word quality has several meanings, one of them being “a high level of value or excellence” according to Merriam-Webster’s dictionary. How should one deal with this idea of “excellence” when the issue at hand is translation quality? It seems that what is required is a more pragmatic and objective answer to the abovementioned question.

This brings us to the question “how could an approach be objective?” Certainly, the issue should be assessed through empirical findings. But how? We are basically in need of an assessment procedure with standardized metrics. Here, we encounter another issue: the standardization of translation quality. From now on, we need to associate these concepts with the context itself in order to make them clear.

Image: monolingual issues and bilingual issues

As is widely known, three sets of factors have an effect on the quality of the translation process in general. Basically, analyzing the source text’s monolingual issues, the target text’s monolingual issues and the bilingual issues defines the quality of the work done. Nevertheless, the procedure should be based on the requirements of the domain, the audience and the linguistic structure of both languages (source and target); and in each step, this key question should be considered: ‘Does the TT serve the intended purpose?’

We still have not dealt with the standardization and quality of acceptable TTs. The concept of “acceptable translation” has always been discussed throughout the history of translation studies, and no one is able to precisely explain the requirements; a further study on dynamic QA models would need to go into such details. There are various QA approaches and models. For most of them, an acceptable translation falls somewhere between bad and good quality, depending on the domain and target audience. The quality level is measured through the translation error rates developed to assess MT outputs (BLEU, F-Measure and TER), and there are four commonly accepted quality levels: bad, acceptable, good and excellent.

The formula is simple: a TT containing more errors is considered to be of worse quality. However, the errors should be correlated with the context and many other factors, such as importance for the client, expectations of the audience and so on. These factors define an error’s severity as minor, major or critical. A robust QA model should be based upon accurate error categorization so that reliable results may be obtained.
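To make this concrete, a static QA model of this kind often boils down to a severity-weighted error count over a fixed span of text; the weights and the per-100-words normalization below are illustrative assumptions, since each model or client defines its own:

```python
# Hypothetical severity weights for an error-based QA score -- real QA models
# (LISA-style, client-specific, etc.) define their own weights and thresholds.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def weighted_error_score(errors: dict, word_count: int) -> float:
    """Return the weighted error penalty per 100 words of target text."""
    penalty = sum(SEVERITY_WEIGHTS[severity] * count for severity, count in errors.items())
    return 100 * penalty / word_count

# e.g. 3 minor, 1 major and 0 critical errors in a 500-word translation
print(weighted_error_score({"minor": 3, "major": 1, "critical": 0}, 500))  # 1.6
```

The resulting number can then be compared against an agreed threshold to decide whether the TT serves its intended purpose.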

We tried to briefly describe the concept of QA modeling. Now, let’s see what’s going on in practice. There are three publicly available QA models which inspired many software developers on their QA tool development processes. One of them is LISA (Localization Industry Standards Association) QA Model. The LISA Model is very well known in the localization and translation industry and many company-specific QA models have been derived from it.

The second one is the J2450 standard, generated by SAE (Society of Automotive Engineers), and the last one is the EN 15038 standard, approved by CEN (Comité Européen de Normalisation) in 2006. All of the above-mentioned models are static QA models. One should create his/her own framework in compliance with the demands of the projects. Nowadays, many institutes have been working on dynamic QA models (the EU Commission and TAUS among them). These models enable creating different metrics for several translation/localization projects.

About Selçuk Özcan

Selçuk Özcan has more than 5 years’ experience in the language industry and is a co-founder of Transistent Language Automation Services. He holds degrees in Mechanical Engineering and Translation Studies and has a keen interest in linguistics, NLP, language automation procedures, agile management and technology integration. Selçuk is mainly responsible for building high quality production models including Quality Estimation and deploying the ‘train the trainers’ model. He also teaches Computer-aided Translation and Total Quality Management at the Istanbul Yeni Yuzyil University, Translation & Interpreting Department.

Read More about KantanMT’s Partnership with Transistent in the official News Release, or if you are interested in joining the KantanMT Partner Program, contact Louise (info@kantanmt.com) for more details on how to get involved. 


 

Moses Use Case: KantanMT.com

January 2015 marks the last month of the Moses Core project. The project started three years ago in 2012, as a collaborative effort by its members to improve translation processes and to create a competitive translation environment. Over those three years, the translation and MT landscape has changed significantly. This change and the project’s success are in no small part due to the hard work and diligence of the Moses Core project coordinator, TAUS, and with TAUS’s kind permission, KantanMT is republishing the MT use case for the KantanMT Community.

COMPANY NAME

KantanMT.com is a registered trademark of Xcelerator Machine Translations Ltd.

TIME IN MT BUSINESS

The platform was launched commercially in Q4 2013; however, we have been rigorously testing KantanMT.com in academic and commercial settings since 2012. In the beginning, the product was offered as a free trial to the KantanMT Community, and their feedback was instrumental in shaping and improving the platform to what it is today.

MOSES EXPERIENCE

The Moses technology has improved immensely over the past 12-18 months. Developer documentation and support materials, while initially very basic, have matured into a more structured, comprehensive and helpful resource. Additionally, the management of software distributions has made it easier to work with, understand and deploy. These are key elements in maintaining and supporting any open-source technology and have made Moses a key technology for the localization industry.


WHY MOSES?

The rise of the global economy and the driving demand for multilingual translation created a gap in the market for a sustainable translation method that could automatically scale to accommodate fluctuating translation needs. The KantanMT Development team was able to utilize the open source Moses decoder to develop a cloud-based Statistical Machine Translation (SMT) platform, where clients could build and manage their own customized MT engines without compromising on the ownership of their data. The flexibility, scalability and security of the Moses toolkit made this possible.

The Moses toolkit offers the most flexibility in implementing an SMT solution for commercial purposes, as it allows the system’s training and decoding process to be modified. This has enabled the KantanMT team to create a high-value product that is dynamic and commercially relevant.

To ensure the product could scale and adapt to user needs, the KantanMT team needed a decoder that could be built and managed on the cloud. The Moses system enabled this functionality.

Parallel language data is required to train an SMT engine. This data is an important resource for companies, and current generic SMT engines do not guarantee the security or safeguard the ownership of these assets. In using the Moses decoder, the KantanMT team created a product that could ensure its clients’ data was kept private, and not repurposed or reused in any way.

Many global companies have large repositories of bilingual data, however, they often do not wish to deploy and maintain their own version of the Moses decoder. The KantanMT Development team was able to develop the sophisticated Moses SMT technology into a package that could be easily accessible to companies wishing to translate their content, and over time achieve localization cost savings.

MT STAFF

The current machine translation development team consists of four people, who maintain the platform and build machine translation engines for clients. Due to significant growth in the company over the past year, KantanMT.com will be hiring more staff over the course of the next few months to build engines for clients.

MT SYSTEM INFRASTRUCTURE

Insource or Outsource Moses/Implementation

Based on research and the demands of the language services industry and enterprise machine translation buyers, KantanMT has implemented and customized the Moses decoder in-house to create a robust and commercially viable machine translation product that can scale and adapt to our clients’ needs. The original KantanAnalytics™ technology was co-developed with the CNGL Centre for Global Intelligent Content, an academic-industry research centre based in Dublin City University, Ireland. However, all other KantanMT.com technologies have been developed by an in-house expert development team.

Number of Engines

As of January 2015, the total number of MT engines built on KantanMT.com by the KantanMT community is 6,777.

Volumes

As of January 2015, the total number of training words uploaded to the platform by the KantanMT Community has surpassed 50 billion, and the number of translated words on the platform is now more than 600 million.

USE SCENARIO


bmmt GmbH is a German language service provider with a strong focus on machine translation. It needed a Machine Translation provider that would give the bmmt team full control of their Machine Translation training data and MT engine customization process at a low investment point. They also required a system that could correctly handle format-specific tagging and transparent transfer of mark-up information.

In early 2013, bmmt joined the KantanMT Community and began testing different customization processes using client-specific training data. The team initially experienced minor problems with their SDLXLIFF files. However, the KantanMT development team was able to quickly solve this problem by restructuring some of its tokenizers.

The company began deploying production engines in mid-2013. These were showing particularly high Quality Evaluation (QE) scores due to the quality of their training data and resulted in a considerable increase in translation productivity. bmmt MT technicians found that domain specificity is a better basis for predictable output than sheer input size.

bmmt is currently using approximately 20 KantanMT engines in production across technical and automotive domains. These production ready engines are experiencing high quality metric scores for each language combination.

MARKET POSITIONING

KantanMT.com is one of the market leaders of cloud-based machine translation services. It provides cloud-based SMT services to major global enterprises and software companies wishing to translate large volumes of data. It works directly with companies to develop and implement a long term machine translation strategy, or it works with a select number of language service providers (preferred MT supplier partner program) to supply MT services to large enterprises.

VIEWS ON CURRENT STATE OF MT

Machine translation is now much more widely accepted in the industry than it was just a few years ago. Since KantanMT.com entered the market in its testing phase in 2012, we have seen an enormous change in attitudes and the perception of MT in the language community. Access to technology such as smartphones and tablets in non-English-speaking nations has driven the global marketplace, and this in turn has increased the need for on-demand translation services, driving demand for MT services. The MosesCore Project has facilitated this demand with an open-source solution that made it possible for smaller companies and startups like us to compete against bigger MT providers in solving the problem of language.

“The KantanMT platform sets a new industry benchmark in terms of analytics and development tools used to build and measure the quality of Statistical MT Engines. The KantanMT expert development team has introduced some of the industry’s most exciting and valuable technologies built on the Moses decoder, which are helping language and enterprise clients to translate more efficiently and reduce costs.” KantanMT.com founder and Chief Architect, Tony O’Dowd.

For more information on the Moses Core project or to access the original article, please contact TAUS (moses@taus.net) or to find out more about KantanMT.com contact Louise (info@kantanmt.com).

 

 

Sue’s Top Tips for Building MT Engines

I’m new to machine translation and one of the things I’ve been doing at KantanMT is learning how to refine training data with a view to building stock engines.

Stock engines are the optional training data provided by KantanMT to improve the performance of your customized MT engine. In this post I’m going to describe the process of building an engine and refining the training data.

The building process on the platform is quite simple. From your dashboard on the website, select “My Client Profiles”, where you will find two profiles that have already been set up: a default profile and a sample profile, both of which let you run translation jobs straight away.

To create your own customized profile, select ‘New’ at the top of the left-most column. This launches the Client Profile Wizard. Enter the name of your new engine; try to make this something meaningful, or use an easily recognizable convention for naming your profiles. This makes it easier to tell which profile is which when you have more than one.

When you select ‘Next’ you will be asked to specify the source and target languages from drop-down menus. The wizard lets you distinguish between different variants of the same language, for example Canadian English or US English. Let’s say we’re translating from Canadian English to Canadian French. If you’re not sure which variant you need, have a quick look at the training data, which will give you the language codes.

The next step gives you the option to select a stock engine from a drop-down menu. The stock engines are grouped according to their business area or domain.

You will then see a summary of your choices; if you’re happy with them, select ‘Create’. Your new engine will be shown in the list of your client profiles. However, while you have created your engine, you haven’t yet built it.

Stock training data available for social and conversational domains on the KantanMT platform.

 

Building Your Engine

Selecting your profile from the list will make it the current active engine. By selecting the Training Data tab, you can easily upload any additional training data using the drag and drop function. Then select the ‘Build’ option to begin building your engine.

It’s always a good idea to supply as much useful training data as possible. This ‘educates’ the engine in the way your organization typically translates text.

Once the build job has been submitted, you can monitor its progress in the ‘My Jobs’ page.

When the job is completed, the BuildAnalytics™ feature is created. This can be accessed by clicking on the database icon to the left of the profile name. BuildAnalytics will give you feedback on the strength of your engine using industry-standard scores, as well as details about your engine’s word count. The tabs across the page will give you access to more detail.

The summary tab lets you see the average BLEU, F-Measure and TER scores for the engine, and the pie charts show you a summary of the percentage scores for all segments. For more detail, select the respective tabs and use the data to investigate individual segments.

KantanBuildAnalytics provides a granular analysis of your MT engine.

 

A Rejects Report is created for every file of Training Data uploaded. You can use this to determine why some of your data is not being used, and improve the uptake rate of your data.

Gap analysis gives you an effective way to improve your engine with relevant glossary or noise lists, which you can upload to future engine builds. By adding these terminology files in either TBX (TermBase eXchange) or XLSX (Microsoft Excel spreadsheet) format, you will quickly improve the engine’s performance.

The Timeline tab shows you the evolution of your engine over its lifetime. This feature lets you compare the statistics with previous builds, and track all the data you have uploaded. On a couple of occasions, I used the archive feature to revert to a previous build when the engine-building process was not going according to plan.

KantanMT Timeline lets you view your entire engine’s build history.

 

Improving Your Engine

A great way to improve your engine’s performance is to analyze the rejects report for the files with a higher rejection rate. Once you understand the reasons segments are rejected, you can begin to address them. For example, an error 104 is caused by a difference in placeholder counts. This can be something as simple as the source language using the % sign where the target language uses the word ‘percent’. In this case a preprocessor rule can be created to fix the problem, as sketched below.

A detailed rejects report shows you the errors in your MT engine.
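As a rough sketch of the idea behind such a rule (plain Python regex for illustration, not actual PEX syntax), the normalization that resolves this error 104 could look like this:

```python
# Illustrative regex-based preprocessor rule -- KantanMT PEX rules use their
# own syntax; this only sketches the normalization that resolves error 104.
import re

def normalize_percent(source_segment: str) -> str:
    # Replace "25%" style values with the word form used in the target,
    # so the source and target placeholder counts line up.
    return re.sub(r"(\d+)\s*%", r"\1 percent", source_segment)

print(normalize_percent("Prices rose by 25% last year."))
# -> "Prices rose by 25 percent last year."
```

In practice you would express the same substitution as a PEX rule and verify it in the PEX Rule Editor, described next.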

The PEX Rule Editor is accessed from the KantanMT drop-down menu. This lets you try out your preprocessor rules and see the effect that they have on the data. I would suggest copying and pasting directly from the rejects report into the test area and applying your PEX rule, to ensure you’re precisely targeting the data concerned. You can get instant feedback using this tool.

Once you’re happy with the way the rules work on the rejected data, it’s useful to analyze the rest of the data to see what effect the rules will have. You want to avoid a situation where using a rule resolves 10 rejects but creates 20 more. Once the rules are refined, copy them to the appropriate files (source.ppx, target.ppx) and upload them with the training data. Remember that the rules will run against the content in the order in which they are specified.
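A quick local sanity check of that overall effect might look like the sketch below; it is illustrative only (not a KantanMT tool), assumes you have source/target segment pairs to hand, and uses a deliberately naive placeholder pattern:

```python
# Count segments whose source and target placeholder counts differ,
# before and after applying a candidate rule. Illustrative only.
import re

PLACEHOLDER = re.compile(r"%\w?|\{\d+\}")   # naive placeholder pattern, for illustration

def mismatches(pairs, rule=None):
    bad = 0
    for source, target in pairs:
        if rule:
            source = rule(source)
        if len(PLACEHOLDER.findall(source)) != len(PLACEHOLDER.findall(target)):
            bad += 1
    return bad

pairs = [
    ("Prices rose by 25% last year.", "Die Preise stiegen letztes Jahr um 25 Prozent."),
    ("Set the value to {0}.",          "Setzen Sie den Wert auf {0}."),
]
rule = lambda s: re.sub(r"(\d+)\s*%", r"\1 percent", s)
print("mismatches before:", mismatches(pairs), "after:", mismatches(pairs, rule))
# -> mismatches before: 1 after: 0
```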

When you rebuild the engine, the rules will be incorporated and will hopefully improve the scores.

Sue’s 3 Tips for Successfully Building MT Engines

  1. Name your profiles clearly – When you are using a number of profiles simultaneously, knowing what each one is (language pair/domain) will make it much easier as you progress through the building process.
  2. Take advantage of BuildAnalytics – Use the insights and Gap analysis features to give you tips on improving your engine. Listening to these tips can really help speed up the engine refinement process.
  3. The PEX Rule Editor is your friend – Don’t be afraid to try out creating and using new PEX rules; if things go south, you can always go back to previous versions of your engine.

My internship at KantanMT.com really opened my eyes to the world of language services and machine translation. Before joining the team I knew nothing about MT or the mechanics behind building engines. This was a great experience, and being part of such a smoothly run development team was an added bonus that I will take with me when I return to ITB to finish my course.

About Sue McDermott

Sue is currently studying for a Diploma in Computer Science at ITB (Institute of Technology Blanchardstown). Sue joined KantanMT.com on a three-month internship. She has a degree in English Literature and a background in business systems, and has also been a full-time mum for the last 17 years.

Email: info@kantanmt.com, if you have any questions or want more information on the KantanMT platform.

RBMT vs SMT


A commonly asked question within the localization industry is: which is better, Rule-Based or Statistical Machine Translation systems? While both approaches have merits and advantages, the question in my mind is which offers the best future potential and best value for LSPs who are considering a future offering that includes an element of Machine Translation.

According to Don DePalma and his team at Common Sense Advisory, if you’re an LSP and haven’t been asked to provide an RFQ (Request for Quotation) that includes an element of Machine Translation, then you’re rapidly becoming the exception!

So as a successful LSP entrepreneur, which is the best wagon to hitch your horses to: Rule Based or Statistical Machine Translation?

First of all, what is Machine Translation?

Machine translation (MT) is automated translation or “translation carried out by a computer” – as defined in the Oxford English dictionary. It is the process by which computer software is used to translate a text from one natural language to another.

Machine Translation systems have been in development since the 1950s; however, the technology required to develop successful MT systems was not up to par at the time, and so research was largely put to one side. But in the last 15 years, as computational resources have become more mainstream and the internet has opened up a wider multilingual and global community, interest in Machine Translation has been renewed.

There are three different types of Machine Translation systems available today. These are Rule-Based Machine Translation (RBMT), Statistical Machine Translation (SMT) and hybrid systems – a combination of RBMT and SMT.

Rule-Based Machine Translation Technology

Rule-based machine translation relies on countless built-in linguistic rules and gigantic bilingual dictionaries for each language pair. An RBMT system works by parsing text and creating a transitional representation from which the text in the target language is generated. This process requires extensive lexicons with morphological, syntactic, and semantic information, and large sets of rules. RBMT uses this complex rule set to transfer the grammatical structure of the source language into the target language.
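As a heavily simplified illustration of that transfer idea, the toy sketch below combines a dictionary lookup with a single reordering rule; a real RBMT system would apply full morphological analysis, agreement and thousands of such rules:

```python
# Toy illustration of rule-based transfer: dictionary lookup plus one
# reordering rule (adjective-noun -> noun-adjective). Not a real RBMT system.
LEXICON = {"the": "le", "black": "noir", "dog": "chien", "barks": "aboie"}
ADJECTIVES = {"black"}

def translate(sentence: str) -> str:
    tokens = sentence.lower().split()
    # Transfer rule: in French, most adjectives follow the noun they modify.
    i = 0
    while i < len(tokens) - 1:
        if tokens[i] in ADJECTIVES:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            i += 2          # skip past the swapped pair
        else:
            i += 1
    return " ".join(LEXICON.get(token, token) for token in tokens)

print(translate("The black dog barks"))  # -> "le chien noir aboie"
```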

In most cases, there are two steps: an initial investment that significantly increases the quality at a limited cost, and an ongoing investment to increase quality incrementally. While rule-based MT brings companies to a reasonable quality threshold, the quality improvement process is generally long and expensive. This has been a contributing factor to the slow adoption and usage of MT in the localization industry.

Surely, there must be a better approach!

Statistical Machine Translation Technology

Statistical Machine Translation (SMT) utilizes statistical translation models generated from the analysis of monolingual and bilingual content. Essentially this approach uses computing power to build sophisticated data models to translate one source language into another. This makes the use of SMT a far simpler option, and a significant factor in the broader adoption of statistical machine translation technology in the localization industry.

Building SMT models is a relatively quick and simple process. Using current systems, users can upload training material and have an MT engine generated in a matter of hours. While it is generally thought that a minimum of two million words is required to train an engine for a specific domain, it is possible to reach an acceptable quality threshold with much less. The technology relies on bilingual corpora such as translation memories and glossaries for the system to learn the language patterns, and monolingual data is used to improve the fluency of the output, as the engine has more text examples to choose from. SMT engines will produce higher-quality output if trained using domain-specific training data, such as medical, financial or technical content.
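To illustrate the statistical idea in miniature, the sketch below runs a few EM iterations of IBM Model 1 over three aligned segment pairs and learns which German word most likely translates "house"; real SMT training (for example the Moses pipeline) adds word alignment symmetrization, phrase extraction, a language model and tuning on top of this:

```python
# Toy sketch of the statistical idea behind SMT: IBM Model 1 EM learns
# word-translation probabilities from aligned segment pairs. Illustrative only.
from collections import defaultdict

corpus = [
    ("the house is small".split(), "das haus ist klein".split()),
    ("the house is big".split(),   "das haus ist gross".split()),
    ("the book is small".split(),  "das buch ist klein".split()),
]

target_vocab = {w for _, tgt in corpus for w in tgt}
# t[(f, e)]: probability that English word e translates to German word f
t = defaultdict(lambda: 1.0 / len(target_vocab))

for _ in range(10):                          # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for src, tgt in corpus:                  # expectation step
        for f in tgt:
            norm = sum(t[(f, e)] for e in src)
            for e in src:
                c = t[(f, e)] / norm         # expected count of the (f, e) link
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():          # maximization step
        t[(f, e)] = c / total[e]

# Most likely translation of "house" learned from the corpus
print(max(target_vocab, key=lambda f: t[(f, "house")]))   # -> haus
```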

SMT technology is CPU intensive and requires an extensive hardware configuration to run translation models for acceptable performance levels. However, the introduction of cloud services, and the increasing availability of bilingual corpora are having a dramatic effect on the popularity of SMT systems, which is leading to a higher adoption rate in the language services industry.

RBMT vs. SMT

  • RBMT can achieve good results, but the training and development costs are very high for a good quality system. In terms of investment, the customization cycle needed to reach the quality threshold can be long and costly.
  • RBMT systems can be built with much less data than SMT systems, instead using dictionaries and language rules to translate. This sometimes results in a lack of fluency.
  • Language is constantly changing, which means rules must be managed and updated where necessary in RBMT systems.
  • SMT systems can be built in much less time and do not require linguistic experts to apply language rules to the system.
  • SMT models require state-of-the-art computer processing power and storage capacity to build and manage large translation models.
  • SMT systems can mimic the style of the training data and generate output based on the frequency of patterns, allowing them to produce more fluent output.

The Verdict

Statistical Machine Translation technology is growing in acceptance and is by far the clear leader of the two technologies. The increasing availability of cloud-based computing is providing a solution to the high processing power and storage capacity required to run SMT technology effectively, making SMT a game changer for the localization industry.

Training data for SMT engines is becoming more widely available, thanks to the internet and the increasing volumes of multilingual content being created by both companies and private internet users. High-quality aligned bilingual corpora are still expensive and time-consuming to create but, once created, become a valuable asset to any organization implementing SMT technology, with translations benefiting from economies of scale over time.

Tony O’Dowd, Founder and Chief Architect, KantanMT.com