You have your finger on the pulse of the latest technologies, and you are proud to use the latest automated technology for your localization needs. But sometimes it might feel like you are still stuck in the 90s when it comes to reviewing your Machine Translation (MT) output for quality – especially if you are using spreadsheets to collate your reviewers’ feedback on segments.
Traditionally, language quality review for MT has involved Project Managers (PMs) sending copies of a static spreadsheet to a team of translators. This spreadsheet contains lines of source and target segments, with additional columns where the reviewers score the translated segments according to a set of predefined parameters.
Once the spreadsheets are sent off to the reviewers, PMs are completely in the dark – with no idea how the reviewers are progressing, when they might complete the review, or if they have even started the project.
If that sounds tiring, imagine what the PM has to go through!
Incidentally, if you think spreadsheets are the bane of your existence, you are not alone. Read this somewhat dramatically titled article on Forbes, in which the author Tim Worstall claims that we “wouldn’t have had the financial crash of 2007” if it weren’t for Excel!
We have listed five ways in which you can reduce your frustration and get the best out of the MT language review process:
Set exact parameters for reviewers
Identify what you want to get out of your translation – what is the purpose?
Setting very specific parameters for your reviewers – e.g. capitalisation, spacing – based on the context of your translation will help streamline the review process.
This was a driving force behind developing KantanLQR, an online tool, which automates the language quality review process. It allows PMs to set custom parameters or Key Performance Indicators (KPIs), which are based on the Multidimensional Quality Metrics (MQM)* standards.
Collect reviewer feedback
Setting specific project parameters or KPIs for your reviewers is great. But in traditional review scenarios, reviewers may unintentionally skip entering the score for some of the parameters. This creates a gap, which can affect the quality of the MT engine you are building.
The trick to ensuring that reviewers give feedback for each parameter before moving on to a new segment is to remind them of the mandatory fields before each new project. KantanLQR has built-in logic to make sure reviewers enter scores for all segments, which can also help speed up the process.
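The idea of enforcing mandatory scores before a reviewer can advance can be sketched as a simple validation check. This is a minimal illustration only – the KPI names and data shapes here are hypothetical, not the KantanLQR API:

```python
# Hypothetical sketch: enforcing mandatory KPI scores per segment.
# KPI names below are illustrative examples, not KantanLQR's actual fields.

REQUIRED_KPIS = ["accuracy", "fluency", "terminology", "style"]

def missing_scores(segment_scores: dict) -> list:
    """Return the KPIs the reviewer has not yet scored for this segment."""
    return [kpi for kpi in REQUIRED_KPIS if segment_scores.get(kpi) is None]

def can_advance(segment_scores: dict) -> bool:
    """A reviewer may move to the next segment only when no KPI is unscored."""
    return not missing_scores(segment_scores)
```

With a check like this, a gap such as a missing fluency score blocks the reviewer from moving on, instead of silently creating a hole in the data that later degrades the MT engine you are training.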
Mention any unique stylistic preferences for your business
Each translation project is unique, so a fixed set of pre-defined KPIs may not be enough for some projects. This is why it is useful to be able to add new KPIs depending on the scope of a project. If the text requires a unique stylistic treatment, your reviewers will be able to score the content against those new KPIs.
Say, for example, you want the translated text to use gender-neutral language, even though the source and target languages allow for gendered pronouns; you could add this as a KPI for your reviewers to check. If you are using KantanLQR, these can be set up automatically.
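Conceptually, a project-specific KPI is just one more scoring dimension added alongside the defaults. A minimal sketch, with hypothetical field names that are not drawn from KantanLQR or the MQM specification:

```python
# Hypothetical sketch: extending a default KPI set with a
# project-specific "gender neutrality" check. All names illustrative.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    description: str
    max_score: int = 5  # assumed scoring scale

DEFAULT_KPIS = [
    KPI("accuracy", "Does the target convey the source meaning?"),
    KPI("fluency", "Does the target read naturally?"),
]

# Project-specific addition for this job only:
project_kpis = DEFAULT_KPIS + [
    KPI("gender_neutrality",
        "Is the translation free of gendered pronouns where avoidable?"),
]
```

Reviewers then score each segment against the full `project_kpis` list, so the stylistic requirement is measured rather than left to memory.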
Provide reviewers with sample translations
Having a few sample texts showing what you expect the reviewers to look out for in the translation will help speed up the process. Even a few thousand words would be enough for the reviewers to understand what is expected from the LQR project, and this will help them progress with greater confidence.
Provide reviewers with your corporate style guidelines
Your corporate brand and style guide is the be-all and end-all for all your corporate communications. You can help your reviewers get to know your organisation, your products, and your customers better by giving them a copy of your brand and style guide. This in turn will help them review your MT output in a more nuanced fashion.
To know more about KantanLQR, visit our website or mail firstname.lastname@example.org for a free private demo.
Laura Casanellas, Carlos Collantes and Riccardo Superbo presented the webinar, ‘Improving Machine Translation with Automatic Language Quality Review’ on 15 September, 2016. You can watch the video below:
*MQM is a framework for describing translation quality metrics in a consistent fashion, helping you to keep to your company’s terminological standards.