
Last month, Dr Dimitar Shterionov published an article in MultiLingual, a highly respected magazine covering the localization industry, in which he discusses the changing landscape of Machine Translation (MT) in the language industry and how Neural Machine Translation (NMT) has recently emerged as a revolutionary new paradigm in MT research.


Let us take a look at some of the key points put forward by Dimitar and at what is happening with NMT. Thanks to increasingly robust technological advances, content translated by these newer neural models has proven remarkably accurate, especially when compared with the output of traditional MT models.

What are Artificial Neural Networks (ANNs) and Neural Machine Translation (NMT)?

Artificial Neural Networks (ANNs) were introduced in the late 1950s to solve pattern recognition tasks. An ANN is essentially an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. It is composed of a number of interconnected neurons that work together to solve a problem and, much like a biological nervous system, it learns through experience and example.
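To make this concrete, here is a minimal sketch (my own illustration, not code from Dimitar's article) of a single artificial neuron in Python with NumPy: it computes a weighted sum of its inputs, applies an activation function, and nudges its weights toward a desired output, which is the "learning through example" described above. All inputs, targets and learning-rate values are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
weights = rng.normal(size=3)      # a single neuron with three inputs
bias = 0.0

x = np.array([0.5, -1.0, 0.25])   # one example input pattern
target = 1.0                      # desired output for this example
lr = 0.1                          # learning rate

for step in range(500):
    output = sigmoid(np.dot(weights, x) + bias)
    error = target - output
    # Gradient-style update: adjust weights in the direction
    # that reduces the error on this example
    grad = error * output * (1.0 - output)
    weights += lr * grad * x
    bias += lr * grad

# After training, the neuron's output is much closer to the target
print(sigmoid(np.dot(weights, x) + bias))
```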

Neural MT, or NMT, uses recurrent neural networks (RNNs) to improve the quality of translations. Moving away from traditional statistical machine translation (SMT), NMT builds a single neural network that can be jointly tuned to maximize translation quality.
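The "single network" idea can be seen in the recurrence at the heart of an RNN encoder: each source word updates a hidden state, and the final state becomes a fixed-length summary that a decoder would then expand into the target sentence. The NumPy sketch below is purely illustrative; the matrices and dimensions are invented, and in a real system they are learned jointly with the decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim, hidden_dim = 8, 16

# Hypothetical parameters; in practice these are trained, not random
W_x = rng.normal(scale=0.1, size=(hidden_dim, embed_dim))   # input weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
b = np.zeros(hidden_dim)

def encode(source_embeddings):
    # h_t = tanh(W_x @ x_t + W_h @ h_{t-1} + b)
    h = np.zeros(hidden_dim)
    for x in source_embeddings:
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h  # fixed-length summary of the whole source sentence

sentence = [rng.normal(size=embed_dim) for _ in range(5)]  # 5 word vectors
context = encode(sentence)
print(context.shape)  # (16,) -- the vector a decoder would translate from
```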

RNN models can be built using a number of freely available deep learning tools. In his article, Dimitar cites Caffe, Theano and TensorFlow as some of the toolkits that can aid in building effective RNN models for MT.
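With such a toolkit, the same encoder-decoder shape can be declared in a few lines. The sketch below uses the Keras API shipped with TensorFlow; the vocabulary sizes, dimensions, sentence lengths and random training data are placeholder values for illustration, not settings from the article.

```python
import numpy as np
import tensorflow as tf

SRC_VOCAB, TGT_VOCAB = 5000, 5000   # hypothetical vocabulary sizes
TARGET_LEN = 20                     # hypothetical target sentence length

# A deliberately simple sequence-to-sequence model: an RNN encoder
# compresses the source into one vector, which is repeated and fed to
# an RNN decoder that predicts one target word per time step.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=SRC_VOCAB, output_dim=64),
    tf.keras.layers.SimpleRNN(128),                         # encoder
    tf.keras.layers.RepeatVector(TARGET_LEN),
    tf.keras.layers.SimpleRNN(128, return_sequences=True),  # decoder
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(TGT_VOCAB, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train on random token IDs, just to show the expected shapes
src = np.random.randint(0, SRC_VOCAB, size=(32, 30))
tgt = np.random.randint(0, TGT_VOCAB, size=(32, TARGET_LEN))
model.fit(src, tgt, epochs=1)
```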

Attempting to Mirror the Human Brain

During a TED talk in October, Stephen Larson talked about his company’s work on building a “live” organism inside a computer. This novel research centres on a worm called C. elegans, whose small size and well-studied body have allowed scientists to completely map its neural network, study its properties and design computer models. The OpenWorm project involves creating a new organism from computer code: it has the sensory and motor functions of its real biological counterpart and can mimic its movements and actions. As Larson points out, the OpenWorm project is all about understanding the brain as a machine by building models that can reproduce what it does.

While creating a machine that can fully mimic the immensely complicated human brain is still far off, RNN research can contribute greatly to MT by creating systems that learn language the way humans do.

We learn a new language through three basic processes: we memorize the vocabulary of the language, understand its grammar and finally adopt its stylistic tone. While machines can be “taught” the basic vocabulary of a language by storing source and target words, an additional reasoning unit built on advanced neural network models can help them translate language more effectively.
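The "memorize the vocabulary" step has a direct computational analogue: a lookup table mapping each known word to an index and a learned vector. The small sketch below is purely illustrative (the words, sizes and random vectors are invented), and it also shows the limit of memorisation: any word outside the stored vocabulary falls back to an "unknown" entry, which is where the reasoning unit has to take over.

```python
import numpy as np

rng = np.random.default_rng(2)

# The machine's "memorised vocabulary": every known word gets an index...
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sits": 3}
# ...and every index gets a vector (random here; learned in a real system)
embeddings = rng.normal(size=(len(vocab), 8))

def embed(sentence):
    # Unknown words map to the <unk> vector -- exactly the case where
    # plain memorisation stops helping
    ids = [vocab.get(word, vocab["<unk>"]) for word in sentence.split()]
    return embeddings[ids]

print(embed("the cat sits").shape)  # (3, 8)
print(embed("the dog sits").shape)  # "dog" is unknown, maps to <unk>
```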

State of the Art in NMT Research

Greenstein and Penner of Stanford University studied RNN models for MT on the task of Japanese-to-English translation and observed that “with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary.” They concluded that the “success of this model on a small corpus warrants more investigation of its performance on a larger corpus”.

RNN research for MT has come a long way in recent years. In 2014, Cho et al. published a study on the RNN Encoder-Decoder, showing that NMT performs relatively well on short sentences without unknown words, but that its performance degrades rapidly as the length of the sentence and the number of unknown words increase. This was followed in 2016 by a paper by Zoph et al., who present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages.
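For readers unfamiliar with the BLEU metric mentioned above: it scores a machine translation by its n-gram overlap with one or more human reference translations. A quick, purely illustrative computation using NLTK (the sentences are invented for the example):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sits", "on", "the", "mat"]]  # human reference(s)
hypothesis = ["the", "cat", "sat", "on", "the", "mat"]    # machine output

# Smoothing avoids zero scores on short sentences missing some n-grams
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```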

At KantanLabs, the research and development division of KantanMT, Dr Dimitar Shterionov and his team are actively engaged in NMT research, looking at combining Artificial Neural Network models with SMT to develop a powerful language translation tool that can adequately mimic the nuanced translations of a human translator.

To learn more about KantanLabs, email labs@kantanmt.com.

References

Greenstein, Eric, and Daniel Penner. “Japanese-to-English Machine Translation Using Recurrent Neural Networks.”

Luong, Minh-Thang, et al. “Addressing the rare word problem in neural machine translation.” arXiv preprint arXiv:1410.8206 (2014).

Shterionov, Dimitar. “Machine Translation with Brains.” MultiLingual. N.p., n.d. Web. <https://multilingual.com/view-article/?art=20160708-31>.

Stergiou, Christos, and Dimitrios Siganos. “Neural Networks.” Neural Networks. N.p., n.d. Web. 05 Aug. 2016.

Zoph, Barret, et al. “Transfer Learning for Low-Resource Neural Machine Translation.” arXiv preprint arXiv:1604.02201 (2016).