The History of Machine Translation Pt.1

At KantanMT, we are working to change the future of the Machine Translation industry. As we create a new generation of MT technologies, it is important to acknowledge the work of earlier generations. In this blog series, we are going to take you through some of the key stages in the history of Machine Translation and talk about how KantanMT is contributing to its future.

Thanks to the folks at TAUS for providing such a detailed timeline on their website (link at the end of this post) to help us in writing this post! This first post focuses on developments from the 1940s to the end of the 1970s.

The 40s and 50s…
ENIAC (Electronic Numerical Integrator and Computer), the world’s first general-purpose electronic computer, is built in 1945 (see above image). In 1947, the Cold War between the West and the Soviet Union begins, bringing with it a computer technology race as each side tries to keep one step ahead of the other. In 1949, Warren Weaver, Director of the Natural Sciences Division at the Rockefeller Foundation, introduces the idea of Statistical Machine Translation (SMT).

The 1950s begin with the appointment of the first Machine Translation researcher, Yehoshua Bar-Hillel, at MIT. Shortly after this, the first conference on Machine Translation is staged. Among the conference’s attendees is Léon Dostert, a professor at Georgetown (remember him from our blog post The US and MT?), who begins working with IBM on a practical experiment to see whether Machine Translation is achievable.

The Georgetown-IBM Experiment is demonstrated publicly in 1954: the IBM 701 translates sentences from Russian into English using a vocabulary of 250 lexical items and just six grammar rules. In 1959, France establishes CETA (a centre for Machine Translation research), and Emile Delavenay publishes the first book on the topic for general readers, An Introduction to Machine Translation, in Paris.


The 60s and 70s…
In 1960, the US Air Force translates Russian into English with a 70,000-word dictionary using IBM technologies. The decade sees the founding of a number of research bodies and associations: the Association for Machine Translation and Computational Linguistics in the USA (1962) and the TAUM MT research group at the University of Montreal (1965).

In 1966, the Automatic Language Processing Advisory Committee (ALPAC) concludes that Machine Translation cannot compete with human translation and that research funding for Machine Translation should be cut. 1968 brings Systran, the first official commercial Machine Translation company.

The 1970s begin with the French Textile Institute translating abstracts to and from French, Spanish, English, and German using the translation automation system TITUS. Logos Corporation begins developing a rules-based English-to-Vietnamese translation engine so that the US can transfer military technology to the South Vietnamese; however, the US withdraws from Vietnam in 1973 and the Logos engine is never deployed at full scale.

In 1976, the European Commission begins to develop a Systran English-French Machine Translation system. The end of the 1970s sees Machine Translation systems being rolled out by a number of governments and companies: Siemens tasks Logos with developing a German-English system for telecoms manuals, and the first Soviet Machine Translation programme, AMPAR, is launched. In 1978, development begins on EUROTRA, an ambitious Machine Translation system covering the then-member languages of the European Community.

In our next post, we will look at the key stages in the development of Machine Translation from the 1980s to the present day. It is in this period that SMT begins to develop and we will see how KantanMT is helping to shape the future of this branch of Machine Translation.

You can also find out more about Machine Translation and KantanMT by signing up to our free 14-day trial.

The US and MT

To celebrate Independence Day and Bastille Day, we here in the KantanMT blogging workshop thought that we would use this opportunity to pay homage to the early contributions made by both American and French pioneers to the development of Machine Translation. In this first post, we are going to focus on America and one of the most important developments in the history of Machine Translation: the Georgetown-IBM Experiment.

Background to the Experiment…
Funnily enough, it all began with a Frenchman: Léon Dostert, Director of Georgetown University’s Institute of Languages and Linguistics. Dostert had previously worked as an interpreter for Eisenhower and as a liaison officer with Charles de Gaulle, and he also developed the translation system used at the Nuremberg Trials. After attending the first ever conference on Machine Translation in 1952, an inspired Dostert decided to test the feasibility of this new technology in a practical experiment. He contacted Thomas J. Watson, the head of IBM, who agreed to support his work. They established a team of both IT and linguistic specialists, and the experiment was ready to begin.

The Experiment…
Twelve machines, collectively known as the IBM type 701 electronic data processor, would translate sentences using a vocabulary of 250 lexical items and six grammar rules. The source language was Russian and the target was English. Why? Well, Russia was the biggest military threat to the US at the time, and a machine that could translate Russian content into English would help the US keep tabs on the Soviets. Watson said, “I see this as an instrument that will be helpful in working out the problems (of world peace), we must do everything possible to get the people of the world to understand each other as quickly as possible”. Most of the sentences translated related to organic chemistry, chosen to show different uses of nouns and verbs.
W. John Hutchins, in his report The Georgetown-IBM Experiment Demonstrated in January 1954, gives some examples:

  1. They prepare TNT
  2. They prepare TNT out of coal
  3. TNT is prepared out of coal

Paul Garvin, an associate professor at the Institute, said that one of the major shortcomings of the experiment was how limited it was – remember, the experiment consisted of only 250 lexical items and six rules. But he defended its relevance, pointing out that the engine still had to make selection and arrangement decisions while translating the content.

Showing the World…
The public demonstration of the experiment took place in 1954. A female operator with no knowledge of Russian keyed Russian sentences into the 701 rig, which translated them into English. Journalists also witnessed the machine translating segments on a range of subjects including politics, law, mathematics, and military affairs. One of the catchier newspaper headlines of the day read “Newest Electronic Brain Even Translates Russian”!

The Significance for Machine Translation…
So what did the Georgetown-IBM Experiment do for the Machine Translation industry? In his report, W. John Hutchins gives us the answer: “Before 1954, all previous work on MT had been theoretical. Considering the state of the art of electronic computation then, it is remarkable that anything resembling automatic translation was achieved at all. Despite all its limitations, the demonstration marked the beginning of Machine Translation as a research field seen to be worthy of financial support”. To find out more about Machine Translation and how KantanMT is continuing to change the way we see it, sign up to our free 14-day trial!