Machine Learning is Just like Pouring a Pint of Guinness

Artificial Intelligence, Machine Learning and Deep Learning are just some of the technologies driving the expansion of what Klaus Schwab, in a recent book, has dubbed the “Fourth Industrial Revolution” (see the previous blog for a review of his book). This lexicon of terms populates the papers people read in an effort to educate themselves about this galaxy of concepts. The technology is sophisticated and understanding it can be daunting, but it is possible to become enlightened without burning up too much grey matter.

Firstly, let me explain a few key terms to help you differentiate between Artificial Intelligence, Machine Learning and Deep Learning. This graphic shows that they are inextricably linked, and are often referred to as cousins of each other:

[Graphic: how Artificial Intelligence, Machine Learning and Deep Learning relate to one another]

What I would like to try to do in this blog is explain some of these terms, and in particular to try to demystify Machine Learning. In a few words, Machine Learning is the capability of a machine to learn from data patterns to estimate a predictive model of behaviour or wants. In other blogs I will address some of the other concepts and hopefully demystify those.

Let me begin with a few words of solace and encouragement: stripped down to the basics, the fundamentals of this technology can be understood by the average layperson. That’s not to say you can become an overnight expert in any of these fields, but you will be able to understand at least their design and purpose. And as I say in the heading of this piece, Machine Learning is just like pouring a pint of Guinness. Stick with me while I explain what I mean. Hopefully it will make sense.

In the 1980s, while at college, I worked as a barman. I was a pretty decent barman too. I liked the job and I enjoyed keeping my customers happy. Little did I realise that, during this time, in my efforts to keep the “punters” happy, I was applying the basics of Machine Learning. The term was coined in 1959 by Arthur Samuel at IBM. Trust me, I’d never heard of it in the 1980s, but I was applying some of the concepts to my work, with me, in this case, being the machine.

Every Sunday, without fail, one of my customers would come in after church for a few drinks before going home for Sunday lunch. He arrived at the same time. Took the same seat. Ordered the same drink. It never wavered. One Sunday I saw him through the window heading towards the bar. I anticipated his order and pulled a pint of Guinness. He had no sooner sat on the same stool as always than I slid the fresh creamy pint in front of him. He looked up and smiled a big appreciative grin.

It became the norm from then on that every Sunday I would look out the window and, on seeing him approach, put his pint on. That went on for years until I left the job. Throughout that time he was a very happy customer. No queueing for him. A pint ready-made and set up. It was, for him, the acme of customer service. As a customer experience, it could not be bettered (other than getting it for free!).

But wait, says you, what the heck has this got to do with Machine Learning? Good question: well, it has everything to do with it, I would argue. In doing what I did every Sunday for that customer I was applying the methodology of Machine Learning. To explain further: I was identifying someone; I was discerning their pattern of behaviour and calculating the probabilities of their taking a certain action; I was learning from that process; and based on all of this data I was processing and predicting what I should do to keep that customer happy. In short, I was building a little algorithm in my brain which kicked into action every time I saw that gentleman approach the bar on a Sunday.

This fact was underlined to me when, one Tuesday afternoon, the same punter came into the bar. My reaction was to stand there waiting for him to order his drink. A Tuesday visit was an anomaly. I had no experience of him visiting on a Tuesday. I could have guessed he wanted a pint of Guinness and set it up. But it would have been just that, a guess based on zero data with no pattern of behaviour to guide me. As it turned out, he ordered a coffee as he was taking a break from work. Of course, had he come in every Tuesday over a reasonable period I could have worked out a pattern of behaviour with certainty, and thus made sure to have his hot coffee waiting.

Just as in Machine Learning, I had a well-formed algorithm that allowed me to act with certainty on a Sunday, but the same algorithm could not be applied on a Tuesday. That is the core of Machine Learning. Machines can be trained to gather and assess data, allowing them to recognise patterns and predict people’s behaviours or wants. How do you think Netflix knows what films and box sets you like to watch? Or travel companies know which advertisements for sunny climes they should present to you in the middle of winter? Machines develop algorithms that can be fine-tuned to guess what you want to watch, where you want to go, what you are likely to buy, or what your tastes in music might be.
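
To make the analogy concrete, here is a minimal sketch in Python of that “barman algorithm”: it counts past orders per customer and day of the week, predicts the most frequent one, and refuses to guess when there is too little evidence (the Tuesday case). The class, the data and the threshold are all invented purely for illustration; real recommendation systems are far more sophisticated, but the learn-from-patterns-then-predict loop is the same.

```python
from collections import Counter, defaultdict

# A toy "barman" model: learn a customer's habits from past visits and
# predict their next order. All names, data and thresholds are invented
# purely to illustrate the idea described above.
class BarmanModel:
    def __init__(self):
        # Order history keyed by (customer, day of week).
        self.history = defaultdict(Counter)

    def observe(self, customer, day, order):
        """Record one visit: what the customer ordered on that day."""
        self.history[(customer, day)][order] += 1

    def predict(self, customer, day, min_visits=3):
        """Predict the likely order, or None if there is too little data."""
        orders = self.history[(customer, day)]
        if sum(orders.values()) < min_visits:
            return None  # the "Tuesday" case: not enough evidence to act
        return orders.most_common(1)[0][0]

model = BarmanModel()
for _ in range(20):                          # years of Sundays
    model.observe("regular", "Sunday", "pint of Guinness")

print(model.predict("regular", "Sunday"))    # -> pint of Guinness
print(model.predict("regular", "Tuesday"))   # -> None (no pattern yet)
```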

The foundation of all of this “magic” is the accumulation of personal data that can be sliced and diced, analysed and parsed, and from which the machine can be taught (can learn) how to keep you happy (Netflix), or how to get you to part with your money (the unplanned winter holiday). Of course, Machine Learning is everywhere now. It is even driving cars, and is a growing part of our world. Why has something that has its origins in the 1950s, and was available and used in a limited way during the 1980s and 1990s, become so prevalent?

Why? Big data. Within the last 10 years it has become possible for companies to harvest huge swathes of data from multiple and varied online sources. The ascent of computers with extraordinary computational power has allowed corporations to analyse and exploit this treasure trove of information. Whole industries have been built around the ability to gather, analyse and exploit big data. As a means of building algorithms, and propelling company sales, it is a powerful tool. It is unlikely to go away. One thing can be said for certain: ways will increasingly be found to expand the use of Machine Learning over the coming years. As a technology, like a pint of Guinness, it is here to stay.

Aidan Collins is a language industry veteran. He works in the marketing department at KantanMT.

The Fourth Industrial Revolution – Klaus Schwab

The author of this book is Founder and Executive Chairman of the World Economic Forum. That is the forum where the movers and shakers of the world meet at the Swiss alpine location of Davos every year to discuss global trends across a range of political, social and economic areas. In this slim book Schwab seeks to predict how impending technological changes will impact our lives: social, political and economic.
The author argues that the fusing of the political, physical, digital and biological worlds will have a transformative impact on all facets of human existence. This will range from the way we live our lives, the manner in which we will work, the reconfiguration of economic models and the products we sell, to even how long we will choose to live.
The author outlines the drivers of this revolution and cautions business readers to “get on board” as we are “already reaching an inflection point in [technological] development as they build on and amplify each other in a fusion of technologies”. In an appendix he conveniently lists 23 “Deep Shift” technologies: those most likely to impact the way we live. This list is headed up by implantable technologies, courtesy of nanotechnological developments; the widespread use of digital currencies such as Bitcoin, driven by Blockchain developments; and the surreal prediction that neurotechnology will allow humans to have artificial memories implanted in their brains.
Of course, none of the above can happen without society acquiescing (or one would hope so). The author does discuss how the revolution will throw up challenges on all fronts as to the ethics, morality and legality of many of the putative changes. He warns that society will be in a state of rapid change as the fusion of technologies creates exponential growth, making this revolution a much shorter and deeper period of impact than any societal revolution mankind has experienced before. Throughout the book Schwab posits a benign view of the power of the Fourth Industrial Revolution. He argues that its power can be used for good if harnessed through careful, democratic control by “good leaders” and “decision makers”. Of course the obverse is also a possibility, although the author does not discuss that likelihood.
The challenge for society’s leaders will be to learn how to harness these changes for good, while controlling and curtailing them when they venture into possibly unethical or illegal terrain (as already experienced with the wholesale hoovering up of people’s data for sale on to others).
Schwab sets this revolution in a historical context referring to the previous societal upheavals such as the industrial revolution. Each revolution did a lot to transform society, and not always for the better.
The first revolution ran from roughly 1760 to 1840 and was triggered by the construction of railroads and the development of steam power. It heralded the beginning of the mechanical age. The second started in the late 19th century and was the beginning of mass production, factory workplaces, the production line and mass employment in often poor conditions. The third, and most recent, was the Digital Age: the development of semiconductors, mainframe computing and the emergence of personal computing harnessed to the internet.
The author concedes that there are those who argue that what he heralds as a fourth revolution is no more than the outworking of a more advanced part of the third industrial revolution. Schwab holds his ground and says the fourth industrial revolution began in 2000 when technologies began to converge, Artificial Intelligence became a reality and robotics made huge advancements. He also argues that the ubiquity of small, integrated technology available to all at low cost, plus the conquering of the language barrier through the use of machine translation, has made the global market available to all who choose to exploit it.
Disappointingly, for a book that deals with such a diverse range of concepts and technologies, it lacks an index. This is probably because the book was compiled from a series of papers originally written for other forums. The book is also heavy on jargon and management speak.
It is, nonetheless, a slim volume that is fairly accessible to the average reader. Schwab leaves you in no doubt that the Fourth Industrial Revolution is capable of creating a dystopian world of cloud power, AI, implanted brains and robots. The question, which Schwab hints at rather than elucidates, is whether we as humans should meekly adapt to all technology, or whether as a society we say: “hold on; thus far and no further, thank you.”

Quick – Catch That Wave!

“It is tough making predictions, especially about the future.” Yogi Berra

In March 2018 www.Slator.com published a comprehensive white paper titled “Slator 2018 Neural Machine Translation Report”. It is a timely report that sets out to examine the current status of the translation technology Neural Machine Translation (NMT). The report weighs up the pros and cons of the technology, looks at its growing embrace by big and not-so-big players, and looks to answer the question of whether NMT is here for the long term and, if so, what paradigm shifts it will cause.

Ray Kurzweil, a serial inventor, says in his fascinating and thought-provoking book “The Singularity is Near” that “Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment.” Of course, Kurzweil was not the first to point this out. He was predated by a writer called William Shakespeare, who advised that “there is a tide in the affairs of men which, taken at the flood, leads on to fortune”. The Slator report sets out to examine whether there is a tidal flood with NMT that others should now be looking to ride to their fortune.

The 37-page report, which can be purchased at www.slator.com, boldly declares upfront in its Executive Summary that NMT has become the “new standard in machine translation”. It bolsters this assertion by pointing out that in the last 12 months the number of NMT providers has quadrupled from a base of 5 to 20. Perhaps more pertinent than the numbers, though, is the quality of the companies adopting the technology as their go-to language solution: NMT is being seen as a solution by major entities in both the private and public sectors. All of this paints a rosy picture of health for NMT’s future.

The report does accept that as of March 2018, NMT was still a niche movement. But the report goes on to caution that this narrow status might be a fleeting one, with exponential growth being driven at an increasing pace by the energy and finances of the Big Tech companies and the monolithic public services entities now involved. Added to this financial potency is the fact that the technology required to run NMT is decreasing in cost. In addition, the availability of giant, clean data corpora is growing.

Many of the Big Tech behemoths such as Amazon, Google, Microsoft, IBM and SAP, to name just a few, have committed themselves to making NMT work as a solution for them. The market for these companies is global. In order to drill down into the locales of this global opportunity they realise they need an affordable solution for the language challenges involved. It seems they have decided that NMT is their champion.

Yet, as the report points out, NMT only became a viable player a mere four years ago. And incredibly, in those four years NMT has condensed the equivalent of 15 years of statistical research into that short period of time. Google, for example, say they replaced a system that had taken them 12 years to develop with an NMT system that took them a mere 18 months to create, roughly an eighth of the development time.

The report highlights a few 2017 milestones:

  1. There were a number of key release announcements by Big Tech players such as Amazon, and by what the report calls “boutique” providers, such as KantanMT.
  2. NMT was deployed as a solution in both the public and private sectors. (The European Patent Office (EPO) told Slator of its satisfaction with the ability of NMT to translate huge volumes of text.)
  3. NMT has proven to be effective in translating stringently controlled material such as that required by the EPO, and is also proving suitable for translating massive amounts of text in a real-time environment, as with Booking.com.

As is the way with developing business ideas, the pricing model has not yet been standardised. For Big Tech companies, the provision of translated texts is a service they are willing to provide their market in order to gain market share. The report says of boutique suppliers that they tend to charge “bespoke and flexible” pricing, depending on the service required. As yet, there is no stock pricing matrix, although the report predicts that with all the “various ways technology is changing how translators work, the industry is likely to switch its pay model from a per word to a per hour [charge].” Tony O’Dowd, CEO of KantanMT, warns the language industry that the traditional approach to translation is: “… dead (or in its twilight zone)”.

However, NMT is not being trumpeted as the panacea for all ills. For many, there are still a lot of known unknowns to be tackled. For example, there is a lively debate ongoing in the industry as to what sort of data is necessary to create a fluent NMT engine. Some argue that this is like the proverbial “how long is a piece of string” conundrum, with factors such as language pair, subject matter, quality of data, the algorithm involved, and so on all playing a part. However, Tony O’Dowd is a lot more sanguine in his approach to the debate: “…it’s all about the quality”, he asserts. Tony believes that it is highly cleansed and aligned data, and not huge volumes of data, that is the secret to producing quality NMT results.

And of course, as with all empirical matters, it is not surprising that another vigorous debate surrounds what constitutes quality. The report examines how exactly quality can be, and should be, assessed, and gives the views of different players on this tricky subject. The debate looks at technical quality testing, such as the BLEU score, and asks whether the ultimate assessment of quality can only be done by a human. Underlying this debate is another one within the industry: whether quality is, and should be, a gold standard, a never-to-be-tarnished status, or whether quality is simply what the customer deems it to be. For example, does an online retailer, using real-time translations to communicate, need the scientific precision of a life science company selling life-critical equipment?
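
As an aside, here is a rough sketch of what an automated BLEU check can look like, using the NLTK library’s implementation in Python. The sentences are invented examples, and a single-segment score like this is purely illustrative; real evaluations run over thousands of segments, and, as the debate above shows, BLEU alone does not settle the question of quality.

```python
# A rough illustration of automatic quality scoring with BLEU, using NLTK.
# The sentences below are invented; real evaluation is run over large test
# sets, and BLEU is only one voice in the quality debate.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the patent application was filed in march".split()
hypothesis = "the patent application was submitted in march".split()

smooth = SmoothingFunction().method1  # avoids zero scores on short segments
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.2f}")  # closer to 1.0 means closer to the reference
```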

The report also brings good cheer in predicting that a whole industry of sub-markets will be required to service the behemoths with high-quality corpora. This, the report says, is already a multi-million dollar business, and is set to grow. For some global companies it makes more sense to buy in ready-made corpora than to try to create them from scratch. It is also true that many of these global companies would see the “boutiques” as the way to go for their NMT services rather than trying to build them themselves. A good example of this outsourcing model is the recent creation of the iADAATPA Consortium, an EU initiative tasked with developing the next-generation Machine Translation platform for European Public Administrations.

Finally, what of the future of NMT? According to Kirti Vashee of SDL: “Those who best understand the overall translation production process and deploy NMT … will likely be the new leaders [of the language service industry].”

Don’t say you haven’t been warned!

Aidan Collins is a language industry veteran. He works in the marketing department at KantanMT.

In Technology, Things Never Slow Down – Quite the Opposite

I am of an age where I can recall the pre-email intra- and extra-office communications process. Both were served by what is now called snail mail. External communications got a stamp and were posted at the end of every day. Internal communications involved a dedicated person travelling around the offices handing out memos in large brown envelopes tied with string. If you were on the recipients’ list, the brown envelope was placed in your In-tray by the office postal clerk for you to peruse at your leisure.

Once you had read the contents, and added a note of comment to them, they were placed in your Out-tray. You added your signature to show that you had read the contents. The brown envelope was then moved on by the clerk, who spent his day walking the corridors to carry out this vital task. You know what, the process worked – albeit at the pace of a crocked snail. But that’s how the world was back then. People did not expect, nor demand, that things be addressed immediately.

Then some office I.T. genius spotted a new technological advancement that was sweeping the world. It was called Email. The technology was duly introduced and we all received training on how this new-fangled invention worked. The old brown envelopes disappeared and the postal clerk put on a lot of weight from lack of exercise. But for worse (or for better?), the pace of work in the office was ramped up immeasurably. Suddenly messages were arriving in your electronic In-tray and expectations grew that a message received should be answered immediately, if not sooner. Decision-making became a nanosecond exercise.

Indeed, people sitting only feet from you would “ping” (that was a new word for us) an email to you, rather than simply shout across the office or talk to you over the watercooler. The introduction of this new technology changed the face and the pace of every office, putting it into an overdrive from which it never really decelerated. I tell you this “All of Our Yesterdays” anecdote by way of demonstrating how technology begets change that usually takes the form of speeding up processes. Seldom does new technology aim to slow things down.

This speeding up is being driven by the constant evolution and improvement in the capacity of computers to crunch and process data. As the physical hardware gains more computational power, with super processing chips, that power is used to process and spit out huge corpora of data at breathtakingly fast speeds. But even this power is not proving sufficient as companies hunger for faster and cheaper solutions to their growing need to process huge amounts of data at almost real-time speeds. Already research is at an advanced stage whereby the silicon chip will be replaced by a new technology called the carbon nanotube. And on and on it will go.

NMT too has been evolving at a breakneck pace. Tony O’Dowd recently commented in the Slator 2018 Neural Machine Translation Report that:

“It’s fair to say that the [language] industry has condensed 15 years of statistical research in to three years of NMT research, and produced systems that will outperform the best SMT can offer.”

In short, NMT development has moved at five times the pace of SMT research. And the developments in industry bear this out. Google replaced a system they had developed over 12 years with a new NMT system they developed in just over 18 months. And with these developments come improvements in outcomes and capabilities. The rapid evolution of NMT has been served by the huge amount of time and effort being put into research by many of the giants of industry. This factor, married to the development of faster and more affordable hardware, has facilitated the ongoing demand for more speed and computational power. Google is working with a start-up company called Nervana Systems that is developing the Nervana Engine, an ASIC processor that increases current processing speeds by a factor of 10. Not surprisingly, Nervana Systems was bought by Intel in 2016.

It is no surprise that NMT, which is a model inspired by the workings of the human brain, is greedy for the speedy processing of huge corpora of complex data. And it is a sobering thought that the average human brain processes data at 30 times the speed of the best supercomputers. Fortunately, with the advance of Deep Learning, NMT requires only a fraction of the memory needed for traditional SMT. Whereas email was demanded because the world needed to speed up inter- and extra-office communications, the development of NMT is being driven by the proliferation of mobile devices, in-home control systems, the rise of social media and the demand for real-time communications, the growth of e-commerce as a market opportunity for companies, and the growth of Big Data with its insatiable appetite to crunch and understand huge amounts of data now, in multiple languages and at an affordable cost.

The adoption of NMT by behemoths such as Google has given this language solution the blessing of being a technology worthy of investment and research. And, as is the way in industry, once one giant adopts a system the other equally powerful entities feel the need to develop their own systems. Facebook too has joined this race. Indeed, the top companies in the world, including Microsoft, Google, Amazon, eBay and Facebook to name but a few, have ongoing investment and research in NMT. With the R&D spending prowess of these companies it is no wonder that the development of NMT has gathered such pace. In fact, NMT is expected to surpass all other MT models and to grow to a market worth $46 billion by 2023.

The objective of NMT development is no small one. In essence, it can be defined as advancing a system that will allow people from anywhere in the world to connect with anyone, and understand anything, in their own language. Add to that the need for quality and speed and you can see the mountain NMT has to climb, and has been successfully climbing. Yet achievement of that objective is getting closer. Google, for example, supports 103 languages, translates 100 billion words per day (you read that right!) and communicates with the 92 percent of its users who are outside the USA.

Those are staggering figures. But if companies want to grow their brands, open up fertile new markets and keep their shareholders happy, then these are the levels they must reach to keep pace with developments in NMT. And we are not only referring to the written word, for more and more of the demand is for the spoken word, with the growth of voice-activated technology and household “gadgets” such as Amazon’s Alexa, Google Home and Apple’s HomePod (and that list is growing). And the future of NMT is further being cemented by its adoption by key industries such as Military & Defence, IT, Electronics, Automotive and Healthcare, to name just a few.

NMT has now been taken up by all serious language service providers (LSPs). The debate is ongoing as to how this will impact the current LSP model. Undoubtedly, the role of the human translator is evolving into that of editor rather than translator. Pricing models are changing from the traditional price per word, based on word volumes, to a time-measured rate. An expert at eBay has predicted that the traditional translator will evolve to become “… data curators of corpora for MT.” Our own Tony O’Dowd has a bleaker assessment for the human translator when he says, “... the traditional approach to translation is dead (or in its twilight zone)”. But one thing seems sure: NMT, like email, is not going to go away. Speed is of the essence; that is the eternal watch-cry of technology.

Aidan Collins is a language industry veteran. He works in the marketing department at KantanMT.

Meeting the Challenges of Bahasa Indonesia

Indonesia as a country has a huge, diverse landscape and a vast cultural tapestry. Along with this rich cultural mixture, it has more than 700 native regional languages and dialects spoken across islands populated by more than 260 million people. Bahasa Indonesia, as the official language, serves as the lingua franca for all of the hundreds of languages and dialects present in the country.

Bahasa Indonesia is a flourishing language. Reflecting the colonial history of Indonesia, it developed by absorbing words from Sanskrit and Dutch. Because of the country’s very large population, the majority of whom speak Bahasa Indonesia, it is one of the most widely spoken languages in the world.

Bahasa Indonesia is treated as an active commercial language by East Asian countries. Because of Indonesia’s strong economy, other East Asian countries have found it necessary to understand Bahasa Indonesia in order to trade with the country. Yet, unfortunately, Bahasa Indonesia is still considered a minority language by many companies in the wider world, leaving research and development of the language under-resourced.

Perhaps as a by-product of this under-development, there is a lack of the statistical data needed for Bahasa Indonesia to qualify as a candidate for either Statistical Machine Translation (SMT) or Neural Machine Translation (NMT).

As a taught subject, Bahasa Indonesia is a very dynamic language. It is taught not only through its grammar and sentence structure, but also through its usage in proverbs, poems and essay writing. It can be said that Bahasa Indonesia is one of the most difficult subjects for a student to learn. The dynamism of the language makes it difficult to find the ‘right’ equivalence during the translation process. As mentioned above, Bahasa Indonesia is a mixture of Sanskrit and Dutch, and many of its concepts and definitions have a deep historical background behind their ‘meaning’.

In recent years it has become even more difficult and complicated to translate Bahasa Indonesia into other languages, and vice versa. The overwhelming power of English as a global language, and the advances in education and technology that mostly come from English-speaking countries, have greatly challenged the development of Bahasa Indonesia. This is particularly so in the evolution of new words, which has the knock-on effect of making the task of creating, improving and training better statistical data for SMT even more difficult.

Today, if we look at the Bahasa Indonesia and English language pair being processed through machine translation, a lot of fixes are needed. This is because English words tend not to be fully translatable into Bahasa Indonesia. In addition to this complication, the younger generations in Indonesia tend to intersperse English and Bahasa Indonesia in most of their conversations and writing.

This leaves Bahasa Indonesia lacking the purity of language needed for optimum use of SMT. Even human translators will keep some of the original English words because they are more commonly used than their Bahasa Indonesia equivalents. As a result, many translations into Bahasa Indonesia become a hybrid combination.

This lack of linguistic purity means a lot of preparatory work is required for Bahasa Indonesia texts to be suitable for SMT. Many of the SMT products used to handle the Bahasa Indonesia and English language pair are inconsistent in their translation, leading to a lot of incorrectly translated texts. The quality of these is such that they are misleading, and of little use to the intended readers.
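
By way of illustration, the sketch below shows one hypothetical corpus-preparation step of the kind hinted at here: flagging Indonesian sentences that mix in English words so they can be reviewed or filtered before SMT or NMT training. The word list, threshold and example sentences are invented; production pipelines rely on proper language identification and alignment checks rather than a toy heuristic like this.

```python
# Hypothetical sketch: flag code-mixed Indonesian/English sentences in a
# corpus before machine translation training. The word list and threshold
# are invented for illustration only.
COMMON_ENGLISH = {"the", "and", "meeting", "update", "download", "online",
                  "weekend", "browsing", "email"}

def mixed_language_ratio(sentence):
    """Fraction of tokens in a sentence that look like English."""
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    return sum(token in COMMON_ENGLISH for token in tokens) / len(tokens)

corpus = [
    "Saya akan download file itu nanti",     # code-mixed example
    "Saya akan mengunduh berkas itu nanti",  # 'purer' Indonesian example
]

for line in corpus:
    flag = "REVIEW" if mixed_language_ratio(line) > 0.1 else "OK"
    print(f"{flag}: {line}")
```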

Even with the growing use of English and Chinese as mandatory subjects in the Indonesian education system, Bahasa Indonesia has been able to hold its place as the official language. It has adapted to the imposition of other languages and cultures by evolving. Yet, ironically, it is this lack of linguistic purity, created by the adoption of words and concepts from other languages, that has proven a challenge for SMT.

If SMT is to become a solution for Bahasa Indonesia, a lot of work will need to be done to create, and make usable, a suitable body of statistical data. Only that level of work will allow Bahasa Indonesia to use SMT and be recognised as a world language.

Janet Siska, a fourth-generation Chinese, was born and grew up in Jakarta, Indonesia. At the age of 15, Janet moved to the USA. In 2015, she graduated with a BSc in Biochemistry/Chemistry from California Polytechnic University, Pomona. Currently, Janet is attending Dublin City University where she is studying for a Master of Science in Translation Technology.  Janet is fluent in Bahasa Indonesia, English, Korean, and has a working knowledge of Chinese and German.

KantanMT Embraces Change to Grow with the New Era of Machine Translation


KantanMT recently launched a new interface. In this blog Laura Casanellas, Product Manager at KantanMT, explores the reasons behind the change and talks about the new functionalities that have been added.


Academic Use of Machine Translation with Universitat Autònoma de Barcelona

Our Academic Partner Universitat Autònoma de Barcelona (UAB) used the KantanMT platform for numerous projects and courses at the University. In this blog post, we caught up with Professor Olga Torres-Hostench, who describes her experience of using our custom MT platform for her course.
