How soon and effectively could machine learning replace human translators?

Written by Ekitai Solutions

Sometimes it seems like the internet is virtually awash in discussions about artificial intelligence, machine learning, and other technologies potentially replacing humans in the workplace.

A robot reporter used by The Washington Post published 850 articles in its first year in action, and Associated Press, a non-profit newswire based in New York City, has started automating some men's basketball coverage and quarterly earnings reports using artificial intelligence.

Some journalists have even started to worry that their jobs could be replaced by robot reporters, a feat that would have seemed unthinkable – if not impossible – just a few years ago. The language and translation industries are no different in this regard.

However, it’s unlikely that artificial intelligence-powered translators will be replacing human translators any time soon. Some experts even think that the technology itself is already being over-hyped.

 

What does it mean?

Simply put, artificial intelligence-aided or machine translation automatically converts text from the source language into text in the target language without the need for a human translator. Theoretically, artificial intelligence can learn more languages than a human and translate between them at a much faster speed.

There are several different types of machine translation programs that a business can use. The most common are statistical machine translation (SMT) – such as Google Translate – and rule-based machine translation (RBMT).

SMT-based translation uses probabilities to compare a set of candidate target segments, then chooses the most likely words or phrases as a match for the source segment. In other words, it translates by finding the translation with the highest statistical probability of being correct.
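To make the idea concrete, here is a minimal Python sketch of that principle. The phrase table, its entries, and its probabilities are invented for illustration; a real SMT system learns them from large bilingual corpora.

```python
# Toy phrase table: each source segment maps to candidate translations
# with invented probabilities.
PHRASE_TABLE = {
    "good morning": [("buenos días", 0.82), ("buena mañana", 0.11), ("bueno mañana", 0.07)],
    "thank you": [("gracias", 0.95), ("te agradezco", 0.05)],
}

def translate_segment(source: str) -> str:
    """Pick the candidate with the highest probability for a known segment."""
    candidates = PHRASE_TABLE.get(source.lower())
    if not candidates:
        return source  # unknown segment: pass it through untranslated
    best_target, _ = max(candidates, key=lambda pair: pair[1])
    return best_target

print(translate_segment("Good morning"))  # -> buenos días
```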

 

AI Machine Learning is transforming the translation landscape

 

Conversely, RBMT-based translation operates on the basis that language is all about syntax and grammar rules. Such programs refer to bilingual dictionaries for the specified languages that they are required to translate between.

These dictionaries are made up of linguistic rules that apply to the structure of sentences in each language, and rules to help the program link the sentence structure of each language to each other.

However, the process is time-consuming, as new dictionaries and rules have to be built each time a new language is paired with the source language.

RBMT is very effective when translating between languages with very different word orders, such as English and Chinese. But it depends on a much more extensive set of linguistic resources than SMT-based programs, which can accommodate a much wider array of languages.
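A toy sketch shows what those dictionaries and rules look like in practice. The tiny dictionary and the single reordering rule below are invented for illustration; real RBMT systems rely on morphological analysis and thousands of hand-written rules.

```python
from typing import List

# Toy bilingual dictionary (English -> Spanish), invented entries only.
BILINGUAL_DICT = {
    "the": "el",
    "red": "rojo",
    "car": "coche",
}

def translate_noun_phrase(words: List[str]) -> List[str]:
    """Look each word up, then apply one syntax rule:
    an English adjective+noun pair becomes noun+adjective in Spanish."""
    translated = [BILINGUAL_DICT.get(w.lower(), w) for w in words]
    if len(words) == 3 and words[0].lower() == "the":
        translated[1], translated[2] = translated[2], translated[1]
    return translated

print(" ".join(translate_noun_phrase(["the", "red", "car"])))  # -> el coche rojo
```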

Google's artificial intelligence translator can now translate your speech while keeping your voice, according to the MIT Technology Review, after researchers trained a neural network to map audio "voice-prints" from one language to another.

The results aren’t perfect, but the translator was able to keep the voice and tone of the original speaker by converting audio input directly to the audio output without intermediary steps. More traditional systems would typically convert audio into text, translate the text, and then re-synthesize the audio, losing the characteristics of the original voice and tone in the process.
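The difference is easiest to see as a pipeline. The sketch below uses trivial stand-in functions (the names and behavior are assumptions for illustration, not a real speech API) to show why the traditional cascade cannot carry the speaker's voice through to the output.

```python
def recognize_speech(audio: bytes) -> str:
    # Stand-in ASR step: this is where the speaker's voice and tone are
    # discarded, because only plain text moves on to the next stage.
    return "good morning"

def translate_text(text: str) -> str:
    # Stand-in machine translation step with a one-entry dictionary.
    return {"good morning": "buenos días"}.get(text, text)

def synthesize_speech(text: str) -> bytes:
    # Stand-in TTS step: a real system would return audio in a generic
    # synthetic voice, not the original speaker's.
    return text.encode("utf-8")

def cascaded_translation(audio: bytes) -> bytes:
    """Traditional pipeline: audio -> text -> translated text -> audio."""
    return synthesize_speech(translate_text(recognize_speech(audio)))

# A direct audio-to-audio model instead maps source speech straight to
# target speech, which is how voice characteristics can be preserved.
print(cascaded_translation(b"\x00\x01"))
```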

 

So is it possible that machine translators could replace humans in the workplace?

Artificial intelligence is ubiquitous today. Among other uses, it powers virtual assistants, such as Amazon’s Alexa and Apple’s Siri. It can recognize who and what is in a photo. It has ideas about what you should buy next when you’re shopping online, based on preferences and previous purchases.

There are two broad types of artificial intelligence: narrow and general. The former refers to intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed to do so. It has many emerging applications, such as interpreting video feeds from drones, organizing calendars, and helping radiologists spot tumors on X-rays.

General artificial intelligence is more theoretical. It doesn't exist today but can be found in science fiction, represented by the likes of Data in Star Trek, HAL in 2001: A Space Odyssey, and Skynet in the Terminator franchise.

It is the kind of intellect possessed by humans – a truly artificial consciousness – and would be capable of learning how to carry out a wide variety of tasks or reason about a wide range of topics based on accumulated experience.


This would be the kind of artificial intelligence translator that would have the ability to replace human translators in the workplace. Current machine translators are narrow artificial intelligence. They are capable of “learning” languages, but they cannot fully appreciate the nuance inherent in human communications.

 

Machines can’t understand the culture (yet)

Different cultures have different lexical items – like slang, idioms, and proper nouns – that are unique to that specific culture. Machines don’t yet have the complexity to understand or recognize them.

Meanwhile, native speakers who are well versed in the languages into which they are translating, and understand all the slang and idioms of the respective countries, are usually skilled enough to find appropriate equivalents.

Sometimes, a word can mean one thing in one culture and something totally different in another. Context is important in translation, and interpreting it correctly often depends entirely on human involvement, as machine translators can only perform direct word-for-word translation.

In other words, just because they can translate a word directly, that doesn't mean they understand its meaning, which can completely alter the translation itself. Similarly, many languages have words with dual meanings, which can be a problem for machine translators.
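A tiny example makes the problem clear. The one-sense-per-word dictionary below is invented, but the ambiguity is real: German uses "Bank" for a financial institution and "Ufer" for the bank of a river, so a context-blind lookup is bound to get one of the sentences wrong.

```python
# Deliberately naive dictionary: one stored sense per word.
NAIVE_DICT = {
    "bank": "Bank",   # only the "financial institution" sense is stored
    "river": "Fluss",
}

def word_for_word(sentence: str) -> str:
    """Translate each word in isolation, with no knowledge of context."""
    return " ".join(NAIVE_DICT.get(w.lower(), w) for w in sentence.split())

# Both sentences receive the same translation for "bank", even though the
# second one should use "Ufer" (the bank of a river).
print(word_for_word("the bank approved the loan"))
print(word_for_word("we sat on the river bank"))
```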

Languages are, almost by definition, subjective. Artificial intelligence, on the other hand, typically excels at tasks that are rooted in objective reality, functioning best when confronted with clear mathematical or physical rules that govern its decision-making.

However, languages are subjective constructs invented by groups of humans to communicate with each other. While they often exhibit rule-like behavior, that behavior is grounded in convention, not objective reality, and is continuously evolving.

 

Robots don't do humor

Even human translators find jokes, puns, nuanced cultural references, and the occasional bit of sly innuendo hard to get right. Different cultures find different things funny, and differences in spelling and vocabulary can render clever wordplay meaningless.

From an interpreter’s standpoint, tone of voice and body language also directly inform a speaker’s intent and have to be accurately analyzed and conveyed in the target language too. This is challenging for humans and – so far – impossible for a machine.

A move from statistical, phrase-based machine translation to neural networks has resulted in significant improvements in overall quality. But neural machine translation is even more dependent on huge sets of training data than previous models.

And since the most significant bilingual data sets available often come from official translations of government documents and religious texts, these algorithms have pitifully low exposure to humor, wordplay, and non-verbal expression.

Furthermore, machine translators can't admit to – or correct – their mistakes. When Google Translate started offering biblical prophecies in exchange for junk input, for example, experts attributed the errors to neural networks' preference for fluency over accuracy.

These “false positives” are far more problematic than more obvious mistakes, as audiences in the target language might not realize that a glitch has occurred and could attribute the outlandishness of the translation to the original text itself.


 

Machines can’t translate on the fly

The above challenges make it difficult enough to perform machine translation on static text. Asking a computer to translate live speech simultaneously adds several extra layers of complexity, including automatic speech recognition (ASR).

When you have a conversation with Siri or Alexa, they seem to be pretty competent conversationalists, but that exchange is still constrained within a narrow set of contexts and conditions: short, command-based interactions with a finite vocabulary in a controlled environment.

Most situations in which real-time translation of live speech would be required – such as conferences and business discussions – however, feature speech that is spontaneous, continuous, and often highly dependent on context and specialist knowledge.

Perhaps unsurprisingly, these are traits that send the error rate of most ASR programs soaring. For example, during a speech given by hedge fund guru Ray Dalio, in which he reflected on his mis-forecasts as a young trader, a real-time subtitling machine translator rendered “How could I be so arrogant?” as “Aragon, I looked at myself and I.”

Recent advances in the field have shown some promise, and many experts predict that the word error rate of ASR software will soon reach parity with human transcribers. However, not all word-choice errors are equal, and some will be far more consequential than others.
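For readers curious what that metric measures, here is a minimal sketch of how word error rate is typically computed: word-level edits (substitutions, insertions, and deletions) divided by the number of words in the reference transcript. It reuses the subtitling mishap quoted above, and the result can even exceed 100 percent when the hypothesis contains more errors than the reference has words.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table for the edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dist[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

# The subtitling error quoted above: almost nothing survives.
print(word_error_rate("How could I be so arrogant",
                      "Aragon I looked at myself and I"))
```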

 

Machines aren’t coming for your job (yet)

All of this is to say that it seems unlikely that machine translators will be replacing their human counterparts tomorrow.

While translators and interpreters – alongside copywriters, journalists, and other language economy professionals – may very well lose their jobs to robots at some point in the future, it isn't going to happen any time soon.

However, some experts estimate that within the next one to three years, machine translators will be taking on around 80 percent of corporate translation work.

This doesn’t mean that organizations like the United Nations and police forces won’t need translators, or that we won’t still need sign language interpreters. It is more likely that machine translators will increasingly become just another tool in the arsenal of a good translator.

Computer-assisted translation tools are already widely used among text translators, and while some may object to the idea, simultaneous interpreters could almost certainly benefit from a combination of speech recognition and translation memory technology.
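To illustrate what translation memory means in practice, here is a minimal sketch of the idea behind many computer-assisted translation tools: previously translated segments are stored, and new segments are fuzzy-matched against them so that close matches can be offered to the translator. The stored segments and the similarity threshold below are invented for the example.

```python
from difflib import SequenceMatcher

# Invented memory of previously translated segments (English -> Spanish).
TRANSLATION_MEMORY = {
    "The invoice is due within 30 days.": "La factura vence en un plazo de 30 días.",
    "Please sign and return the attached contract.": "Firme y devuelva el contrato adjunto, por favor.",
}

def best_match(segment: str, threshold: float = 0.75):
    """Return (stored source, stored translation, similarity) for the closest
    fuzzy match above the threshold, or None if nothing is close enough."""
    best = None
    for source, target in TRANSLATION_MEMORY.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (source, target, score)
    return best

# A near-repeat of an earlier sentence is found and offered to the translator.
print(best_match("The invoice is due within 60 days."))
```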

For translators looking to machine-proof their careers – as far as possible – it might be wise to specialize in languages that are less widely spoken, and to encourage the industry to see artificial intelligence as a complement to human output, not a replacement.
