Back in the days of yore, language translation was a highly specialized profession, critical for coordinating diplomacy or international trade. The first printed bilingual dictionary, the Vocabularius ex quo, was a Latin-German word list published in 1467, while clay tablets containing lists of words in Sumerian and Akkadian date back as early as 2300 B.C.
Language translation has become easier over the years, in many cases thanks to the work of linguists and anthropologists. The development of computers and, eventually, artificial intelligence has given language translation a massive push, taking it out of the hands of specialists and weighty books and into our phones.
But how have translation applications like Google Translate developed over time, and what were their predecessors?
How Computer Translation Started
People have tried to use computers to translate languages since the mid-20th century.
“The idea of online translation was something that people strived for when computers began,” says Jaroslaw Kutylowski, the CEO of DeepL, a company that provides translation services using neural systems.
In fact, one of the first uses of computers for something beyond numbers was an experiment conducted in 1954 after several years of work by researchers at Georgetown University and IBM. The demonstration worked from a vocabulary of only about 250 words and six grammar rules, translating mostly brief Russian statements about science, law and military affairs into English in a matter of seconds.
The program's possibilities were limited, though, and its translations required a lot of post-editing. Not much progress was made over the next decade. In fact, experiments were so underwhelming that a report released in 1966 by the Automatic Language Processing Advisory Committee, set up by the U.S. government and made up of linguists and machine translation researchers, determined that there wasn't much hope in the near term.
“The Committee indeed believes that it is wise to press forward undaunted, in the name of science, but that the motive for doing so cannot sensibly be any foreseeable improvement in practical translation,” the report said.
The Beginning of SYSTRAN
The report effectively killed most work on machine translation for years, with one notable exception: System Translation, better known as SYSTRAN. Peter Toma, a scientist who believed the road to world peace ran through communication, founded the company in the late 1960s.
The company worked with the U.S. Air Force, and used machines to translate instructions from Russian to English for the Apollo-Soyuz Test Project, a joint U.S.-Soviet space mission launched in 1975.
SYSTRAN continued to develop its product, which powered Babel Fish, the translation tool the online search engine AltaVista created in the late 1990s and that would later be bought by Yahoo! At least in its early days, the tool had limited abilities, handling only up to 150 words at a time.
Building such rule-based systems took enormous effort. “That’s a lot of work, and it requires combined effort by linguists and coders,” Kutylowski says, adding that it gets even more complicated for languages with “incredibly complicated grammar,” like Japanese or Finnish.
Google didn’t catch up to Yahoo! until 2006, when it launched Google Translate. The service has grown increasingly sophisticated, moving from simple text translation online to phone applications. In 2014, Google acquired Word Lens, an app that let users point their phone camera at written words on things like road signs or menus and get a translation. Word Lens also helped Google improve its ability to listen to spoken words and translate them.
The Role of Artificial Intelligence
The trouble was that language translation remained an imperfect science at best. In general, Kutylowski says, computer translation progressed from handling one word at a time, to short phrases, to full sentences over a period of roughly half a century.
Machines were still apt to miss things like context, which made words with two or more meanings hard to translate. But by 2016, artificial intelligence had opened new possibilities for machine translation. Google began working with neural machine translation, which essentially meant building a neural network that wouldn't just translate words or phrases, but entire passages.
In 2017, DeepL started using neural networks to build a translation program to sell to international companies. “We’ve been in this gold rush of neural networks,” Kutylowski says. “Translation has such a broad application both in private and in business.”
To do this properly, he says, you need massive numbers of passages that have already been translated from one language into another. Ideally, experts need scientific text, legal text and every other kind of language in translated form to teach the machine learning systems how to operate better.
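In concrete terms, that training material is just paired passages. The toy Python sketch below uses purely hypothetical example sentences, not data from any real system, to show the shape such a parallel corpus takes; production systems learn from many millions of pairs like these.

```python
# A toy illustration of parallel training data: passages that have already been
# translated from one language into another (here, German to English).
# Real translation systems train on many millions of such pairs, drawn from
# scientific, legal and many other kinds of text.
parallel_corpus = [
    ("Die Ergebnisse wurden in einer Fachzeitschrift veröffentlicht.",
     "The results were published in a scientific journal."),
    ("Der Vertrag tritt am ersten Januar in Kraft.",
     "The contract enters into force on the first of January."),
]

for source, target in parallel_corpus:
    print(f"{source}  ->  {target}")
```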
Neural machine translation is an attempt to create neural systems that operate more like a human brain. Rather than being taught grammar, Kutylowski says, the system simply picks up the text and learns much the way humans do; people can pick up a language without explicitly learning any grammar rules.
“That overall learning process, and the overall operation of systems, goes towards the way that we as humans learn a language,” he says.
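To make that concrete, here is a minimal sketch of neural machine translation in Python. It is an illustration only, not DeepL's or Google's system, and it assumes the open-source Hugging Face transformers library and a publicly available Marian model that was trained on translated sentence pairs rather than on hand-written grammar rules.

```python
# A minimal sketch of neural machine translation using an off-the-shelf,
# open-source model (a Helsinki-NLP Marian model via Hugging Face transformers).
# Illustration only: not DeepL's or Google's actual system.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-en"  # German -> English, trained on parallel corpora
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = "Maschinelle Übersetzung hat sich in den letzten Jahren stark verbessert."
inputs = tokenizer(text, return_tensors="pt")  # turn the passage into token IDs
outputs = model.generate(**inputs)             # the network generates the translation token by token
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Roughly: "Machine translation has improved greatly in recent years."
```

The point of the sketch is that the network produces its translation from patterns absorbed during training on translated text, with no grammar rules coded in.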
These systems are getting better and better at translating, but they will likely always have flaws. The trouble is, communication is an imperfect science, even for two humans who speak the same language. An online translation program can only be as good at translating as humans are at communicating in the first place.
Joshua Rapp Learn is an award-winning D.C.-based science writer. An expat Albertan, he contributes to a number of science publications like National Geographic, The New York Times, The Guardian, New Scientist, Hakai, and others.