18/08/2022

Bias in machine translation: A wake-up call for the translation industry?

Often, when we think of machines and AI, we tend to think of them as emotionless, logically driven systems that are more objective than humans. This perceived objectivity is one of AI's strengths compared to more subjective, more irrational human processes, but it can also be a weakness.

Yet it has become increasingly clear that AI is not, in fact, as objective as we think: these systems are repeatedly shown to reflect biases in their operations that are all too human.

In this article, we will look at the paradox of machine translation bias and why it is a problem that MT developers and stakeholders need to address.

What is “bias” in the context of machine translation?

In common usage, bias is defined as the tendency to make prejudiced assumptions about, or hold prejudiced inclinations toward, someone or something. The term carries a negative connotation, in that biases are often unfair and discriminatory.

In the context of machine translation, and language AI more broadly, the term bias refers to the tendency of a system to make the same assumptions over and over. At first glance, the two senses might seem to have little overlap: AI systems are ostensibly built on neutral algorithms that carry no human prejudice. But algorithms are not the only component these systems are built on, and in practice, human biases inevitably creep in.

How can MT systems become biased?

But what kind of biases can machine translation have? It's not as though MT has a mind of its own that can harbor prejudices, conscious or unconscious, the way humans do. What machine translation does have is a tendency to magnify human assumptions about language, both the good and the bad.

While MT algorithms can be considered technically neutral and objective, the data on which language models are trained is not.

It's a matter of statistics, first of all. Massive amounts of linguistic data, all generated by humans, are needed for language models to work. Without extensive intervention, that data will naturally reflect the biases that already exist in human thought and speech, and replicate them in whatever output the system produces.

MT systems make choices based on statistics

In the case of machine translation, the system will tend to produce the translation that is statistically most likely to occur, based on the examples in the data it was trained on.

This can have innocuous results. Take, for example, translating the word lift into Spanish: it can be rendered as levantar or elevar, which can be considered synonyms. But levantar is the more commonly used of the two, so most MT systems will choose it when translating text.

Here we see a machine translation system at work, making a choice based on the statistical probability of a translation occurring according to the data it has. But what happens when the MT system has to make a more difficult choice?
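
To make that concrete, here is a minimal sketch in Python of the selection logic described above. The counts are invented for illustration; real systems score full sentences with neural models rather than looking up word frequencies, but the winner-take-most principle is the same.

```python
# Toy illustration of statistical translation choice.
# The counts below are hypothetical, not taken from a real corpus.
corpus_counts = {
    "levantar": 830,  # times "lift" was rendered as "levantar" in training data
    "elevar": 290,    # times "lift" was rendered as "elevar"
}

def pick_translation(counts):
    """Return the candidate translation seen most often in the data."""
    return max(counts, key=counts.get)

print(pick_translation(corpus_counts))  # -> levantar
```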

The problem of gender in machine translation

Gender is a major choice that MT systems need to consider for some languages. For example, doctor translates into doctor or doctora in Spanish, changing according to the gender of the person being referred to. It can also translate to médico or médica.

Most MT systems generally default to the male form. Google Translate provides alternative translations for both genders, but even it defaults to the male form for longer sentences.

Meanwhile, nurse translates to enfermero or enfermera. But here, most MT systems will default to the female form.

This is something you can test yourself: try translating these terms on their own, then try them in a longer sentence like "The doctor/nurse said you should rest as much as necessary."
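
If you prefer to run the experiment in code, here is a minimal sketch using the open-source Helsinki-NLP/opus-mt-en-es model through the Hugging Face transformers library. This assumes transformers, sentencepiece, and a backend such as PyTorch are installed; the exact output depends on the model version, and other engines may behave differently.

```python
# Probe an English-to-Spanish model for gendered defaults.
# Requires: pip install transformers sentencepiece torch
from transformers import pipeline

translator = pipeline("translation_en_to_es", model="Helsinki-NLP/opus-mt-en-es")

sentences = [
    "The doctor said you should rest as much as necessary.",
    "The nurse said you should rest as much as necessary.",
]

for sentence in sentences:
    result = translator(sentence)[0]["translation_text"]
    # Inspect which grammatical gender the model chose for each profession.
    print(f"{sentence}\n  -> {result}")
```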

All this hinges on the biases present in the data that machine translation systems are trained on. MT systems don't understand gender per se. But because the available data tends to render doctor as male and nurse as female, the system will prefer those translations as a matter of statistical probability.
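
Note that this selection step doesn't just mirror the skew in the data; it amplifies it. A toy illustration with invented probabilities:

```python
# If 70% of training examples render "doctor" as male, greedy selection
# doesn't output the male form 70% of the time -- it outputs it every time.
skew = {"el doctor": 0.7, "la doctora": 0.3}  # hypothetical probabilities

print(max(skew, key=skew.get))  # -> "el doctor", on every single run
```

A 70/30 split in the data thus becomes a 100/0 split in the output.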

As such, machine translation's technical "objectivity" becomes its own weakness in practice. Because of it, machine translation poses the danger of perpetuating, and even amplifying, gender stereotypes, a problem that persists even as the quality of machine translation improves.

How to address bias in machine translation

Bias in machine translation is a complicated problem to solve. Because machine translation works by statistical inference, bias is a systemic issue baked in from the very start.

It’s difficult to find a technological solution. Even the big players like Google, Microsoft, and DeepL still have difficulty with it. This means that human intervention remains an important part of working with machine translation.

What does this mean in practical terms?

The post-editor thus becomes the front line in eliminating bias from machine-translated output. Humans can take context into account when handling translations, something machine translation still struggles to do.

But it goes deeper than that. Not just any post-editor will do: at the root of it all, bias is a human problem. Post-editors need to be aware of both human and machine bias, and of the risk of bias in machine translation, and they need to be trained to deal with these biases in their work.

And dealing with bias isn't just down to post-editors. Anyone working with machine translation, including managers and consultants, needs to be aware of the problem and the risks involved. They need to be able to convey this information to their clients and other stakeholders, so that everyone knows what to expect, what can be done, and what needs to be done to use machine translation ethically.

Parting thoughts

Bias in machine translation is a problem that isn't going away soon. But we believe it is possible, even necessary, to look at it from a different perspective. Rather than a problem of technology, machine translation bias should be seen as an opportunity to empower the humans who work with machine translation. The human capacity for empathy, justice, and goodwill should come front and center to compensate for what the technology lacks.

Machine translation bias should be a wake-up call for the industry to think not only about the risks that come with machine translation, but also about the ways we interact with it and build solutions around it, with human-machine collaboration in mind.