08/02/2023

Creating an Inclusive AI Future: The Importance of Non-Binary Representation

Hello all! 

Today, we'd like to shine a light on a crucial issue in the world of technology: gender bias in machine translation and the pursuit of a more inclusive future in AI. As our reliance on technology grows, it has become increasingly important to address the biases that are built into these systems.

And who better to share their thoughts on this topic than Michal Měchura, the visionary behind an exciting solution that promotes diversity and inclusivity in AI? That solution, called Fairslator, is an experimental application that removes bias from machine translation by asking the user follow-up questions to detect and correct it. Unlike other solutions, Fairslator is designed to work hand-in-hand with humans, adhering to the philosophy that it's always better to ask than to guess. Michal will provide valuable insights into the impact of gender bias in AI and what can be done to overcome it. Let's dive down this rabbit hole together!

 

Can you explain the concept of gender bias in machine learning and how it affects the performance and accuracy of language models?

Michal: In machine translation, gender bias happens when the machine makes an unjustified assumption about someone’s gender. For example, when it assumes that words like "director" and "pilot" refer to men and words like "flight attendant" and "nurse" refer to women–and then it translates them accordingly into a language that has separate male and female words for these occupations (which is practically all European languages except English). 

On one level, these biases are not wrong. The machine learning algorithm–through which the machine translator has been created–has correctly spotted that, in its training data, words like "director" usually refer to men and words like "nurse" usually refer to women. So, technically, the algorithm is doing exactly what it’s supposed to.

On another level, these biases very much are wrong. If I go to Google Translate and ask it to give me a translation of “I work as a flight attendant” into a language that has separate words for male and female flight attendants, and if the translation I get is worded as if I am a woman when in fact I am a man, then I have been given the wrong translation: wrong because it is different from what I intended. The fact that most flight attendants in the training data were women doesn’t make it less wrong for me here and now.

So yes, bias is a consequence of machine learning from biased data. Many types of bias can be straightened out by feeding the machine less biased training data and by teaching the machine to pay more attention to context and co-reference. For example, in a text such as "we are waiting for the pilot, she is late for work again" a well-trained machine translator should figure out that "pilot" probably refers to a woman, because of the “she.”

But, interestingly, this is not where the story ends. There are cases of bias where even a super-smart AI can’t figure out what gender someone is because the text simply doesn’t say. An example would be a sentence like "we are waiting for the pilot, who is late for work again." If this is all the input you have then you simply don’t know what gender the pilot is. Not even a human translator would know without asking or looking. Machine translators make biased choices under such circumstances too, but here you can’t make them unlearn that just by giving them better data or by teaching them about context. My project Fairslator is about dealing with exactly this kind of bias, but more about that later I guess.

 

In what ways do machine translation systems perpetuate gender bias and stereotypes?

Michal: I always like to talk about the effects of biased translations on two levels: an individual level and a social level.

On an individual level, a biased translation is wrong when it’s factually different from what I intended: I want to talk about somebody who is a woman but the machine is translating as if I’m talking about a man. I think it’s pretty uncontroversial that this is a bad user experience.

On a social level, biased machine translation has the same effect that biased language in general has: it plants in people’s minds the subconscious idea that certain things (such as certain occupations) are more for men than for women, or vice versa. If I go about my daily life talking about pilots and doctors like they’re always men, that’s bad enough. But if an automated tool, powered by machine learning, picks up on this prejudice and amplifies it into the world a million times over, then that’s a million times worse.

That’s what machine learning-powered automation does: it learns skills by observing human behaviour and then amplifies them. Most of the time the skill which is learned and amplified is good and desirable: OCR, early-onset diagnostics in hospitals, Shazam on your phone, etc. But in a minority of cases the "skill" or habit which the machine has learned is something sensible people would rather not encourage, like gender bias.

 

How does Fairslator's technology ensure that gender and race biases are eliminated in the translations produced?

Michal: As I said, Fairslator targets the kinds of biases that can’t be fixed by conventional methods. It detects bias-causing ambiguities in the source text and asks the human user to disambiguate them manually.

I’ll show you what Fairslator does through an example. You input an English sentence which is gender-ambiguous, such as "I am the director," and ask for it to be translated into some other language, such as French. Fairslator goes to a third-party MT provider such as Google Translate and obtains a translation from there: "Je suis le directeur." Then Fairslator runs the two sentences (the source plus the translation) through its ambiguity-detection algorithm and notices that we have an expression in the source text, "the director," which is gender-ambiguous but which has been translated using the gender-specific male expression "le directeur." Fairslator knows that this is just one of two options, so it asks you: "Who is the director? Is it a man or a woman?" If you select woman, Fairslator re-inflects the translation accordingly: "Je suis la directrice." In some languages it even offers a third, gender-neutral option which rewords the whole sentence in a way which allows for both genders.

So Fairslator is three things really. First, it is an algorithm for detecting bias-causing ambiguities in a pair of text segments where one is a translation of the other. Second, it is a user interface where humans can manually resolve these ambiguities by answering human-friendly questions such as "is this person a man or a woman?" Third, it is an algorithm for re-inflecting the translation in accordance with the user’s choices, typically by changing male words to female words.
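To make that three-part flow a little more concrete, here is a rough, purely illustrative sketch in Python. It is not Fairslator's actual code: the function names, the single hard-coded ambiguity, and the string-replacement "re-inflection" are all invented for this example, and a real implementation would rely on proper morphological analysis rather than string matching.

```python
# Toy version of the pipeline described above: detect an ambiguity in a
# (source, translation) pair, ask the human a question, re-inflect the result.
from dataclasses import dataclass


@dataclass
class Ambiguity:
    phrase: str      # ambiguous source expression, e.g. "the director"
    question: str    # human-friendly follow-up question for the user
    options: dict    # answer -> replacement wording in the target text


def detect_ambiguities(source: str, translation: str) -> list:
    """Toy detector: flags one known gender-ambiguous noun."""
    found = []
    if "the director" in source and "le directeur" in translation:
        found.append(Ambiguity(
            phrase="the director",
            question="Who is the director? A man or a woman?",
            options={"man": "le directeur", "woman": "la directrice"},
        ))
    return found


def reinflect(translation: str, ambiguity: Ambiguity, answer: str) -> str:
    """Toy re-inflector: swaps in the wording chosen by the user."""
    return translation.replace("le directeur", ambiguity.options[answer])


# Example run. The translation would come from a third-party MT engine.
source = "I am the director"
translation = "Je suis le directeur"

for amb in detect_ambiguities(source, translation):
    print(amb.question)      # "Who is the director? A man or a woman?"
    answer = "woman"         # in the real UI, the human answers here
    translation = reinflect(translation, amb, answer)

print(translation)           # "Je suis la directrice"
```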

I’ve been developing this technology for over a year now and I have it working reasonably well in four language pairs: English-French, English-German, English-Czech and English-Irish. Also, I want to add that Fairslator works not only on gender but also on forms of address, such as when it isn’t clear whether "you" refers to one person or many, or when it isn’t clear whether people are being addressed formally or informally–which is also an important factor in many European languages.

So, Fairslator isn’t just about those ambiguities that cause gender bias, it’s about ambiguities in general. Every time you have something "unsaid" in the source text which is important for translation but which the machine is unable to resolve from context, a tool like Fairslator should be able to jump in and solicit a clarification from the human user. I like to describe this model as "human-in-the-loop machine translation" or even "human-assisted machine translation."
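The same ask-rather-than-guess idea carries over to forms of address. As a hypothetical sketch (again, invented for this article rather than taken from Fairslator), the English "you" could be disambiguated with two follow-up questions before a German form is chosen:

```python
# Hypothetical representation of a forms-of-address ambiguity: English "you"
# says nothing about number or formality, so the tool asks before choosing.
GERMAN_YOU = {
    ("one person", "informal"):  "du",
    ("one person", "formal"):    "Sie",
    ("many people", "informal"): "ihr",
    ("many people", "formal"):   "Sie",
}

follow_up_questions = [
    ("How many people does 'you' refer to?", ["one person", "many people"]),
    ("Are they being addressed formally or informally?", ["informal", "formal"]),
]

# In a real interface the answers come from the human user; hard-coded here.
answers = ("many people", "informal")
print(GERMAN_YOU[answers])   # "ihr"
```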

 

How can we ensure that the data sets used to train machine learning models are diverse and inclusive of different genders?

Michal: This isn’t a question for me because Fairslator isn’t about that. The point of Fairslator is that even if you did have perfectly gender-balanced training data, machine translators would still run into gender ambiguities sometimes, and would be forced to make unjustified gender assumptions sometimes. However smart and unbiased the AI is, it can never know what gender somebody is if the source text doesn’t say. Take a sentence like "This is our new director" and imagine you have no context: no text before and no text after, you just have this one sentence which someone has typed into a machine translator. How can you know whether the director is male or female? You can’t. The only way is to ask the user. On the Fairslator project I’m developing tech for doing exactly this: detecting when the source text is ambiguous, formulating follow-up questions, and re-inflecting the translation accordingly.

Yes, you can get rid of a lot of gender bias by training on better balanced data. Some instances of bias are solvable like this, but others aren’t. Fairslator is here for those that aren’t.

 

What inspired the formation of Fairslator and what led to the creation of its technology to eliminate biases in language?

Michal: The idea had been brewing in my head for a long time, literally for years. I had always thought how silly it was that nobody had yet created a technology that could ask you what you mean when translating from one language to another. Without that, the machine is just guessing, jumping to conclusions, making assumptions which may be unfounded. But we have the human user right there in front of the screen, so why not ask them instead of guessing? Surely this must be doable, I thought, until one day I rolled up my sleeves and started building such a tool myself.

I guess I need to say at this point that Fairslator is my own personal pet project. There is no investor or grant or anything like that behind it. I wanted to show the world that the problem of bias in machine translation can be solved and that my way of solving it can work. Fairslator is only a prototype at this stage, a partially working proof-of-concept. But I am working on a more robust version which I hope to commercialise in the future as a component in the machine translation post-editing process, so watch this space, as they say.

 

In your opinion, what role do you think the field of machine translation and natural language processing can play in promoting gender equality and social justice?

Michal: Never mind making the world better: I’ll be happy enough if we, the people who build language technology, stop making it worse. As things stand at the moment, we are building translation tools that have picked up biases from texts our ancestors wrote decades ago, and we are now feeding those biases back into the world at an amplified scale.

We need to build better filters to stop that from happening. There are filters we should be applying both before and after the translation step itself. The filters before, that’s things like gender-balanced training data. The filters after, that’s tools like Fairslator which detect bias-causing ambiguities and bring humans into the loop to resolve them.

My goal, my mission even, is to normalise the idea that machine translators ask follow-up questions. You can’t produce unbiased translations if you’re not doing that. It’s what good human translators do, and it’s time machines learned how to do that too.

 

In conclusion, we would like to extend our gratitude to Michal Měchura for sharing his insightful perspectives on the challenges posed by gender biases in AI and the great potential for change with the help of Fairslator. Michal's passion and dedication towards creating a fairer, more inclusive future through the responsible development of AI technology are inspiring, and we are grateful to have had the opportunity to learn from him. We believe that initiatives like Fairslator will play a critical role in addressing the biases that exist in AI and promoting equal opportunities for all individuals, regardless of gender or other personal characteristics.