A blog of the Middle East Women's Initiative
A Second-Order Simulacrum: Gender and Racial Biases in AI Data
MEP Global Fellow Sola Mahfouz interrogates the data used in artificial intelligence, asking: if our databases and references are already flawed by excluding gendered and racialized minorities, will AI exacerbate these inequalities instead of fixing them?
“We can't expect positive outcomes unless we address the ‘power shadows’ embedded within the data.”
We already live in a hyper-reality. Very few of us disengage from our phones and television screens to observe what is genuinely real. More and more, we are losing our reference to the original and the natural.
With the advancement of AI, we are beginning to live in a hyper-hyper reality, a second-order simulacrum. This new reality is shaped by a select few, raising critical questions about the subjectivity embedded within ostensibly objective data. This detachment may be partly why we see extreme polarization in contemporary society. As Jean Baudrillard aptly noted, “the simulacrum is never what hides the truth—it is truth that hides the fact that there is none. The simulacrum is true.”
As an Afghan, I can already see how much data about Afghans is lacking in the English language. We do not have many novels, for instance. Whatever exists largely comes from Afghans being ‘studied’ by outside ‘experts.’ When that is the data, is there accurate representation? How does AI exacerbate the lack of representative data? What are the implications for technology that is increasingly making human decisions?
The origins of data
AI models frequently draw from data sources biased toward Western perspectives. This data influences how AI perceives and interacts with the world, masking the absence of a more global reality. AI, a second-order simulacrum, conceals the diversity and complexity of actual human experiences.
"Power shadows are cast when the biases or systemic exclusion of a society are reflected in the data," said Dr. Joy Buolamwini, author of Unmasking AI. Even when efforts are made to de-bias these models, they often result in a simplified, or ‘toy.’ version of what diverse groups might look like, reinforcing stereotypes and marginalization.
Facial recognition technology, for example, has been found to not just malfunction but to produce stereotyped, racist, and sexist images. These persistent failures aren't glitches; they are the reality of a hyperreal world where discrimination is encoded within the very fabric of our technologies. We cannot expect positive outcomes unless we address the ‘power shadows’ embedded within the data.
This is reminiscent of Alan Turing's reflections in the paper “Computing Machinery and Intelligence”: “I now proceed to consider opinions opposed to my own. The Theological Objection. Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think. I am unable to accept any part of this but will attempt to reply in theological terms. I should find the argument more convincing if animals were classed with men, for there is a greater difference, to my mind, between the typical animate and the inanimate than there is between man and the other animals. The arbitrary character of the orthodox view becomes clearer if we consider how it might appear to a member of some other religious community. How do Christians regard the Moslem view that women have no souls?”
When did Muslims ever say that women have no souls? That was the first question that came to mind as I read that paper. Even in the attempt to remove bias, there is a bias. Here, Alan Turing imagined what those opposed to his opinion would think.
There is no such thing as true objectivity. Every dataset, every algorithm, is imbued with the subjectivity of its creators. The debates around AI consciousness or the replication of human thought miss the point. AI will never truly emulate human consciousness because it lacks the lived experience that defines humanity. The real question is what happens when we increasingly rely on AI and its simulated versions of human behavior.
The implications of flawed data
If we impose these ‘toy’ models of humanity, shaped by narrow, Western perspectives, on the broader population, as Turing did in his paper, we enforce a constrained and often inaccurate understanding of diverse cultures and identities. These simulations then become the basis for critical judgments, such as immigration decisions and job applications, setting a dangerous precedent.
In immigration, AI might determine who is allowed entry based on patterns that reflect Western norms and biases, disregarding the complexities of applicants' backgrounds and cultures. Similarly, in job applications, AI-driven systems may favor candidates who fit a particular mold shaped by Western corporate values, thereby marginalizing those from different cultural contexts.
As we advance further into this hyper-hyper reality, we must critically examine the subjectivity embedded in our so-called objective technologies. We should resist the imposition of ‘toy’ models on complex human beings, ensuring that people do not have to prove their humanity against AI-perceived models.
The views represented in this piece are those of the author and do not express the official position of the Wilson Center.
Middle East Program
The Wilson Center’s Middle East Program serves as a crucial resource for the policymaking community and beyond, providing analyses and research that help inform US foreign policymaking, stimulate public debate, and expand knowledge about issues in the wider Middle East and North Africa (MENA) region.
Middle East Women's Initiative
The Middle East Women's Initiative (MEWI) promotes the empowerment of women in the region through an open and inclusive dialogue with women leaders from the Middle East and continuous research.