This creepy Microsoft algorithm helped stalk girls to predict who would get pregnant

An in-depth WIRED investigation has revealed that, back in 2018, AI was used in Argentina to harvest deeply invasive data about young girls in the northern province of Salta. The context is vital: this was happening just as the Argentine Congress was vigorously debating the legalisation of abortion in the country. The bill to legalise reproductive rights was ultimately rejected at the time, with many citing the influence of the Catholic Church as an overriding factor.

“Lawmakers chose today to turn their backs on hundreds of thousands of women and girls who have been fighting for their sexual and reproductive rights,” Mariela Belski, the director of Amnesty International Argentina, told The Guardian at the time. “All that this decision does is perpetuate the circle of violence which women, girls and others who can become pregnant are forced into.” Little did we know then that this violence extended into the world of AI.

It is becoming increasingly clear that tech spaces are just as unsafe for women as the real world—take the recent ‘gang rape’ that took place in Facebook’s metaverse—and now, deeply disturbing facts are emerging about Microsoft’s AI software. In 2018, the Ministry of Early Childhood of the province of Salta collaborated with the tech corporation to develop an AI-powered algorithmic system that would help ‘predict teenage pregnancy’. The program was titled ‘Technology Platform for Social Intervention’. Its purpose? To forecast which girls, specifically those from low-income areas, were most likely to fall pregnant within the next five years.

The system collated data on each girl’s age, country of origin, ethnicity and disability—and even whether her home had hot water—to predict whether she was ‘predestined’ for motherhood. As you can imagine, none of the individuals who were analysed gave their consent prior to this little experiment. And once a girl was labelled as ‘predestined’ for pregnancy, WIRED noted, it was unclear what would happen to her or how such data would be useful in preventing teen pregnancy. In an even more terrifying revelation, the publication reported that those analysed by the creepy AI software were also visited by “territorial agents,” who inspected the girls’ homes, took photos and registered their GPS locations—the girls in question were often poor, migrants or of Indigenous descent.
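Neither Microsoft nor the ministry ever published the model’s internals, so the following is a purely illustrative sketch of how a classifier trained on that kind of demographic data might produce a ‘risk’ score—every field name, data point and label below is hypothetical. What it makes plain is that such a model can do little more than rank proxies for poverty:

```python
# Purely illustrative sketch—NOT the actual Salta system, whose internals
# were never published. A generic classifier trained on the demographic
# fields the WIRED report describes simply re-encodes socioeconomic
# disadvantage as 'risk'. All features, data and labels are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors:
# [age, is_migrant, is_indigenous, has_disability, home_has_hot_water]
X = [
    [14, 1, 0, 0, 0],
    [16, 0, 1, 0, 0],
    [15, 0, 0, 0, 1],
    [17, 1, 1, 1, 0],
    [16, 0, 0, 0, 1],
    [15, 1, 0, 0, 0],
]
y = [1, 1, 0, 1, 0, 1]  # fabricated labels: 1 = 'became pregnant'

model = LogisticRegression().fit(X, y)

# The output is a probability—the kind of figure behind claims that a
# girl is '86 per cent predestined' to a teenage pregnancy.
girl = [[15, 1, 0, 0, 0]]
print(f"predicted 'risk': {model.predict_proba(girl)[0][1]:.0%}")
```

Note that nothing in such a pipeline verifies causation: a high score simply means a girl resembles poorer girls in the training data.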

WIRED’s report also notes Argentina’s colonial history and its impact on the country’s many Indigenous communities. Detailing the horrors of the dictatorship of the 1970s and 80s—when contraception was banned so that women would ‘populate the country’, and detained women were murdered after giving birth, their children adopted into ‘patriotic Catholic families’—the publication highlighted the worrying similarities between the rhetoric of that era and the eugenicist thinking that still shapes battles over reproductive rights. Only this time, it comes armed with AI.

Though it is unclear whether the use of these dystopian technologies has come to an end, it is thanks to the tireless work of grassroots feminist activists in the country that such a gross violation of the rights of the women and girls of Salta was exposed. Among them are feminist scholars Paz Peña and Joana Varon. “The idea that algorithms can predict teenage pregnancy before it happens is the perfect excuse for anti-women and anti-sexual and reproductive rights activists to declare abortion laws unnecessary,” they wrote. The ‘Technology Platform for Social Intervention’ seems nothing more than yet another bid to control and strip away the reproductive rights of women, girls and anyone with a uterus.

“It is also important to point out that the database used in the platform only has data on females. This specific focus on a particular sex reinforces patriarchal gender roles and ultimately, blames female teenagers for unwanted pregnancies, as if a child could be conceived without a sperm,” they continued. The scholars also cited the statement of conservative politician and governor of Salta, Juan Manuel Urtubey, who declared at the time that “with technology, based on name, surname and address, you can predict five or six years ahead which girl, or future teenager, is 86 per cent predestined to have a teenage pregnancy.” Peña and Varon revealed that the Ministry of Early Childhood joined forces with anti-abortion NGO the CONIN Foundation, along with Microsoft, to develop this algorithm.

“According to their narratives, if they have enough information from poor families, conservative public policies can be deployed to predict and avoid abortions by poor women. Moreover, there is a belief that, ‘If it is recommended by an algorithm, it is mathematics, so it must be true and irrefutable’,” Peña and Varon continued. Ana Pérez Declercq, director of the Observatory of Violence Against Women, shared similar sentiments with WIRED: “It confounds socioeconomic variables to make it seem as if the girl or woman is solely to blame for her situation. It is totally lacking any concern for context. This AI system is one more example of the state’s violation of women’s rights. Imagine how difficult it would be to refuse to participate in this surveillance.”

The algorithm—heralded by Microsoft as “one of the most pioneering cases in the use of AI”—is a worrying display of how those in power can use tech to exacerbate the already evident inequalities across digital spaces, and of how that influence spills over into the real world.

Woman speaks out after being ‘gang raped’ in Facebook’s metaverse

A woman has recently spoken out about being sexually harassed on Meta’s virtual reality (VR) social media platform. She’s not the first… and won’t be the last. Nina Jane Patel, a psychotherapist who conducts research on the metaverse, said she was left “shocked” after three to four male avatars sexually assaulted her on the VR platform of the umbrella company formerly known as Facebook.

“Within 60 seconds of joining — I was verbally and sexually harassed — 3-4 male avatars, with male voices, essentially, but virtually gang-raped my avatar and took photos — as I tried to get away they yelled — ‘don’t pretend you didn’t love it’ and ‘go rub yourself off to the photo’,” Patel wrote in a Medium post on 21 December 2021.

The 43-year-old mother said it was a “horrible experience that happened so fast,” before she even had a chance to think about using “the safety barrier”—she “froze.” She went on to confess that her “physiological and psychological” reaction was similar to what it would have been had it happened in real life. “Virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real,” Patel wrote.

While the whole concept of the metaverse is still in its early stages, Meta opened up access to its virtual reality social media platform, Horizon Worlds, back in early December 2021. Those lucky enough to get a hold of the futuristic universe described it as fun and wholesome, with many drawing comparisons to Minecraft and Roblox.

In Horizon Worlds, up to 20 avatars can get together at a time to explore, hang out, and build within the virtual space. But not everyone’s experiences have been this pleasant. As first reported by the MIT Technology Review, “According to Meta, on November 26, a beta tester reported something deeply troubling: she had been groped by a stranger on Horizon Worlds. On December 1, Meta revealed that she’d posted her experience in the Horizon Worlds beta testing group on Facebook.”

Meta’s response was to review the incident and declare that the beta tester should have used a tool called ‘Safe Zone’, part of a suite of safety features built into Horizon Worlds. Safe Zone acts as a protective bubble that users can activate when they feel threatened; while it is on, no one can touch them, talk to them, or interact with them in any way until they signal that they want the feature switched off.
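Meta has not published how Safe Zone is implemented, but conceptually a feature like it amounts to gating every interaction event on an opt-in bubble. Here is a minimal sketch under that assumption—the names, types and event model below are invented, not taken from Horizon Worlds:

```python
# Hypothetical sketch of a 'Safe Zone'-style bubble—Meta has not published
# Horizon Worlds' actual implementation. While a user's bubble is active,
# interaction events (touch, voice, photos) from other avatars are dropped.
from dataclasses import dataclass

@dataclass
class Avatar:
    name: str
    safe_zone_active: bool = False

def deliver(event_type: str, sender: Avatar, receiver: Avatar) -> bool:
    """Return True if an interaction event should reach the receiver."""
    if receiver.safe_zone_active and sender is not receiver:
        return False  # bubble is on: drop touch, voice, any interaction
    return True

patel = Avatar("patel", safe_zone_active=True)
stranger = Avatar("stranger")
assert deliver("touch", stranger, patel) is False  # blocked by the bubble
assert deliver("voice", patel, stranger) is True   # outbound still works
```

The design choice worth noticing is that the burden falls entirely on the target to activate the bubble mid-assault—precisely the blame-shifting this article goes on to criticise.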

Speaking to The Verge just after news of the incident started circulating, Vivek Sharma, Meta’s Vice President of Horizon Worlds and a man, called the incident “absolutely unfortunate” and added, “That’s good feedback still for us because I want to make [the blocking feature] trivially easy and findable.”

After first reporting her assault in the metaverse, Patel shared that most of the comments she received on her post came from people trying to put the blame on her rather than her aggressors: “The comments were a plethora of opinions from — ‘don’t choose a female avatar, it’s a simple fix’, to ‘don’t be stupid, it wasn’t real’, ‘a pathetic cry for attention’, ‘avatars don’t have lower bodies to assault’, ‘you’ve obviously never played Fortnite’, ‘I’m truly sorry you had to experience this’ and ‘this must stop’.”

While it hasn’t been confirmed whether the incident reported to Meta was Patel’s, one thing is obvious—it’s not the first time a user has been groped in VR, which further proves that until companies work out how to protect participants, the metaverse can never be a safe place.

In October 2016, gamer Jordan Belamire penned an open letter on Medium describing being groped in QuiVr, a game in which players—equipped with bows and arrows—shoot zombies. Belamire described entering a multiplayer mode, “In between a wave of zombies and demons to shoot down, I was hanging out next to BigBro442, waiting for our next attack. Suddenly, BigBro442’s disembodied helmet faced me dead-on. His floating hand approached my body, and he started to virtually rub my chest. ‘Stop!’ I cried … This goaded him on, and even when I turned away from him, he chased me around, making grabbing and pinching motions near my chest. Emboldened, he even shoved his hand toward my virtual crotch and began rubbing.”

“There I was, being virtually groped in a snowy fortress with my brother-in-law and husband watching,” she continued. At the time, QuiVr developer Aaron Stanton and co-founder Jonathan Schenker immediately responded with an apology and an in-game fix—avatars would be able to stretch their arms into a V gesture, which would automatically push any offenders away.
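Stanton and Schenker never published the code behind that fix, but conceptually it pairs a gesture check with a personal-space radius. A rough sketch under those assumptions—the gesture test, thresholds and geometry here are invented, not QuiVr’s actual code:

```python
# Illustrative sketch of QuiVr's 'personal bubble' fix as described by its
# developers: stretching both arms into a V pushes other players away.
# All thresholds and the gesture heuristic are assumptions.
import math

Vec3 = tuple[float, float, float]

def is_v_gesture(left_hand: Vec3, right_hand: Vec3, head: Vec3) -> bool:
    """Both hands raised above head height and spread wide apart."""
    hands_up = left_hand[1] > head[1] and right_hand[1] > head[1]
    spread = math.dist(left_hand, right_hand) > 1.0  # metres, assumed
    return hands_up and spread

def push_away(me: Vec3, other: Vec3, radius: float = 2.0) -> Vec3:
    """Push another avatar out to the edge of my personal-space radius."""
    d = math.dist(me, other)
    if d == 0.0 or d >= radius:
        return other  # no direction to push, or already outside the bubble
    scale = radius / d
    return tuple(m + (o - m) * scale for m, o in zip(me, other))

# While the V gesture is held, the offender is kept at bubble's edge.
if is_v_gesture((-0.6, 1.9, 0.0), (0.6, 1.9, 0.0), head=(0.0, 1.7, 0.0)):
    print(push_away(me=(0.0, 0.0, 0.0), other=(0.5, 0.0, 0.5)))
```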

A recent review of the events around Belamire’s experience, published in the journal of the Digital Games Research Association (DiGRA), found that “many online responses to this incident were dismissive of Belamire’s experience and, at times, abusive and misogynistic … readers from all perspectives grappled with understanding this act given the virtual and playful context it occurred in.”

It is vital for people to understand that sexual harassment does not have to be physical. It can be verbal and, as we now see, it can be virtual too. Virtual reality spaces are designed to trick users into believing they are physically present in them—it’s part of the reason emotional reactions can be stronger there, and why VR can trigger the same psychological responses as a real-world experience.

In the end, the burning question is: whose responsibility is it to make sure users are comfortable? Meta hands off safeguarding responsibility to its users, giving them access to tools to ‘keep themselves safe’, effectively shifting the blame onto them. And that’s not right.