Last week, FaceApp made an impressive comeback. It’s hard to deny that the app’s new filter is haunting: using AI-powered technology, it lets users morph a selfie into an aged version of themselves. I haven’t tried it myself, but a friend sent me a selfie of mine in which I look scarily like my mother. The technology has improved a lot since the app launched in 2017, and the results are astounding, giving users the ability to glimpse their future through computer-generated imagery.
It’s disturbing yet alluring to see what you could look like in 30 years, and it’s no surprise that last Wednesday the smartphone app became the most downloaded in the U.S. But as soon as news broke that the app had been developed by an ‘unknown’ Russia-based company bound by few data regulations, panic started to spread.
Truth is, the company behind the app, Wireless Lab, is based in Russia, but the photos are stored on servers run by American companies. Dubious origins or not, the app’s loose privacy rules triggered further concerns: rumour has it that the app could access a user’s entire photo archive, thereby gaining access to, and possibly control over, a person’s personal data. “It would be deeply troubling if the sensitive personal information of U.S. citizens was provided to a hostile foreign power actively engaged in cyber hostilities against the United States,” Senator Charles E. Schumer wrote in a letter to the FBI, continuing on Twitter, “Warn friends and family about the deeply troubling risk that your facial data could fall into the hands of something like Russian intelligence or military.”
The general panic mirrors a kind of awakening about the privacy of our data, but if this story can teach us anything, it is that the problem doesn’t lie with FaceApp or Wireless Lab. What we should worry about is the lack of information, regulation, and caution around AI and data-based technologies. Not only because facial recognition technology will soon be actively entering our lives (in shop aisles, airports, basically everywhere), but also because FaceApp claims much the same corporate rights over its users’ data as Facebook or pregnancy-tracking apps like Ovia do. In other words, if you are concerned about your digital privacy, you should check every app you have downloaded and website you have logged into over the past few years, and start panicking for good.
Data privacy is no joke. In countries where freedom of expression is threatened by the government, (unknowingly) giving the authorities and private companies access to extremely personal data could put people in dangerous positions, threatening not only their freedom of speech but also their physical well-being. We are entitled to ask for more clarity on how tech companies use our data and for the law to keep up with corporate technology. That said, we, as users, should also take more responsibility for our digital rights, and not freak out every time we hear the words ‘Russia’ and ‘data’ in the same sentence.
It takes willpower to let go of online challenges, short-lived apps and other such services if we want to fully own and manage our data and hold the AI technologies behind them accountable. The internet is a tempting place, and it’s likely to keep on offering more and more seemingly indispensable services in exchange for our private information.
The trade of data for services is so systemically ingrained in how the internet works that, as much as we should be informed about exactly how it operates, we should also accept that there is always a price to be paid. We’ve all been data-whores our entire digital lives, so suddenly wishing for a life of internet abstinence would be hypocritical. In today’s society, extremes don’t work.
If you are still worried about Russia using your wrinkled selfies for whatever reason, FaceApp accepts requests from users to remove their data from its servers. You can send the request through Settings > Support > Report a bug, with the word ‘privacy’ in the subject line. Feeling better?
Does your makeup define you? I hate the thought of my lipstick choice being used by data scientists to decide who I am. I might feel like a shade of Velvet Teddy on a Monday in the office, but I’ll be a smudged Lady Danger at 5am on a Saturday morning. I use makeup to explore my own identity, a slippery notion that is not at all easily expressed, even from my own lips. Those who wear or like makeup are almost always subject to prejudice: we are either wearing too much, wearing too little, or doing it wrong. People who have been criticised for wearing, wanting to wear or not wearing makeup will know that makeup has its own language.
A glance at Instagram (owned by Facebook) seems to testify to a growing commitment among beauty professionals to champion diversity and creative self-expression. As Sissi Johnson, an industry scholar who lectures on ‘Multiculturalism in the Beauty Industry’, tells me, “beauty commodities have always been well-positioned for expressing the nuances of identity as they can be highly personalised. Makeup is particularly intimate and its significance to consumers varies throughout cultures and age groups.” My own concern, however, is that all our hard work towards fearless self-expression could go to waste. Beauty professionals offer up huge amounts of data to be organised and analysed by the tech gods, but are the values of today’s beauty industry being heard? How do our beauty choices translate into the language of data science?
Remember the Cambridge Analytica scandal? Cast your mind as far back as March 2018, when the international press exploded with cries of data breaches. Mainstream media outlets reported that Cambridge Analytica had gathered data from approximately 87 million Facebook users and used this information to design “mind-reading software”. Speculation then turned to whether this technology had been used to successfully target individual voters with political ads, and whether it was responsible for both Trump and Brexit. In times of political turbulence, it is often simpler to blame technology than it is to blame people. While the idea of having your data used to read your mind is scary, there are even more sinister forces at play here. What if I told you that the scariest thing was that Cambridge Analytica had got people all wrong, but had taken it upon itself to speak on other people’s behalf anyway?
When discerning the innermost thoughts and feelings of voters in the U.S. and the U.K., the scientists at Cambridge Analytica used “psychographics”. It is understandable that most people wouldn’t question something that sounds so authoritatively sciencey. Here’s the thing, though: the social science they used was problematic, to say the least. The original research behind Cambridge Analytica’s so-called “high-tech” methods was headed up by Michal Kosinski at the University of Cambridge in 2012. The team started by correlating personality features, for instance neuroticism, with online behaviour, such as displaying high levels of Facebook activity or, erm, having loads of friends. Fine. But then in 2013 they wrote another paper that lauded the “predictive power of likes”, and this is where things got silly. Kosinski and his pals wrote that liking groups such as “Sephora” and “I Love Being a Mom” was a highly accurate indicator of low intelligence. Misogyny, anyone? It is unclear what these scientists deem to be intelligent, but it doesn’t seem that makeup-wearing mothers stand much of a chance of being counted as such. Shopping at Sephora and mothering children have diddly squat to do with my intelligence levels, and I am upset by people who tell me that they do. Kosinski has since raised concerns that this type of research might be unethical and was quoted in the Guardian as having said, “I did not build the bomb. I only showed that it exists.” How easy it can be to assert that technology exists separately from those who created it.
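To make the mechanics less mysterious, here is a minimal, hypothetical sketch of what the “predictive power of likes” boils down to: each user becomes a row of binary like-indicators, and a simple classifier is fitted against a self-reported trait. The page names, the data and the scikit-learn setup below are my own illustrative assumptions, not Kosinski’s actual pipeline.

```python
# Hypothetical illustration only: invented page names and made-up data,
# showing how binary "likes" can be fitted against a self-reported trait.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["Sephora", "I Love Being a Mom", "MAC Cosmetics", "Science Daily"]

# Each row is a user; 1 = liked the page, 0 = did not (fabricated data).
likes = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])

# A made-up, self-reported trait label for each user.
trait = np.array([0, 1, 1, 0, 0, 1])

# Fit a simple classifier: whatever correlations sit in this sample,
# including any sampling bias, get baked into the coefficients.
model = LogisticRegression().fit(likes, trait)

# Each coefficient is the "predictive power" the model assigns to a like.
for page, weight in zip(pages, model.coef_[0]):
    print(f"{page}: {weight:+.2f}")
```

The point of the sketch is not the maths but the mundanity: a correlation found in one skewed sample becomes a “prediction” stamped onto everyone else who happens to like the same pages.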
The paper then goes on to assert that a “like” for MAC Cosmetics is a good predictor of male homosexuality. Bafflingly, the researchers couldn’t work out why “gay men” hadn’t “liked” such groups as “Being Gay” or even “I Love Being Gay”. Instead, they had to make do with “likes” for makeup, this time as an imagined hallmark of gayness rather than stupidity. The kind of discourse that accounts for queerness only in terms of being a “gay man” is outdated and dangerous.
Kosinski and co. have since forayed into deploying deep neural networks to infer male sexual orientation from facial images. Basically, they are training computers to automatically spot which men look “gay” and which do not. The Stanford University study, which was based on data from white individuals only, was met with anger from GLAAD, the world’s largest LGBTQ media advocacy organisation. Jim Halloran, GLAAD’s Chief Digital Officer, stated that Kosinski’s research “isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of colour and transgender people”. Kosinski’s weird ideas about beauty choices serve principally to demonstrate that biases against people definitely still exist and that they are reinforced by technologies such as psychographics. Beauty consumption is about expression, not crappy categorisation.
No algorithm could have predicted the wealth of diverse self-expression that much of the makeup industry and beauty-blogging world continues to nourish. But the problem is that these Instagram and Facebook “likes”, say a “like” for Sephora or MAC, are vulnerable to getting translated back into something we didn’t ask for. When we use social media platforms, we leave our data open to being used to misrepresent us and our consumer choices. This insult to lipstick wearers around the world is a gross mismeasure of fxmme. Your makeup bag is a weapon for positive change and those who ridicule your consumer choices need to know they have got it twisted.