
Why we should worry that Facebook now knows when you’re on your period

In what seems like an endless string of privacy invasions by Facebook, as of this week we know that not even our menstrual cycles are safe. That's right: Facebook may be using our periods against us, after the period-tracking app Flo was revealed to be sharing user data with the company. Some 25 million active users were potentially affected by the breach.

It's something we've heard before, from Cambridge Analytica to the revelations that Netflix and Spotify also use sensitive data. With Flo, the nature of the data-share is particularly shocking: menstruation is an issue that disproportionately concerns marginalised people, with women and people assigned female at birth overwhelmingly affected. It is an example of how biotech is being exploited and weaponised against us, as the closest, most embodied, most intimate parts of our lives become intertwined with technology.

A good question that springs from this is: what, exactly, is the purpose of sharing this data? When we consider the ethics of targeted advertising, one of the clearest dangers is that its very accuracy makes us buy more. But when it comes to data on how we menstruate, it is less clear how brands would use it to their advantage, beyond the obvious clichés of advertising things menstruating women might want, crave or need. Flo lets users log information when they're trying to conceive, and also has a 'pregnancy mode' for when users are pregnant, a function that would be incredibly lucrative for marketers in the big business of babies.


Another question commonly asked about Facebook's relationship with data sharing is whether it's right to see it only as damaging to consumers. Could the benefit of better advertising be mutual? After all, as consumers we're more likely to be shown content we're actually interested in buying, with less irritating, irrelevant clutter on our feeds. This holds to an extent; more than once I've been nudged to revisit a product I had been considering buying, and been thankful for the reminder, having forgotten about it.

But in the case of Flo, there's a clear difference between consensual data sharing and user exploitation. There's also the ethical issue of consumers being constantly, and often subconsciously, pushed to buy within their social spaces. User rights lie at the crux of the issue. With huge corporate forces at play, as consumers and platform users we deserve to know what data is going where, and to give consent before it happens. It's true that the general public, myself included, is largely unaware of what the fine print really means for our data. Yet it's dangerous that this lack of knowledge is being exploited.

While it feels like a new revelation about the sharing of our data surfaces every week, is there anything we can do as users? There are some steps we can take, like making use of the wealth of guidance that exists on the in-built tools for protecting our data. We can also make sure we're as literate as we can be in terms and conditions and privacy policies, and actually pay attention when scandals like this break, rather than rolling our eyes and muttering "typical".

We can also continue to lobby organisations to do better. #DeleteFacebook sprang up around this time last year, when the Cambridge Analytica scandal broke. Staying away from Facebook doesn't exactly solve the issue, since the Flo data-share affected Facebook users and non-users alike, but withdrawing from platforms that don't respect our privacy can still symbolise protest.

It's true that the tools in themselves may not be harmful, but unfortunately what we're seeing is the same old structures churning out the same old inequalities. The burden of fixing this should not lie solely on us; unless we hurl our smartphones into the sea, we cannot outright halt the exploitation of our data. But that's not to say we shouldn't weigh up what we can and can't live without. Reflecting on Flo, I know I'll personally be sticking with my calendar.

Micha Frazer-Carroll is arts and culture editor at gal-dem and writes for HuffPost U.K.

Opinion

Big Data vs The Sesh: The bigger problem with Uber knowing when you’re too drunk

By Jack Palfrey

Seemingly oblivious to its key demographic (or perhaps not oblivious at all), Uber was recently revealed to be developing a new AI system that can tell if users are drunk, allowing drivers to choose whether to accept a ride based on a variety of metrics such as your walking speed, frequent typos and whether you're swaying around or holding the phone at a strange angle. As information technology colonises all aspects of day-to-day life, what happens when our drunken behaviour, aka our worst selves, falls into the profit-focused world of Big Data too?

Even before speculating on how this data could be used in more sinister ways, there's a more obvious reason it poses a risk to users. Given Uber's pretty terrible track record of reporting sexual assaults committed by its drivers, the idea that drivers could spot drunk and vulnerable people when choosing whether to take a job is obviously dangerous and could easily be abused. There's also the issue that for young women in particular, late at night when you're drunk and alone, Uber can be a safer and quicker alternative to public transport. If these users are unable to book a lift home because they appear too intoxicated (bear in mind this is superficial digital data being used to infer a chemical state), the feature could put them at even greater risk.

Of course, there's also the chance that this won't extend beyond the development phase. After all, that's one perk of being a multi-billion-dollar tech company: you can pump money and resources into developing ridiculous ideas and, if they don't work, just move on to the next. Still, I think it raises some interesting questions about the dangers posed by the accumulation of this kind of data and, in particular, how it could be used against us, whether by Uber or any other private company. After all, it's virtually impossible, by its very nature, for any kind of AI or automation to be totally free from personal, political or corporate bias, instilled consciously or unknowingly at some stage of its development and deployment.

Uber has presented this idea as a way of keeping its drivers safe; however, it would be pretty naïve to presume that this is the only motive at play. That's just how the tech industry works: data is capital, and we volunteer to give it all away for the taking. One way Uber could use this would be to apply surge pricing, ramping up fares for those who appear drunk, knowing they're more likely to accept the additional charge because of their booze-tainted decision-making or, as mentioned earlier, to avoid having to travel home alone late at night. For the same reason, the ability to target us when we're drunk would inevitably offer huge opportunities to marketers too.

It's when we start looking at how this technology could be misused in a wider sense that more sinister scenarios arise, such as the feature taking on a more disciplinary role. It almost resembles a form of digital breathalyser, only those doing the policing are the same tech companies whose business models rely on the vast mining of behavioural data for capitalistic gain.

Since 2015, a handful of U.S. health insurance companies have been experimenting with how they can use wearable technologies to their advantage. Luring customers in with reduced rates, discounts and even cash prizes, some companies have begun getting them to opt in to handing over the health data from their Apple Watches and Fitbits. It's not hard to see how continual access to your biometric information would be of value to insurance companies. In a similar way, if alcoholism falls into this kind of territory, then so-called signs of it in our digital footprint could be used to bar us from a variety of services: a taxi, health insurance, or even access to certain places and areas if deployed at a more municipal level within a 'smart city' that uses real-life data to inform its infrastructure and services.

Regardless of whether Uber does go down this route, it's clear that there's a lot to be gained for certain parties from our drunken behavioural traits being added to the swarms of data we already pour out, posing serious threats in terms of privacy, surveillance, discipline and user safety. It's a pessimistic vision, but it feels like an inevitable step in the profit-driven quest for Big Data to colonise every corner of human social experience, carving out a whole new data set for any interested party to play with as they please.