Seemingly oblivious (or perhaps not at all) to its key demographic, Uber recently revealed that it is developing a new AI system that can tell if users are drunk, allowing the driver to choose whether to accept a ride based on a variety of metrics: your walking speed, frequency of typos, and whether you're swaying around or holding the phone at a strange angle. As information technology colonises all aspects of day-to-day life, what happens when our drunken behaviour (aka our worst selves) falls into the profit-focussed world of Big Data too?
Even before speculating on how this data could be used in sinister ways, there's a more obvious reason it poses a risk for users. Given Uber's pretty terrible track record of reporting sexual assaults committed by its employees, the idea that drivers would be able to spot drunk and vulnerable passengers when choosing whether to take the job is obviously dangerous and could easily be abused. There's also the issue that, for young women in particular, if it's late at night and you're drunk and alone, Uber can be a safer and quicker alternative to public transport. If these users are unable to book a lift home because they appear too intoxicated (bear in mind this means using superficial digital data to infer a chemical state) then they could be put at even greater risk.
Of course, there's also the chance that this won't extend beyond the development phase; after all, that's one perk of being a multi-billion-dollar tech company: you can pump money and resources into developing ridiculous ideas and, if they don't work, simply move on to the next. Still, I think it raises some interesting questions about the dangers posed by the accumulation of this kind of data and, in particular, how it could be used against us, whether by Uber or any other private company. After all, it's virtually impossible, by its very nature, for any kind of AI or automation to be totally free from personal, political or corporate bias, instilled consciously or unknowingly at some stage of its development and deployment.
Uber has presented this idea as a way of keeping its drivers safe; however, I think it would be pretty naïve to presume that this is the only motive at play. That's just how the tech industry works: data is capital, and we volunteer to give it all away for the taking. One way Uber could use this data would be to apply surge pricing, ramping up fares for those who appear drunk, knowing they're more likely to accept the additional charges because of their booze-tainted decision-making or, as I've mentioned, to avoid travelling home alone late at night. For the same reason, the ability to target us when we're drunk would inevitably offer huge opportunities to marketers too.
It's when we look at how this technology could be misused in a wider sense that more sinister scenarios arise, such as the feature taking on a more disciplinary role. It almost resembles a digital breathalyser, only those doing the policing are the same tech companies whose business models rely on the vast mining of behavioural data for capitalist gain.
Since 2015, a handful of U.S. health insurance companies have been experimenting with how they can use wearable technologies to their advantage. Luring customers in with reduced rates, discounts and even cash prizes, some companies have begun persuading them to opt in to giving away the medical data from their Apple Watches and Fitbits. It's not hard to see how continual access to your biometric information would be of value to insurance companies. In a similar way, if alcoholism falls into this kind of territory, then so-called signs of it in our digital footprint could be used to bar us from a variety of services: be it just a taxi, health insurance, or even access to certain places and areas if deployed at a more municipal level within a 'Smart City' that uses real-life data to inform its infrastructure and services.
Regardless of whether it does indeed go down that route, it's clear that there's a lot to be gained by certain parties from our drunken behavioural traits being added to the streams of data we already pour out, posing serious threats in terms of privacy, surveillance, discipline and user safety as a result. It's a pessimistic vision, but it feels like an inevitable step in the profit-driven quest for Big Data to colonise all corners of human social experience, carving out a whole new data set for any interested party to play with as they please.
Contemporary Western society has a problem with connection, one that has been explored in various think-pieces and books. The British parliament has a dedicated Commission on Loneliness, and there are now numerous awareness-raising campaigns around mental health. In fact, two-thirds of young adults in the UK feel they have no one to talk to about their problems.
The economic policy of austerity in the UK and elsewhere in Europe has had the effect of reducing an already inadequate mental health service. At this same moment in history, Silicon Valley has emerged with a few digital solutions: Talkspace and BetterHelp are paid apps that facilitate instant-message exchanges with licensed professionals. The fees are lower than face-to-face therapy, and it's billed as a chance for therapists to work under more 'convenient' conditions.
The chillingly titled Invisible Girlfriend offers lonely men fictive pillow talk, powered by an army of remote, crowdsourced workers (don't worry, there's also an Invisible Boyfriend). The 'Invisible' conversation is maintained by numerous flexible workers rapidly clocking in and out. Their only qualification for the role is a short copywriting test to verify their literacy.
There are also AI-driven caregivers. Woebot is a chatbot that allows users to share their problems without judgement to a robot that responds with understanding and cognitive tips. Woebot's founder compares the predictable experience of the bot to a tennis-ball machine. In other words, users who usually have difficulty opening up about their woes can practise their swing.
Mend is a ‘personal trainer for heartbreak’. Enter the details of your break-up and the app provides a customised self-care routine, replete with Spotify playlists and reading suggestions. Mend performs the role of a friend with your best interests at heart, keeping you from indulging your sorrow. Facebook’s algorithms encourage us to impulsively message an ex, check their new partner’s timeline and generally scratch the itch. While Mend’s aims seem wholesome, it’s easy to forget that it is exactly this kind of technology which has accelerated our indulgences.
What all these products offer is a sense of being held; knowing someone is looking out for us is a fundamental need that begins as soon as we leave the womb. Loneliness is alleviated by human, or at least, in this case, human-like, connection. But is it sustainable for this basic human need to be fulfilled by such inconsistent, data-driven services?
Alongside the rise of these care-economy apps, a few other trends have emerged. Available work has shifted from manufacturing to retail. Conditions have shifted from regular, contracted employment to flexible gig-economy freelancing. Generally speaking, capitalism has become very emotional, appealing more to our basic psychological needs. This is widely apparent, from the whimsical jokes on the side of Oatly cartons to the evident success of political campaigns which utilise affective rhetoric (does ‘Take Back Control, For The Many’ ring any bells?).
Uber drivers can now receive ‘compliments’, emphasising a positive aspect of their experience such as ‘entertaining driver’, ‘good conversation’ or the exalted ‘above and beyond’ for ‘drivers who go beyond expectations’. These badges appear on the driver’s profile and improve their chances of attracting more jobs. In a competitive environment, where supply outweighs demand, ‘above and beyond’ becomes the minimum requirement.
Online therapy company Talkspace has an advert that features a woman holding a miniature Sigmund Freud doll in the palm of her hand. The voiceover says: “The sooner you can get help, the more effective it is. So if something comes up, you can deal with it right there and then.” On the one hand, mindfulness apps like Headspace remind us that our thoughts are mere post-it notes in the wind, to be regarded with a calm detachment. On the other hand, the vast majority of social networks encourage us to share what’s on our mind immediately, reifying those emotional impulses at the click of a button.
Talkspace appeals to this desire for immediate recognition, promising to deal with the problem 'right there and then', without any phase of reflection. Genuine mental health emergencies, of course, require immediate support. But in the case of bouts of anxiety, which is what Talkspace aims to address, users may get what they want but not what they need. A fast online response can satisfy short-term cries for help, but it lacks the boundaries and body language of what Talkspace calls 'Traditional Therapy'. On the physical couch (or armchair), involuntary behaviour is more visible and there's a clear, agreed-upon time limit to the interaction.
Social media companies driven by profit and market share cannot be trusted to ensure the wellbeing of their users. I believe that these platforms are wholly inadequate for addressing rising loneliness in our society. They exploit precarious workers only to provide highly compromised experiences of connection. Companies create new problems that other companies attempt to solve. With faster, larger networks for communication, our desire for connection is more visible than ever—but the struggle comes from within a system that has different priorities.