Last month Netflix’s CFO David Wells announced that in 2018 alone the streaming giant would be forking out upwards of $8 billion on producing 700 original titles. For relentless binge-watchers and professional procrastinators alike, this was probably equally exciting and daunting news. On a more serious note, though, it is in many ways emblematic of the attention-driven digital economy we now find ourselves in, one defined by sheer excess and algorithmic consumption, and its effect on how we consume culture extends to every corner of the cultural landscape, not just film and TV.
As Netflix strives for a global monopoly, pumping out unprecedented amounts of content to around 118 million users in over 190 countries, it is arguably us, the consumers, who are suffering as a result. The oh-so-hopeful neoliberal dogma of endless free choice is ironically underpinned by a snake-like grip on production, distribution and consumption. I have spent hours upon hours staring gormlessly at my TV, trudging through the endless ether of rubbish on Netflix, only to give up in sheer frustration and start Peep Show for the 13,683rd time. If Netflix hadn’t already proved that you can definitely have too much of a good thing, then surely this is it. In its search for worldwide economic and technological capital, Netflix has sacrificed quality in favour of incessant mass-production.
Minus the intellectual elitism, this is all somewhat reminiscent of the German philosophers Adorno and Horkheimer and their work on the culture industry. They argue that capitalism (with Netflix as a proponent here) has seen to it that culture is reduced to almost-mechanical mass-production designed to meet only our most basic needs, leaving us docile and idle consumers at the expense of culture’s potential for subversiveness. This account comes from a sneering disapproval of popular culture, which I think is a bit reductive, but what’s interesting here is that, like many of the digital platforms we use on a daily basis, Netflix also has control over how and what we consume on a personal level: its recommendation system supposedly accounts for around 80% of the content watched on the platform.
Netflix’s recommendation system works by taking a detailed log of our individual viewing habits and spitting out multiple profiles on each user, based on taste as well as behavioural patterns, which are then pooled into thousands of different audience groups. This data is then pumped through an algorithm that combines it with an extensive collection of tags on each and every episode, covering anything from specific genre traits to the moods or emotions evoked when watching. These tags are meticulously compiled by Netflix staff and freelancers who watch every TV show and film from start to finish. Even the little image previews you see on the homepage are selected from a number of choices per title based on that same data, using algorithms to work out which image is most likely to make you click. When you break down the process in this way and consider just how much control Netflix has over what we consume, the romantic idea of endless free choice in the neoliberal digital economy seems like nothing other than an illusion.
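To make that pipeline a bit more concrete, here’s a minimal sketch in Python of how tag-based scoring of this kind might work. Everything in it is an assumption for illustration: the titles, tags, weights and scoring function are all invented, and Netflix’s actual system is vastly more complex and not public.

```python
from collections import defaultdict

# Hypothetical catalogue: titles mapped to hand-compiled tag weights,
# standing in for the tags Netflix's taggers attach to every episode.
CATALOGUE_TAGS = {
    "Peep Show":    {"british-comedy": 0.9, "cringe": 0.8, "sitcom": 0.7},
    "After Life":   {"british-comedy": 0.8, "dark-humour": 0.6},
    "Black Mirror": {"dystopian": 0.9, "tech": 0.8, "dark-humour": 0.4},
    "The Crown":    {"period-drama": 0.9, "biopic": 0.6},
}

def build_taste_profile(viewing_history):
    """Aggregate the tags of everything a user watched, weighted by
    how much of each title they actually sat through."""
    profile = defaultdict(float)
    for title, completion in viewing_history:          # completion in [0, 1]
        for tag, weight in CATALOGUE_TAGS.get(title, {}).items():
            profile[tag] += weight * completion
    return profile

def recommend(profile, exclude):
    """Score every unwatched title against the taste profile."""
    scores = {
        title: sum(profile.get(tag, 0.0) * w for tag, w in tags.items())
        for title, tags in CATALOGUE_TAGS.items() if title not in exclude
    }
    return sorted(scores, key=scores.get, reverse=True)

history = [("Peep Show", 1.0), ("Black Mirror", 0.4)]
profile = build_taste_profile(history)
print(recommend(profile, exclude={title for title, _ in history}))
# ['After Life', 'The Crown'] -- the profile pulls you back towards more of the same
```

The same logic extends to the artwork: swap titles for candidate thumbnails and completion for click-through rate, and you have the preview-selection problem in miniature.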
It’s clear that culture is at a precarious point right now. With Netflix striving for a worldwide monopoly on the distribution and production of film and TV, constantly expanding into new geographic regions and pumping out a stupid amount of original content, it runs the risk of overshadowing local producers across the world and sacrificing culture’s value in the process, whilst at the same time using our online habits to control what and how we consume. This extends far beyond just film and TV too. Take Spotify, for example: we now have access to a previously unimaginable amount of recorded music, yet we still often find ourselves stuck in a never-ending loop of the same old stuff, our recommended artists permanently haunted by that friend with the terrible music taste who used your account that one time.
Some might argue that it’s never been better, precisely because of the sheer amount of different content available. However, I think it’s less about a dull, homogeneous cultural world than one that’s been forcibly fragmented into countless niches within an attention economy that, in seeking to keep us habitually logged on, uses algorithms to hold us in a perpetual loop of recommendations. As a result, we are left confined to restrictive patterns of consumption so tailored around our ‘data self’ that they become almost impossible to escape. Big data and algorithms are deeply ingrained in how we consume culture, but as we’ve seen with the recent Cambridge Analytica scandal, their presence is being widely felt not just in the cultural realm, but in the social and political realms too.
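To see how quickly that loop closes, here’s a toy simulation, entirely my own construction rather than any platform’s actual algorithm, of a recommender that simply feeds you more of whatever you’ve watched most:

```python
import random

GENRES = ["comedy", "drama", "documentary", "horror", "anime"]

def simulate(rounds=50, explore=0.05, seed=42):
    """Toy feedback loop: mostly recommend the most-watched genre,
    with a small chance of organic discovery. Purely illustrative."""
    rng = random.Random(seed)
    watched = {genre: 1 for genre in GENRES}   # start from a flat history
    for _ in range(rounds):
        if rng.random() < explore:             # rare step outside the loop
            pick = rng.choice(GENRES)
        else:                                  # the recommender feeds the habit...
            pick = max(watched, key=watched.get)
        watched[pick] += 1                     # ...which reinforces itself
    return watched

print(simulate())   # one genre rapidly dominates the entire history
```

Even with a flat starting history, the first genre to edge ahead gets recommended again and again; the ‘data self’ hardens within a handful of iterations.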
Seemingly oblivious (or perhaps not at all) to its key demographic, Uber, it was recently revealed, is developing a new AI system that can tell if users are drunk, allowing the driver to choose whether to accept a ride based on a variety of metrics such as your walking speed, frequent typos, and whether you’re swaying around or holding the phone at a weird angle. As information technology colonises all aspects of day-to-day life, what happens when our drunken behaviour, aka our worst selves, falls into the profit-focussed world of Big Data too?
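Going only on the signals described in reports of Uber’s patent application, here’s a hypothetical sketch of how such a scoring system might look. The feature names, weights and thresholds are all my own inventions; nothing here reflects Uber’s actual model.

```python
from dataclasses import dataclass

@dataclass
class RideRequest:
    walking_speed_ms: float   # metres per second, from location pings
    typo_rate: float          # corrected keystrokes / total keystrokes
    phone_tilt_deg: float     # deviation from the user's usual holding angle
    sway_amplitude: float     # accelerometer variance while walking

def intoxication_score(req: RideRequest) -> float:
    """Combine behavioural signals into a 0-1 'unusual state' score.
    Every weight below is invented for illustration."""
    score = 0.0
    if req.walking_speed_ms < 0.8:              # unusually slow walking
        score += 0.3
    score += min(req.typo_rate, 0.5)            # cap the typo contribution
    if req.phone_tilt_deg > 40:                 # phone held at a weird angle
        score += 0.2
    score += min(req.sway_amplitude / 10, 0.3)  # swaying around
    return min(score, 1.0)

late_night = RideRequest(walking_speed_ms=0.6, typo_rate=0.35,
                         phone_tilt_deg=55, sway_amplitude=4.0)
print(f"score: {intoxication_score(late_night):.2f}")  # surfaced to the driver
```

Note how crude the inputs are: every one of them is a proxy, and every one of them could just as easily be tiredness, a cracked screen or an icy pavement.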
Even before speculating on how this data could be used in sinister ways, there’s a more obvious reason it poses a risk to users. When you consider Uber’s pretty terrible track record of reporting sexual assaults committed by its drivers, the idea that drivers would be able to spot drunk and vulnerable people when choosing whether to take a job is obviously dangerous and could easily be abused. There’s also the issue that, for young women in particular, if it’s late at night and you’re drunk and alone, Uber can be a safer and quicker alternative to public transport. If these users are unable to book a lift home because they appear to be too intoxicated (bear in mind this means using superficial digital data to measure a chemical state), then it could be putting them at even greater risk.
Of course, there’s also the chance that this won’t extend beyond the development phase; after all, that’s one perk of being a multi-billion-dollar tech company: you can pump a bunch of money and resources into developing ridiculous ideas and, if they don’t work, just move on to the next. Still, I think it raises some interesting questions about the dangers posed by the accumulation of this kind of data and, in particular, how it could be used against us, whether by Uber or any other private company. After all, it’s virtually impossible, by its very nature, for any kind of AI or automation to be totally free from personal, political or corporate bias, instilled consciously or unknowingly at some stage in its development and deployment.
Uber has presented this idea as a way of keeping its drivers safe; however, I think it would be pretty naïve to presume that this is the only motive at play. That’s just how the tech industry works: data is capital, and we volunteer to give it all away for the taking. One way Uber could use this would be to apply surge pricing, ramping up the fare for those who appear drunk, knowing they’re more likely to accept the additional charges because of their booze-tainted decision-making or, as mentioned earlier, to avoid having to travel home alone late at night. For the same reason, the ability to target us when we’re drunk would inevitably offer huge opportunities to marketers too.
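To be clear, there’s no evidence Uber plans anything of the sort; the sketch below is pure speculation, showing only how trivially such a score could be wired into a fare calculation if someone wanted to:

```python
def fare(base_fare: float, demand_surge: float, intox_score: float) -> float:
    """Ordinary surge pricing plus a hypothetical 'impaired judgement'
    markup of up to 50%. Speculative, not anything Uber has announced."""
    drunk_markup = 1.0 + 0.5 * intox_score
    return round(base_fare * demand_surge * drunk_markup, 2)

print(fare(10.0, 1.2, 0.0))   # 12.0 -- sober rider
print(fare(10.0, 1.2, 0.9))   # 17.4 -- same trip, flagged as drunk
```

One extra multiplier in the pricing function, and the people least able to scrutinise the fare quietly pay the most for it.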
It’s when we start looking at how this technology could be misused in a wider sense that more sinister scenarios arise, such as the feature taking on a more disciplinary role. It almost resembles a form of digital breathalyser, only those doing the policing are the same tech companies whose business models rely on the vast mining of behavioural data for capitalistic gain.
Since 2015, a handful of U.S. health insurance companies have been experimenting with how they can use wearable technologies to their advantage. Luring customers in with reduced rates, discounts and even cash prizes, some companies have begun getting them to opt in to handing over the medical data from their Apple Watches and Fitbits. It’s not hard to see how continual access to your biometric information would be of value to insurance companies. In a similar way, if alcoholism falls into this kind of territory, then so-called signs of it in our digital footprint could be used to exclude us from a variety of services: be it just a taxi, health insurance, or even access to certain places if deployed at a more municipal level within a ‘Smart City’ that uses real-time data to inform its infrastructure and services.
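As an illustration of how that opt-in data could feed straight into pricing, here’s a made-up premium adjustment; real insurers’ formulas are proprietary, and every threshold below is an assumption of mine:

```python
def monthly_premium(base: float, avg_daily_steps: int, avg_resting_hr: int) -> float:
    """Hypothetical wearable-based pricing: reward the active, surcharge
    the sedentary. All thresholds and rates are invented for illustration."""
    multiplier = 1.0
    if avg_daily_steps >= 10_000:
        multiplier -= 0.10            # activity discount
    elif avg_daily_steps < 4_000:
        multiplier += 0.10            # sedentary surcharge
    if avg_resting_hr > 80:
        multiplier += 0.05            # elevated resting heart rate
    return round(base * multiplier, 2)

print(monthly_premium(120.0, 12_000, 62))  # 108.0 -- rewarded
print(monthly_premium(120.0, 3_000, 85))   # 138.0 -- penalised
```

Swap steps and heart rate for late-night location pings and typo rates, and you can see how ‘signs of alcoholism’ might slot into the same formula.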
Regardless of whether it does indeed go down that route, it’s clear that there’s a lot to be gained by certain parties from our drunken behavioural traits being added to the swarms of data we already pour out, raising serious concerns around privacy, surveillance, discipline and user safety as a result. It’s a pessimistic vision, but it feels like an inevitable step in the profit-driven quest for Big Data to colonise all corners of human social experience, carving out a whole new data set for any interested party to play with as they please.