In 1993, not long after the fall of the Soviet Union, Kazakhstan opened its borders to foreign investment. The country’s state-owned energy company agreed to partner with the American oil company Chevron in a joint venture to extract oil. The project, named Tengizchevroil (TCO), was granted an exclusive forty-year right to the Tengiz oil field, one of the largest in the world, holding roughly 26 billion barrels of oil.
According to Zero Cool, a pseudonymous software engineer at Microsoft who was sent to Tengiz to ‘help’ and who recently wrote a piece about the experience for Logic Magazine, Chevron has been pouring enormous amounts of money into the joint venture with the goal of using new technology to increase oil production at the site. Big Tech and Big Oil are fraternising, and this is worrying, for you, me, and our planet.
So how exactly is Microsoft helping Chevron?
As illogical as it sounds to most, the collaboration between Big Tech and Big Oil actually makes sense. Tech companies like to portray themselves as advocates for sustainability, but in reality, the biggest names in the tech industry are helping big oil companies double down on fossil fuels while we protest for measures against climate change. In his article, Zero Cool explains that the foundation of Big Tech’s partnership with Big Oil is ‘the cloud’.
Amazon, Google and Microsoft are locked in what Zero Cool calls the “cloud wars”. For one of them to win, it has to go “where most of the money in the public cloud market will be made”, or in other words, to Big Oil, which accounts for six of the ten biggest companies in the world by revenue. In 2017, Microsoft signed a seven-year deal with Chevron to establish itself as Chevron’s primary cloud provider.
The collaboration means great business for both giants. Chevron can finally make use of the data its thousands of oil wells produce, and Microsoft gains access to an enormous stream of it: the sensor-covered wells generate more than a terabyte of data per day. It’s a win-win situation, and it doesn’t stop there.
Zero Cool further explained that “Big Tech doesn’t just supply the infrastructure that enables oil companies to crunch their data. It also offers many of the analytical tools themselves. Cloud services provided by Microsoft, Amazon, and Google have the ability to process and analyze vast amounts of data.” This multi-million-dollar partnership between Microsoft and Chevron was the reason Zero Cool went to Kazakhstan—to help the Tengiz oil field adopt Microsoft’s technology.
What came out of his trip and the harvesting of all this data? A new, more efficient approach to oil exploration, explains the Microsoft software engineer: “The traditional way to find a new oil or gas deposit is to perform a seismic survey. This is a technique that sends sound waves into the earth and then analyzes the time it takes for those waves to reflect off of different geological features.” Interpreting the resulting map, however, is a long process that can take months and involve many geophysicists.
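To make the physics of that quote concrete: the survey measures how long an echo takes to return, and depth is recovered from that travel time. Here is a minimal sketch, assuming a constant (and entirely made-up) wave velocity; real surveys use layered velocity models that vary with depth.

```python
# Toy version of the timing arithmetic behind a seismic survey.
# The velocity below is an illustrative assumption, not field data.

def reflector_depth(two_way_time_s: float, velocity_m_s: float = 2500.0) -> float:
    """Estimate how deep a reflecting layer sits.

    The wave travels down to the layer and back up, so the one-way
    distance is velocity * time / 2.
    """
    return velocity_m_s * two_way_time_s / 2.0

if __name__ == "__main__":
    # An echo arriving 1.6 seconds after the source fires would place
    # the layer roughly 2,000 metres down under these assumptions.
    print(f"{reflector_depth(1.6):.0f} m")
```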
That’s where Microsoft came in, using computer vision technology to automatically segment different geological features. This is the perfect example of how Big Tech working with Big Oil means big money, big time. Whether it damages our planet in the process seems irrelevant to both industries.
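Zero Cool doesn’t detail the model Microsoft used, so the following is only a loose sketch of the general idea: treat a 2D seismic section as an image and label connected regions as candidate features. Everything here, from the synthetic data to the simple threshold, is an assumption for illustration; a production system would use trained neural networks rather than a hand-picked cut-off.

```python
# Hypothetical illustration: segmenting a synthetic 2D "seismic section"
# into labelled regions. Not Microsoft's actual method.
import numpy as np
from scipy import ndimage

# Synthetic section: two bright horizontal "layers" buried in noise.
rng = np.random.default_rng(seed=0)
section = rng.normal(0.0, 0.1, size=(128, 128))
section[40:48, :] += 1.0  # pretend geological layer 1
section[90:96, :] += 1.0  # pretend geological layer 2

# Threshold the image, then label each connected bright region
# as one candidate geological feature.
mask = section > 0.5
labels, n_features = ndimage.label(mask)
print(f"segmented {n_features} candidate features")  # expect 2
```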
On top of that, Zero Cool shared another aspect of this partnership, one that won’t be as surprising as the first. While on site in Kazakhstan, the TCO manager from Chevron asked him whether Microsoft could help with better surveilling their workers, to see “if they are doing anything at all.” Zero Cool admitted that he and other Microsoft employees, like most people, disapproved: “When I reflect back on this meeting, it was a surreal experience. Everyone present discussed the idea of building a workplace panopticon with complete normalcy.” The TCO manager ended up claiming that Chevron needed the increased surveillance to improve worker safety.
To conclude his tale, Zero Cool reflected on how he had helped Big Tech accelerate the climate crisis. “How can tech help, instead of hurt, the climate?” he asked. It won’t be easy, but the change that should ultimately come from the top, from the big companies, is slowly coming from the bottom. People working for Big Tech and Big Oil are demanding change, and while those demands remain ignored for now, persistence is key.
There’s currently a huge movement happening in Silicon Valley. Big Tech is just starting to figure out that any technology’s design can have unintended consequences for users, and that any intended consequences may have unexpected long-term effects: a realisation that should have been obvious from the beginning.
Lately, this movement has been championed by former Google Design Ethicist Tristan Harris, whose treatises on the ways that contemporary technologies “hijack our psychological vulnerabilities” led him to quit the big G and found the Center for Humane Technology. On April 23, Harris presented Humane: A New Agenda for Tech, which explained the organisation’s mission to combat the extractive techniques of our modern attention economy. According to Harris and the Center for Humane Technology, the current technological paradigm preys on human weaknesses, exploiting us for financial profit, rather than compensating for the vulnerabilities hardwired into our nature and making the world an easier place to live in.
While organisations like the Electronic Frontier Foundation and the World Economic Forum’s Centre for the Fourth Industrial Revolution have existed for years to help preserve the relationship between humans and technology, it is only recently that these topics have taken centre stage in American conversations surrounding the ethical application of technology. As we settle into the post-post-internet era, the critical need to understand the long-term effects of the information economy on a global audience’s ability to tell fact from fiction or exercise self-control has reached a fever pitch. Something’s gotta give.
Online advertising employs algorithmic technologies that have made the tension between human needs and the coercive pull of technological design especially apparent. By design, a social media platform like Facebook works to extract attention from a captive audience: the more time a user spends watching or scrolling, the more money the platform makes. Our data is mined and algorithmically exploited to serve content that keeps our eyes locked on the screen for longer. YouTube’s recommendation algorithm, for example, actively works to erode free will by surfacing videos that are increasingly difficult to resist clicking on. This has led to the propagation of radical content across the network, as increasingly extreme videos are particularly effective at captivating viewers.
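Neither YouTube nor Facebook publishes its ranking code, but the incentive structure is easy to caricature. Here is a minimal sketch, with invented names and numbers, of a feed that optimises for a single objective, predicted watch time:

```python
# Schematic, not any platform's real code: candidates are ranked purely
# by a model's predicted watch time, so whatever holds attention longest,
# extreme or not, floats to the top of the feed.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_seconds: float  # output of some engagement model

def rank_feed(candidates: list[Video]) -> list[Video]:
    # No term here rewards accuracy, wellbeing, or diversity; the only
    # objective is keeping eyes on the screen.
    return sorted(candidates, key=lambda v: v.predicted_watch_seconds, reverse=True)

feed = rank_feed([
    Video("calm explainer", 45.0),
    Video("outrage compilation", 310.0),
    Video("mild vlog", 80.0),
])
print([v.title for v in feed])  # the most captivating video comes first
```

Real systems are vastly more complicated, but as long as watch time is the dominant signal, the ranking behaves much like this toy.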
While data use is a major issue, perhaps more frightening are the ways that black box algorithms invisibly shape the digital bubbles that surround us online. The advertising rules for Facebook, for example, prohibit (among other things) any discussion of mental health, cryptocurrency or sex. These rules don’t just apply to products and services, but also to non-profit groups and news publications. Take a small tech news business (like Screen Shot, for instance) that almost exclusively deals with these kinds of topics in relation to digital culture. Or, alternatively, a mental health group that wants to advertise its group grief counselling sessions to a wider audience that might need some help.
While the engineers behind these decisions probably had somewhat benevolent intentions, something along the lines of protecting young and vulnerable users from inappropriate content or pyramid schemes, the unintended consequences have been devastating. Advertisers who attempt to post content that violates advertising rules without in-depth knowledge of the subtleties of these policies risk content deletion, account suspension or, worse, shadow banning. Moreover, these policies stifle rich possibilities for conversation and the cross-pollination of ideas.
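How a rule like this misfires is easy to show. Facebook’s actual moderation pipeline is not public, so the terms and logic below are invented: a deliberately blunt keyword filter of the kind that produces exactly these unintended casualties.

```python
# Hypothetical sketch of a blunt keyword-based ad policy filter.
# The banned-topic list and matching logic are invented for illustration.
BANNED_TOPICS = {"mental health", "cryptocurrency", "sex"}

def violates_policy(ad_text: str) -> bool:
    """Flag an ad that mentions any banned topic, regardless of intent."""
    text = ad_text.lower()
    return any(topic in text for topic in BANNED_TOPICS)

# A grief counselling group gets blocked just as surely as a scam would...
print(violates_policy("Free mental health support group, Tuesdays at 7pm"))  # True
# ...while a scam that avoids the exact keyword slips straight through.
print(violates_policy("Get rich quick with crypto!!!"))  # False
```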
As users, our options for combating these forms of algorithmic control remain slim. We naturally want to connect with other people, and we naturally enjoy targeted advertisements that more closely fit our needs. These advertising methods work, and e-commerce economies rely on the very high conversion rates that social media platforms offer, which is why these practices have only ramped up.
But the technologies of the extractive attention economy throw up major blinders that prevent users from branching into new digital ecosystems, confining us instead to shrinking online spheres. Instead of protecting users, algorithmic surveillance, biased content moderation and artificially imposed advertising regulations threaten free speech and damage the ability of smaller entities to compete in their markets.
The solution? Hard to say, beyond ‘don’t use social media’. Raising your voice and letting your chosen social media provider know that you disagree with some of its content moderation strategies might be an option, but it’s hard to imagine Facebook changing its mind about this stuff. After all, these same damaging regulatory technologies also keep child pornography, money laundering scams and illegal goods from flooding your timeline, provide crucial targeted advertising for smaller entities, and pay the bills of your favourite influencers so that piping hot tea keeps landing in your feed on a regular basis.
More than ever, it is vital that every internet user pays attention to the frame, rather than just the content that passes across the surface of the screen.