
Project Dragonfly is making Google’s employees question the moral values of the company

“Anybody who does business in China compromises some of their core values. Every single company, because the laws in China are quite a bit different than they are in our own country”, said John Hennessy, the chair of Google’s parent company Alphabet Inc., when asked about Project Dragonfly, a formerly secret Google plan for a censored Chinese search engine, and the dubious morals behind it.

Project Dragonfly (as it was internally named at Google) is a customised version of Google’s iconic search engine, where the freedom of internet browsing is curtailed by heavy censorship on the search results. The project was designed to conform to the rules of the Chinese Communist Party, which means that Google has to censor ‘sensitive’ information such as “Tiananmen Square”, “Nobel Prize” and “human rights”. Any public information must adhere to China’s surveillance laws or be screened by authorities first. For example, information on air pollution needs to be approved by Beijing before appearing on the search engine. The protocol does not exactly comply with what, in the Western hemisphere, is considered freedom of speech or basic privacy settings, which is why it doesn’t come as a surprise that Google’s employees are now asking for the launch of the project to be reconsidered.
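To make the mechanism concrete, here is a deliberately simplified sketch of what query-level blacklisting looks like in principle. The terms, function names and behaviour below are purely illustrative assumptions, not Google’s actual implementation.

```python
# Toy illustration only: a crude query blacklist of the kind described above.
# None of this is Google's code; the terms, names and logic are hypothetical.

BLACKLISTED_TERMS = {"tiananmen square", "nobel prize", "human rights"}

def is_query_allowed(query: str) -> bool:
    """Return False if the query contains any blacklisted term."""
    normalised = query.lower()
    return not any(term in normalised for term in BLACKLISTED_TERMS)

def censored_search(query: str) -> list:
    """Run a search only if the query passes the blacklist check."""
    if not is_query_allowed(query):
        return []  # censored: no results are returned at all
    # Stand-in for a real search backend.
    return ["result for '{}'".format(query)]

if __name__ == "__main__":
    print(censored_search("weather in Beijing"))          # normal results
    print(censored_search("Tiananmen Square protests"))   # [] under the blacklist
```

Even in this toy form, the design choice is visible: the filtering happens before any search is run, so a censored query simply returns nothing.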

It’s not the first time Google has attempted to introduce its search engine to China. Back in 2010, a similarly controversial programme was dropped when founders Sergey Brin and Larry Page decided that censoring the search engine to the standards of the Chinese government was not in line with the company ethos. Eight years later, Google is trying to enter China’s restricted network once again. According to reporter Mark Bergen, the reason behind this new venture, beyond Google’s long-standing ambition to expand into China, was to tie the company’s focus on Artificial Intelligence more closely to China’s “talent pool in the field”.

But as in 2010, following months of internal calls for clarification on the project’s human rights concerns and several leaks of the protocol, a number of employees are now threatening to strike as they voice their opposition. The lack of coherent answers coming from Project Dragonfly’s main executives also triggered the resignation of former Google research scientist Jack Poulson, as he thoroughly explains in an article published on The Intercept. According to some of the protesters, the project managers, among them Scott Beaumont, Google’s head of operations in China, initially intended to disclose Project Dragonfly only once it was already launched in China.

But with mounting internal pressure, things didn’t go as smoothly as the project’s executives had imagined. In response to the questioning, a Google spokesperson published a statement addressing the plan, arguing that Project Dragonfly was only in a test phase. In a bid to demonstrate that the company had taken the moral workings of the engine into consideration, the spokesperson said, “This is an exploratory project and no decision has been made about whether we could or would launch.” Yet, as Poulson himself wrote, “Google CEO Sundar Pichai attempted to invoke an engineering defence by arguing that Google would not need to censor “well over 99 percent” of queries. Such a framing is perhaps the most extreme example of a broad pattern of redirecting conversations away from their concrete governmental concessions—which, again, literally involved blacklisting the phrase “human rights,” risking health by censoring air quality data, and allowing for easy surveillance by tying queries to phone numbers.”

The negotiation over a censored search engine in China is more than a decade old. All along, doubts have persisted over whether it would be better for Google to provide Chinese citizens with its renowned engine, albeit censored, or to keep the company’s ethos coherent. It would be interesting to know whether Google’s CEO and the project leaders ever foresaw the possibility of opposition coming from their own workers, or whether they were aware that a considerable number of Google’s well-nurtured employees would come forward to denounce one of the company’s most critical projects for its questionable values. At this point, one thing is certain: Google is being scrutinised by its own employees, a form of agency that carries real force and reveals a glimpse of resistance within one of the world’s most powerful and otherwise untouchable corporations.

Amazon is working on a voice-activated device that can read our emotions

According to Amazon, we suck at handling our emotions—so they’re offering to do it for us. The company that gave us Echo and everyone’s favourite voice to come home to, Alexa, has announced it is working on a voice-activated wearable device that can detect our emotions. Based on the user’s voice, the device (unfortunately not a mood ring) can discern the user’s emotional state and, in theory, instruct them on how to respond effectively to their own feelings and to those of others. Amazon already knows our shopping habits as well as our personal and financial information; now it wants our soul too. Welcome to the new era of mood-based marketing and possibly the end of humanity as we know it.

Emotional AI and voice recognition technology have been on the rise, and according to Annette Zimmermann, “By 2022, your personal device will know more about your emotional state than your own family.” Unlike the marketing of the past, which captured your location, what you bought or what you liked, it is no longer about what we say but how we say it: the intonations of our voices, the speed at which we talk, the words we emphasise and even the pauses in between them.

Voice analysis and emotional AI are the future, and Amazon plans to be a leader in wearable AI. Using the same software that powers Alexa, this emotion detector will use microphones and voice activation to analyse a user’s voice and identify emotions through vocal pattern analysis. Through these vocal biomarkers, it can identify everything from base emotions such as anger, fear and joy to nuanced feelings like boredom, frustration, disgust and sorrow. The secretive Lab 126, the hardware development group behind Amazon’s Fire phone, Echo speaker and Alexa, is creating this emotion detector (code name Dylan). Although it’s still in early development, Amazon already filed a patent for it in October 2018.
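As a rough intuition for how vocal biomarkers could map to emotional labels, here is a minimal sketch. The feature names, thresholds and emotion labels are invented for illustration; a real system like the one described would extract such features from raw audio and rely on trained models rather than hand-written rules.

```python
# Hypothetical sketch of "vocal biomarker" emotion scoring.
# Feature names, thresholds and labels are invented for illustration only.

from dataclasses import dataclass

@dataclass
class VocalFeatures:
    mean_pitch_hz: float    # average fundamental frequency of the voice
    speech_rate_wps: float  # speaking speed, in words per second
    pause_ratio: float      # fraction of the recording spent in silence

def classify_emotion(f: VocalFeatures) -> str:
    """Very rough rule-based mapping from vocal features to a base emotion."""
    if f.mean_pitch_hz > 220 and f.speech_rate_wps > 3.0:
        return "anger or excitement"
    if f.pause_ratio > 0.4 and f.speech_rate_wps < 1.5:
        return "sorrow or boredom"
    return "neutral"

if __name__ == "__main__":
    sample = VocalFeatures(mean_pitch_hz=240.0, speech_rate_wps=3.4, pause_ratio=0.1)
    print(classify_emotion(sample))  # -> anger or excitement
```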

This technology has been around since 2009. Companies working in the space include CompanionMx, a clinical app that uses voice analysis to document a patient’s emotional progress and suggest ways to improve; VoiceSense, which analyses customers’ investment styles as well as employee hiring and turnover; and Affectiva, born out of the MIT Media Lab, which produces emotional AI for marketing firms, healthcare, gaming, automotive and almost every other facet of modern life you can think of.

So why is Amazon getting into it now? With Amazon’s data goldmine combined with emotional AI, the payoff could be bigger than anything Apple or Fitbit can offer. Combining a user’s mood with their browsing and purchasing history will improve what Amazon recommends to you, refine its target demographics, and improve how it sells you stuff.
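In concrete terms, mood could simply become one more signal that re-weights recommendations already derived from your history. The sketch below is a hypothetical illustration of that idea; the product names, moods and boost values are all invented.

```python
# Hypothetical sketch of mood-aware re-ranking of recommendations.
# Products, moods and boost values are invented for illustration only.

MOOD_BOOSTS = {
    "sad":  {"adele album": 2.0, "comfort food": 1.5},
    "sick": {"flu medicine": 3.0, "herbal tea": 1.5},
}

def rerank(base_scores: dict, mood: str) -> list:
    """Multiply base recommendation scores by mood-specific boosts, then sort."""
    boosts = MOOD_BOOSTS.get(mood, {})
    scored = {item: score * boosts.get(item, 1.0)
              for item, score in base_scores.items()}
    return sorted(scored, key=scored.get, reverse=True)

if __name__ == "__main__":
    history_based = {"phone case": 0.9, "flu medicine": 0.4, "adele album": 0.5}
    print(rerank(history_based, "sick"))  # flu medicine jumps to the top
```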

From a business standpoint, this is quite practical. When it comes down to it, we’ll still need products, health products being one example. You won’t care so much about the bleak implications of targeted marketing when you’re recommended the perfect flu meds while you’re sick. Mood-based marketing makes sense, as mood and emotions can affect our decision making. For instance, if you were going through a breakup you would be more apt to buy an Adele album than if you were in a relationship. But this goes deeper than knowing what type of shampoo we like or the genre of film we prefer watching. It is violating, and it strips away control over our own purchasing decisions. They’re digging into how we feel, into our essence and, if you believe in it, into our souls.

One must ask: who is coding this emotion detector? Whose emotional bias is shaping what counts as an appropriate emotional response? Kate Crawford from the AI Now Institute voiced her concerns in her 2018 speech at the Royal Society, emphasising that the people behind the tech matter most, because their choices will affect how millions of people behave, as well as future generations.

For instance, if a Caucasian man were coding this tech, could it accurately identify the emotional state of a black woman wearing the device? How do you detect the feeling that follows a microaggression if the person coding the tech has never experienced one? What about emotions that can’t be translated from one language to another? Another concern is that we won’t be able to trust ourselves about how we feel. If we ask where the closest ice cream shop is and the device asks whether we’re sad, will we become sad? Can it brainwash us into feeling how it wants us to feel? After decades of using GPS, we no longer know how to navigate without it. Will this dependency erode our ability to feel and to react emotionally, in other words, to be human?

Taking all this information in, I’m still weirdly not mad at the idea of a mood detector. It has potential as an aid. People with conditions such as PTSD, autism, or Asperger’s syndrome could benefit, as it could help them interact with others and help loved ones better understand them. So should we allow non-sentient machines that have never experienced frustration, disappointment or heartache to tell us how to feel? Part of me says hell no, but a part of me wouldn’t mind help with handling my emotions. If we are aware of all the positive and negative implications, we can interact with this technology and use it responsibly. If we see it as an aid and not as a guide, it could have great potential to help us communicate better with others and with ourselves. Or it could obliterate what is left of our humanity. Sorry, that was a bit heavy-handed, but I can’t help it, I’m human.