
How voice profiling determines how to exploit your feelings, privacy and your wallet

By Alma Fabiani

Jun 11, 2021


Whenever you call a number and hear, “This call is being recorded for training and quality control,” it isn’t just the customer service representative being monitored. It can be you, too. As you dial in, a computer run by an artificial intelligence company hired by the store is activated. It accesses previous data on the speaking style you used when you phoned other companies serviced by the same software firm.

You’re in luck: the computer has concluded you are ‘friendly and talkative’. Using predictive routing, it connects you to a customer service agent whom company research has identified as especially good at getting friendly and talkative customers to buy more expensive versions of the goods they’re considering. In other words, voice profiling allows them to exploit not only your feelings but also your privacy and your wallet.
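The “predictive routing” described above can be pictured as a simple lookup: an inferred caller profile selects the agent type the company believes converts that profile best. The sketch below is purely illustrative; the profile labels, agent pool and routing rules are invented for this example, not any vendor’s actual taxonomy.

```python
# Toy sketch of predictive routing: an inferred caller profile is mapped
# to the agent type a company believes handles that profile best.
# All labels and mappings here are invented for illustration.

AGENT_POOL = {
    "upsell_specialist": ["agent_07", "agent_12"],
    "retention_specialist": ["agent_03"],
    "generalist": ["agent_01", "agent_05"],
}

# Hypothetical rules: which caller profile goes to which agent type.
ROUTING_RULES = {
    "friendly_talkative": "upsell_specialist",
    "irritated": "retention_specialist",
}

def route_call(caller_profile: str) -> str:
    """Return the first available agent for the profile (illustrative only)."""
    agent_type = ROUTING_RULES.get(caller_profile, "generalist")
    return AGENT_POOL[agent_type][0]

print(route_call("friendly_talkative"))  # agent_07
print(route_call("unknown_profile"))     # agent_01
```

In practice a real system would weigh many signals and agent availability, but the core idea, a profile deciding who picks up your call, is this simple.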

If you don’t believe me, check out the company CallMiner and exactly what it offers on its website. “Reveal patterns and insight at scale to understand customers, better meet their needs and expectations, and drive improved loyalty and satisfaction,” is just a nicer way to say what I just stated above.

When conducting research for his forthcoming book The Voice Catchers, author Joseph Turow went through over 1,000 trade magazines and news articles on the companies connected to various forms of voice profiling. “I examined hundreds of pages of US and EU laws applying to biometric surveillance. I analysed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it,” Turow wrote in an article published on Big Think.

“It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing,” he continued. And although he specified that, for now, we’re still in the early stages of that revolution, it’s clear that things are already well in motion. Thanks to the public’s embrace of smart speakers, intelligent car displays and voice-responsive phones, marketers say they are on the verge of being able to use AI-assisted vocal analysis technology to achieve unprecedented insights into shoppers’ identities and inclinations.

Soon enough, they’ll be able to circumvent the errors and fraud associated with traditional targeted advertising. At least, that’s what they’re willing to share with us. Top marketing executives Turow interviewed said they expect their customer interactions to include voice profiling within a decade. Let’s look at what marketers say they need voice profiling for, and what they don’t want to share just yet.

Part—and part is the key word here—of what attracts them to this new technology is a belief that the current system used to create unique customer profiles (and then target them with personalised offers and ads) has drawbacks that simply can’t be ignored any longer. Too often, customer data isn’t up to date, profiles are based on multiple users of a device, names can be confused and, well, people lie.

As a result, these drawbacks create barriers to understanding individual shoppers—and to selling them more crap they probably don’t need. Voice analysis, on the other hand, is seen as a solution that makes it nearly impossible for people to hide their feelings or evade their identities, Turow explains.

Here’s where marketers’ real interest in voice profiling comes into play—in customer support centres, which are largely out of the public eye. Since they’ve been introduced to us as the little helper we all need and deserve, hundreds of millions of Amazon Echoes, Google Nests and other smart speakers have infiltrated our homes. Smartphones also contain such technology.

You’ve probably heard rumours of this before, but all these smart speakers are listening—just not in the way you might think. They don’t listen to your conversations in order to serve you ads for things you happen to mention; rather, they’re tied to advanced machine learning and deep neural network programmes that analyse what you say and how you say it.


The user agreements of Amazon and Google (as well as many other companies that people access routinely via phone app) give them the right to use their digital assistants to understand you by the way you sound. Amazon’s most public application of voice profiling so far is its Halo wristband, which claims to know the emotions you’re conveying when you talk to relatives, friends and employers.

Although these Big Tech companies assure customers that they’re not using this data for their own purposes—yet—their patents offer a clear vision of what’s coming. From deciphering a shopper’s voice to measure unconscious reactions to products to collecting gender and age information based on the pitch of voice signatures throughout a house, the future is nearing.

As scary as this sounds, you probably haven’t even considered the worst part yet: the impact voice profiling could have on both political campaigns and government activities.




Amazon is working on a voice-activated device that can read our emotions

By Camay Abraham

Jul 1, 2019


According to Amazon, we suck at handling our emotions—so they’re offering to do it for us. The company that gave us Echo and everyone’s favourite voice to come home to, Alexa, has announced it is working on a voice-activated wearable device that can detect our emotions. Based on the user’s voice, the device (unfortunately not a mood ring) can discern the emotional state the user is in and theoretically instruct the person on how to respond effectively to their own feelings, as well as to those of others. As Amazon already knows our shopping habits and our personal and financial information, it now wants our soul too. Welcome to the new era of mood-based marketing, and possibly the end of humanity as we know it.

Emotional AI and voice recognition technology have been on the rise, and according to Annette Zimmermann, “By 2022, your personal device will know more about your emotional state than your own family.” Unlike the marketing of the past, which captured your location, what you bought or what you liked, it’s no longer about what we say but how we say it: the intonations of our voices, the speed we talk at, which words we emphasise and even the pauses between those words.

Voice analysis and emotional AI are the future, and Amazon plans to be a leader in wearable AI. Using the same software as Alexa, this emotion detector will use microphones and voice activation to recognise and analyse a user’s voice, identifying emotions through vocal pattern analysis. Through these vocal biomarkers, it can identify base emotions such as anger, fear and joy, as well as more nuanced feelings like boredom, frustration, disgust and sorrow. The secretive Lab126, the hardware development group behind Amazon’s Fire phone, Echo speaker and Alexa, is creating this emotion detector (code name Dylan). Although it’s still in early development, Amazon already filed a patent for it in October 2018.
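To get a feel for what “vocal pattern analysis” means at its most basic, here is a toy sketch: two crude acoustic features (signal energy, and zero-crossing rate as a rough pitch/voicing proxy) and an invented threshold that maps them to a coarse “arousal” label. Everything here, from the threshold to the label names, is an assumption for illustration; real emotion-AI systems run trained models over far richer feature sets.

```python
import math

# Toy "vocal biomarker" extraction. The features are real acoustic
# basics, but the threshold and labels are invented for illustration.

def energy(samples):
    """Mean squared amplitude: loud, agitated speech scores higher."""
    return sum(s * s for s in samples) / len(samples)

def zero_crossing_rate(samples):
    """Fraction of sign changes: a rough proxy for pitch/voicing."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)

def coarse_arousal(samples, energy_threshold=0.25):
    """Map raw energy to a crude high/low 'arousal' label (illustrative)."""
    return "high" if energy(samples) > energy_threshold else "low"

# A loud synthetic 220 Hz tone vs. a quiet copy of it, at 8 kHz.
loud = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
quiet = [0.1 * s for s in loud]

print(coarse_arousal(loud), coarse_arousal(quiet))  # high low
print(round(zero_crossing_rate(loud), 3))           # ≈ 0.055
```

A production system would feed dozens of such features, pitch contours, speaking rate, pauses, into a classifier trained on labelled speech, but the pipeline shape (waveform → features → emotion label) is the same.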

This kind of technology has been around since 2009, driven by companies such as CompanionMx, a clinical app that uses voice analysis to document a patient’s emotional progress and suggest ways to improve; VoiceSense, which analyses customers’ investment styles as well as employee hiring and turnover; and Affectiva, born out of the MIT Media Lab, which produces emotional AI for marketing firms, healthcare, gaming, automotive and almost every other facet of modern life you can think of.

So why is Amazon getting into it now? Combined with emotional AI, Amazon’s data goldmine promises a bigger payout than Apple’s or Fitbit’s. Pairing a user’s mood with their browsing and purchasing history will improve what the company recommends to you, refine its target demographics and sharpen how it sells you stuff.

From a business standpoint, this is quite practical. When it comes down to it, we’ll still need products; health products, for example. You won’t care so much about the bleak implications of targeted marketing when you’re recommended the perfect flu meds while you’re sick. Mood-based marketing makes sense, as mood and emotions can affect our decision making: if you were going through a breakup, you’d be more apt to buy an Adele album than if you were in a relationship. But this goes deeper than knowing what type of shampoo we like or which genre of movie we prefer. It’s invasive, and it chips away at our control over our own purchasing decisions. They’re digging into how we feel: into our essence and, if you believe in it, into our souls.

One must ask: who is coding this emotion detector? Whose emotional biases shape what it identifies as an appropriate emotional response? Kate Crawford of the AI Now Institute voiced these concerns in a 2018 speech at the Royal Society, emphasising that the people behind the tech matter most, because their choices will affect how millions of people, including future generations, behave.

For instance, if a Caucasian man were coding this tech, could it accurately identify the emotional state of a Black woman wearing the device? How do you detect the feeling that follows a microaggression if the person coding the tech has never experienced one? What about emotions that can’t be translated from one language to another? Another concern is that we may stop trusting our own sense of how we feel. If we ask where the closest ice cream shop is and the device asks whether we’re sad, will we become sad? Can it nudge us into feeling how it wants us to feel? After decades of using GPS, many of us can no longer navigate without it. Will a similar dependency erode our ability to feel and react emotionally on our own, in other words, to be human?

Taking all this information in, I’m still, weirdly, not mad at the idea of a mood detector. It has potential as an aid: people with conditions such as PTSD, autism or Asperger’s syndrome could benefit, as it might help them interact with others, or help loved ones better understand them. So should we let non-sentient machines that have never experienced frustration, disappointment or heartache tell us how to feel? Part of me says hell no, but another part wouldn’t mind help handling my emotions. If we are aware of both the positive and negative implications, we can interact with this technology more responsibly. If we see it as an aid and not as a guide, it could genuinely help us communicate better with others and with ourselves. Or it could obliterate what is left of our humanity. Sorry, that was a bit heavy-handed, but I can’t help it; I’m human.
