Spotify’s new feature will let you talk with its ads

By Sofia Gallarate

May 13, 2019

With 217 million monthly active users, a number set only to grow, Spotify is undeniably at the top of the streaming market. Given those figures, the company has been remarkably efficient at monetising every aspect of the platform to wring the highest profit out of its broad user base. With an array of options, from sponsored playlists to video takeovers, Spotify for Brands has been offering its clients all kinds of ad experiences. Today, the streaming platform is testing yet another advertising programme: voice-enabled audio ads, an interactive format that no other platform has tried yet.

The feature works on the premise that the audio advertisements that usually interrupt playlists on Spotify could do more, by also offering listeners the option to switch to a playlist curated by the brand whose advertisement just played. How? Through voice commands. The new marketing feature lets listeners interact with an advertisement by saying a specific phrase while the commercial is playing. By saying ‘Play now’ out loud when prompted by an audio commercial, the listener activates the brand’s curated playlist (which comes with more commercials), and it starts streaming in place of whatever they were listening to before.
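
Spotify hasn’t published any implementation details, but the interaction it describes is simple enough to sketch. The toy Python below is purely illustrative: every name in it (VoiceAd, fake_transcribe, run_voice_enabled_ad) is a hypothetical stand-in, not Spotify’s API. It just shows the flow: while an ad plays, the client listens for the trigger phrase and, if it hears it, swaps in the brand’s playlist.

```python
from dataclasses import dataclass

@dataclass
class VoiceAd:
    audio: str            # the ad creative
    brand_playlist: str   # the curated playlist the ad promotes

def fake_transcribe(snippet: str) -> str:
    """Stand-in for an on-device speech-to-text call."""
    return snippet.lower()

def run_voice_enabled_ad(ad: VoiceAd, mic_snippets: list[str]) -> str:
    """While the ad plays, listen for the trigger phrase; if the
    listener says it, switch playback to the brand's playlist
    (which, as noted above, comes with more commercials)."""
    for snippet in mic_snippets:  # only works with the mic enabled
        if "play now" in fake_transcribe(snippet):
            return ad.brand_playlist  # listener opted in
    return "previous queue"  # ad ended with no interaction

ad = VoiceAd(audio="brand audio spot", brand_playlist="brand curated playlist")
print(run_voice_enabled_ad(ad, ["hmm", "Play now"]))  # -> brand curated playlist
```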

The feature, which only works if users have their microphone enabled, launched on May 2 in the U.S. for Spotify Free users, just a few days after company CEO Daniel Ek stated that the voice space is a “critical area of growth” for the company. Spotify is currently testing the feature only with its own podcasts and an audio commercial for Unilever’s AXE brand.

Although voice commands are becoming ubiquitous in technology, many industry professionals have raised concerns about how well this new project will actually work. “Voice commands are becoming second nature to us, just as swiping on a tablet or phone already is today,” Leslie Walsh, executive director of strategy at agency Episode Four, told AdAge. But the fact that people are increasingly accustomed to using their voices to activate and control a variety of home devices doesn’t necessarily mean they will be equally willing to talk back to an advertisement. “While the ‘Play now’ voice command is a shiny new feature, as with all new shiny things in tech, brands need to make sure they have something of value to offer to listeners before jumping on it,” Walsh adds.

Spotify’s Audio Everywhere, the standard audio ad package offered by the streaming company, already allows brands to “Reach your target audience on any device, in any environment, during any moment of the day,” as Spotify’s website puts it. The new feature lets brands not only reach their target audience anywhere, but also measure how effective their ads are from users’ immediate reactions to them. To pull that off, brands will have to be particularly good at producing relevant, engaging content for both the audio advertisement and the playlist that follows it, something that has yet to be proven possible.

Spotify’s ‘Play now’ feature is not groundbreaking as voice command technology goes; both Apple’s Siri and Amazon’s Alexa already let users extend native applications by voice. But as Tom Edwards, chief digital and innovation officer at Epsilon, explains, “The Spotify experience adds a monetization angle to that experience.” The streaming platform’s new experiment shrinks the distance between listeners and advertisers. Spotify is asking its users to literally answer its advertisements, and whether the feature succeeds or not, one thing is certain: it is paving the way for a whole new advertising era, one in which listeners play an increasingly active role in brands’ marketing strategies.

Amazon is working on a voice-activated device that can read our emotions

By Camay Abraham

Jul 1, 2019

According to Amazon, we suck at handling our emotions, so it is offering to do it for us. The company that gave us the Echo and everyone’s favourite voice to come home to, Alexa, has announced it is working on a voice-activated wearable device that can detect our emotions. Based on the user’s voice, the device (unfortunately not a mood ring, but you can read more about those here) can discern the emotional state the user is in and, in theory, instruct the person on how to respond effectively to their own feelings and to other people’s. Amazon already knows our shopping habits and our personal and financial information; now it wants our soul too. Welcome to the new era of mood-based marketing, and possibly the end of humanity as we know it.

Emotional AI and voice recognition technology have been on the rise, and according to Gartner research vice president Annette Zimmermann, “By 2022, your personal device will know more about your emotional state than your own family.” Unlike the marketing of the past, which captured your location, what you bought, or what you liked, it is no longer about what we say but how we say it: the intonation of our voices, the speed we talk at, the words we emphasise, and even the pauses in between.

Voice analysis and emotional AI are the future, and Amazon plans to be a leader in wearable AI. Using the same software that powers Alexa, this emotion detector will use microphones and voice activation to recognise and analyse a user’s voice, identifying emotions through vocal pattern analysis. Through these vocal biomarkers, it can pick up everything from base emotions such as anger, fear, and joy to more nuanced feelings like boredom, frustration, disgust, and sorrow. The secretive Lab126, the hardware development group behind Amazon’s Fire phone, Echo speaker, and Alexa, is building the emotion detector (code name Dylan). Although it is still in early development, Amazon already filed a patent for it in October 2018.
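
Amazon hasn’t disclosed how Dylan actually works, but the pipeline described above (extract vocal biomarkers, map them to emotions) can be shown in toy form. A real system would learn from labelled speech rather than use hand-written thresholds; the features, cut-offs, and labels in this Python sketch are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class VocalFeatures:
    pitch_variance: float    # how much the pitch moves around, 0 to 1
    words_per_second: float  # speaking rate
    mean_pause_secs: float   # average silence between words

def classify_emotion(f: VocalFeatures) -> str:
    """Toy rule-based mapping from vocal biomarkers to an emotion label;
    stands in for what would really be a trained model."""
    if f.pitch_variance > 0.6 and f.words_per_second > 3.0:
        return "anger"    # agitated pitch plus rapid speech
    if f.pitch_variance > 0.6:
        return "joy"      # lively pitch at a normal pace
    if f.mean_pause_secs > 0.8 and f.words_per_second < 1.5:
        return "sorrow"   # slow speech with long pauses
    if f.words_per_second < 2.0:
        return "boredom"  # flat and slow
    return "neutral"

print(classify_emotion(VocalFeatures(0.7, 3.5, 0.2)))  # -> anger
print(classify_emotion(VocalFeatures(0.2, 1.2, 1.0)))  # -> sorrow
```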

The underlying technology has been around since 2009. CompanionMx offers a clinical app that uses voice analysis to document a patient’s emotional progress and suggest ways to improve; VoiceSense analyses customers’ investment styles as well as employee hiring and turnover; and Affectiva, born out of the MIT Media Lab, produces emotional AI for marketing firms, healthcare, gaming, automotive, and almost every other facet of modern life you can think of.

So why is Amazon getting into it now? Because Amazon’s data goldmine, combined with emotional AI, promises a bigger payoff than anything Apple or Fitbit could manage. Combining a user’s mood with their browsing and purchasing history would improve what Amazon recommends to you, refine its target demographics, and sharpen how it sells you stuff.

From a business standpoint, this is quite practical. When it comes down to it, we will still need products, health products being one example. You won’t care so much about the bleak implications of targeted marketing when you’re recommended the perfect flu meds while you’re sick. Mood-based marketing makes sense because mood and emotions affect our decision-making: if you were going through a breakup, you’d be more apt to buy an Adele album than if you were in a relationship. But this goes deeper than knowing what type of shampoo we like or which genre of movie we prefer. It is invasive, and it takes control away from our purchasing power. They’re digging into how we feel, into our essence and, if you believe in it, into our souls.
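
To make the idea concrete, here is a deliberately naive sketch of how a mood signal could be folded into recommendations; this is speculation on my part, not a description of Amazon’s systems. The flu meds and Adele examples above map straight onto it: the detected mood adds a boost to mood-relevant items on top of ordinary purchase-history affinity.

```python
def recommend(history: dict[str, float],
              mood_boosts: dict[str, dict[str, float]],
              mood: str) -> str:
    """Score items by purchase-history affinity plus a mood-dependent
    boost, and return the top-scoring item."""
    boosts = mood_boosts.get(mood, {})
    scores = {item: affinity + boosts.get(item, 0.0)
              for item, affinity in history.items()}
    return max(scores, key=scores.get)

# Invented numbers: base affinities from past behaviour...
history = {"flu meds": 0.1, "adele album": 0.3, "running shoes": 0.5}
# ...and per-mood boosts that a mood detector could unlock.
mood_boosts = {"sad": {"adele album": 0.4}, "sick": {"flu meds": 0.9}}

print(recommend(history, mood_boosts, "sad"))   # -> adele album
print(recommend(history, mood_boosts, "sick"))  # -> flu meds
```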

One must ask: who is coding this emotion detector? Whose emotional bias will influence and define what counts as an appropriate emotional response? Kate Crawford of the AI Now Institute voiced these concerns in her 2018 speech at the Royal Society, emphasising that the person behind the tech is the most important person of all, because they will shape how millions of people, and future generations, behave.

For instance, if a Caucasian man were coding this tech, could it accurately identify the emotional state of a black woman wearing the device? How do you detect the feeling that follows a microaggression if the person coding the tech has never experienced one? What about emotions that can’t be translated from one language to another? Another concern is that we may stop trusting our own sense of how we feel. If we ask where the closest ice cream shop is and the device asks if we’re sad, will we become sad? Can it brainwash us into feeling how it wants us to feel? After decades of using GPS, we no longer know how to navigate without it. Will this kind of dependency sever our ability to feel and to react emotionally, in other words, to be human?

Taking all this information in, I’m still weirdly not mad at the idea of a mood detector. It has potential as an aid. People with conditions such as PTSD, autism, or Asperger’s syndrome could benefit, as it might help them interact with others, or help loved ones better understand those who are affected. So should we allow non-sentient machines that have never experienced frustration, disappointment, or heartache to tell us how to feel? Part of me says hell no, but part of me wouldn’t mind help handling my emotions. If we are aware of all the positive and negative implications, we can interact with this technology more responsibly. If we treat it as an aid and not as a guide, it could genuinely help us communicate better with others and with ourselves. Or it could obliterate what is left of our humanity. Sorry, that was a bit heavy-handed, but I can’t help it; I’m human.
