‘Can we give technology a new voice?’ asks the introduction to the video presentation of Q, the first genderless voice in an otherwise binary landscape of AI voice assistants. A Denmark-based group of linguists, technologists, and sound designers thinks so, which is why it embarked on a mission to create the first gender-neutral voice, one that could eventually be implemented in IoT devices and services.
As it fluidly oscillates between higher and lower pitches, the soothing voice of Q is attributable to neither a male nor a female identity. Q’s developers—a team born out of a collaboration between Copenhagen Pride, Virtue (Vice’s creative agency), and Equal AI—began by recording the voices of more than 20 people identifying as male, female, transgender and non-binary. After merging these voices together, they identified what audio researchers consider a neutral frequency range, which sits between 145 and 175 hertz. The resulting voice sample was then tested by over 4,000 people who gave their feedback, and by tweaking its modulation to match that middle range, guided by the testers’ inability to attribute the voice to a gender, Q was born.
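The pitch-neutralising step the team describes can be sketched in code. The snippet below is purely illustrative and not the team’s actual pipeline: it roughly estimates a recording’s fundamental frequency by autocorrelation, then computes the semitone shift needed to land inside the 145–175 Hz band cited above. The function names, the lag search range, and the choice of the band’s geometric midpoint as a target are all assumptions made for the sketch.

```python
import numpy as np

NEUTRAL_LOW, NEUTRAL_HIGH = 145.0, 175.0  # Hz, the range cited in the article

def estimate_pitch(signal, sample_rate):
    """Rough fundamental-frequency estimate via autocorrelation."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # Skip the zero-lag peak; search only lags plausible for human speech
    # (roughly 60-400 Hz).
    min_lag = int(sample_rate / 400)
    max_lag = int(sample_rate / 60)
    lag = min_lag + int(np.argmax(corr[min_lag:max_lag]))
    return sample_rate / lag

def semitones_to_neutral(pitch_hz):
    """Semitone shift that moves pitch_hz to the middle of the neutral band."""
    target = (NEUTRAL_LOW * NEUTRAL_HIGH) ** 0.5  # geometric midpoint, ~159 Hz
    return 12 * np.log2(target / pitch_hz)
```

A 220 Hz voice, for instance, would come out needing a downward shift of roughly five and a half semitones; a real system would apply that shift with a pitch-shifting algorithm rather than compute it in isolation.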
Q was created to challenge the gender bias present in AI personal assistants, tools that are becoming ever more ubiquitous. We are all accustomed to Alexa’s smooth female voice, as well as Siri’s default feminine tones. And it’s no coincidence that our domestic and personal devices all speak with a female voice: their role is to make us feel helped, comfortable and intimately connected with the device. By contrast, security and public space robots often have a male voice, meant to convey authority and distance. In this regard, despite its limitless ability to be whatever we make it, AI is perpetuating the same gender stereotypes still very much present in everyday life.
Q is still at an early stage, as it doesn’t yet have an AI framework to power it; building one is the team’s next goal. As robots, AI assistants, and the IoT more generally increasingly communicate with us by voice, it’s worth asking how we can erase bias in technology from the start. “Q adds to a global discussion about who is designing gendered technology, why those choices are made, and how people feed into expectations about things like trustworthiness, intelligence, and reliability of a technology based on cultural biases rooted in their belief system about groups of people”, said advisor to the project Julie Carpenter, a researcher at the Ethics and Emerging Sciences Group.
There is no doubt that Q could challenge some of the bias currently present in our technology, but it also speaks to the potential of tech to become a tool for experimenting with, and challenging, the stereotypes that we still find hard to break IRL. Much of the fear associated with AI is fuelled by the belief that we will not be able to control it as much as it will be able to control us; that it could harm more than it could help. But at the same time, we now have the knowledge and the capacity to shape AI for the better: not just to keep it in check, but to make it more progressive than we currently are.
As the voice continues to be a prominent feature of both present and future technologies, taking the time to reflect on what type of voice technology should have in the first place appears to be not only a logical but a necessary step towards shaping AI to be as inclusive as our society, or even more so.
In nearly three decades on this earth, I’ve encountered (and slept with) a fair share of liars. Be it my hopeless naivete or general absentmindedness, I always feel thunderstruck by the revelation of their deceitful ways, which tends to cause a great deal of frustration, heartache, or, in some cases, financial loss. Now, thanks to a new study by Florida State University (FSU) and Stanford researchers, it may be possible to identify such liars early on, and spare ourselves the eventual catastrophe, through an online lie-detector.
This beta incarnation of what is intended to become a bona fide virtual polygraph utilises algorithms to decipher truth from lies by observing online conversations between two people, based solely on the content and speed of their typing. Yet an examination by WIRED reveals that numerous academics and machine learning experts have serious concerns about this so-called lie detector and the social, economic, and personal implications such an invention might have.
In their experiment, the FSU and Stanford researchers had 40 participants play a game called ‘Real or Spiel’, in which paired individuals answered each other’s questions either as lying ‘sinners’ or truth-telling ‘saints’. The researchers then used the content of the conversations, along with the pace and timing of each response, to train a machine learning model to distinguish ‘sinners’ from ‘saints’, which they claim to have done with 82.5 percent accuracy. They also found that liars exhibited more negative emotion and anxiety than saints, and produced faster and longer responses, replete with words such as “always” and “never”.
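As a rough illustration of how such a classifier works, here is a minimal sketch, emphatically not the researchers’ actual model: it fits a logistic regression by gradient descent on synthetic conversation features (words per response, delay before replying, count of absolutist words), with distributions invented purely to mirror the patterns the study reports.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(n, liar):
    # Invented distributions mirroring the study's reported patterns:
    # liars reply faster, write longer, and use more absolutist words.
    words = rng.normal(18 if liar else 12, 3, n)      # words per response
    delay = rng.normal(1.5 if liar else 3.0, 0.5, n)  # seconds before replying
    absolutist = rng.poisson(2 if liar else 0.5, n)   # "always"/"never" counts
    return np.column_stack([words, delay, absolutist])

X = np.vstack([make_features(300, True), make_features(300, False)])
y = np.concatenate([np.ones(300), np.zeros(300)])  # 1 = 'sinner', 0 = 'saint'

# Standardise the features, then fit a logistic regression by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of lying
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == y)
```

On synthetic data this separable, the toy model scores far above the study’s 82.5 percent, which is precisely the critics’ point below: clean game-like data is much easier to classify than real conversation.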
“The results were amazingly promising, and that’s the foundation of the online polygraph,” says one of the researchers behind the study, which was published in Computers in Human Behavior. “You could use it for online dating, Facebook, Twitter—the applications are endless.”
Critics, however, seriously doubt the ability of such a model to function as a reliable lie-detector, and warn that a malfunctioning of such an invention could have catastrophic ramifications for people’s lives and for society as a whole. In an interview with WIRED, Jevin West, a professor at the Information School at the University of Washington, says, “It’s an eye-catching result. But when you’re dealing with humans we have to be extra careful, especially when the implications of whether someone’s lying could lead to conviction, censorship, the loss of a job. When people think the technology has these abilities, the implications are bigger than a study.”
Kate Crawford, the co-founder of the AI Now Institute at New York University, seconds West’s scepticism and goes on to question the very methodology of the study. “You’re looking at linguistic patterns of people playing a game, and that’s very different from the way people really speak in daily life,” Crawford tells WIRED. She then draws a connection between this study and what she sees as a problematic history of polygraph tests, which produce scientifically questionable results and often render false positives.
As West and Crawford demonstrate, there’s a litany of concerns regarding FSU and Stanford’s current study and its possible implications. But even if a more reliable virtual polygraph were to surface, is this truly a technology we should be embracing?
Yes, it could potentially constitute a useful tool to tackle false information circulating online, or to call out politicians on their untruthful tweets. Even in day-to-day life, identifying lies could help us avoid scams, eliminate toxic relationships, and win court battles. Yet a polygraph-on-the-go could also prove destructive, and for reasons other than those put forth by West and Crawford. One of them is the fact that, whether we care to admit it or not, we all engage in different degrees of lying, be it telling our bestie that their new poem rocks or concealing our true location from a potentially creepy person we met on Grindr. The truth is that, often, a fib can be life-altering, and not necessarily in a negative sense.
Besides, if we were to get into the nitty-gritty of things, we’d realise that truth can be subjective, dynamic, and up for interpretation—particularly when it comes to emotions and opinions. Sometimes, exposure to the ‘truth’ can cause unnecessary pain and conflict. There is a reason we construct certain bubbles of illusion—and perhaps some of them aren’t meant to be burst.
As researchers and institutions across the globe race to forge the world’s first virtual lie-detector, major questions still linger: is an algorithm genuinely equipped to decipher dishonesty? What would its errors cost us? Is it always possible to strictly define a lie? And are we truly ready to face the truth?