In nearly three decades on this earth, I’ve encountered (and slept with) a fair share of liars. Be it my hopeless naivete or general absentmindedness, I always feel thunderstruck by the revelation of their deceitful ways, which tends to cost me a great deal of frustration, heartache, or, in some cases, money. Now, thanks to a new study by Florida State University (FSU) and Stanford researchers, it may be possible to identify such liars early on through an online lie detector and spare ourselves the eventual catastrophe.
This beta incarnation of what is intended to become a bona fide virtual polygraph utilises algorithms to separate truth from lies while observing online conversations between two people, based solely on the content and speed of typing. Yet an examination by WIRED reveals that numerous academics and machine learning experts have serious concerns about this so-called lie detector and the social, economic, and personal implications such an invention might have.
In their experiment, the FSU and Stanford researchers had 40 participants play a game called ‘Real or Spiel’, in which paired individuals answered each other’s questions either as lying ‘sinners’ or truth-telling ‘saints’. The researchers then used the content of the conversations, along with the pace and timing of each response, to train a machine learning model to distinguish ‘sinners’ from ‘saints’, which they claim to have done with 82.5 percent accuracy. They also found that liars exhibited more negative emotion and anxiety than saints, and produced faster and longer responses, replete with words such as “always” and “never”.
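For the technically curious, a toy version of such a model is easy to imagine: combine word-level text features with response-timing features and train a standard classifier on labelled ‘sinner’/‘saint’ conversations. The Python sketch below, using scikit-learn, merely illustrates that general recipe; the data is invented, and the specific feature set and model (TF-IDF plus logistic regression) are assumptions rather than the researchers’ published pipeline.

```python
# A minimal sketch of a text-plus-timing lie classifier, assuming
# TF-IDF word features and logistic regression. The data, features,
# and model choice are illustrative guesses, not the study's pipeline.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical transcripts: one participant's answers joined together,
# plus crude per-participant timing statistics.
answers = [
    "I always pay on time, never once have I missed a deadline",
    "I think I sent it last week, but let me double-check",
    "I never forget anything, you can always count on me",
    "Honestly, I am not sure, it was quite a while ago",
]
avg_response_secs = [1.1, 4.8, 0.9, 5.3]    # liars answered faster in the study
avg_words_per_msg = [11.0, 9.0, 10.0, 9.0]  # and wrote longer responses
labels = [1, 0, 1, 0]                       # 1 = lying 'sinner', 0 = 'saint'

# Text features: TF-IDF over word unigrams, so absolutist cues such as
# "always" and "never" get their own weighted columns.
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(answers)

# Append the timing statistics as two extra numeric columns.
X_timing = csr_matrix(np.column_stack([avg_response_secs, avg_words_per_msg]))
X = hstack([X_text, X_timing])

# Fit a simple linear classifier and inspect its predictions.
clf = LogisticRegression()
clf.fit(X, labels)
print(clf.predict(X))  # with real data you would hold out a test set
```

Whether such a recipe holds up outside a lab game is, as the critics below argue, another matter entirely.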
“The results were amazingly promising, and that’s the foundation of the online polygraph,” says one of the researchers, whose study was published in the journal Computers in Human Behavior. “You could use it for online dating, Facebook, Twitter—the applications are endless.”
Critics, however, seriously doubt the ability of such a model to function as a reliable lie detector, and warn that a malfunctioning version of such an invention could have catastrophic ramifications for people’s lives and for society as a whole. In an interview with WIRED, Jevin West, a professor at the Information School at the University of Washington, says, “It’s an eye-catching result. But when you’re dealing with humans we have to be extra careful, especially when the implications of whether someone’s lying could lead to conviction, censorship, the loss of a job. When people think the technology has these abilities, the implications are bigger than a study.”
Kate Crawford, the co-founder of the AI Now Institute at New York University, seconds West’s scepticism and goes on to question the very methodology of the study. “You’re looking at linguistic patterns of people playing a game, and that’s very different from the way people really speak in daily life,” Crawford tells WIRED. She then draws a connection between this study and what she sees as the problematic history of polygraph tests, which produce scientifically questionable results and often yield false positives.
As West’s and Crawford’s objections demonstrate, there is a litany of concerns regarding FSU and Stanford’s current study and its possible implications. But even if a more reliable virtual polygraph were to surface, is this truly a technology we should be embracing?
Yes, it could potentially constitute a useful tool to tackle false information circulating online or to call out politicians on their untruthful tweets. Even in day-to-day life, identifying lies could help us avoid scams, eliminate toxic relationships, and win court battles. Yet a polygraph-on-the-go could also prove destructive, and for reasons other than those put forth by West and Crawford. One of them is the fact that, whether we care to admit it or not, we all engage in different degrees of lying: telling our bestie that their new poem rocks, or concealing our true location from a potentially creepy person we met on Grindr. The truth is that, often, a fib can be life-altering, and not necessarily in a negative sense.
Besides, if we were to get into the nitty-gritty of things, we’d realise that truth can be subjective, dynamic, and up for interpretation, particularly when it comes to emotions and opinions. Sometimes, exposure to the ‘truth’ can cause unnecessary pain and conflict. There is a reason we construct certain bubbles of illusion, and perhaps some of them aren’t meant to be burst.

As researchers and institutions across the globe race to forge the world’s first virtual lie detector, major questions still linger: is an algorithm genuinely equipped to decipher dishonesty? What would its errors cost us? Is it always possible to strictly define a lie? And are we truly ready to face the truth?