When it comes to scoring your personality from 0 to 10, decimal points included, it’s rather hard to feel as though anything is accurate or authentic. Take ‘Ability to learn’: is 10 out of 10 too arrogant? Is 6.7 too low? What about 7.8? Now imagine that it isn’t you scoring yourself, but an AI system scoring you according to your social media profiles, CV, or even just your name, email or phone number. Welcome to DeepSense, a big data science company that uses AI to make predictions. Predict what, exactly? For example, how likely your potential employees are to maintain stability in the workplace.
I entered DeepSense as an ‘employer looking for a content writer’. I was asked a few questions about the type of position I was looking to fill, and to select the score range I’d like a candidate to hit in fields such as ‘Attitude and Outlook’, ‘Stability Potential’ and ‘General Behaviour’. I was then asked to drop in a quick link to the potential candidate and let DeepSense do its magic. Out of sheer intrigue, I added my own LinkedIn profile to see what my score would be, and whether this AI indeed thought I was suited for the job I already do.
Within seconds, after the spin of a 3D globe icon, my score was revealed. The results were bleak. DeepSense had reduced my entire career, education and skills into numerical scores that were, quite frankly, worse than I’d imagined: 5.2 for ‘Stability Potential’, 5.8 for ‘Learning Ability’, 5.9 for ‘General Behaviour’ (what does that even mean?), and 5 for ‘Need for Autonomy’. Overall, my match score for the ‘Content Writer’ position was 59 percent. Not ideal.
Sure, I identify as slightly neurotic with a shorter temper than most, but I never saw these traits as hindrances to my career or my ability to deliver within a role. Having lived alone since the age of 18, endlessly hustling the creative industries and wiggling my way through back doors, founding my own company and managing a team, I thought my ‘Stability Potential’ score would at least hit average. A 7, let’s say. But DeepSense was determined this was not the case.
On closer investigation, I found that DeepSense defines ‘General Behaviour’ as “The overall ability to get along with others despite personal challenges.” For ‘Stability Potential’, the company scores a person’s inclination “to give it their all before calling it quits”, while ‘Need for Autonomy’ measures a person’s tendency “to work better independently”. What’s more, DeepSense summed me up in but a few sentences: “Shira can be friendly at times but does not hesitate from being critical when the situation so demands. Shira can be sceptical of what others have to say and can question them incessantly until Shira gets his/her answers”.
Now, I understand the potential appeal of a tool like this. In theory, it can help recruiters eliminate their own personal biases while scanning through thousands of applications, only a small number of which come from candidates who actually match the desired role. In an interview with The Verge, DeepSense co-creator Amarpreet Kalkat says that personality is often what determines a good candidate, and that the AI developed by DeepSense is there to determine just that. When asked whether AI is fit to judge a person’s character, Kalkat answers, “From a relative point of view, how accurate is human judgement?”.
Whether AI systems will enter the recruitment field is no longer a question of if or when: they are already here, and for the long run. The workforce is growing, jobs are becoming more specialised, and with that, the need to vet applications is largely understandable. The question instead needs to be whether AI should replace human judgement of character and, more importantly, if it really can predict a person’s character as DeepSense claims, whether it is ethical to use that prediction for or against a person. All I know is that it did not feel great, nor accurate, to be reduced to ‘Slightly Easygoing’ and ‘Slightly Sensitive’.
‘Can we give technology a new voice?’ asks the introduction to the video presentation of Q, the first genderless voice in an otherwise binary landscape of AI voice assistants. A Denmark-based group of linguists, technologists and sound designers thinks so, which is why it embarked on a mission to create the first gender-neutral voice, one that could potentially be implemented in IoT devices and services.
As it fluidly oscillates between higher and lower pitches, the soothing voice of Q is attributable to neither a male nor a female identity. Q’s developers, a team born out of a collaboration between Copenhagen Pride, Virtue (Vice’s creative agency) and Equal AI, began by recording the voices of more than 20 people identifying as male, female, transgender and non-binary. After merging these voices, they identified what audio researchers consider a gender-neutral frequency range, which sits between 145 and 175 hertz. The resulting voice sample was then tested by over 4,000 people, and by tweaking its modulation to sit within that middle range until testers could no longer attribute the voice to a gender, Q was finally born.
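The 145–175 hertz band is the one concrete number in Q’s recipe, and it is easy to play with. The toy sketch below (my own illustration, not Q’s actual pipeline; the function names and the synthetic 160 Hz tone are assumptions) estimates a tone’s fundamental frequency with a simple autocorrelation and checks whether it falls inside that neutral band.

```python
import numpy as np

# Band cited by Q's developers as the gender-neutral frequency range.
NEUTRAL_LO_HZ, NEUTRAL_HI_HZ = 145.0, 175.0

def estimate_f0(samples: np.ndarray, sample_rate: int) -> float:
    """Estimate the fundamental frequency of a periodic signal
    via autocorrelation (a deliberately simple sketch)."""
    samples = samples - samples.mean()
    corr = np.correlate(samples, samples, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    d = np.diff(corr)
    start = int(np.argmax(d > 0))         # skip the decline after lag 0
    peak = start + int(np.argmax(corr[start:]))  # first strong periodic peak
    return sample_rate / peak

def in_neutral_range(f0: float) -> bool:
    """Does a pitch fall inside the 145-175 Hz band?"""
    return NEUTRAL_LO_HZ <= f0 <= NEUTRAL_HI_HZ

# Synthetic check: a pure 160 Hz tone sits squarely in the band.
sr = 8000
t = np.arange(sr // 2) / sr               # half a second of audio
tone = np.sin(2 * np.pi * 160.0 * t)
f0 = estimate_f0(tone, sr)
print(f0, in_neutral_range(f0))
```

A real voice is of course far messier than a sine wave, which is exactly why Q’s team needed 4,000 human listeners rather than a threshold check, but the sketch shows how narrow the target band really is.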
Q was created to challenge the gender bias present in the AI tools that assist us, tools that are becoming ever more ubiquitous in personal devices. We are all accustomed to Alexa’s smooth female voice, as well as Siri’s default feminine tones. And it’s no coincidence that our domestic and personal devices all speak with a female voice: their role is to make us feel helped, comfortable and intimately connected with the device. By contrast, security and public-space robots often have a male voice, meant to convey authority and distance. In this regard, despite its limitless ability to be whatever we make it, AI is perpetuating the same gender stereotypes still very much present in everyday life.
Q is still at an early stage, as it doesn’t yet have an AI framework to activate it; building one is the team’s next goal. As robots, AI assistants and IoT devices more generally will increasingly communicate with us by voice, it’s worth asking how we can erase bias from technology from the start. “Q adds to a global discussion about who is designing gendered technology, why those choices are made, and how people feed into expectations about things like trustworthiness, intelligence, and reliability of a technology based on cultural biases rooted in their belief system about groups of people”, said Julie Carpenter, an advisor to the project and a researcher at the Ethics and Emerging Sciences Group.
There is no doubt that Q could challenge some of the bias currently present in our technology, but it also speaks to the potential of tech as a tool for experimenting with, and challenging, the stereotypes we still find hard to break IRL. Much of the fear associated with AI is fuelled by the belief that we will not be able to control it as much as it will be able to control us; that it could harm more than it could help. But we now have the knowledge and the capacity to shape AI for the better: not as a means of controlling us, but as something more progressive than we currently are.
As the voice continues to be a prominent feature of both present and future technologies, taking the time to reflect on what type of voice technology should have in the first place appears to be not only a logical but a necessary step towards shaping AI to be as inclusive as our society, or even more so.