Can’t decide where to go for your next summer vacay? Why not let your DNA dictate your next destination? From learning which strain of weed to smoke to what music you should listen to, DNA analysis has found its way into some strange collabs, and Airbnb and 23andMe are the latest. On May 21, the global hospitality and homestay service and the maker of the world’s first at-home DNA testing kit announced a foray into the ever-rising demand for heritage travel. They’re now offering users the chance to go from a curious trip down ancestry lane online to a literal trip down ancestry lane.
Heritage travel is hotter than ever, thanks to the ease of technological access to our past and the opening up of the genetic market. More enriching than your regular escape to Ibiza, heritage holidays offer an emotional experience: a chance to explore your ancestral roots while collecting a week’s worth of Instagram-worthy photos. Historically, these excursions served religious purposes, such as pilgrimages to Mecca for Muslim travellers or a Jewish Birthright trip to Israel, but according to Airbnb’s consumer trends spokesperson Ali Killam, heritage travel has been a top travel trend since 2014.
After sending in your sample and receiving your ancestry report from 23andMe, you will be able to click through Airbnb homes and experiences located in your ancestral homelands. According to 23andMe, users have at least five (out of eight possible) different origins within their report, which makes for ample options on your travel itinerary. Compared to sifting through dusty records and studying family trees, it’s a unique and easy way to learn about your heritage, not to mention an offbeat approach to vacation planning inspired by your genetic code.
According to press releases from Airbnb and 23andMe, their aim is to provide “an exciting opportunity for customers to connect with their heritage through deeply personal cultural and travel experiences.” Although some see it as a unique escapade, others are uneasy about this partnership. Using DNA to capitalise on emotional experiences can feel demeaning and diminish the significance a heritage trip could actually hold. Then there’s the question of how legitimate your genetic reports are, and how authentic these travel options really are.
As we become increasingly wary of how our data is used, the thought of our genetic data being used for ulterior motives comes to mind. It whispers back to U.S. President Roosevelt’s 1942 order for Japanese Americans to register their identity and begs the question: could this be a new form of racial registration with a different face? The ethical implications of having your cultural and racial identity monetised should also be questioned. If it is used as a cultural or racial registry, this affects not only the people who have taken the test but also their relatives, something proven by the capture of the Golden State Killer, whom police found through genetic data a relative had uploaded to a genealogy database. Although both parties have stated they won’t be sharing personal information with each other, scepticism seems like a safe option.
Or maybe we should just take this collaboration at face value. As one Reddit user asks in the 23andMe thread discussing the topic, “Why are you complaining?” There are also some positive implications to this new way of travelling. People who were adopted, or members of communities disconnected from their ancestral homes, such as African Americans, could learn more about their geographic history. For those who have no idea where they came from, this may be a stepping stone on a personal journey. But then again, how personal can you get with a pre-packaged holiday?
If both companies communicated this collaboration simply as a way to explore your roots in person, it might not seem so contrived. Instead, Airbnb and 23andMe seem to be trying too hard to pull at your heartstrings rather than telling it like it is: a fun new way to pick your next travel destination.
Your take on this new way of travelling depends on how you perceive society. Some may think that, since we have already given up so much of ourselves to digital data, what’s one more thing? Others may see it as yet another way for companies to use and sell our data. So where exactly is the limit? Like many other questions, this one has no right answer just yet. We’ll find out in the future. In the meantime, hopefully you’ll give your long-lost cousin a good Airbnb review, ’cause, you know, ‘family’.
According to Amazon, we suck at handling our emotions—so they’re offering to do it for us. The company that gave us the Echo and everyone’s favourite voice to come home to, Alexa, has announced it is working on a voice-activated wearable that can detect our emotions. Based on the user’s voice, the device (unfortunately not a mood ring) can discern the user’s emotional state and, theoretically, instruct them on how to respond effectively to their own feelings and to others. Amazon already knows our shopping habits and our personal and financial information; now it wants our soul too. Welcome to the new era of mood-based marketing, and possibly the end of humanity as we know it.
Emotional AI and voice-recognition technology have been on the rise, and according to Annette Zimmermann, “By 2022, your personal device will know more about your emotional state than your own family.” Unlike the marketing of the past, which captured your location, what you bought, or what you liked, it’s no longer about what we say but how we say it: the intonation of our voices, the speed at which we talk, which words we emphasise, and even the pauses in between.
Voice analysis and emotional AI are the future, and Amazon plans to be a leader in wearable AI. Built on the same software as Alexa, this emotion detector will use microphones and voice activation to recognise and analyse a user’s voice, identifying emotions through vocal pattern analysis. Through these vocal biomarkers, it can identify everything from base emotions such as anger, fear, and joy to nuanced feelings like boredom, frustration, disgust, and sorrow. The secretive Lab126, the hardware development group behind Amazon’s Fire phone, Echo speaker, and Alexa, is building the device (code-named Dylan). Although it’s still in early development, Amazon filed a patent for it back in October 2018.
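To get a feel for what “vocal pattern analysis” means in practice, here is a deliberately toy sketch: slice an audio clip into frames, extract two coarse vocal features (the fraction of silent frames and a zero-crossing rate, a crude pitch proxy), and map them to a mood with hand-written rules. Every detail here, from the feature thresholds to the emotion labels, is an illustrative assumption, not Amazon’s actual pipeline.

```python
import numpy as np

def vocal_features(signal, sr=16000, frame_ms=25):
    """Toy feature extractor: per-frame energy -> pause ratio, plus zero-crossing rate."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    energy = (frames ** 2).mean(axis=1)
    threshold = 0.1 * energy.max()
    # Fraction of frames quiet enough to count as silence (long pauses).
    pause_ratio = float((energy < threshold).mean())
    # How often the waveform crosses zero: a very rough stand-in for pitch/agitation.
    zcr = float((np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean())
    return {"pause_ratio": pause_ratio, "zcr": zcr}

def classify(features):
    """Hand-written rules standing in for a trained model."""
    if features["pause_ratio"] > 0.5:
        return "sorrow"    # lots of silence: slow, halting speech
    if features["zcr"] > 0.5:
        return "anger"     # rapid oscillation: raised, agitated voice
    return "neutral"

# Synthetic "voice": a 120 Hz tone that falls silent for the last 75% of the clip.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
voice = np.sin(2 * np.pi * 120 * t)
voice[sr // 4:] = 0.0
print(classify(vocal_features(voice)))  # -> sorrow
```

A real system would replace the hand-written rules with a model trained on labelled speech, and use far richer features, but the pipeline shape, audio in, vocal biomarkers out, label at the end, is the same idea.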
This kind of technology has actually been around since 2009. CompanionMx is a clinical app that uses voice analysis to document a patient’s emotional progress and suggest ways to improve it; VoiceSense analyses customers’ investment styles as well as employee hiring and turnover; and Affectiva, born out of the MIT Media Lab, produces emotional AI for marketing firms, healthcare, gaming, automotive, and almost every other facet of modern life you can think of.
So why is Amazon getting into it now? Its data goldmine, combined with emotional AI, promises a bigger payout than anything Apple or Fitbit can match. Combining a user’s mood with their browsing and purchasing history will improve what Amazon recommends to you, refine its target demographics, and sharpen how it sells you stuff.
From a business standpoint, this is quite practical. When it comes down to it, we’ll still need products: health products, for example. You won’t care much about the bleak implications of targeted marketing when you’re recommended the perfect flu meds while you’re sick. Mood-based marketing makes sense because mood and emotions affect our decision-making: if you were going through a breakup, you’d be more apt to buy an Adele album than if you were happily in a relationship. But this goes deeper than knowing what type of shampoo we like or which genre of movie we prefer. It is violating, and it takes control away from our purchasing power. They’re digging into how we feel: our essence, and, if you believe in it, our souls.
One must ask: who is coding this emotion detector? Whose emotional biases are shaping what it identifies as an appropriate emotional response? Kate Crawford of the AI Now Institute voiced these concerns in her 2018 speech at the Royal Society, emphasising that the person behind the tech matters most, because their choices will shape how millions of people behave, and future generations with them.
For instance, if a Caucasian man were coding this tech, could he accurately identify the emotional state of a black woman wearing the device? How do you detect the feeling that follows a microaggression if the person coding the tech has never experienced one? What about emotions that can’t be translated from one language to another? Another concern is that we may stop trusting ourselves about how we feel. If we ask for the closest ice cream shop and the device asks if we’re sad, will we become sad? Can it brainwash us into feeling what it wants us to feel? After decades of using GPS, we no longer know how to navigate without it. Will this dependency sever our ability to feel and to react emotionally, in other words, to be human?
Taking all this in, I’m still weirdly not mad at the idea of a mood detector. It has potential as an aid. People with conditions such as PTSD, autism, or Asperger’s syndrome could benefit, as it could help them interact with others and help loved ones better understand them. So should we let non-sentient machines that have never experienced frustration, disappointment, or heartache tell us how to feel? Part of me says hell no, but part of me wouldn’t mind help handling my emotions. If we are aware of all the positive and negative implications, we can interact with this technology and use it responsibly. If we see it as an aid and not as a guide, it could help us communicate better with others and with ourselves. Or it could obliterate what is left of our humanity. Sorry, that was a bit heavy-handed, but I can’t help it, I’m human.