
If you had to code an accident, whose life would you choose to save?

By Shira Jeczmien

Nov 1, 2018


Over the past seven minutes I’ve been asked to choose whether I would rather kill five young passengers by crashing their self-driving car into a concrete barrier, or have the car swerve slightly right, save their little souls and kill five elderly people crossing the road instead. This was during an illustrated MIT ‘Moral Survey’ slideshow that presented me, and others like me from 223 other countries, with thirteen different scenarios in which a self-driving car would have to be specifically coded to make a moral decision in the event of an imminent and unavoidable fatal accident.

In case you were wondering, I rather reluctantly chose to save the five young people, and before I knew it I was saving an adult, a toddler and a cat over three adults and a toddler; a doctor, a dog and a toddler over three slightly overweight illustrated figures; two joggers, a doctor, a cat and a woman over five adult passengers with a thief amongst them. It’s fascinating how, once the initial hesitation to make such a painfully moral choice wore off, my subconscious began to dictate choices I did not even believe would enter my ethics-driven decision-making process.

See, this is the thing: as Uber, Tesla and Google’s Waymo driverless cars begin to take root in our world (which they will, and already are), there are some pretty serious decisions that need to be coded into their AI. Who to prioritise in the case of an approaching accident is high on the agenda. Sure, the whole point of having AI systems drive millions of accumulated hours on simulated city roads is to almost entirely eliminate the possibility of a crash, by teaching the AI to drive thousands if not millions of times better than a human. But we all know that mistakes happen. In fact, two lives were recently lost in the U.S. as a result of machine failure: one involving a self-driving Uber that did have a driver behind the wheel, but one who was not paying attention, and the other a Tesla running its self-driving system. In Uber’s case it was a woman pedestrian whom the car simply did not register, even though the camera footage clearly shows her appearing out of the dark as she crossed the road; in Tesla’s case, it was the man driving the car.
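To make the point concrete, here is a deliberately crude, entirely hypothetical sketch of what ‘coding’ such a priority might literally look like. Everything in it, the function, the weights, the rule that humans outrank pets and the young outrank the old, is invented for illustration and resembles no real vendor’s system.

```python
# Hypothetical sketch only: invented weights, not any real car's logic.

def choose_outcome(outcomes):
    """Return the outcome (list of parties hit) with the lowest harm score.

    Each party is a (species, age) tuple.
    """
    def harm(outcome):
        total = 0.0
        for species, age in outcome:
            weight = 1.0 if species == "human" else 0.1  # humans over pets
            if age < 18:
                weight *= 1.5  # weigh harm to the young more heavily
            total += weight
        return total
    return min(outcomes, key=harm)

# Five children versus five elderly pedestrians: under these invented
# weights, the rule accepts the collision with the elderly group.
collision = choose_outcome([[("human", 5)] * 5, [("human", 70)] * 5])
```

The unsettling part is not the ten lines of code; it is that someone has to pick the numbers.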

Turns out that on the scale of ‘do I prefer to kill fewer or more people’ I scored quite heavily at the “does not matter” end of the spectrum (in contrast to the average). But hey, at least I preferred humans over pets and women over men (sorry, I’m in a female-comradeship kind of mood). On a more serious note, despite the fascinating data this MIT survey has collected over the past four years on how different cultures value life (their preferences around fitness, age, animal or human), what the makers of these cars of the future now face is making real decisions that will guide AI to prioritise which kind of life is more valuable; a decision that I believe no one has the right to make.

I do admit, however, that projecting my moral ethics from behind a screen, far away from the developers and coders of such systems, is easy. And whether we agree or disagree, these decisions will need to be made. The real question is how such decisions will be made, who makes them and in what capacity. Within just thirteen slides, the survey was able to reflect back to me my own bias, as a twenty-something woman, to favour young people and women. As the technology around us becomes more automated, more powerful and takes a stronger grip on everything around us, these types of moral questions, or moral surveys, are only set to increase. So if instilling morals into the machines that will serve us is to be the course of the next few years, we had better make sure there is a fair representation of demographics coding these decisions. Otherwise we could accidentally end up with an army of cars that prefer cats over toddlers. Just saying.


Future literacy: read, write and code?

By Audrey Popa

Jul 26, 2018


Looking back at our recent history, there’s validity and reason behind our societal obsession with, and almost feverish fear of, machines replacing human labour for good. But will robots actually take our jobs anytime soon? Approaching what can only be seen as the third wave of automation, what’s expected to happen in the near future seems far more daunting and intimidating than any previous job-loss crisis in history. A study from Oxford University claims that a whopping 47 per cent of all US jobs are at risk of termination, or rather of an upgrade in hiring terms: no humans need apply. But how worried should we really be?

Even though we’ve been forced to understand the detrimental effects of the previous waves of automation, it’s important to note that no wave has ever produced a net loss of jobs, as gains in productivity have eventually always led to more jobs. The true issue has been the gap in the technical skills people need to find new work once they’ve been automated out of their own field.

The first wave of automation came with the industrial movement. Beginning with machines relieving humans of manual, physical labour, this wave was known for taking workers off farms and into factories. Many people lost their jobs, but it is important to note that all this work was exhausting, both physically and mentally. The second wave tackled tedious and dull work, typically affecting office workers performing excruciatingly repetitive tasks; think telephone operators, or the women who were the original calculators at NASA before computers became accessible.

The third era, the one we are fast approaching, will not affect those on farms or in factories, or even those doing dull tasks. Rather, machines will be taking jobs from what we can now categorise as knowledge workers or decision-makers: educated people with university backgrounds and degrees in subjects that were always considered safe, until now. To put it simply, AI is becoming better at making decisions than we are, and the next few decades will show the results of this incredibly quick change. We are witnessing it now, with articles gushing over how smart machines are beating us at Go and Jeopardy, but soon these machines will be completely changing the job market, in absolutely all industries.

After reading tips on how to find our first job, how do we prepare and shift our skill sets to guarantee ourselves some sort of job security? Technology is changing faster than laws and policies can react, making it even harder for individuals to equip themselves for what’s to come. An obvious answer, but one that’s consistently sidelined, is education: teaching people the skills they will need for the jobs that will actually be available. Past waves of automation weren’t easy to predict either, but even with a future as unclear and fast-paced as ours, education is a clear way to close the gap between the skills demanded and the skills on offer. Learning to read and write is already mandatory; coding seems only the next logical step.

Canada, with its mid-size economy and stereotypically friendly political demeanour, has set some plans in motion to deal with whatever dangers lie ahead. The onus lies on provinces to decide their own educational curriculum, as long as it meets country-wide testing goals. With its more traditional, dominant industries in decline, British Columbia has recognised a new fast-growing industry for itself: tech. In Canada alone, there are already more tech jobs available than there are people applying for them.

The region’s plan for moving into this industry begins with education. The western province will be introducing mandatory coding into its school curriculum from kindergarten all the way through to 12th grade. While at the age of 13 my generation was learning to use Word to create pamphlets, kids will be expected to learn a variety of computing programmes, debug algorithms, pick up visual programming skills, and then, further on in high school, have the opportunity to specialise in particular areas of technology, rather than taking a one-time intro-to-coding course.

Canada isn’t necessarily being innovative in its tactics either; other countries, such as Britain and Australia, have taken similar measures within their education curriculums. The world is getting ready for what may be, technologically speaking, the most disruptive decade in history. My only question now is how my generation will be able not only to adapt, but to compete. Growing up with technology, we’ve evolved as the different versions of the iPhone have come and gone. We can just about remember a time without constant social media and instant messaging, yet we are familiar enough with the tools to navigate every new update. The future is tech, and while it might be coming for your job, it’s also creating new ones.
