
AI

Twitter’s image cropping algorithm favours “slim, beautiful and light-skinned faces”

On 19 September 2020, PhD student Colin Madland posted a Twitter thread with images of himself and a black colleague, who had been erased from a Zoom call after its algorithm failed to recognise his face. When Madland viewed the tweet on his phone, Twitter cropped his colleague out of the picture altogether. The thread triggered a wave of accusations of bias as Twitter users posted their own photos to test whether the AI would choose the face of a white person over a black person, or focus on women’s chests rather than their faces.

Twitter had been automatically cropping images to stop them from taking up too much space on the main feed and to allow multiple pictures to be shown in the same tweet. Dubbed the “saliency algorithm,” it decided how images were cropped in Twitter previews before users clicked to view them at full size. But when two faces appeared in the same image, users discovered that the preview seemed to favour white faces, hiding black faces until the image was opened.
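Twitter never spelled out how the model worked at the time, but saliency-based cropping generally means scoring every pixel for how likely it is to draw a viewer’s eye, then centring the preview on the highest-scoring region. The sketch below illustrates only that general idea; the function names and the simple argmax heuristic are assumptions for illustration, not Twitter’s actual implementation.

```python
# Illustrative sketch of saliency-based cropping; not Twitter's actual code.
# `saliency` is assumed to be the output of a trained model that scores how
# likely each pixel is to catch a viewer's eye.
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Centre a fixed-size preview crop on the most salient pixel."""
    h, w = saliency.shape
    # Find the coordinates of the highest-scoring pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the crop window so it stays fully inside the image.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

If the model scores one face in a two-person photo higher than the other, a crop centred this way will reliably cut the lower-scoring face out of the preview, which is exactly the behaviour users reported.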

Two days later, Twitter apologised for its ‘racist’ image cropping algorithm, with a spokesperson admitting that the company had “work to do.” “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” the spokesperson said. “But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.” The company went on to disable the system in March 2021.

On 19 May 2021, Twitter’s own researchers analysed the algorithm and found only a mild racial and gender bias. In an effort to open the problem up to outside analysis, however, Twitter then held an “algorithmic bug bounty”: an open competition run at the DEF CON security conference in Las Vegas. Findings from the competition embarrassingly confirmed the earlier allegations.

The competition’s first-place entry, which took the top prize of $3,500, came from Bogdan Kulynych, a graduate student at EPFL, a research university in Switzerland. Using an AI program called StyleGAN2, Kulynych generated a large number of realistic faces, which he varied by skin colour, feminine versus masculine facial features and slimness. He then fed these variants into Twitter’s image-cropping algorithm to see which ones it preferred.
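As a rough illustration of the audit idea, assume a hypothetical saliency_score function standing in for Twitter’s open-sourced cropping model: generate matched pairs of faces that differ in a single attribute, score both, and count how often the edited version wins the crop. The helper below is a sketch under those assumptions, not Kulynych’s actual code.

```python
# Rough sketch of a pairwise bias audit; `saliency_score` is a hypothetical
# stand-in for Twitter's open-sourced cropping model, and the face lists are
# assumed to come from a StyleGAN2-style generator.
from statistics import mean

def preference_rate(base_scores: list[float], variant_scores: list[float]) -> float:
    """Fraction of matched pairs where the edited face out-scores the original."""
    return mean(v > b for b, v in zip(base_scores, variant_scores))

# Hypothetical usage:
#   base_scores    = [saliency_score(img) for img in original_faces]
#   variant_scores = [saliency_score(img) for img in lightened_faces]
#   preference_rate(base_scores, variant_scores)
# A rate persistently above 0.5 suggests the model systematically prefers
# the edited attribute (lighter skin, slimmer features, and so on).
```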

The results? The algorithm preferred faces that are “slim, young, of light or warm skin color and smooth skin texture, with stereotypically feminine facial traits.” It doesn’t stop there. The second and third-placed entries showed that the system was biased against people with white or grey hair, suggesting age discrimination, and that it favoured English over Arabic script in images.

According to Kulynych, these algorithmic biases amplify biases in society, thereby cropping out “those who do not meet the algorithm’s preferences of body weight, age, skin color.”

“Algorithmic harms are not only ‘bugs’,” he wrote on Twitter. “Crucially, a lot of harmful tech is harmful not because of accidents, unintended mistakes, but rather by design.” Kulynych also noted that such biases stem from the broader drive to maximise engagement and profit.

The results of Twitter’s “algorithmic bug bounty” underline the need to address societal bias in algorithmic systems. They also show how tech companies can combat these problems by opening their systems up to external scrutiny. And now we have our eyes trained on you, Facebook, and your gender-biased job advertisements.

AI

Amazon is now using algorithms to fire Flex delivery drivers

To get all those next-day orders delivered on time, Amazon uses millions of subcontracted drivers for its Flex delivery programme, launched in 2015. Drivers sign up via a smartphone app where they can choose shifts, coordinate deliveries and report problems. Think of Uber’s false sense of freedom for its employees and you’ve pretty much got the same thing happening with Amazon Flex. But the reliance on technology doesn’t end there: drivers are also monitored for performance and fired by algorithms with little human intervention, according to a recent Bloomberg report.

As much as we love and depend on algorithms, it’s crucial to understand that even they can mess up from time to time. According to the report, the AI system used by Amazon Flex can fire workers seemingly without good cause. One worker said her rating, which could be Fantastic, Great, Fair or At Risk, fell after a nail in her tire forced her to halt deliveries.

Over the next several weeks, she managed to boost it to Great, but her account was eventually terminated for violating Amazon’s terms of service. She contested the firing, but the company wouldn’t reinstate her.

Another driver was unable to deliver packages to an apartment complex because its gate was closed and the residents wouldn’t answer their phones. Shortly after, in another building, an Amazon locker failed to open. His rating quickly dropped and he spent six weeks trying to raise it, only to be fired for falling below a prescribed level.
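Amazon has not published how Flex scores drivers, but the behaviour described in these anecdotes amounts to something like the toy model below: a standing score that drops with every flagged delivery, mapped onto the reported tiers, with automatic deactivation below a cut-off. The tier labels come from drivers’ accounts; every threshold and penalty value here is an invented assumption.

```python
# Toy model of tier-based automated termination; Amazon's actual logic is not
# public. Tier labels are those drivers reported, all numbers are assumptions.
TIERS = [(0.9, "Fantastic"), (0.7, "Great"), (0.5, "Fair"), (0.0, "At Risk")]
TERMINATION_THRESHOLD = 0.3  # hypothetical cut-off for deactivation

def tier(score: float) -> str:
    """Map a 0-1 standing score onto the reported tier labels."""
    return next(label for floor, label in TIERS if score >= floor)

def apply_flagged_delivery(score: float, penalty: float = 0.1) -> float:
    """Dock the score for a delivery the system flags as late or failed,
    whether or not the driver was actually at fault."""
    return max(score - penalty, 0.0)

score = 0.75                               # starts out rated "Great"
for _ in range(5):                         # a bad week: five flagged stops
    score = apply_flagged_delivery(score)
print(tier(score))                         # -> "At Risk"
if score < TERMINATION_THRESHOLD:
    print("Account deactivated")           # no human reviews the decision
```

The point of the sketch is that nothing in such a pipeline distinguishes a driver’s fault from a locked gate or a jammed locker: every flagged stop docks the score the same way.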

In those instances, when drivers feel they have been wrongly terminated, there’s not much recourse either. Drivers must pay $200 to dispute any termination, and many have said it’s simply not worth the effort. “Whenever there’s an issue, there’s no support,” said Ryan Cope, 29. “It’s you against the machine, so you don’t even try.”

Amazon became the world’s largest online retailer in part by outsourcing many of its operations to algorithms. For years, the company has used them to manage the millions of third-party merchants on its online marketplace, drawing complaints that sellers have been booted off after being falsely accused of selling counterfeit goods or of jacking up prices.

More and more, the company is also ceding its human-resources operation to machines, using software not only to manage workers in its warehouses but to oversee contract drivers, independent delivery companies and even the performance of its office workers. People familiar with the strategy say Jeff Bezos believes machines make decisions more quickly and accurately than people, reducing costs and giving Amazon a competitive advantage.

Inside Amazon, the programme has been hailed as a success. Around 4 million drivers have downloaded the app worldwide, including 2.9 million in the US, according to Bloomberg’s report. More than 660,000 people in the US have downloaded the app in the last five months alone.

Amazon said drivers’ claims of poor treatment and unfair termination were anecdotal and don’t represent the experience of the vast majority of Flex drivers. “We have invested heavily in technology and resources to provide drivers visibility into their standing and eligibility to continue delivering, and investigate all driver appeals,” spokesperson Kate Kudrna told Bloomberg.