In an effort to combat the cyclones of misinformation and fake news raging across its platform, Facebook has recently taken to assigning credibility scores to its users. According to Tessa Lyons, the Facebook product manager in charge of targeting and eliminating misinformation, the new campaign consists of algorithms programmed to assess how likely a person is to spread fake news and how credible their flagging of misinformation is. In short, the new mission is to “identify malicious users.” But Facebook’s crusade against evildoers on the web raises a host of concerns about how the data is collected, how precisely it will be utilised, and, most importantly, the legitimacy of the company’s authority to determine what constitutes “trustworthy” activity.
For its part, Facebook insists that the new scoring method is a credible and efficient way to thwart the spread of fake news on its platform. It replaces a 2015 attempt to tackle misinformation, which allowed users to flag content they believed was suspicious or downright false. Alas, according to Lyons, users often reported content simply because they disagreed with the author’s argument or political affiliation, or because they were personally offended. To overcome that problem, Facebook fashioned its sophisticated, algorithm-based method of scoring users, meant to screen content reports and ease the pressure on fact-checkers. Lyons insists that the system doesn’t produce an overall score for a user’s reputation and trustworthiness, but rather relies on thousands of factors to identify particular behavioural patterns that are often associated with either spreading misinformation or wrongly flagging content as fake.
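To make the idea concrete, here is a minimal sketch of what scoring a specific behaviour (rather than producing one overall reputation number) could look like. Everything in it is invented for illustration: the signal names, the weights and the clamping are assumptions, not Facebook’s actual and undisclosed algorithm.

```python
# Toy sketch of a per-context "flagging credibility" score.
# Signal names and weights are hypothetical, chosen only to illustrate
# the idea of combining many behavioural signals into one bounded score.

def flag_credibility(signals, weights):
    """Combine behavioural signals (each in [0, 1]) into a score in [0, 1]."""
    raw = sum(weights[name] * value for name, value in signals.items())
    # Normalise by total weight magnitude, then clamp to [0, 1].
    total = sum(abs(w) for w in weights.values())
    return max(0.0, min(1.0, raw / total))

user_signals = {
    "flags_upheld_by_fact_checkers": 0.9,   # past flags later confirmed false
    "flags_along_partisan_lines": 0.2,      # flagging driven by disagreement
}
weights = {
    "flags_upheld_by_fact_checkers": 1.0,   # rewards accurate flagging
    "flags_along_partisan_lines": -0.5,     # penalises ideological flagging
}

score = flag_credibility(user_signals, weights)
```

The point of the sketch is the design choice Lyons describes: the score is tied to one behaviour (here, flagging) in one context, not to the person as a whole.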
Yet very little is known about which factors and behavioural clues Facebook considers while assessing a user’s trustworthiness, or what measurements it uses to construct a person’s credibility profile. What if, for instance, someone once opened a fake account to stalk an ex (an arguably desperate and creepy act that many of us are guilty of, but not necessarily a ‘malicious’ one)? Is their score going to take a nosedive? If so, are a broken heart or lover-withdrawal symptoms any indication of a person’s likelihood to spread fake news, or to flag content as such for no reason other than disagreement? It has also been reported that while assessing a user’s trustworthiness, Twitter often looks at the behaviour patterns of other people in their network. In the not-unimaginable event that Facebook adopts similar tactics, could the actions of one’s boss or classmate or a total rando on one’s friends list affect one’s score?
One can contemplate countless other potential issues with Facebook’s scoring method. For instance, what assurance do we have that the company is not in fact constructing an overall, comprehensive ‘trustworthiness’ profile of its users? Simply because it told us so? (lol!) And what will become of this data should it end up in the wrong hands or be sold to a third party (such as governments or greedy conglomerates ravenous for such delicious information)? In what manner will it be utilised?
Unfortunately, our experience with Facebook and its tech siblings shows that we simply don’t know what occurs behind closed doors, and that we have no real way of monitoring, or mounting a well-informed critique of, its actions and objectives until it’s too late.
But the problem, in this case, runs far deeper than that. Perhaps it all boils down to the agency we entrust to such platforms to determine what is ‘trustworthy’. Perhaps we are too quick to believe that companies ultimately motivated by their own interests (which often conflict with those of the public) are legitimate judges when it comes to the assessment of credibility and morality… and character.
You might have heard about micro-influencing and how it appeals to brands that want their products nonchalantly advertised by ‘normal’ users via social media platforms. What you might not have heard of (as, unsurprisingly, it hasn’t been ‘advertised’ much) is the patent Facebook filed last week in the U.S. for ‘computer-vision content detection for sponsored stories’. What this means is that the social media powerhouse has developed a system that could turn users’ uploaded photographs into automated sponsored posts.
The new patent describes how Facebook would scan users’ photos to identify the products casually displayed within them, then send the image to the featured brand, which could decide to boost the post to the user’s network. Under this patent, if someone posts a selfie of themselves sunbathing and a branded sun cream appears in the corner of the picture, that person could become an unintentional promoter of the lotion’s brand across their friends’ feeds, without being aware of it or benefiting from it.
The technology that could turn the patent into a working system already exists. Last year Facebook launched a tool called Rosetta, an AI-powered photo-scanning system that can process more than a billion photographs and video stills every day and identify text displayed within them (on products, signs or even clothing), including any brand names. By using this AI technology, Facebook would not only be able to gather ever more of its users’ personal information, but could also create a ‘heat map’ for brands, providing them with analytics not only on who is consuming their products, but also on where.
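As a thought experiment, the downstream half of such a pipeline is easy to sketch. The snippet below assumes an OCR stage (something Rosetta-like) has already reduced each photo to a list of text tokens; the brand catalogue, the city labels and the matching logic are all invented for illustration, not Facebook’s actual system.

```python
from collections import Counter

# Hypothetical downstream step of an OCR pipeline: match the text tokens
# extracted from each photo against a brand catalogue, then aggregate
# brand sightings per city into a simple "heat map". All names are made up.
BRAND_CATALOGUE = {"sunshield", "acmecola", "glowlotion"}

def detected_brands(ocr_tokens):
    """Return the catalogue brands found among the OCR'd tokens."""
    return {token.lower() for token in ocr_tokens} & BRAND_CATALOGUE

def brand_heat_map(posts):
    """posts: iterable of (city, ocr_tokens). Count brand sightings per city."""
    heat = Counter()
    for city, tokens in posts:
        for brand in detected_brands(tokens):
            heat[(brand, city)] += 1
    return heat

posts = [
    ("Lisbon", ["beach", "SunShield", "SPF50"]),
    ("Lisbon", ["sunshield", "towel"]),
    ("Berlin", ["AcmeCola", "picnic"]),
]
print(sorted(brand_heat_map(posts).items()))
# [(('acmecola', 'Berlin'), 1), (('sunshield', 'Lisbon'), 2)]
```

Even this toy version makes the privacy stakes visible: the ‘analytics’ a brand receives are just an aggregation over individual users’ photos and locations.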
Both Instagram and Facebook users, even those who are not paid for it, already upload thousands of images featuring branded products, so it’s no surprise that some visionary marketer thought of taking advantage of this ready-made advertising. As consumers grow sceptical of celebrities and mega-influencers promoting products, brands, marketing agencies and, most noticeably, Facebook are paving the way for a new wave of digital advertising, one based on the power of the many rather than the popularity of one individual. Companies such as Zyper, for instance, help brands find consumers willing to join their communities and advocate their products as ‘fans’ via social media. But unlike Facebook’s proposed system, users have to apply to the platform to become micro-influencers, rather than having their photographs scanned and automatically sold.
The patent does not mean Facebook will necessarily end up using this service, but it’s hard to think of one reason why it wouldn’t want to. In its current state, the patent is only a sketch of how the system could work and, more importantly, under which rules: for now, it isn’t explicit whether users would be given the possibility to opt out.
As the influencer phenomenon continues to rock the marketing world, micro-influencing is set to shake up the industry even more. By creating a horde of individual advertisers (whether informed of their role or not), companies such as Zyper and Facebook are turning customers into promoters, and in doing so they are not only drastically changing the traditional mechanics of marketing, but also returning to one of the most subtle forms of advertising: masses of people sharing the products they use every day with their digital networks.