Over the past year Facebook has garnered negative media attention, first from the Cambridge Analytica data scandal and then from further scrutiny of hate speech, fake news and political agendas being imposed on users' timelines.
With all this negativity, perhaps a deeper look into the problems Facebook has to encounter can bring more clarity and understanding of its issues.
Cambridge Analytica Scandal
It is difficult to navigate the barrage of media stories and judge which ones are legitimate and which are simply jumping on the moral social media panic bandwagon.
It was revealed that Cambridge Analytica had harvested the personal data of millions of people's Facebook profiles without their consent and used it for political advertising. This raised a large number of concerns about user privacy and data protection, many of them about powerful institutions such as Facebook destabilising democracy and using their platforms to manipulate political decisions.
Of course this is in no way a defence of Facebook, but more a look behind the media coverage and an autopsy of the types of problems Facebook has to face.
Fake News & Hate Speech
The problems Facebook has to tackle on a daily basis come down to scale and the bespoke nature of those problems. It is only now that we are starting to see the harmful, negative ways in which users can exploit platforms such as Facebook.
This scenario has been likened to when civilisations first started building roads and transport links between cities: the problems that come with widespread transport have been managed and refined over many years. Facebook, similarly, is facing problems that have never been tackled before, and at unprecedented scale.
Fake news is difficult to detect at the best of times, so Facebook is employing a new signal, Click-Gap, which its News Feed algorithms will use to determine where to rank a given post. It is Facebook's attempt to limit the spread of websites that are disproportionately popular on Facebook compared with the rest of the web. If Facebook finds that a large number of links to a certain website are appearing on Facebook, but few sites on the broader web are linking to that website, it treats this gap as a signal of low quality and ranks those links lower in the News Feed.
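Facebook has not published Click-Gap's exact implementation; a minimal sketch of the underlying idea — comparing a domain's popularity on the platform with its popularity on the wider web — might look like this (the function names and the threshold are invented for illustration):

```python
def click_gap_score(facebook_links: int, web_inbound_links: int) -> float:
    """Hypothetical score: ratio of on-platform popularity to wider-web popularity.

    A high score suggests a site is disproportionately popular on Facebook.
    """
    # +1 avoids division by zero for sites with no inbound web links at all
    return facebook_links / (web_inbound_links + 1)


def should_demote(facebook_links: int, web_inbound_links: int,
                  threshold: float = 100.0) -> bool:
    # Invented rule: demote links whose on-platform share vastly
    # outstrips their presence on the rest of the web
    return click_gap_score(facebook_links, web_inbound_links) > threshold


# A site shared heavily on Facebook but barely linked elsewhere is demoted
print(should_demote(facebook_links=50_000, web_inbound_links=20))      # True
# A site equally popular on and off the platform is left alone
print(should_demote(facebook_links=50_000, web_inbound_links=40_000))  # False
```

The real system will weigh many more signals, but the core intuition is exactly this gap between on-platform and off-platform popularity.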
With Facebook operating in over 155 countries, the issue becomes: how do you write rules that allow freedom of speech for every religion and culture while also stopping abuse?
As of the first quarter of 2019, Facebook had 2.38 billion monthly active users. Moderating the volume of posts, videos and pictures that Facebook has to deal with on a daily basis is a challenge that has never been seen before. For example, even if Facebook catches 99% of problem posts, 1% still gets through, and at Facebook's scale that 1% is potentially millions of posts. This brings about a wider discussion about AI.
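The arithmetic of that residual 1% is worth making concrete. The daily volume of rule-breaking posts below is an assumed figure for illustration, not an official Facebook statistic:

```python
# Illustrative only: assumed figures, not Facebook's own statistics
problem_posts_per_day = 10_000_000   # hypothetical number of rule-breaking posts per day
detection_rate = 0.99                # suppose automated moderation catches 99% of them

missed = problem_posts_per_day * (1 - detection_rate)
print(f"Posts slipping through daily: {missed:,.0f}")  # 100,000
```

Even with a detection rate most classifiers could only dream of, the absolute number of misses stays enormous, and each miss is content a real user may see.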
A problem of this scale demands an automated solution, so Facebook has brought in AI to help tackle it, and has had to start much of that AI research from scratch. AI can already detect pornography and terrorist content, but understanding videos, images and text across different languages, then contextualising them and deriving their meaning in real time to decide whether content is safe, remains a massive technological challenge.
There will always be a percentage of errors; the issue is to contain those errors and minimise their impact on users. Hiring human moderators to determine which content is acceptable is another way Facebook is trying to combat the situation.
You can read further about this in a great post by The Verge.
Furthermore, distinguishing hate speech in less obvious forms of online content, such as emojis, throws up even more complications. A symbolic representation of hate speech is difficult to detect, and it is a clear example of users finding creative ways to bypass the rules already in place.
There are no previous templates that Facebook can copy, because these are bespoke problems we have never seen before. This leaves Facebook in a constant battle to find ways to counter the increasingly creative users who bend the algorithm.
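The emoji problem is easy to demonstrate: a naive keyword filter catches an offending word but misses the same sentiment when a symbol stands in for the word. The banned-word list here is a toy placeholder, not anyone's real moderation list:

```python
BANNED = {"attack"}  # toy placeholder list, for illustration only


def naive_filter(post: str) -> bool:
    """Return True if the post should be flagged by a simple word match."""
    words = post.lower().split()
    return any(w in BANNED for w in words)


print(naive_filter("we should attack them"))  # True: exact word match
print(naive_filter("we should 🗡 them"))      # False: the emoji evades the word list
```

Closing that gap requires models that reason about what a symbol means in context, which is a far harder problem than string matching.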
Fake accounts are another massive problem; they are used to spread misinformation and scam other users. In the early days Facebook largely relied on users to report fake accounts, leaving many of them unchecked.
In the past year, however, artificial intelligence software has managed to analyse the behaviour of all 2.3 billion accounts and stop the suspicious ones. Behavioural information, such as an account's geographical location and how it connects to the internet, can be compared with that of an authentic account, allowing the software to detect anomalies against account history.
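Facebook has not published how this system works; a minimal sketch of the general idea — comparing a new session against an account's own history and flagging large deviations — might look like the following. The feature names and the anomaly rule are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Session:
    country: str     # geographical location of the login, e.g. "GB"
    connection: str  # how the account connects, e.g. "mobile", "broadband", "datacenter"


def is_suspicious(history: list[Session], latest: Session) -> bool:
    """Flag the latest session if it deviates from everything in the account's history."""
    if not history:
        return False  # nothing to compare against yet
    known_countries = {s.country for s in history}
    known_connections = {s.connection for s in history}
    # Invented rule: a never-before-seen country combined with a
    # never-before-seen connection type is treated as an anomaly
    return (latest.country not in known_countries
            and latest.connection not in known_connections)


history = [Session("GB", "mobile"), Session("GB", "broadband")]
print(is_suspicious(history, Session("GB", "mobile")))      # False: matches history
print(is_suspicious(history, Session("XX", "datacenter")))  # True: both features are new
```

A production system would use statistical models over many more signals rather than hard rules, but the principle is the same: each account's past behaviour becomes the baseline against which its present behaviour is judged.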
The company said it removed 2.2 billion fake accounts in the first quarter of 2019. This surprised me and doesn't make a whole lot of sense given that Facebook boasts 2.3 billion active users, but I will provide a link to an explanation below. Again, detection techniques are constantly being figured out and circumvented, so new ones must be developed; this cyber war is another challenge Facebook faces on a daily basis.
Again, this is not to say Facebook has a clear conscience, not at all, but rather than spending time slating the company, an understanding of the problems provides better context. Many of the solutions Facebook is creating address a much wider technological issue: working with emerging technologies such as AI is always a challenge, and not something you can simply ask about on Stack Overflow.
To summarise, it comes down to users' trust in the integrity of Facebook as a corporate entity even more than to Facebook's responsibility for user privacy.
I gained a lot of inspiration for this article from the BBC’s ‘Inside the Social Network’ which I would recommend watching if you have a further interest in this topic. The programme is available on the BBC iPlayer.