Facebook: Spam, Scam or Safe?


With growing pressure on Facebook to make its platform safer, the social media giant has responded with its latest move: handing users responsibility for policing advertisements on the platform. The update lets users vet content by flagging it as ‘spam’ or ‘scam’ with a simple click. Seemingly a positive step forward, the move nonetheless raises several questions. Our Media Account Director, Ed Paine, shares his views:

Facebook has recently pushed more power into the hands of users, this time to help them better identify fraudulent ads and scams. It's a necessary step by the social giant, but it raises numerous questions about responsibility, safety, and where the real power lies.

The timing of the announcement is key - it comes as national governments and the public are calling for the platform to operate more safely and to empower users who may not previously have been heard.

Whose Responsibility is it?

Facebook’s move essentially lets users play ‘boss’ when it comes to vetting content deemed ‘spam’ or ‘scam’. Some may think this is great, but is it right to push the burden of searching and vetting onto users who, at the end of the day, are meant to be enjoying the platform for personal use?

The content Facebook deems unfair (and often spam) is littered with personal content and credible photos - just like those of MoneySavingExpert.com’s Martin Lewis - to encourage user interaction and, ultimately, spending.

Bullet Dodging?

One could argue the decision is essentially Facebook dodging a bullet: avoiding the need to invest in a larger scam and fraud detection department, and the expensive technology to make it work.

…Yes, I know they have hired a team, but there's no chance it’s big enough.

That said, it has been too easy for the public to criticise the tech giants of late. If we are to solve these issues, I feel the wider public must work in collaboration with the platforms to find a resolution that meets the standards necessary to keep social media users safe.

Possible Complications

It is a bold move by Facebook, and one that could backfire with a spike in misuse of the function, whereby users falsely flag content they personally disagree with. That would be a real challenge for the ‘new, dedicated and specially trained internal operations team’, which is responsible for determining whether content should be classed as a scam and removed, or has been falsely flagged and should be left up.

It is impossible to predict how the tool will be received and used - or misused - by the user base. The team could therefore find itself inundated with requests to vet suspicious ads. And what happens if it gets things wrong?

As we have seen on numerous occasions - with Skype and Snapchat, for example - once the internet passes judgement on an update, there can be severe backlash leading to U-turns, which is damaging for brand perception. Facebook must be careful not to get sucked into accusations of censorship if the tool becomes a means for anyone to flag a post and have Facebook automatically take it down.

Facebook could find itself in a similar predicament to Twitter, which continues to grapple with the debate around what counts as free speech and ‘fake news’.

We should ask ourselves whether social media platforms should be the ones deciding what counts as appropriate content in modern society. Should there be more government regulation and involvement in this decision-making process? No doubt an unpopular view with big business and the tech giants.

Requirements for Success

If the initiative is to succeed, it is paramount that Facebook reports back to the individual users who flag scam ads - above all so people know their voice is being heard, that they are part of the solution, and that Facebook is taking direct action.

Facebook also needs to publish the figures for flagged ads and those subsequently removed, so the wider public can see that Facebook is taking the problem seriously - and so there can be a real shift in trust.

Manipulation Loophole

As the tech giant amends its platform to try to combat the scammers, those aiming to manipulate it will undoubtedly adapt their approach and quickly find loopholes and ways around the measures.

There is still a long way to go before the issue is fully solved, and although the decision throws up more questions, it is a positive step in the right direction. Above all, I hope the move empowers victims of ad scams on Facebook’s platforms and encourages them to take control of their own experience.

This article was featured in the Digital Marketing Magazine.


We’re a modern media agency built for the constantly changing digital world. We exist to shorten the time between a client’s media investment and great stuff happening. Great stuff like driving sales, finding new customers, building market share, launching products… you get the gist.

We build connected media for our clients. This means using media that is addressable, ad-servable or conditionally delivered, with the purpose of liberating your creative messages and business. If we can’t validate, track and measure it, we don’t do it. When used properly, this media and the data captured create stories that become the intelligence engine driving what you do, meaning you can act now and act fast.

We work best with people who don't want business as usual, who want to change how they do stuff and are trying to speed up what they do. That could mean you’re a new start-up or a business trying to secure second-round funding. Or you might be a traditional business that recognises there is a need to change. And change now. If you sound like that sort of client, we'll probably work very well with you.
