A decisive first under Singapore's online harms law
Singapore has directed Meta to roll out stronger anti-scam measures on Facebook or face a penalty of up to S$1 million. The move is the first implementation directive issued to an online service provider under the Online Criminal Harms Act, a law that took effect in February 2024. It aims to curb a surge in scams that impersonate government office holders, an area where police say Facebook is the primary platform used by fraudsters. The order compels Meta to address scam advertisements, accounts, profiles and business pages that mimic prominent officials, and to take more effective steps to detect and remove such content quickly.
- A decisive first under Singapore's online harms law
- Why Facebook is under pressure
- What the Online Criminal Harms Act empowers regulators to do
- How Meta says it is responding
- TikTok added to Singapore's anti-scam regime
- The numbers behind Singapore's scam fight
- A regional and global story of platform accountability
- What users can do to stay safe
- What to Know
Authorities say the decision reflects a steep rise in impersonation scams and the need for more decisive intervention. Police data released on Aug 30 showed these scams almost tripled year on year, climbing to 1,762 cases in the first half of 2025, up from 589 cases in the same period in 2024. Financial losses linked to the category rose to S$126.5 million in the first six months of 2025, up from S$67.2 million a year earlier. Investigators have attributed the spike to a blend of social engineering tactics and more sophisticated digital tricks, including the use of face-swapping and deepfake-style videos to mislead victims.
Several incidents earlier this year involved fake Facebook posts and ads using the likeness of Prime Minister Lawrence Wong or former defence minister Ng Eng Hen to promote cryptocurrency or fast-money schemes. In March, the Prime Minister warned the public to be vigilant about fake ads that misused his image and name to sell dubious offers or immigration services. The new directive was announced on Sept 3 by Minister of State for Home Affairs Goh Pei Ming at the Global Anti-Scam Summit Asia 2025 in Singapore. The move builds on a broader push by authorities to treat the fight against scams as a top national priority, combining upstream blocking, rapid reporting, enforcement against syndicates, recovery of proceeds and public education.
Why Facebook is under pressure
Police cited Facebook as the top platform used for government official impersonation scams, which made it the first target under the new law. Singaporean officials and independent surveys have also scrutinised Facebook Marketplace for fraud risks. The Ministry of Home Affairs has said that more than a third of all e-commerce scams reported in 2024 were perpetrated on Facebook. The platform has been ranked the weakest among major marketplaces in Singapore's E-commerce Marketplace Transaction Safety Ratings for four consecutive years, reflecting long-running concerns about verification gaps and user safety features.
Recent cases and deepfake abuse
Scammers have increasingly used image manipulation and synthetic media to copy the faces and voices of public figures, then graft them onto videos or static ads that appear to carry endorsements. The technique can be convincing to casual viewers, especially when framed with bogus press headlines or fabricated testimonials. Earlier in 2025, accounts and advertisements falsely presenting Prime Minister Lawrence Wong as a promoter of crypto schemes circulated on Facebook, alongside content using the image of former defence minister Ng Eng Hen. These items often linked to phishing sites that harvested personal data or funnelled users to fraudulent payment pages.
Marketplace and e-commerce fraud
Beyond impersonation, police and consumer watchdogs have flagged scams tied to online buying and selling on Facebook Marketplace. Fraudsters frequently create temporary profiles, list attractive goods at low prices, then push buyers to pay deposits or full amounts off-platform. In response to criticism, Meta introduced steps such as enhanced user verification for some sellers, in-product safety notices and anti-scam warnings within its messaging tools. Authorities say these features help, yet they want stronger and faster action to deter repeat offenders and prevent account recycling by scam rings.
What the Online Criminal Harms Act empowers regulators to do
The Online Criminal Harms Act (Ocha), which took effect in February 2024, gives Singapore authorities a range of tools to reduce the spread of criminal content online. Key among them are implementation directives to online services. These directives can require platforms to take targeted actions against content linked to offences, including scams, and to introduce or upgrade systems that reduce exposure to repeat or high-risk patterns. The law focuses on practical steps that can disrupt the reach of scams and speed up removal of harmful content.
Under an implementation directive, a platform can be instructed to identify and address specific scam formats, disable or remove offending accounts at scale, and tighten checks around advertising or account creation in sensitive areas such as public figure impersonation. Non-compliance can draw fines of up to S$1 million. The structure is designed to prompt timely fixes without waiting for lengthy court processes, especially when rapid intervention can prevent ongoing losses.
Ocha also works alongside a Code of Practice for Online Communication Services that applies to designated platforms. Once a service is designated, it must meet requirements for scam detection, reporting channels, data preservation, and collaboration with law enforcement. If a designated service falls short of the code, authorities can order corrective action and impose penalties. The approach mirrors broader global trends that ask large online platforms to build in safety by design, with clear accountability when harm persists.
How Meta says it is responding
Meta says impersonation and deceptive ads are against its policies and that it removes such content when detected. The company points to specialised systems that identify accounts pretending to be public figures, and to a combination of automated detection with trained reviewers to speed enforcement. Executives have also highlighted the use of facial recognition technology to protect well-known people whose likenesses are frequently abused by scam ads and fake pages.
A Meta spokeswoman, responding to questions about the directive, said the company deploys dedicated tools to prevent and remove impersonation and celeb-bait ads, and that it invests heavily in detection and user reporting features. She added that Meta continues to add safety features and share advice with users on how to spot and report scams.
"Impersonation and deceptive ads are against our policies, and we remove them when detected. We have specialised systems to spot impersonating accounts and scam ads that misuse public figures, and we continue to improve detection, train review teams and provide tools so people can report potential violations," the Meta spokeswoman said.
At the Global Anti-Scam Summit Asia 2025, Maxime Prades, product management director at Meta, said the company had already protected nearly 500,000 public figures worldwide with facial recognition safeguards. That protection can help find and take down copies of a public figure's face used to anchor fake endorsements in ads or posts, and it can assist with removing coordinated batches of lookalike accounts that pop up after takedowns.
"We have protected nearly 500,000 public figures around the world with facial recognition technology designed to detect impersonation at scale," Mr Prades said at the summit.
Meta says it has rolled out advertiser verification steps, in-product safety notices and anti-scam alerts in messaging to warn users, following earlier calls by the Singapore government for stronger safeguards. The company also notes that it works with law enforcement and takes legal action against criminal groups behind large-scale fraud operations. Industry statements from Meta in recent years have said the company removes large numbers of scam-related accounts across its apps. Authorities in Singapore say they want to see faster, measurable reductions in the scams that most affect local users, especially those that exploit the names and images of public officials.
TikTok added to Singapore's anti-scam regime
Singapore police have also designated TikTok as an online service under Ocha, effective Sept 1. That designation means the platform must meet the Code of Practice for Online Communication Services by Feb 28, 2026. TikTok has committed to working with the government on anti-fraud systems, a pledge that follows a sharp rise in reports of scams connected to the app. Police say scam cases on TikTok increased by 240 per cent in 2024 compared with 2023, and the company reported removing more than 173,000 fraudulent videos in Singapore last year.
A TikTok spokesman told local media the platform backs Singapore's focus on online safety and will keep investing in education and product-level defences aimed at stopping scam content before it spreads.
"We welcome the government's efforts under Ocha and remain committed to strengthening in-app protections. We will continue to invest in education and awareness so our community stays ahead of scam tactics and avoids fraud," the TikTok spokesman said.
Both Meta and TikTok participate in the Global Signal Exchange, a cross border initiative that shares signals related to scams among companies and public agencies. Singapore’s Government Technology Agency has joined that network, which aims to help platforms spot repeat scam infrastructure and patterns that move across services. Cooperation of this kind can speed the removal of scam content and reduce the window of time during which users are exposed to fresh campaigns.
The numbers behind Singapore's scam fight
While government official impersonation scams have soared, police noted that total losses from scams declined in the first half of 2025 compared with the same period a year earlier. Scam victims lost S$456.4 million between January and June 2025, down from S$522.4 million in the first half of 2024. Officials cite sustained effort by banks, telecom firms, platforms and regulators to block suspicious transactions, warn users and take down content as factors that helped edge losses lower.
The mixed picture shows progress in some areas while other forms of fraud, such as impersonation of public figures, continue to evolve quickly. That evolution keeps pressure on platforms to improve account verification, detect patterns that indicate abusive ad campaigns and intercept new tactics like AI-generated audio or video. It also intensifies the need for rapid coordination between platforms, payment providers and law enforcement when a scam is detected.
Addressing the summit in Singapore, Minister of State for Home Affairs Goh Pei Ming framed the order to Meta as a necessary step to cut off a prolific vector for fraud. He also underscored that countering scams needs broad cooperation, from tech firms to users who report suspicious behaviour.
"We are issuing the order to Meta because Facebook is the top platform used by scammers for such impersonation scams, and the police has assessed that more decisive action is required to curb these scams," Mr Goh said. "For Singapore, the war against scams remains a top national priority, and dealing with a threat as insidious as scams requires a whole-of-society approach."
A regional and global story of platform accountability
Singapore’s action fits a wider pattern in Asia where regulators are pressing major platforms to do more against online fraud. In recent years, Thailand warned Meta that it could be blocked if crypto scam ads continued to spread using fake endorsements. Media reports have also pointed to scrutiny in South Korea and other markets where authorities are weighing penalties for failure to stem online fraud. The message is consistent: companies that control a large share of online attention are expected to embed effective safeguards and to act quickly when notified of harmful content.
The Ocha framework is tailored to Singapore’s context, but its mix of directives, codes of practice and fines mirrors approaches in other advanced jurisdictions that seek measurable reductions in risk. A practical test will be how fast platforms can turn design changes into fewer victims, fewer fake endorsements and better detection of the next wave of tactics fraudsters use. Policymakers say the goal is not to punish for its own sake, but to change platform behaviour and to cut the time that scams remain live.
That approach often pushes companies to invest more in detection, advertiser verification and user education in markets where enforcement is active. Public statements from Meta and TikTok suggest both companies support law enforcement collaboration in Singapore, and that they see value in intelligence sharing projects like the Global Signal Exchange. The next year will reveal whether those measures reduce the volume of impersonation scams and keep losses on a downward track.
What users can do to stay safe
Even as regulators push platforms to raise their defences, users play a crucial role in stopping scams at the first click. Simple steps can sharply lower the risk of falling for impersonation or ad-based fraud. These practical habits help limit damage and give investigators the trail they need to move quickly.
- Be wary of ads or posts that use a public figure's image to sell investments, immigration help or money-making schemes. Singapore officials do not use personal social media to endorse such offers.
- Check the account handle and page history. Impersonating pages are often newly created, have sparse content or low-quality posts, and little user interaction beyond comments that look copied.
- Avoid paying deposits or full amounts off-platform when buying goods. Use escrow or marketplace payment features that offer dispute resolution where available.
- Do not click links in unsolicited messages that urge urgent action. Visit official websites directly by typing the address into your browser.
- Enable two-factor authentication on your accounts. This reduces the chance that a stolen password leads to full account takeover.
- Report suspicious ads, profiles and pages through the platform's built-in tools. The fastest way to trigger review is often the report button on the item itself.
- Verify high stakes claims with official channels. If a post says it is from a government agency or minister, cross check on the agency’s official site or verified social media account.
- Keep devices updated and use reputable security software. Some scams rely on old browser versions or known vulnerabilities.
- If you suspect you have been scammed, contact your bank and the police immediately. Quick reporting can sometimes halt transfers and support recovery efforts.
What to Know
- Singapore issued its first directive under the Online Criminal Harms Act, ordering Meta to curb Facebook impersonation scams or face a fine of up to S$1 million.
- Police say impersonation scams of government officials nearly tripled to 1,762 cases in the first half of 2025, with losses of S$126.5 million.
- Facebook is the top platform used for these scams, and Facebook Marketplace has ranked lowest in Singapore's marketplace safety ratings for four years.
- Meta says it removes impersonation and deceptive ads when detected, uses facial recognition to protect public figures and has expanded advertiser verification and user reporting tools.
- TikTok is now a designated service under Ocha and must comply with the Code of Practice by Feb 28, 2026, after a surge in scam cases linked to the app in 2024.
- Despite the rise in impersonation scams, total scam losses in Singapore fell to S$456.4 million in the first half of 2025, down from S$522.4 million a year earlier.
- Authorities stress a whole-of-society response, combining platform directives, law enforcement, financial sector controls and public education.
- Users can reduce risk by scrutinising ads that misuse public figures, avoiding off-platform payments and reporting suspicious content quickly.