TikTok is poised to axe hundreds of positions within its London-based content moderation and security division, a move that has raised eyebrows as the º£½ÇÊÓÆµ's Online Safety Act takes effect.

The ByteDance-owned platform is consolidating its "trust and safety" operations on a global scale and intends to lean more heavily on artificial intelligence for content oversight, according to reports.

The restructuring, which may impact approximately 300 London employees, forms part of a worldwide shake-up.

TikTok stated it seeks to concentrate its operational capabilities in selected hubs, including Lisbon and Dublin, to become "more effective and quick."

The firm's latest financial records reveal a 38 per cent annual revenue increase to $6.3bn (£4.7bn), with pre-tax losses dropping substantially, indicating the redundancies are not prompted by financial difficulties.

The rise of AI in content moderation

TikTok reported that more than 85 per cent of material deleted for breaching its community standards is detected and removed through automation.

The platform has maintained that AI can also assist in limiting human moderators' exposure to disturbing or graphic material.

This pivot towards automation is not exclusive to TikTok, reflecting a wider movement across the technology sector, where companies such as Google and Meta are progressively encouraging advertisers to adopt AI-driven solutions. However, critics, including the Communication Workers Union (CWU), have raised concerns.

The CWU's John Chadfield argued that TikTok's dependence on AI represents a cost-cutting measure designed to shift moderation operations to regions with lower labour costs.

The union also cautioned: "TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives."

The Online Safety Act and AI's role

The timing of these redundancies is particularly noteworthy given the º£½ÇÊÓÆµ's new Online Safety Act, which took effect on 25th July.

The legislation requires technology firms to deploy "highly effective" age verification systems and block the circulation of harmful content, or risk fines of up to £18m or 10 per cent of worldwide revenue, whichever is greater.

In response to these obligations, TikTok has recently launched "age assurance" controls that use machine learning.

Nevertheless, industry watchdog Ofcom has yet to approve these AI-driven mechanisms.

This sets up a tension between a firm's commercial incentive to use AI for efficiency and the regulator's requirement for demonstrably safe systems.

The dependence on AI also forms a central element of TikTok's wider business approach. The platform has recently required TikTok Shop retailers to use an AI advertising platform named GMV Max, which automates promotional campaigns to boost revenue.

Whilst certain advertisers have voiced reservations about surrendering control to an algorithm, a TikTok Shop agency partner described the tool as a "game changer" for some smaller merchants.

As TikTok pushes deeper into e-commerce and AI-driven moderation, the balance between profitability, efficiency, and user safety remains a pivotal and unresolved question.
