The Harms Race

When representatives from Google, Meta and TikTok appeared in front of Irish legislators last Wednesday to discuss the (self) regulation of their platforms, it was the no-show by bad boys X that stole the headlines.  

Chair of the Oireachtas Committee Alan Kelly deemed it “disgraceful” that Elon Musk’s company had ignored invitations, including one coming directly from the Taoiseach.  

X’s decision to ignore elected officials came after the company’s AI tool Grok had enabled users to generate deepfake sexualised images of women and children. Committee members were denied an opportunity to vent at the company for allowing what’s been coined an “abuse factory” to operate on Dublin’s Fenian Street. Its absence also seemed to sap much of the media buzz from proceedings, which is a shame, as there were some useful nuggets to be found amid all the corporate guff and political incredulity.

CSAM generation not Grok’s only use 

On Grok itself, we had Fine Gael Senator Garret Ahearn ask Google’s Ryan Meade why an illegal-content generator was still available to download on the Play Store.

What followed was several minutes of strategic vagueness: “we brought it to the developer’s attention... we don’t automatically go to removal of an app like that that has other purposes”. This later prompted Alan Kelly to ask what percentage of abuse generation was acceptable in any app in Google’s store. None, apparently. The company continues to engage with X. No action has yet been taken. The matter remains open. We can continue to hold our breath.

Uncollected fines 

The futility that Irish legislators face in tackling these global behemoths was evident from the start. Sinn Féin TD Joanna Byrne cited journalist Ken Foxe’s FOIs showing that Ireland’s Data Protection Commission had imposed €4bn in fines on social media firms over the last four years, but that just 0.6% of this has been collected. Last year, for instance, just €125,000 of the €530 million in fines levied was received. The rest has been gunged up in the courts awaiting appeals.

Violating our community standards 

TikTok’s Susan Moss surprised the room with her candid rebuttal to the charge that they were failing to prevent harmful content, saying the responsibility of social media groups lay in “minimising harm” and removing content “as quickly as possible... The harsh reality is it will never be zero”. It’s a defence that newspaper publishers would no doubt love (just imagine Irish Independent owner Mediahuis explaining to a committee that articles containing suicide ideation and targeted death threats “represented just a fraction of our news output that day”).

When Fine Gael Senator Evanne Ní Chuilinn said it should be “a zero-tolerance position, as opposed to minimising something”, Moss replied: “No large-scale platform the size that we’re operating on can realistically guarantee zero violations... 0.5% is what the overall number of violative content is.”

Fianna Fáil TD Malcolm Byrne suggested company executives should be held personally responsible for harmful content:

“The reason why certain people don’t pollute rivers is they know that they will personally be held liable.” 

This would require social media platforms to be viewed as publishers, something the government’s “AI Minister” Niamh Smyth has touted, but which Big Tech has so far vigorously crushed via the courts. To the US this is censorship with a capital C. It’s not a battle the EU is likely to win.

Failure to prepare 

So how many violations are we talking about? Not one of the companies could provide specific figures on suspected abuse material taken down, nor on the number of incidents referred to gardaí, much to the ire of Fianna Fáil TD Peter 'Chap' Cleere:

“Lads, this is the Roy Keane thing – fail to prepare, prepare to fail. You don’t have the data.”

We did, however, hear that last year Google reported one million global cases of suspected child sexual abuse to NCMEC, a sort of US clearing house for child safety reports. TikTok reported 2.3 million pieces of CSAM content in the first six months and banned one million accounts. That left deputies to ponder how a small percentage of a very large number can still be an awful lot of child sexual abuse material.

Addictive, us? 

Social Democrats TD Sinéad Gibney, herself a former Google employee, probed the representatives with the sort of questions likely to lose the room but signal an insider’s I-know-you-know-I-know intent.

There was a perceptible shift in Susan Moss’s demeanour as Gibney referenced a faulty redaction of internal TikTok documents that revealed company research into the potential harms and addictive nature of its algorithm. The TD also cited DCU research showing that harmful content is served to users by that same algorithm less than an hour after they first sign up.

Gibney then addressed all three companies: 

“Do you agree... that algorithms are designed to be addictive? That they are designed to keep our eyes on the screen?”

Now there is absolutely no question whatsoever that the “secret sauce” driving these attention machines is the ability to hold engagement for as long as is scientifically possible, to extract every second of eyeballs and clicks. So the evasiveness of their answers is worth sharing in full:  

Richard Collard, TikTok Minor Safety Public Policy Lead: “Algorithms are here to provide users with content experiences that they enjoy, to help them find communities and to access a range of content that they might not necessarily find... They’re not designed to be addictive.”
Chloe Setter, Google (YouTube) Child Safety Public Policy Manager: “We believe the algorithm is there to help sort and make practically useful the huge vast amounts of content. I think something like 500 hours per second – “ 
Sinéad Gibney: “Not keeping our eyes on the screen?”
Chloe Setter, Google: “No, we prioritise user experience over time spent.”
David Miles, Meta’s Safety Policy Director, EMEA: “I think addiction would be an overly simple term. It’s about making age-appropriate experiences but also dealing with harms from a preventative perspective.” 

Their reticence is understandable. Two days later, the European Commission found TikTok in breach of the Digital Services Act for its addictive design: “This includes features such as infinite scroll, autoplay, push notifications, and its highly personalised recommender system.” EU regulators clearly have these algorithms in their crosshairs.  

Just ban them already! 

There’s an expectation, when given the opportunity to grill officials about the harms their voters can identify, that policy makers will dial up the indignation, and there was a fair bit of this posturing on display during the session: angry parents berating these corporate suits for not “getting” what it’s like to raise teenagers in 2026.

This was the approach Senator Evanne Ní Chuilinn took, frustrated by the repeated references to all the tools available to assist parents:

“I'm not comfortable with the narrative of parents feeling empowered, parent centres, parent guides... Social media is the worry of the moment... 35 years ago it was probably smoking. If you had said to politicians at an Oireachtas committee [back then], 'Oh we’ve a parent’s guide! Just look up the parent’s guide and you’ll be totally empowered’. It’s an absolute farce!” 

It was often satisfying to watch but was probably as useful as emailing press queries to X. 

Senator Alison Comyn looked positively shook when asking the policy wonks what possible benefit their social media hellscapes offer teens. Referencing a case known to her of an 11-year-old child who had been sharing and commenting on self-harm content, she was firm in her view that teenagers must be blocked from such spaces.

The Inherent Logical Position being taken here is that teenagers need to be blocked from accessing social media. The platforms have obvious harms but also subtle ones that only "We" as parents understand. It’s the direction of travel being taken by Serious Countries like Spain and France. True, some silly and probably unlawful (under EU law) approaches are being discussed in Ireland to age-gate online access, including a digital ID tool which would grant tech firms lucrative personal data and would probably be hackable. But to quote South Dublin comedians quoting North Dublin taxi drivers, something must be done.  

I’m not going to get into the criticisms of such a ban here, but it is worth watching this exchange between For Tech Sake podcaster Elaine Burke and The Tonight Show guest presenter John Lee in terms of intuition v evidence.  

Simplification, me?   

Many of the policy reps come from child protection backgrounds and can speak with conviction on the need to minimise harms. They understand that their role is to absorb an often-performative anger, and calmly signal that the totalitarian Death Stars they represent aren’t completely bereft of care.  

So there was some mild discomfort when Sinéad Gibney went off piste, bringing in lobbying and digital deregulation, which she said was being pushed by the Trump administration with the support of big tech, including representatives of the three companies present.  

She asked the representatives whether they had ever lobbied in favour of deregulation at European level, including on the Digital Simplification Package, “which will roll back on GDPR and the EU AI Act”.

They replied that they were all in favour of simplification. Google cited the Draghi report, which recommended there was scope to make the digital rulebook more straightforward. Meta insisted it was not about removing protections for individual rights but about “improving implementation”.

Gibney then raised the obvious paradox, that while Big Tech are lobbying in Brussels for “simplification” and “clarity”, in the US they’re pushing Trump to bully Europe towards deregulation.  

Google: “We operate in over 100 different countries, so we engage with many, many governments, so it’s no surprise that we would be engaging on both sides of the Atlantic.”
Meta: “It’s not just US companies who are calling for simplification; there are about 60 European companies.”

Undaunted, Gibney asked the panel two specific questions relating to the EU Digital Omnibus Package: 

“Do you accept that changing what data counts as private for GDPR means more sensitive personal data will be legal to keep? Do you accept that?”
“Do you accept that self-certification when it comes to risk under the EU AI Act will result in weaker protections for citizens and more scope for abuse?” 

None could answer; it was not something they had been briefed on for the committee. But perhaps these weren’t questions in search of answers, but rather the Social Democrats TD signalling to colleagues to up their game on what is a complex, multifaceted piece of geopolitical tension. And Dublin is caught right in the middle.