Can tech ever rid us of online abuse? Does it even want to?

A whole industry exists to combat abuse and grooming online, but can we trust it to ever find a ‘solution’?

Exploitation and abuse long predate the internet, as do bullying, harassment and doxxing. But the internet and, increasingly, artificial intelligence have the capacity to deliver these old harms at terrifying speed and scale.

The quantity of child sexual abuse material found online has been increasing dramatically. The scale of other forms of image-based abuse, such as deepfakes, is largely unknown, but early research suggests non-trivial levels of victimisation, and there is every reason to think these harms are growing quickly too. Social media, online gaming, smartphones: all of these technologies are creating new opportunities for grooming while reshaping old ones.

Meanwhile, policymakers, law enforcement, advocates and corporations are struggling to keep up. There is an established market for ‘solutions’ and a dizzying array of tech-based interventions out there purporting to combat tech-enabled abuse. Some are good, some are ineffectual, and some do outright harm. The combination of a rapidly evolving problem, technological complexity, governance challenges, weak regulation, and high demand for ‘doing something’ makes it hard to know what is needed. And all too easy to sell digital snake oil.

There are good people in this space, but also many charlatans. Here, as offline, doing something in the name of anti-trafficking is frequently just a weak cover for an anti-sex work agenda. And there are plenty of corporate suits using surface-level do-gooding to protect their core products and distract from the damage caused by their underlying business models. When it comes to tech-based responses to trafficking, exploitation and abuse, there are good reasons to be cautious.

These, among other factors, are what make the problems of online exploitation and abuse so tricky to address. There’s urgency because the problems have dangerous, real-world consequences. Many struggle to feel comfortable with the nuts and bolts of tech, and are used to relying on third-party experts to tell them what to do. And the landscape of actors and interventions is full of mirages and pitfalls – make the wrong move or listen to the wrong advice and you get nowhere or make things worse.

So what must people in this space do to make real progress against online exploitation and abuse? And, crucially, whose advice should they be listening to?

Why are we turning to tech to fight trafficking and abuse?

The so-called ‘technology-trafficking nexus’ is a matter of much hype in certain corners of academia, activism and the public sector right now. But, empirically speaking, we know relatively little about it.

Much of the work done so far exists primarily to call out tech saviourism at the intersection of consensual sex work and trafficking for sexual exploitation. This is for good reason – intervention has been particularly damaging there – but the fields of both exploitation and intervention are far bigger than that.

Expanding the frame to consider a broader range of cyber-enabled violence complicates matters. It demands rethinking both mainstream anti-trafficking measures – increased surveillance, ‘rescue’ and enforcement – and the rights-based alternatives that activists and critical scholars often propose. Focusing on building labour rights and unionisation, for example, lands poorly when sexual abuse of children is the topic.

There’s no question that we need alternatives to the alternatives; neither mainstream actors nor critics provide satisfying answers when we enter spaces like child sexual exploitation. This is one reason why the entrepreneurial space for tech-based interventions has opened up so wide. Another is the scale and speed of some of the attacks: if an algorithm is propagating abuse at the speed of modern computation, surely only another algorithm has a chance of keeping up. Involved actors are casting around for ideas and there’s a seemingly open-ended invitation for contributions.

But those inside this space need to be very discerning about what they pursue. They need to understand not only the root causes of the problems but also the full spectrum of what the problems actually are. They need to understand the impact of ongoing trauma, and how the internet seems tailor-made to exacerbate rather than mitigate it. And, as is so often the case, they need to learn to disaggregate while keeping phenomena connected to their specific contexts.

The new mini-series that we’re releasing this week gives an often horrifyingly intimate glimpse of what this dark world looks like, and how the normal workings of the internet can collude to make it truly pitch black. It seeks to show, through the voices of survivors, a few of the ways that the internet creates and perpetuates trauma, and why we cannot blindly trust in technology alone to offer up a sufficient response.

The contributions

This mini-series is built around three powerful pieces, all with lived experience at their heart. While this was an extraordinarily difficult series to produce, we recognise that three articles alone can only scratch the surface of such a varied domain. We see this as the first attempt at exploring a new direction for us at Beyond Trafficking and Slavery and we hope to do more on this next year.

First, Rose Kalemba writes powerfully and painfully about how technology “immortalised” the trauma of being trafficked and sexually abused aged 14. Her account sheds light on the “relentless secondary abuse” she has faced since her original abuse was posted online, and details the difficulties of living in a constant state of anxiety about when it will next resurface.

Her story captures the intersections of online and offline, and of sexual and non-sexual harms, demonstrating how boundaries overlap and how one abuse can spark many more. She expresses clear frustration at the lack of recourse tech companies offer survivors, and disappointment with over-hyped but underwhelming tech ‘solutions’.

Next, Eunice-Esther Mejiadeu delivers another gut punch of a piece that focuses on her experiences of being groomed on social media aged 12. By that time, she writes, she had already been extensively sexually abused offline and was struggling within a harsh care system: “In my world, I was trash already”.

Her account is a powerful plea to stop treating online grooming as technological wizardry and to pay attention to the unmet needs of children that abusers seek to manipulate. For her, it is a mistake to expect high-tech responses to solve low-tech problems. Instead, she argues that “a social problem requires a social response” and calls for investment to be funnelled to deprived communities and to children themselves as a more effective alternative.

Finally, Yiğit Aydınalp and Luca Stevenson draw on their new research at the European Sex Workers’ Rights Alliance to discuss intimate image abuse. They emphasise that, despite being prime targets for such abuse, sex workers and their needs are routinely ignored or disregarded. Their piece engages thoughtfully with questions of intimacy, consent and privacy, and with the impacts of stigma and criminalisation.

Like the other contributors, they spotlight the inadequacies of existing responses and draw attention to hypocrisies in what is considered abusive. They also call for regulations to be broadened so that they also cover non-sexual images generated and shared without consent. Sex workers’ faces, they argue, are often their “most intimate property” – people have killed themselves after having their identities revealed.

It’s a minefield, but society can’t afford to ignore it

Individually and in combination, the articles confront many assumptions and surface neglected perspectives about the interface between technology and sexual violence.

They shed light on the intersections and tensions between online and offline, sexual and non-sexual, legal and illegal. They share a common call to think more critically about response needs, centre those most affected, and be wary of tech interventions that over-promise and under-deliver. They also share a frustration with the inaction of tech companies in the face of the abuse they host and profit from, directly or indirectly.

As technologies seep into more aspects of our daily lives, it will get ever harder to find sexual abuse that is not in some way facilitated, perpetrated or spread through technology. Responding well to sexual violence in the digital domain means straddling often complex technical, legal, social and ethical dimensions. It also requires engaging with the messiness of the issues involved, competing interests, and inherent tensions and trade-offs.

Crucially, we’d argue that responses must be centred on the actual – not imagined – needs of affected communities. And those communities are not homogeneous masses either.

Kneejerk reactions rarely make for good policies and practices, but neither inaction nor relying on tech companies to self-regulate is any better. Tech is clearly no panacea for tech-based harms, but it would likely be a mistake to write technology out of the script completely. The tech industry has actual and potential contributions to make to prevention, identification and harm reduction in the sexual violence domain. But it cannot be left to define its parameters and methods on its own.

We think there is much to be gained by bringing together different people’s expertise and experience around various technologies, tech-enabled abuses, and their impacts online and offline. There is also value in having a nuanced conversation about how crime prevention and harm-reduction lessons from offline can be adapted to online environments.

Some tech-based interventions against online sexual abuse, while imperfect, now appear indispensable (e.g. image hashing to help detect known child sexual abuse material). Others show clear promise (e.g. targeted warnings delivered in situ to deter abusive image sharing). And some have proven outright harmful (e.g. the FOSTA-SESTA legislation in the US, whose implementation pushed sex advertising underground). We need to get much better at distinguishing between these categories, and much more forceful in rejecting interventions that cause harm.
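
To make the first of those examples more concrete: ‘image hashing’ means reducing an image to a compact digital fingerprint that survives resizing and re-encoding, so that known abuse material can be recognised when it is uploaded again. The short Python sketch below shows one of the simplest perceptual hashes, a ‘difference hash’. It is purely illustrative – the systems actually deployed for this purpose, such as Microsoft’s PhotoDNA, are far more robust and their details are not public – and the database comparison at the end is a hypothetical stand-in for a real moderation pipeline.

from PIL import Image  # requires the Pillow imaging library

def dhash(image_path, hash_size=8):
    """Return a 64-bit difference hash of the image at image_path."""
    # Shrink to a tiny greyscale image; this discards exactly the detail
    # that resizing or re-encoding would change anyway.
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            # One bit per pair of horizontally adjacent pixels:
            # is the left pixel brighter than the right one?
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")

# In a real pipeline, hashes of known abuse imagery sit in a database and
# each new upload is compared against them (names here are hypothetical):
#   if hamming_distance(dhash("upload.jpg"), known_hash) <= 10:
#       flag_for_human_review()

Even this toy version makes the key limitation visible: hashing can recognise material that has already been identified, but it does nothing against newly created abuse imagery – a distinction worth remembering whenever such tools are pitched as comprehensive solutions.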

Tech won’t save us, but we can’t escape it either. So, it’s vital to increase our understanding of both the capabilities and limits of counter-measures. We need to be wary of the clear commercial incentives to overstate success and gloss over failures in this area. We also need to pay close attention to ethical questions in intervention design (including in relation to transparency and explainability).

There are also important questions around how digital tracking and algorithmic decision-making are being used, at whose cost and to whose gain. Greater regulation is sorely needed, and it must be informed by a far more nuanced and inclusive understanding of the issues at hand. Finally, it’s well worth remembering that interventions developed for one stated aim can all too easily be deployed for another.

It’s a minefield that will take time to explore, but we hope over the next year we can start a constructive conversation on this topic. Thank you for joining us.

Cameron Thibos

Cameron Thibos is the managing editor of Beyond Trafficking and Slavery.
