Source: The Conversation – UK – By Jason R.C. Nurse, Reader in Cyber Security, University of Kent

It’s hard to overstate the impact that artificial intelligence has had since the release of generative AI platforms such as ChatGPT just three years ago. While they have led to countless advances in how we live and work, they have also been at the centre of controversies around domestic and sexual abuse.
The use of the AI tool Grok to remove women’s clothing in images brought the issue of so-called technology-facilitated abuse to the fore. But it’s a problem that predates AI – with Bluetooth trackers, wearable devices, smart speakers, smart glasses and apps all used by abusers to control, harass or stalk their victims.
This abuse has worsened as tech has become more embedded in people’s lives, and as AI advances rapidly. But governments have struggled to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.
Our own research has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it – despite numerous examples of how tech can enable abuse.
Case 1: Smart glasses
The growing availability of smart glasses – which look like normal eyewear but can do many things a smartphone does – has led to reports of secret filming. In some cases, videos were posted online, often attracting degrading and sexually explicit comments.
Meta has said its smart glasses have a light to show when they are recording and anti-tamper tech to make sure the light cannot be covered. But there appear to be workarounds.
In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner’s Office is investigating Meta after subcontractors were allegedly able to access intimate footage from customers’ glasses. This is in addition to a lawsuit in the US, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the protection of data very seriously and that faces are usually blurred out. It also discloses in its UK terms of service the potential for content to be reviewed either by a human or by automation.
Case 2: Bluetooth trackers
Apple’s AirTags, and other devices built for tracking personal items, can be misused to stalk and harass people, particularly women. Apple released updates to AirTags and other trackable tech so that potential victims would be alerted if an unknown device was travelling with them. But for many, this feature should have existed from the outset.
The law in England and Wales is clear that attaching a tracking device to someone, or to their belongings, without their knowledge is a criminal offence. But despite convictions, the ease of covertly monitoring people with these devices means they continue to pose a risk.
Case 3: AI deepfake and ‘nudification’ apps
Apps can now “nudify” people, while AI is increasingly used to make non-consensual deepfake pornography. In January, several instances of xAI’s assistant Grok being used to create sexualised photos of women and minors came to light. All it took to create the images were some simple prompts.
After criticism, xAI decided to limit this feature. But the safeguards appear to apply only to certain jurisdictions and certain users.
In February, the UK government announced legal changes similar to the Take It Down Act in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is likely to be implemented from summer.
Using automated technology known as “hash matching”, victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps and using AI chatbots to create deepfake pornography will also become illegal in the UK.
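The hash-matching approach described above can be sketched in a few lines of Python. This is a simplified illustration, not any platform's real implementation: the class and method names are invented for this example, and production systems (such as StopNCII or Microsoft's PhotoDNA) use perceptual hashes that survive resizing and re-encoding, whereas the cryptographic SHA-256 hash used here only catches byte-identical re-uploads.

```python
import hashlib


class HashRegistry:
    """Illustrative sketch of a shared hash-matching takedown registry.

    A victim reports an image once; its hash is added to a list shared
    across platforms, and every later upload is checked against it.
    All names here are hypothetical, and SHA-256 stands in for the
    perceptual hashing real systems use.
    """

    def __init__(self):
        self._blocked = set()

    @staticmethod
    def _digest(image_bytes: bytes) -> str:
        # Real systems would compute a perceptual hash here instead.
        return hashlib.sha256(image_bytes).hexdigest()

    def report(self, image_bytes: bytes) -> str:
        """Record a reported image's hash in the shared block list."""
        digest = self._digest(image_bytes)
        self._blocked.add(digest)
        return digest

    def allow_upload(self, image_bytes: bytes) -> bool:
        """Each participating platform checks uploads against the list."""
        return self._digest(image_bytes) not in self._blocked


registry = HashRegistry()
registry.report(b"reported-image-bytes")
print(registry.allow_upload(b"reported-image-bytes"))  # False: blocked everywhere
print(registry.allow_upload(b"other-image-bytes"))     # True: not in registry
```

The key design point is that only the hash, never the image itself, needs to be shared between platforms, which is why a single report can trigger removal across many services.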
But there is more to be done. Mitigating risks must be embedded at the design stage to prevent these images being created in the first place. The rise of romantic and sexual chatbots means this has become more urgent.
And beyond deepfakes and nudification, AI can also enable harassment at scale. This includes directly targeting someone with abusive content, or fake images or profiles that impersonate victims for so-called “sextortion” scams.
Challenges ahead
These issues must be prevented with robust guardrails built into these technologies. This is what prioritising user safety should look like, after all. But often, these guardrails have failed. Safety tools are usually only added after public pressure, not built into platforms from the start.
Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.
Even where there is regulation, such as the UK’s Online Safety Act, penalties for platforms that allow abuse are often weak or unenforceable. The regulator Ofcom has issued only voluntary guidance to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this to be made mandatory, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.
As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.
Jason R.C. Nurse receives/received funding from The Engineering and Physical Sciences Research Council (EPSRC), The Research Institute for Sociotechnical Cyber Security, The National Cyber Security Centre (NCSC), and the UK Home Office. He is affiliated with Wolfson College, University of Oxford as a Research Member, CybSafe as the Director of Science and Research, and The Royal United Services Institute (RUSI) as an Associate Fellow.
Lisa Sugiura receives funding from Home Office Domestic Abuse Perpetrators Intervention Fund
– ref. From AirTags to AI nudification: the growing toolkit of technology-facilitated abuse – https://theconversation.com/from-airtags-to-ai-nudification-the-growing-toolkit-of-technology-facilitated-abuse-274468
