
Law enforcement is trying to combat abusive AI. Experts say easier said than done

NPR Topics: News · Apr 14 · 6 min read

TL;DR

An Ohio man has been convicted under the federal 2025 Take It Down Act for creating and distributing AI-generated abusive images. Experts caution that prosecuting such cases is becoming increasingly challenging.

Key points

  • An Ohio man was convicted under the 2025 Take It Down Act
  • He created and published AI-generated abusive images
  • Prosecuting such cases is becoming increasingly difficult
  • The crimes included cyberstalking and digital forgeries

Mentioned in this story

James Strahler · U.S. Attorney's Office
2025 Take It Down Act

Why it matters

The case underscores the urgent need for effective legal frameworks to address the rise of AI-generated abusive content.

A person has a conversation with a humanoid robot from AI Life, on display at the Consumer Electronics Show (CES) in Las Vegas. Frederic J. Brown/Getty Images

An Ohio man has been convicted of cybercrimes, including the publication of AI-generated images depicting abusive sexual activity, in a historic first under the federal 2025 Take It Down Act. But experts warn that prosecuting these cases is increasingly difficult.

James Strahler, 37, pleaded guilty to cyberstalking, producing obscene visual representations of child sexual abuse and publication of digital forgeries – crimes that included both real and AI-generated images, according to the U.S. Attorney's Office in the Southern District of Ohio.

The Take It Down Act makes it illegal to publish nonconsensual, intimate digital content.


Strahler used dozens of AI platforms and over 100 AI web-based models on his phone to create more than 700 illicit images to post to a website dedicated to child sexual abuse material, according to the Justice Department.

According to court records, Strahler was caught when one of his adult victims reported receiving threatening and harassing messages.

Court records also state that Strahler admitted to being the one behind the violent calls and texts. Information extracted from his seized phone revealed additional victims and the extent of his AI abuse.

Small risks for big payouts

Kolina Koltai is a senior researcher at Bellingcat — an investigative journalism group — who specializes in AI technology.

She said the sheer volume of the content Strahler created is not unusual for these sorts of offenders, and that is part of what makes it so difficult for law enforcement to manage.

"Even when we think about early, early days of AI technology, people would have to learn how to maybe install or host something locally on their own devices," Koltai said.


"But nowadays, you can even go to a web domain and put in a prompt, and you have to have very little technical knowledge to be able to start creating the content. This poses a huge challenge because there's just an overwhelming amount of content."

Koltai pointed to earlier editing programs like Photoshop, pricey graphic design software that pioneered amateur image manipulation and required some degree of skill to produce realistic edits.

"Nowadays," however, she said, "with a dollar or sometimes even cheaper, you can take a photo of anyone on the internet and put it into a 'nudifier' or some sort of AI-generation platform and create a convincing new image even based on that person's face."

Also adding to law enforcement's woes when seeking out these cybercriminals is the overwhelming number of platforms dedicated to creating deepfake material.

"Oftentimes it's incredibly, incredibly difficult to know what technology, what service, what platform the person is using … unless we get access to their devices or their browser history," Koltai said.

"It's not like there's only just two or three providers. Everyone's trying to get into the game because it's a multimillion-dollar industry," she said, adding that sites will often buy multiple domains under different extensions (dot com, dot io, etc.) to avoid being taken offline.

"Even for our investigative site, when we shut down a site, which is great, unfortunately, it's a bit of a hydra, where there's still many other services willing to take the place of that other one," she said. "It's a difficult problem to solve until we make it harder for these platforms to be used."

Deepfakes and young people

AI's transition from obscure to mainstream technology came faster than the law's ability to adapt, said Matthew Faranda-Diedrich, an attorney who has handled cases dealing with deepfaked nudes.


"We went from two years ago, never having heard of this, never having seen a case like this, to right now having at any one time, five or six of these cases, unfortunately," he said.

Faranda-Diedrich said he works closely alongside police to help them understand the rapidly evolving technology and support them throughout their investigations into potentially illegal behavior.

But for police and civilians alike, he said, there is often a learning curve in understanding just how sophisticated many of these apps can be at manipulating images inappropriately.

"They'll think back to when they were younger or other generations and say, 'Oh, this is like Photoshop,' and have this idea in their head that you can easily tell that the doctored image is fake. But in fact the images produced by the 'nudify' apps look very real and are nothing like a Photoshopped image."

"Let's call it what it is"

The distribution of nonconsensual deepfakes is a multigenerational problem, but it is particularly rampant among young people, research shows.

And women and girls are especially at risk, representing an estimated 90% of the victims of these crimes.

Faranda-Diedrich said that in most cases with which he has been involved, both the victims and perpetrators of the crimes have been children, ranging in age from 14 to 16 years old.

"You want to try to educate them about the dangers of this and the harms it can cause so that kids don't make 'dumb' decisions that actually end up hurting people so disastrously," he said.


And schools, he said, have a major responsibility to get involved at the first signs of these technologies being abused.

"Let's call it what it is: it's child pornography," Faranda-Diedrich said. "And I don't think any school administrator would ever say, 'Oh, if I knew of [child sexual abuse material], I would not call the police.' Of course they would. And they need to make that same call here and get it into law enforcement's hands quicker."

Q&A

What crimes did James Strahler commit related to AI-generated images?

James Strahler was convicted of cyberstalking, producing obscene visual representations of child sexual abuse, and publishing digital forgeries.

What is the 2025 Take It Down Act?

The 2025 Take It Down Act is a federal law aimed at combating the online distribution of abusive and exploitative material, including AI-generated content.

Why is prosecuting AI-related cybercrimes difficult?

Experts point to the sheer volume of content, the low technical barrier to creating it, and the difficulty of identifying which of many deepfake platforms an offender used without access to their devices or browser history.

What are the implications of this conviction for future AI-related crimes?

This conviction may set a precedent for future cases but highlights the ongoing challenges law enforcement faces in prosecuting AI-related crimes.


