
    AI and Child Pornography

    March 4, 2024 | Telios Law

    Confronting AI-Generated Child Sexual Abuse Material

    Science fiction is pretty annoying. It keeps coming up with new ideas, like AI, which then become reality and change the world for everybody. In many ways, AI promises to improve our world. In others, it brings new risks and threats. Typically, when Americans think “Threats from AI,” we imagine science fiction scenarios involving rogue AIs and exploitative mega-corporations. But evil robots and syndicates aren’t the only ones able to exploit AI: child predators can too.

    AI-Generated Child Porn

    The name speaks for itself. An AI model is trained, possibly on actual child pornography or on facsimiles, to create images of children in pornographic and sexually abusive situations. In one sense, the material doesn’t directly show actual children (the same way a drawing doesn’t). But in another sense, it does: the AI had to be trained on images it got from somewhere, and there is often a victim in that process. Children’s faces taken from other contexts may also be appropriated.

    Why is AI-Generated Porn Abusive?

    This question is a bit like asking, “Why is murder wrong?” The idea inherently revolts our humanity. But rage and disgust are hardly appropriate legal metrics. AI-generated child pornography promotes the sexual exploitation and objectification of minors. It doesn’t matter whether those minors are real: child pornography programs the brain of the consumer and can create future child sex offenders. Tolerating AI child porn inherently pushes individuals and cultures toward tolerating pedophilia and normalizing child sexual abuse.1 This in turn creates a dangerous environment for children.

    In fact, many child safety advocates note that the term “child pornography” minimizes what is actually happening. These advocates prefer the term “Child Sexual Abuse Material” (CSAM) to distinguish it from adult pornography. With adult pornography, regardless of morality, it’s theoretically possible to portray something non-abusive, such as a monogamous married couple. This is a far cry from what happens in CSAM, where what is being portrayed is inherently abusive and evil. That’s why, morality and victimhood aside, adult pornography remains legal while child pornography is a serious crime: one is far worse than the other.

    How is it “Worse”?

    CSAM desensitizes society to the seriousness of child abuse. Any type of CSAM, including AI-generated, can be used to encourage sexual attraction to minors, perpetuating harmful behaviors and attitudes. Sexual predators frequently use child pornography to groom their victims and convince them that sexually abusive behavior is normal. In cases where the material is generated using real children’s faces, the children depicted may face trauma, shame, and harm even if they were not physically involved.

    Psychological research shows that exposing children to sexually explicit material causes systemic, long-term harm. Exposure disrupts normal childhood development and makes children more vulnerable to predators and more likely to become predators themselves. It encourages high-risk and inappropriate behaviors and makes early intercourse much more likely. When sexual behavior is modeled on abusive behavior, that in turn introduces an array of new dangers.

    But it’s still worse than that. AI functions as a loudspeaker for content, making it faster and easier to distribute. AI generation amplifies the dangers of CSAM in the same way it amplifies any other kind of content.

    Can This be Stopped?

    Many countries, including the United States, have banned AI-generated CSAM because it involves the sexual exploitation and abuse of minors. But while the United States criminalizes every kind of CSAM, including drawings, cartoons, and digitally created content,2 cartoon and AI-generated CSAM is rarely prosecuted.

    And so, it falls to the private sector to combat AI CSAM generation. Stable Diffusion has long been misused to create nonconsensual nude images of both adults and children, though its developers claim that newer versions have better filters. Some of the newer AI image generators contain mechanisms to block CSAM, and early indicators suggest that closed AI models do a better job of filtering dangerous content than open models.

    Tech companies, law enforcement agencies, and advocacy groups are actively working to identify and remove such content from the internet. AI detection tools have been developed to help identify and report instances of AI-generated child pornography to authorities for investigation and prosecution. But this alone isn’t enough. The public needs to be aware and advocate to strengthen laws and regulations, close legal loopholes, and push for actual prosecution of those promoting these images.

    Parents Can Protect Their Children

    Parents play the most important role in protecting their children. They can maintain open and honest conversations about online safety, digital boundaries, sexual morality, and appropriate internet usage. They can also put controls on minors’ phones and on home technology such as smart TVs, tablets, and computers, and can restrict internet access.

    Families can work with organizations like Thorn, which uses technology to fight CSAM. Thorn helps bring awareness to the issue, identify victims, and educate parents about how best to protect kids. Good blocking software is also available to filter out sexually explicit content such as pornography; it is not always completely effective but is still worth having on home devices.

    Social media has created unprecedented visibility into, and access to, content about minors. Parents can protect their children by locking down social media accounts, being cautious about posting photos and videos of their children, and not sharing their children’s locations and personal information. When it comes time for minors to have their own phones or devices, the devices can be limited, such as with a safety feature restricting texts and calls to an approved list and blocking the Internet. When minors are old enough for their own social media accounts, parents should educate them about possible dangers, monitor those accounts to ensure responsible internet usage, and encourage and enforce time limits for devices.

    Parents can make sure their local schools, churches, and ministries are up to speed on child safeguarding, with programs like Telios Teaches. Part of a child safeguarding policy should address online behavior and digital risks. Schools should not give children access to sexually explicit material, and parents should be aware that some (usually public) schools do.

    How Should Ministries Address This?

    Ministries should have and enforce strict policies banning AI-generated and all other child pornography, treating CSAM with the same zero tolerance as any other form of sexual abuse. They should screen and run background checks on potential hires to catch red flags, provide training and education on the dangers of porn use and CSAM, and install blocking software on all ministry devices. They should also adopt and enforce robust child safety policies that consider digital access for both kids and adults.

    1 "Why Language Matters: Why We Should Never Use ‘Child Pornography’ and Always Say Child Sexual Abuse Material." NSPCC Learning, 30 Jan. 2023, https://learning.nspcc.org.uk/news/why-language-matters/risky-behaviour-young-people-experiencing-abuse

    2 18 U.S.C. § 2256; 18 U.S.C. § 1466A

    Because of the generality of the information on this site, it may not apply to a given place, time, or set of facts. It is not intended to be legal advice, and should not be acted upon without specific legal advice based on particular situations.

    Provided by Telios Law. 
