A rising crisis in Japanese schools
Police across Japan are fielding a growing number of reports that students are using artificial intelligence to create sexual deepfakes of their classmates, often by pulling photos from school events, graduation albums, and private group chats. The victims are typically junior high and high school students, though elementary school children have also been targeted. In several cases, the perpetrators are minors themselves who accessed image generation sites or simple smartphone apps to produce convincing, explicit forgeries and share them within peer groups.
Authorities say more than 100 consultations and reports about fake sexual images of minors were logged in the past year. Seventeen of the known cases involved images derived from school events or yearbook photos. Victims often suspect someone they know. Police have even documented harassment targeting same-sex peers, where the aim was to bully or humiliate rather than to seek sexual gratification. In one case from the Tokai region in central Japan, a male junior high school student was referred to prosecutors on suspicion of defamation after he created a fake nude image of a female classmate and shared it among friends.
The surge follows a broader global trend. A U.S. security firm tracked at least 95,820 deepfake videos online in 2023 and reported that 98 percent involved pornography. Easy-to-use tools now let a user type a prompt, upload a face, or attempt to remove clothing from a photo with one tap. That accessibility is colliding with school social dynamics and the huge volume of personal photos children post or share, often without realizing how those pictures can be weaponized.
How teens create deepfakes with everyday photos
Today’s tools fall into a few categories. Some apps attempt to “nudify” a photo by fabricating a body under clothing. Others swap a face from a school photo onto an adult performer. Generative AI sites can also produce explicit images from text prompts alone, and the underlying models can be tuned to resemble a specific person with only a few reference photos. These systems were designed to synthesize new images from patterns learned on massive datasets, and they can run on consumer devices, which lowers barriers to misuse and makes detection harder for schools, families, and law enforcement.
Why cases are hard to prosecute
Japan’s primary statute that addresses child sexual images, the Act on Regulation and Punishment of Acts Relating to Child Prostitution and Child Pornography, was written to protect real children captured in real images. That creates a gray zone for AI fakes. When the image is synthetic, even if it uses a real child’s face or likeness, the legal pathway is less clear. Police have adapted by reaching for other laws, including defamation and certain local ordinances, to pursue cases where possible. The approach is piecemeal, and it places a burden on victims and families to detect the abuse and file complaints quickly.
What police are doing now
The National Police Agency has begun examining roughly 10 image generation websites and apps that are especially popular among junior high and high school students. Investigators plan to use their findings to support criminal inquiries and to develop education programs for youth, including delinquency prevention classes that explain why creating or sharing such images can be a crime and a serious harm to classmates.
Senior officials warn the scope of abuse is much larger than reported cases suggest. After reviewing recent complaints, one senior police official said the known incidents likely represent a small fraction of actual activity, since many victims do not come forward.
The legal gray zone in Japan
Legal scholars in Japan say the country’s framework has not kept pace with generative AI. Under current rules, if prosecutors cannot show that a real child was depicted or that a real child can be clearly identified, a deepfake may fall outside the national child sexual abuse material regime. That gap stands in contrast to the very real harm victims suffer when their faces or names are attached to explicit fabrications that spread in group chats or across social media.
Takashi Nagase, a lawyer and professor at Kanazawa University who has advised on internet policy, argues that the distinction between real and synthetic imagery no longer fits how the technology is used.
The current law was designed to protect real children, but generative AI has blurred the line between real and fake.
Some local governments have started to act. Tottori Prefecture revised its ordinance in 2024 to explicitly prohibit creating sexually explicit fake images using photographs of children. Advocates welcome local steps, but say national legislation is needed so enforcement does not vary by jurisdiction or falter when images are stored on foreign servers or shared anonymously. A central government working group has discussed youth internet use and potential measures for AI abuse, though progress has been slow.
Masaki Ueda, an associate professor of criminal law at Kanagawa University who studies obscenity regulations, says the legal framing should reflect the actual privacy and psychological harm experienced by children whose likeness is misused.
If sophisticated fake images spread online, it violates the privacy of the children whose photos are misused, even if they are fake. The psychological burden is significant, so deepfake images should be treated similarly to child pornography and legal regulations should be considered.
The technology that powers the abuse
Deepfakes are generated by machine learning models trained to synthesize lifelike images. Two families of tools dominate. Face-swap methods align and blend a target face into an existing video or photo. Text-to-image systems, often called diffusion models, generate an entire image from scratch given a written prompt and, if provided, a small set of reference faces. Offenders sometimes fine-tune a local model with a handful of pictures scraped from a target’s social media profile or taken from a school album, which makes the output look uncannily like the child.
These advances complicate detection and response. If an image is entirely synthetic, there is no original photograph to compare against. Watermarks and AI detection filters exist, but they are inconsistent and can be stripped out. The Internet Watch Foundation reported that in one month in 2023, a single dark web forum saw more than 20,000 AI-generated images posted, thousands of which broke UK law. Later updates warned that such content is increasingly spilling into the clear web, where unsuspecting users and victims may encounter it on open forums and image galleries. According to the IWF, some of the most convincing content is visually indistinguishable from real abuse, which increases the risk of re-victimization and overwhelms moderators and police. The foundation has published research on how AI is being abused to produce child sexual abuse imagery and has called for urgent legal and technical countermeasures.
The human cost for victims and schools
For a child, seeing their face pasted onto a sexual image can trigger fear, shame, and a sense of lost control. Even if classmates realize a picture is fabricated, the damage to reputation and mental health can be profound. School communities confront secondary harms as rumors spread, bullying escalates, and teachers scramble to respond to parents, student safety, and law enforcement.
Girls in particular describe changing how they engage online. In England, the Children’s Commissioner, Dame Rachel de Souza, has reported that some teens have stopped posting photos because they worry that nudification tools could weaponize those images, and she has urged lawmakers to outlaw apps that create explicit deepfakes of children, citing growing distress among teenage girls.
Tools using deepfake technology to create naked images of children should not be legal and I am calling on the government to take decisive action to ban them. They have no place in our society.
In Japan, lawyers say they are seeing more cases where teenage boys are investigated for sexual offenses tied to deepfake activity or explicit content shared in group chats. Specialists caution that many adolescents do not understand the legal stakes, or the harm to peers, until a police visit or a school disciplinary process makes it real. That gap underscores the need for clear rules, school-based education, and strong platform safeguards.
Global action and lessons for Japan
Other countries have moved to close loopholes. South Korea passed a bill in September 2024 that criminalizes not only the creation of deepfake sexual images of minors, but also viewing or possessing them. In the United States, a federal law enacted in May 2025 makes it illegal to post sexually explicit images or videos without the subject’s consent, and requires platforms to remove such content within 48 hours of a victim’s request. Many U.S. states have also updated laws so that AI-modified or AI-generated depictions of minors in sexual conduct qualify as illegal child sexual abuse material, even when no real child was present in the act depicted. That state-level activity is a response to earlier U.S. court rulings that treated purely virtual content differently from images made from real children.
In Europe, law enforcement is collaborating across borders. Europol coordinated Operation Cumberland, one of the first global crackdowns tied to AI generated child sexual abuse material. With support from 19 countries, police identified hundreds of suspects, made dozens of arrests, and searched homes to disrupt a platform distributing fully artificial images of minors. Investigators said AI now lets individuals with limited skills produce vast amounts of abuse content, and warned that traditional methods for identifying victims or tracing source material often fail when images are synthetic. Europol’s executive director Catherine De Bolle underscored the need for new investigative methods suited to AI produced content.
AI generated images can be created easily by individuals with criminal intent, which increases the volume of abuse material and makes it harder to identify offenders or victims.
Within Japan, police recently made the first arrests tied to selling AI-generated obscene posters of fictitious adult women through online auctions, signaling that existing obscenity laws can reach some AI abuse. One suspect reportedly earned about 10 million yen over a year. Child cases present additional hurdles, which is why scholars and child protection groups argue for explicit national rules covering AI-generated sexual depictions of minors, whether based on real photos or entirely synthetic.
What families, schools, and platforms can do now
While policymakers debate reforms, there are practical steps to reduce risk and respond faster when abuse occurs. These measures focus on consent, access to photos, reporting channels, and mental health support.
- Limit circulation of student photos. Schools can restrict the sharing of class albums and event galleries to verified guardians, use expiring links, and watermark images that leave official channels.
- Teach consent and digital ethics early. Integrate AI and media literacy into homeroom or technology classes, including frank discussion of deepfakes, privacy, and the harm these images cause.
- Set clear group chat rules. Student clubs and classes should establish codes of conduct for messaging apps, with defined consequences for sharing explicit or harassing content.
- Empower students to report quickly. Make it simple to alert a trusted adult, the school, and the platform when an abusive image appears. Fast reporting narrows the window of spread.
- Work with law enforcement. Even if the legal route is not obvious, share evidence with police so they can advise on defamation, privacy, or local ordinances that may apply.
- Engage platforms. Use built in tools to report nonconsensual sexual imagery and request takedowns. Keep records of links, timestamps, and usernames.
- Support the victim’s mental health. Offer confidential counseling and involve guardians promptly. Do not require the victim to confront peers to prove that an image is fake.
- Reduce the trail of personal data. Encourage students to keep accounts private, prune old posts, and avoid posting high-resolution face shots tied to school or location details.
- Adopt safer photo practices. When sharing team or class photos, consider wider shots, fewer close-ups of faces, and restricted redistribution permissions.
For tech companies, guardrails include robust filters for prompts and uploads, default blocks on nudification features, strong age verification, and rapid removal pathways that work across products. Transparency about known failures and documented fixes helps rebuild trust. A recent controversy over a mainstream chatbot that produced sexualized images of a 16-year-old celebrity shows how quickly a design gap can cascade into regulator warnings and public backlash.
Key Points
- Japanese police are seeing more reports that minors are creating and sharing AI sexual deepfakes of their classmates using school photos and group chat images.
- In the past year, more than 100 consultations and reports were filed nationwide, with at least 17 cases tied to school events and graduation albums.
- Prosecuting deepfakes is difficult under current national law, which focuses on real child imagery. Police often use other statutes such as defamation.
- The National Police Agency is investigating about 10 popular image generation sites and apps and plans to use findings for both criminal probes and youth education.
- Experts say the legal system has not kept pace. Scholars call for explicit national rules treating deepfake sexual images of minors similarly to illegal child sexual abuse material.
- Internationally, South Korea, the United States, the United Kingdom, and Australia have moved to regulate AI-generated sexual images, including fast removal duties and new offenses.
- Europol’s Operation Cumberland targeted a network distributing fully AI-generated child abuse content and highlighted investigative challenges when no real child is depicted.
- Schools, families, and platforms can reduce harms by tightening photo access, teaching AI literacy, enforcing chat rules, streamlining reporting, and providing counseling for victims.