In August 2024, Korean media reported that students at more than 500 schools — middle schools, high schools, and universities — had been generating sexually explicit deepfake content of classmates and teachers using free generative-AI tools. The targets were almost entirely women and girls. Distribution ran through Telegram channels organized school by school.
The legal response sat uneasily on existing guidelines. The Sexual Violence Punishment Act and the Act on the Protection of Children and Juveniles Against Sex Offenses both applied, and both permitted per-victim, per-incident charging. But the Sentencing Commission's published ranges were calibrated for offenses that took meaningful effort to commit; deepfake generation took seconds, and a single student could create dozens of victims in an evening. Should the guideline scale linearly with victim count? The Commission's published methodology did not say.
The outcomes that emerged in late 2024 and early 2025 were dominated by suspended sentences for student offenders, with courts citing youth, absence of prior offenses, family environment, and the 'novelty of the technology.' Civil society organizations argued that 'novelty' was being treated as a mitigator when it was actually a force multiplier — that the same conduct, done with paint and a brush, would have been prosecuted as repeated offenses against many victims.
The deeper question for sentencing is what to do when a technology lowers the marginal cost of harm faster than the law can adapt its proportionality calculus. Korea's deepfake response is one early test case. The pattern that emerged — youth, technology, and structural lenience for in-school perpetrators against in-school victims — is one the law will have to revisit if it wants to honor the proportionality principle in a world where the same offense can be repeated thousands of times by the same actor in the same hour.