“Pics or it didn’t happen.”
For a generation of young people, that refrain made the stakes clear: if you were going to make an outrageous or surprising claim, you had to have photographic proof to back it up.
Times have changed, especially since realistic AI-generated images arrived on the scene. Now, schools and the general public have to contend with non-consensual deepfakes that blur the boundaries between truth and fabrication.
This technology is in its infancy, and already the stakes are enormous. For schools, these artificially generated depictions weaponize technology in ways that can test a school’s commitment to providing a safe learning environment. For students, a single targeted deepfake can lead to humiliation, bullying, and profound mental health impacts. Even students who are never targeted directly can find that AI-generated imagery blurs their understanding of reality, making them more susceptible to misinformation.
Schools still have a lot to learn about AI-generated imagery. But as the whole school community gets up to speed, school leaders and educators need to confront another challenge: deepfakes in schools aren’t just a technology issue; they’re a human issue, too.