AI a ‘complicated new part of our world … a newer form of surreptitious behaviour’: police
Collège Béliveau is dealing with the dark side of artificial intelligence after AI-generated nude photos of underage students were discovered being circulated at the Winnipeg school.
An email sent to parents Thursday afternoon said school officials learned late Monday that doctored photos of female students at the grades 7-12 French immersion school were being shared online, and that school officials have contacted police.
“We are grateful for and proud of the students who came forward to bring this to our attention,” the letter said.
The mother of one girl whose altered photos were among those circulated told CBC News that she hopes the person responsible is held accountable. She questioned why artificial intelligence companies allow this to happen.
Another parent, who has a son in Grade 12 at Collège Béliveau, said she was shocked by the news.
“I think it’s scarier for girls than it is for guys — that’s just my opinion,” said Noni Kopczewski.
“I have three boys, but I think my best advice to my kids is always being respectful.… Put yourself in someone else’s shoes. This is hurtful and the damage that it causes is long-term.”
Kopczewski said there’s been discussion about artificial intelligence in her household, including the risks that come with using it.
“It’s becoming a lot more scary out there — we’re educating ourselves a lot more,” she said.
‘Uncharted territory’: police
The school said the original photos appear to have been gathered from publicly accessible social media and were then explicitly altered.
Officials did not say how many photos they believe were shared or how many girls were victimized.
The school is “investigating to get a better understanding of the extent of what happened and who was involved,” and officials are “taking necessary steps to respond to the actions of identified individuals who shared these images,” the email to parents said.
“While we cannot assume that we have evidence of all the doctored photos, we will directly contact the caregivers of those students for whom we do.”
The school, in the Windsor Park neighbourhood, has just under 600 students, its website says.
School officials have also been in contact with Cybertip.ca, a tip line for online abuse that’s operated by the Canadian Centre for Child Protection in Winnipeg.
Images received by the school will be uploaded to Cybertip’s Project Arachnid, which can help get them deleted.
Const. Dani McKinnon said the police service’s counter-exploitation unit is investigating, but it is still too early in an active investigation to provide more details.
“AI is a nuanced, complicated new part of our world,” she said. “It’s certainly a newer form of surreptitious behaviour. We’re in uncharted territory.”
No charges have been laid at this time.
‘This problem is going to get worse’
School officials said support teams are available for any students directly or indirectly impacted by what happened.
The exploitation of young people through imagery created without their knowledge in photo-editing programs, then shared online, has been a troubling issue for years, said Cybertip’s director.
But “AI has really accelerated that,” said Stephen Sauer.
While there have been cases of AI being used to create child sexual abuse material in Canada, including one in Quebec, Sauer is not aware of a prior case in this country where teens at a school were victimized by other teens in this way.
It is not clear at this point whether the person responsible is a fellow student. Police and the school have not commented on that.
Sauer also couldn’t speak specifically about the investigation at Collège Béliveau but said the people who create explicit imagery often don’t understand the consequences and the long-term impacts.
“Even if this material is created because people think it’s funny or they think it’s a slightly malicious [act] … it can still be considered child pornography or child sexual abuse material under the law,” he said.
“This material can come back to haunt that victim later on in other stages of their life if it’s been shared online along with personal information.”
Project Arachnid — a web crawler that scans the internet for known images of child sexual abuse and issues notices to companies to remove them — can reduce the spread of child sexual abuse material online, but the key is to get to it quickly before it is more widely distributed, Sauer said.
“Certainly there is no silver bullet out there, but what it does is that it provides notices to companies to let them know when material is being posted to their service … so that they can keep their networks clean,” he said.
Maura Grossman, a research professor in the school of computer science at the University of Waterloo who has been studying the real-world implications of AI-generated images, also said this would be the first known case in Canada of students making such deepfakes of other students.
There have been a few recent cases in the United States, in New Jersey and Seattle, “but I had not heard it had come to Canada,” she said.
“It’s rather alarming and it’s not hard to do, and you can do it for free. There are many online sites.”
Although the ability to swap faces onto other bodies has existed for years, the technology used to be poor. Now, anyone can create convincing images, Grossman said.
The hard part now is figuring out how to control it.
“You’re dealing with people all over the world. It’s very hard to get jurisdiction over that person in a legal matter,” she said.
“It’s extremely challenging, and I just think this problem is going to get worse, not better.”