An attempt by MPs to find out more about the RCMP’s use of controversial facial recognition software hit a wall on Monday as one MP accused officers of being “intentionally evasive”.
The Standing Committee on Access to Information, Privacy and Ethics met this morning to continue its study on the use of emerging technology in Canada.
Their efforts come a year after Federal Privacy Commissioner Daniel Therrien said the RCMP’s use of facial recognition software created by U.S. firm Clearview AI was a serious violation of privacy laws.
This software allows users to compare photos against a database of over three billion images.
In a heated series of questions on Monday, NDP MP Matthew Green pressed Gordon Sage, director general of the RCMP’s sensitive and specialized investigative services, to state who authorized the RCMP’s use of the software in 2018 and who oversaw the process.
“Can you please name your predecessor?” he asked.
After a back-and-forth exchange, Sage finally said that the official in question had since retired and he didn’t think he had the right to name them.
“I think you gave your predecessor more consideration in his right not to be named, in a situation which is truly public information in the public forum, than the billions of people whose images were compiled and analyzed by this AI technology,” Green said.
“What we have, what I have – I’ll speak for myself – is a significant trust issue.”
Warning for contempt of Parliament
Tory MP James Bezan later described the responses committee MPs received from the three RCMP officials called to testify as “intentionally evasive” – and reminded them they could be found in contempt of Parliament if they don’t cooperate.
“Some of the responses we have received today have been very limited, and I would suggest that witnesses exercise their responsibilities to this committee; those of us around the table enjoy parliamentary privilege and expect full answers,” he said.
“And one-word answers and being evasive does not help us fulfill our job as committee members.”
The RCMP initially denied using Clearview AI’s software in 2020. They later confirmed that they had used the software after news broke of the company’s client list hack.
In the same year, a New York Times investigation found that the software had pulled over three billion photos from public websites like Facebook and Instagram. The company then turned them into a database used by more than 600 law enforcement agencies in the United States, Canada and elsewhere.
The company stopped offering its facial recognition services in Canada after the launch of the federal privacy commissioner’s investigation. The RCMP said it has stopped using the software.
RCMP say they used the technology three times
Sage said on Monday that the force had used Clearview AI in three official instances: twice within the child exploitation unit and once to track a fugitive who was overseas.
“There were a lot of members testing the technology to see if it worked. They ran a lot of searches on their own photos, on their own profiles, to see if this technology worked. They took photos of celebrities and ran them through Clearview to see if it worked,” he said.
“In fact, by testing this technology, we realized that it was not always effective.”
Following a backlash over its use of Clearview AI technology, the RCMP has announced plans to be more transparent about how it approves and uses new technologies and investigative tools involving the collection and use of personal information.
The RCMP promises to publish this new policy by the end of June.
Monday’s committee meeting ended with members agreeing to ask RCMP Commissioner Brenda Lucki to attend.
“[The RCMP] has not, in my view, demonstrated the ability to have the kind of candour with civilian oversight bodies, such as the House of Commons, to provide basic information to Canadians who are concerned about their civil liberties,” Green said during questioning.
Last week, Therrien and his provincial counterparts released a statement calling on lawmakers to create rules that explicitly state when police can use facial recognition technology.
“No-go areas should include a ban on any use of facial recognition that could result in mass surveillance,” they wrote.
“Legislation should require that police use of facial recognition is both necessary and proportionate for any given deployment of the technology.”