Experts Seek Ways to Make AI Forget What It Learned

When Australian politician Brian Hood discovered earlier this year that ChatGPT had falsely attributed a criminal past to him, he faced a dilemma engineers have been working hard to solve: how to teach AI to erase its mistakes.

Filing a defamation lawsuit in April 2023 against OpenAI, ChatGPT’s creator, did not seem like the right solution to Hood. Nor did completely retraining the AI model, a lengthy and costly process.

Experts believe “unlearning,” making AI forget some of what it has been taught, will be crucial in the coming years, especially given European data protection laws.

“The ability to scrub data from AI training sets is a significant issue,” says Lisa Given, professor of information sciences at RMIT University in Melbourne. However, she believes the process will require considerable work, given how little is currently understood about how AI models operate.

Given the huge amounts of data used to train AI, engineers are seeking more targeted ways to remove incorrect information from AI systems’ knowledge and stop it from spreading.

The issue has gained momentum over the past three to four years, explains AI unlearning expert Miqdad Karmanji of Warwick University in the UK.

In November 2023, Google’s AI lab DeepMind, working with Karmanji, published an unlearning algorithm tailored to large language models such as ChatGPT and Google’s Bard.

Adjusting Some Biases

More than 1,000 experts joined a competition DeepMind ran from July to September 2023 to improve AI unlearning techniques.

Like other research in the area, the method they sought to develop inserts an algorithm that instructs the AI to disregard certain learned information without modifying the underlying database.
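The idea of changing what a model “knows” without touching its training data can be illustrated with a deliberately toy sketch: gradient-ascent unlearning on a tiny logistic model. This is a common baseline from the unlearning literature, not the DeepMind/Karmanji algorithm, and every number and variable here is illustrative.

```python
import math

# Toy unlearning sketch (an illustrative gradient-ascent baseline,
# NOT the DeepMind algorithm): train a tiny logistic model, then make
# it "forget" one training point by running gradient *ascent* on that
# point's loss. The model's weights change; the stored data does not.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, b, x, y):
    """Gradient of the cross-entropy loss for one (x, y) example."""
    p = sigmoid(w * x + b)
    return (p - y) * x, p - y

data = [(1.0, 1), (2.0, 1), (-1.0, 0), (3.0, 1)]  # (feature, label)
forget_x, forget_y = 3.0, 1                        # example to unlearn
w, b, lr = 0.0, 0.0, 0.1

for _ in range(200):                  # ordinary training: gradient descent
    for x, y in data:
        gw, gb = grad(w, b, x, y)
        w, b = w - lr * gw, b - lr * gb

before = sigmoid(w * forget_x + b)    # confidence on the forget point

for _ in range(50):                   # unlearning: gradient ascent,
    gw, gb = grad(w, b, forget_x, forget_y)  # on the forget point only
    w, b = w + lr * gw, b + lr * gb

after = sigmoid(w * forget_x + b)
# `after` is strictly smaller than `before`: the model is now less
# confident about the example it was told to forget, even though the
# dataset itself was never edited.
```

In practice, an unlearning method must also preserve the model’s accuracy on everything it is supposed to keep, which is part of what made evaluating the competition entries non-trivial.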

Karmanji says this method can be “a significant tool” in enabling search tools to respond to deletion requests under personal data protection rules.

He says the algorithm has also successfully removed copyright-protected content and adjusted some biases.

However, others, such as Meta’s AI chief Yann LeCun, seem less convinced of the method’s effectiveness. While he does not label the algorithm as “hopeless, dull or ruthless,” he believes there are “other priorities.”

Michael Rovatsos, professor of artificial intelligence at the University of Edinburgh, cautions that “the technical solution is not accurate.” He explained that unlearning does not address broader questions about how data is collected, who benefits from it, and who should take responsibility for harm caused by AI.

Even though Hood’s case was quietly resolved after wide media coverage led to ChatGPT’s data being corrected, Rovatsos believes such manual fixes will remain necessary for now.

“Users should double-check everything when chatbots provide false information,” says Hood.

Al Jundi
