Artificial intelligence (AI) has seeped into every aspect of modern life, from “intelligent” vacuum cleaners and self-driving cars to advanced methods for diagnosing diseases.
While promoters say it is revolutionising the human experience, critics warn that the technology risks handing life-changing decisions over to machines, a concern that has regulators in Europe and North America worried.
The European Union is likely to pass legislation next year, the Artificial Intelligence Act, which aims to rein in the age of the algorithm.
The United States recently published a blueprint for an AI Bill of Rights, while Canada is also mulling similar legislation.
China’s use of biometric data, facial recognition and other technology to build a powerful system of control looms large in the debates.
Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West was also in danger of creating “totalitarian infrastructures”.
“I see that as a huge threat, no matter the benefits,” she told AFP.
But before regulators can act, they face the daunting task of defining what AI actually is.
A futile attempt
Suresh Venkatasubramanian of Brown University, who co-authored the AI Bill of Rights, said trying to define AI was a futile exercise.
Any technology that affects people’s rights should be within the scope of the bill, he tweeted.
However, the 27-nation EU is taking the more tortuous route of attempting to define the sprawling field.
Its draft law lists the kinds of approaches defined as AI, covering pretty much any computer system involving automation. The problem lies in the shifting use of the term AI itself.
For decades, it described attempts to create machines that simulated the human thought process, but funding for this research largely dried up in the early 2000s.
The rise of the Silicon Valley titans then saw AI reborn as a catch-all label for their number-crunching programs and the algorithms they generated.
This automation allowed them to target users with advertising and content, enabling them to make hundreds of billions of dollars.
“AI was a way for them to make more use of this surveillance data and to mystify what was happening,” Meredith Whittaker, a former Google worker who co-founded New York University’s AI Now Institute, said.
So the EU and US have both concluded that any definition of AI needs to be as broad as possible.
However, from that point, the two Western powerhouses have primarily gone their separate ways.
A massive challenge
The EU’s draft AI Act is more than 100 pages long, and one of the most eye-catching proposals is the complete prohibition of certain “high-risk” technologies such as the biometric surveillance tools used in China.
It also drastically limits the use of AI tools by migration officials, police and judges.
Hasselbalch said some technologies were “simply too challenging to fundamental rights”.
On the other hand, the AI Bill of Rights is a brief set of principles framed in aspirational language, with exhortations like “you should be protected from unsafe or ineffective systems”.
The bill was issued by the White House and relied on existing law.
Experts reckon no dedicated AI legislation is likely in the United States until 2024 at the earliest because Congress is deadlocked.
A temporary fix
Opinions differ on the merits of each approach. “We desperately need regulation,” said Gary Marcus of New York University.
He added that “large language models”, the AI behind chatbots, translation tools and predictive text software, can be used to generate harmful disinformation.
Whittaker questioned the value of laws aimed at tackling AI rather than the “surveillance business models” that underpin it.
“If you’re not addressing that at a fundamental level, I think you’re putting a band-aid over a flesh wound,” she said.
Other experts have broadly welcomed the US approach. Sean McGregor, a researcher who chronicles tech failures for the AI Incident Database, said AI was a better target for regulators than the more abstract concept of privacy.
But he warned of a risk of over-regulation, saying, “The authorities that exist can regulate AI,” and pointing to the likes of the US Federal Trade Commission and the housing regulator HUD.
Whatever the approach, experts broadly agree on the need to strip away the hype and mysticism surrounding AI technology.
“It’s not magical,” McGregor said, likening AI to a highly sophisticated Excel spreadsheet.