Artificial intelligence specialist Gary Marcus has spent recent months warning colleagues, lawmakers, and the public about the risks of the rapid development and spread of new AI tools, but he considers the risk of human extinction to be “exaggerated.”
The professor emeritus at New York University, who came to California to attend a conference, explained: “Personally, I am not very worried about this at the moment, because the scenarios are not very realistic.”
He continued, “What concerns me is building artificial intelligence systems that we cannot control.”
Gary Marcus wrote his first artificial intelligence program in high school: a tool for translating texts from Latin into English. After years spent studying child psychology, he founded a machine-learning company that was later acquired by Uber.
In March, he was among the hundreds of experts who signed an open letter calling for a six-month pause in the development of AI systems more powerful than ChatGPT, the chatbot launched by the startup OpenAI, to ensure that the programs already in use are reliable, safe, transparent, and aligned with human values.
However, he did not sign the brief statement issued at the end of May by a group of prominent executives and experts, which attracted wide attention.
The signatories of that statement, who called for addressing the risk of human extinction posed by artificial intelligence, include Sam Altman, CEO of OpenAI, the company behind ChatGPT; Geoffrey Hinton, a former Google engineer and one of the founding fathers of AI; Demis Hassabis, CEO of DeepMind, Google’s AI research company; and Kevin Scott, Microsoft’s chief technology officer.
“A bidding war”
The tremendous success of ChatGPT, which can produce any kind of text on demand, has set off a race among the technology giants over generative artificial intelligence, while also prompting numerous warnings and calls for regulation of the field.
Even developers working toward artificial general intelligence, with cognitive capabilities comparable to those of humans, have voiced concerns.
Gary Marcus said, “If you genuinely believe that there is an existential risk, then why are you working on it in the first place? That is a logical question.”
He argued that bringing about human extinction would in reality be extremely complicated, and that while we can imagine all kinds of disasters, there would always be survivors. Still, he stressed that there are credible scenarios in which artificial intelligence could cause enormous harm.
He added: “Some people might, for example, succeed in manipulating markets. We might accuse the Russians and attack them even though they had nothing to do with it, and we could slide into an arms race that threatens to escalate into nuclear war.”
“A totalitarian system”
In the nearer term, Gary Marcus is worried about democracy. Generative AI software is increasingly adept at producing fake images, and soon realistic videos, at low cost. As a result, he believes, “elections will be won by those best at spreading disinformation, and the winners may then change the laws and impose a totalitarian system.”
He emphasized that “democracy rests on having reliable information and making informed decisions. When no one can any longer tell what is true from what is false, it’s over.”
This does not mean, however, that the technology holds no promise, according to the expert, who authored the book “Rebooting AI.”
He said: “There is a chance that one day we will use an artificial intelligence system we have not yet invented to help us make progress in science, in medicine, in elder care. But for now we are not ready; we need regulation to make these programs more reliable.”
During his appearance before a committee in the US Congress in May, he called for the establishment of a national or international agency entrusted with the governance of artificial intelligence.
The project is also backed by Sam Altman, who recently returned from a European tour during which he urged political leaders to strike the right balance between regulation and innovation.
However, Gary Marcus warned against leaving authority over the matter in the hands of the companies: “The past few months have reminded us how much they make major decisions without necessarily taking the side effects into account.”