
Expert raises the alarm on AI’s threats

Yoshua Bengio, a renowned computer scientist and AI pioneer, has expressed concern about the potential detrimental effects of artificial intelligence on society.

According to CNBC, Bengio has called for more research to mitigate these risks while continuing his pioneering work in deep learning, a field that uses artificial neural networks, loosely inspired by the brain, to discover complex patterns in data.

He highlighted AI's potential dangers, warning that some powerful individuals may even want to see humans replaced by machines.

He emphasized the importance of addressing these risks through further research and careful assessment.

"It's really important to project ourselves into the future where we have machines that are as smart as us on many counts, and what would that mean for society," Bengio told me.

The prominent computer scientist warned that machines could soon match most human cognitive capabilities, pointing to artificial general intelligence (AGI), a form of AI intended to equal or surpass human intelligence.

"Intelligence provides power. So who's going to wield that power?" he inquired. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

He warned that only a handful of corporations and countries will have the means to build the most sophisticated AI systems.

He also observed that as these systems grow in scale, so does their intelligence.

"These machines require billions of dollars to build and train, and only a few corporations and countries will be able to do so. "That is already the case," he stated. "Power will be concentrated: economic power, which can be terrible for markets; political power, which can be bad for democracy; and military power, which can be disastrous for our planet's geopolitical stability. So, there are many unresolved problems that we must carefully investigate and address as quickly as possible."

He believes such outcomes are within reach in the coming decades. "If it's five years, we're not prepared... There are no procedures to ensure that these systems do not harm or turn against people. We don't know how to do that," he explained.

Bengio noted that current AI training methods could produce systems that eventually turn against humans.

"In addition, some people may seek to misuse that authority, while others may be content to see humanity replaced by computers. I mean, it's on the fringe, but these guys have a lot of power, and they can use it unless we put in place the proper safeguards right now," he said.

In June, Bengio endorsed an open letter titled "A Right to Warn About Advanced Artificial Intelligence," signed by current and former employees of OpenAI, the company behind ChatGPT.

The letter voiced concerns about the "serious risks" of AI development and called for collaboration among scientists, governments, and the general public to address them. OpenAI has faced growing safety concerns, and its "AGI Readiness" team was disbanded in October.

"The first thing governments need to do is have regulation that forces [companies] to register when they build these frontier systems that are like the biggest ones, that cost hundreds of millions of dollars to be trained," Bengio, of CNBC, said. "Governments should know where they are, you know, the specifics of these systems."

Because AI is advancing so quickly, Bengio said, governments must "be a bit creative" and craft regulation that can adapt to technological change.
