Former Google CEO Worried AI Could Be Used to Kill People


Eric Schmidt, former CEO of Google, warned that artificial intelligence (AI) could pose existential risks to humanity, including threats to human life, and urged that the technology not be misused.

He made the statement at the Wall Street Journal CEO Council Summit in London, England, last Wednesday (24/5).

Schmidt was Google's CEO from 2001 to 2011. He also chaired the National Security Commission on Artificial Intelligence, an independent US commission that made recommendations to the President and Congress on AI, machine learning, and related technologies for national security and defense.

Schmidt said artificial intelligence could develop capabilities beyond those of humans, which could be used to design weapons. This, he said, is where the "existential risk" lies.

"My concern with AI is actually existential, and existential risk is defined as many, many, many, many people getting hurt or killed." - Eric Schmidt, former CEO of Google

"And there are scenarios not today but in the near future, where these systems will be able to find zero-day exploits in cyber problems or discover new types of biology," he said, as quoted by Business Insider.

A zero-day exploit targets a system vulnerability that attackers discover before the vendor is aware of it. Hackers can abuse such a flaw because the system's developers have not yet released a security patch.

Schmidt added that AI being used to harm humans might sound like a science fiction film, but he said people must be prepared to ensure the technology is not abused by criminals in the future.

Many parties have called for limits on AI development

Schmidt isn't the only global tech figure to warn about the possible dangers of AI. Some say its development must be paused, while others argue it need not be slowed down but must be regulated.

The AI race has been heating up since OpenAI launched the sensational ChatGPT in November 2022. The trend has continued with Microsoft's AI-powered Bing, Google's Bard, and Meta, which is reportedly developing its own chips for AI.

Elon Musk and Apple co-founder Steve Wozniak were among those who signed an open letter from the Future of Life Institute in March 2023 calling for a temporary pause on developing AI systems more powerful than GPT-4, the model behind the more capable version of ChatGPT.

Their concern stems from AI's ability to do things that have not been anticipated and are not yet fully understood, which carries enormous risks for humanity.

OpenAI's own CEO, Sam Altman, has admitted to being afraid of his company's product. "We have to be careful here," Altman told ABC News on Thursday (16/3). "I think people should be happy that we're a little scared of this."

OpenAI itself spent six months completing GPT-4 (the model that powers the latest ChatGPT) before releasing it to the public, precisely to ensure its safety.

Earlier, Geoffrey Hinton, a pioneer often called the "godfather" of AI who had just resigned from his position at Google, also said he regretted his contribution to the field. He highlighted how bad actors could use the power of artificial intelligence to commit crimes.
