Industry Chiefs and Experts Warn of Risk of Extinction from Artificial Intelligence Technology


Key Highlights:

1. The signatories warn of the potential dangers of artificial intelligence technology, up to and including the extinction of the human race.
2. They urge leaders around the world to treat these dangers as a global priority.
3. Their specific concerns include chatbots flooding the internet with disinformation, biased algorithms churning out racist material, and AI-powered automation laying waste to entire industries.




     In a joint statement released on Tuesday, a group of industry chiefs and experts warned that global leaders should be working to reduce "the risk of extinction" from artificial intelligence technology. The statement, signed by dozens of specialists, including Sam Altman of OpenAI, highlighted the need to tackle the risks posed by AI alongside other societal-scale risks such as pandemics and nuclear war.

     The statement comes amid growing concerns over the potential dangers of AI technology. Late last year, OpenAI's ChatGPT bot demonstrated an ability to generate essays, poems and conversations from the briefest of prompts, sparking a gold rush of investment into the field. Critics and insiders have raised the alarm, citing the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.

     At the heart of the concerns is the possibility that machines could become capable of performing wide-ranging tasks and rewriting their own programming, resulting in a so-called artificial general intelligence (AGI). Experts warn that humans could then lose control over superintelligent machines, with potentially disastrous consequences for the species and the planet.

     The statement, hosted on the website of the US-based non-profit Center for AI Safety, was signed by several of the industry's leading specialists, including Geoffrey Hinton, who created some of the technology underlying AI systems. It comes two months after Tesla boss Elon Musk and hundreds of others issued an open letter calling for a pause in the development of such technology until it could be shown to be safe.

     However, Musk's letter drew widespread criticism that its dire warnings of societal collapse were hugely exaggerated and often echoed the talking points of AI boosters. US academic Emily Bender, who co-wrote an influential paper criticizing AI, said the March letter was "dripping with AI hype."

     Critics have also slammed AI firms for refusing to publish the sources of their data or reveal how it is processed—the so-called "black box" problem. Among the criticisms is that the algorithms could be trained on racist, sexist or politically biased material.

     Sam Altman, who is currently touring the world in a bid to help shape the global conversation around AI, has hinted several times at the global threat posed by the technology his firm is developing. He defended his firm's refusal to publish the source data, saying critics really just wanted to know if the models were biased.

     The joint statement by industry chiefs and experts serves as a stark reminder of the potential risks posed by artificial intelligence technology, and of the need for global leaders to prioritize tackling them. With billions of dollars of investment pouring into the field, it is essential that the development of AI technology proceeds safely and responsibly.



Continue Reading at Source: gmanetwork_ph