Researchers from OpenAI, DeepMind, and elsewhere jointly call for attention to AI safety

Following the recent open letter from Elon Musk, Steve Wozniak, and others urging AI labs to pause the training of more powerful AI systems for at least six months, the non-profit Center for AI Safety (CAIS) has released a statement signed by numerous AI experts. It argues that mitigating the potentially catastrophic risks posed by AI should be a global priority.

The declaration places the potential risks of AI on a par with pandemics and nuclear war, underscoring how far-reaching the technology's consequences could be.

OpenAI and DeepMind leaders among the signatories

Signatories include OpenAI co-founders Ilya Sutskever and John Schulman, CEO Sam Altman, and several OpenAI researchers, as well as DeepMind co-founder Shane Legg, CEO Demis Hassabis, numerous DeepMind researchers, and several university professors specializing in related fields.

The statement groups the potential risks of AI into eight categories, including the weaponization of AI, the spread of misinformation, the manipulation of individual and societal values, human enfeeblement, loss of control, and the concentration of AI power in the hands of a few. Its aim is to build a broader consensus on safer paths for the development and deployment of AI.