Superintelligence could arrive within the next 10 years; OpenAI establishes a dedicated team to develop methods to guide and control it
OpenAI has announced the formation of a new team, “Superalignment”, dedicated to developing methods for guiding and controlling superintelligent AI, to prevent the ever-advancing technology from adversely affecting human life.
The new team will be co-led by OpenAI co-founder and Chief Scientist Ilya Sutskever together with Head of Alignment Jan Leike, although no specific strategies have been revealed yet.
OpenAI conjectures that a “superintelligence”, an AI surpassing human intelligence, could materialize within the next decade. The advent of such technology would not necessarily improve human life, hence the urgent need to establish control and limitation mechanisms beforehand.
Sutskever asserts that, as AI technology advances relentlessly, humans may ultimately be unable to control an AI that grows more intelligent than its creators, underscoring the need to develop countermeasures immediately.
Similar sentiments have been echoed by Geoffrey Hinton, the British-Canadian computer scientist and cognitive psychologist known as the “Godfather of AI”, who has frequently cautioned against underestimating AI’s impact and warned that misuse of the technology could exact a hefty toll.
Under OpenAI’s plan, the team led by Sutskever will have access to 20% of the computing resources OpenAI has secured to date in order to construct effective methods for managing AI. The team will bring together OpenAI’s scientists and engineers along with external researchers, aiming to deliver effective AI alignment and management techniques within the next four years.
OpenAI believes that the “thought processes” of future AI will surpass those of humans in speed, enabling broader research applications and allowing AI to take over an increasing number of tasks, with humans and AI working side by side. To achieve this scenario, however, humans must first resolve outstanding issues such as AI bias and computational loopholes.