The race to develop artificial intelligence (AI) is intensifying, and those involved are increasingly voicing concerns about its potential risks. Recently, a group of AI experts and public figures came together through the Center for AI Safety (CAIS), an organization that aims to help policymakers, business leaders, and society at large manage AI risks.
CAIS published a ‘Statement on AI Risk’, urging that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
While the overall sentiment is hard to argue with, the statement’s usefulness is questionable and its purpose vague. It does not explain what “mitigating the risk of extinction” would actually involve, and it is far from clear whether the statement will have any real impact on AI development.
Among the signatories are the CEOs of major AI companies such as OpenAI and Google DeepMind, which suggests a possible conflict of interest. Furthermore, the source of CAIS’s funding remains unclear: the organization states only that it is a nonprofit supported by private contributions. This lack of transparency fuels speculation that the AI industry, which has an interest in projecting an exaggerated sense of oversight, may be influencing the organization.
The world has proved ineffective at mitigating societal-scale risks, as the Covid pandemic and the recent geopolitical tensions involving Russia, Ukraine, and the US demonstrate. The same challenges are likely to apply to managing AI risk. The telecoms industry, with its adherence to global standards and transparent governance, offers a model the AI industry could perhaps adopt.
However, with a technological cold war under way between the US and China, and with AI’s military significance growing, a global consensus on AI seems unlikely. The CAIS statement may serve a limited purpose by reinvigorating public debate on AI risk and by prompting greater transparency and collaboration in the field. In the meantime, the push to develop AI technologies continues to gain momentum, making vigilance and responsible development all the more necessary.