tl;dr
Ethereum co-founder Vitalik Buterin has raised concerns about the risks of superintelligent AI and the need for a robust defense mechanism. He advocates for decentralized AI systems closely linked to human decision-making to mitigate catastrophic outcomes. Buterin proposes liability for users, implementing "soft pause" buttons, and regulating AI hardware to control its functionality. He acknowledges that these strategies are temporary measures.
In a blog post dated January 5, Buterin outlined his concept of "d/acc," or defensive acceleration: the idea that technology should be developed to defend rather than to cause harm. This is not the first time Buterin has spoken publicly about the risks associated with artificial intelligence.
"One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction," Buterin said in 2023. He has now followed up on those warnings. According to Buterin, superintelligence may be only a few years away. "It's looking likely we have three-year timelines until AGI and another three years until superintelligence. And so, if we don't want the world to be destroyed or otherwise fall into an irreversible trap, we can't just accelerate the good, we also have to slow down the bad," Buterin wrote.
To mitigate AI-related risks, Buterin advocates building decentralized AI systems that remain tightly coupled to human decision-making. By keeping AI a tool in human hands, he argues, the threat of catastrophic outcomes can be minimized.

Buterin also identified militaries as likely responsible actors in an "AI doom" scenario. Military use of AI is rising globally, as seen in Ukraine and Gaza, and Buterin expects that any AI regulation that comes into effect would most likely exempt militaries, which makes them a significant threat.

The Ethereum co-founder then outlined a layered plan for regulating AI usage. The first step in avoiding AI-related risks, he said, is to make users liable. If liability rules prove insufficient, the next step would be "soft pause" buttons that allow regulators to slow the pace of potentially dangerous advancements; such a pause could be enforced through AI location verification and registration. A further measure would target AI hardware itself: Buterin suggested equipping AI chips with a controller that would allow the system to function only if it receives three signatures from international bodies each week, at least one of them from a non-military body.

Nevertheless, Buterin admitted that his strategies have holes and are only "temporary stopgaps."
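To make the hardware idea concrete, the weekly three-signature gate could be modeled as a simple check: the chip runs only if it holds valid signatures for the current weekly epoch from at least three registered bodies, one of them non-military. The sketch below is purely illustrative; the body names are invented, and an HMAC over the epoch number stands in for whatever real signature scheme such a proposal would use.

```python
import hmac
import hashlib
import time

# Hypothetical registry of signing bodies: name -> (shared_key, is_military)
REGISTRY = {
    "body_a": (b"key-a", False),
    "body_b": (b"key-b", True),
    "body_c": (b"key-c", False),
}

WEEK = 7 * 24 * 3600  # signatures are tied to a weekly epoch


def sign(body: str, epoch: int) -> bytes:
    """A body 'signs' the weekly epoch (HMAC stands in for a real signature)."""
    key, _ = REGISTRY[body]
    return hmac.new(key, str(epoch).encode(), hashlib.sha256).digest()


def chip_allows_operation(signatures: dict, now: float) -> bool:
    """Allow the AI system to run only if there are >= 3 valid signatures
    for the current weekly epoch, at least one from a non-military body."""
    epoch = int(now // WEEK)
    valid = 0
    has_civilian = False
    for body, sig in signatures.items():
        if body not in REGISTRY:
            continue
        key, is_military = REGISTRY[body]
        expected = hmac.new(key, str(epoch).encode(), hashlib.sha256).digest()
        if hmac.compare_digest(sig, expected):
            valid += 1
            has_civilian = has_civilian or not is_military
    return valid >= 3 and has_civilian


now = time.time()
epoch = int(now // WEEK)
sigs = {b: sign(b, epoch) for b in REGISTRY}
print(chip_allows_operation(sigs, now))                        # → True
print(chip_allows_operation({"body_a": sigs["body_a"]}, now))  # → False
```

Requiring the signatures to cover the current epoch is what makes the pause "soft": if the bodies simply stop signing, every gated chip halts within a week without anyone issuing a kill command.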