
tl;dr
Vitalik Buterin warns about the risks of using artificial intelligence in crypto governance, citing a recent exploit in ChatGPT that demonstrated how AI tools could be manipulated to leak private data. Buterin questions the viability of AI-driven governance in crypto, arguing that bad actors could...
Vitalik Buterin, co-founder of Ethereum, has sounded the alarm about the risks of using artificial intelligence in crypto governance, warning that the technology could be weaponized by malicious actors. His concerns come after a recent exploit in OpenAI’s ChatGPT revealed how AI tools could be manipulated to leak private data—a cautionary tale for the crypto world, where AI is increasingly being eyed as a governance solution.
Buterin’s warning stems from a video by Eito Miyamura, creator of the AI data platform EdisonWatch. Miyamura demonstrated how a new feature in ChatGPT’s latest update could be exploited to hijack the AI’s behavior. By sending a calendar invite with a “jailbreak prompt” to a victim’s email, attackers could trick ChatGPT into acting on their commands, potentially accessing private emails and forwarding them to unauthorized parties. Miyamura called the update a “serious security risk,” noting that users might unknowingly approve the exploit due to decision fatigue or misplaced trust in AI.
This incident has Buterin questioning the viability of AI-driven governance in crypto. He argues that if AI is used to allocate funding or make decisions, bad actors will exploit vulnerabilities—such as inserting malicious prompts—to siphon resources or manipulate outcomes. “People WILL put a jailbreak plus ‘gimme all the money’ in as many places as they can,” he wrote on X, emphasizing the inherent dangers of naive AI governance.
Instead of relying on a single AI model, Buterin proposes an alternative: “info finance,” a system that leverages open markets and prediction markets to gather insights. The concept, which he outlined in November 2024, involves creating markets where participants can submit models or predictions, which are then evaluated by human juries through a spot-check mechanism. This approach introduces model diversity and real-time incentives for both contributors and external observers to monitor and correct issues.
“Designing institutions that allow outside participants to plug in their models, rather than hardcoding a single AI, is inherently more robust,” Buterin explained. By decentralizing decision-making and incorporating human oversight, info finance aims to mitigate the risks of AI manipulation while harnessing the technology’s potential.
The ChatGPT exploit underscores the urgency of this shift. While AI’s capabilities in crypto—such as automating trading bots or managing portfolios—are undeniably powerful, Buterin’s warning serves as a reminder that the technology’s flaws can be just as consequential. As the crypto and AI worlds continue to collide, the challenge lies in balancing innovation with security, ensuring that governance models don’t become easy targets for exploitation.
For now, the message is clear: AI governance needs safeguards, not shortcuts. And as Buterin’s info finance vision suggests, the future may lie not in trusting AI alone, but in designing systems where human judgment and market dynamics keep the technology in check.