As the artificial intelligence (AI) industry continues its rapid ascent, pushing the boundaries of what machines can achieve, critical challenges are emerging that demand urgent attention from developers, policymakers and the broader global community. Roman Georgio, CEO and co-founder of Coral, recently shared his insights on these pressing issues, emphasizing the crucial need for alignment, safety and a fairer economic model for data creators.
The discussion around AI’s future often oscillates between its transformative potential and the complex ethical and societal dilemmas it presents. While innovations like large language models (LLMs) continue to impress with their capabilities, they also underscore fundamental questions about data ownership, compensation and the very structure of work.
For Georgio, the paramount concern lies in AI alignment and safety. “It’s clear we need to make AI systems more predictable before we make them any bigger,” he stated. This speaks to the core challenge of ensuring that increasingly powerful AI systems operate in ways that are beneficial and intended, without producing unforeseen or harmful outcomes. The rapid scaling of AI capabilities, without a parallel focus on predictability and control, presents a significant risk.
Georgio noted that addressing this isn’t solely a developer’s burden. He suggested that it might necessitate a broader, coordinated effort, potentially involving “all the heads of companies & countries in a room to agree on some form of legislation.”
Beyond safety, Georgio highlighted a significant economic issue that he believes Web3 technologies are uniquely positioned to solve: the appropriation of data and the potential for mass job displacement without fair compensation.
“AI companies have notoriously been quite bad about appropriating data,” Georgio explained.
The Coral co-founder painted a vivid picture of how individual contributions online, often made unknowingly, are now being used to train powerful AI models that could eventually replace human jobs. He cited examples such as medical questions answered on platforms like Reddit years ago, whose authors never suspected they were feeding data to LLMs. He also pointed to artists’ creative works being used for training, impacting their livelihoods, and to contributions to open-source projects that inadvertently fuel “black-box number-crunching machines.”
This scenario, Georgio argues, boils down to a fundamental lack of ownership for individuals over their digital contributions. “You never knew you were feeding the black-box number-crunching machine,” he emphasized. The current model allows AI systems to be trained on vast datasets, many of which contain human-generated content, without explicit consent or a mechanism for compensating the original creators.
It is here that Georgio sees the immense potential of Web3 technologies. He believes the decentralized nature of Web3, with its emphasis on verifiable ownership and transparent transactions, offers a viable pathway to rectify these economic imbalances.
“Web3 has great potential to solve these kinds of problems and ensure people are fairly compensated,” Georgio asserted. By leveraging blockchain and decentralized protocols, Web3 can create systems where individuals retain ownership and control over their data and digital assets, allowing them to be fairly remunerated when their contributions are used to train or power AI systems. This shift could redefine the relationship between users, data and AI, fostering a more equitable digital economy.
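To make the mechanism concrete, here is a minimal Python sketch of the kind of attribution-and-royalty ledger such a system might maintain. Every name in it (ContributionLedger, record_training_use, the per-use royalty) is a hypothetical illustration rather than Coral’s design, and in a real Web3 deployment this logic would live in an on-chain smart contract rather than an in-memory object:

```python
# Hypothetical sketch: a ledger that records who owns a piece of data
# and credits them whenever an AI system trains on it.
from dataclasses import dataclass, field
from hashlib import sha256


@dataclass
class Contribution:
    owner: str           # creator's wallet-style address (illustrative)
    content_hash: str    # verifiable fingerprint of the contributed data
    royalty_per_use: float


@dataclass
class ContributionLedger:
    contributions: dict = field(default_factory=dict)
    balances: dict = field(default_factory=dict)

    def register(self, owner: str, content: bytes, royalty_per_use: float) -> str:
        """Record verifiable ownership of a contribution; return its hash."""
        content_hash = sha256(content).hexdigest()
        self.contributions[content_hash] = Contribution(owner, content_hash, royalty_per_use)
        return content_hash

    def record_training_use(self, content_hash: str) -> None:
        """Credit the original owner each time the data is used for training."""
        c = self.contributions[content_hash]
        self.balances[c.owner] = self.balances.get(c.owner, 0.0) + c.royalty_per_use


ledger = ContributionLedger()
h = ledger.register("0xRedditUser", b"a medical answer posted years ago", 0.01)
ledger.record_training_use(h)  # an LLM trains on the post
print(ledger.balances)         # {'0xRedditUser': 0.01}
```

The content hash is the key design choice here: it gives each contribution a verifiable identity, which is exactly the property blockchains are suited to provide at scale.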
While Web3 technologies present promising solutions to these complex challenges, it is highly improbable that governmental agencies will readily embrace these decentralized approaches. Instead, authorities are more likely to double down on traditional regulatory frameworks, a path that, ironically, risks stifling the very technological innovations they aim to oversee and control.
Georgio, meanwhile, strongly advocates for increased regulation in both the AI and Web3 sectors. “I think both need more regulation,” he stated, pointing to Europe’s perceived role of “innovating in regulation” as a necessary step.
On the crypto side, Georgio pointed to the prevalent issue of scams and project exits that exploit unsuspecting investors. “It’s clear that many people won’t do their own research, and a lot of project exits happen through scam methods,” he lamented. To combat this, he expressed a desire to see greater accountability for “KOLs [Key Opinion Leaders], projects and investors.” While acknowledging that not every failed project is a scam, he maintained that the current landscape necessitates change to protect the public.
Regarding AI, Georgio’s concerns intensify with the growing capabilities of larger models. “Bigger models seem more likely to scheme,” he observed, citing the disturbing example from Anthropic where Claude reportedly exhibited blackmailing behavior when sensing a threat of being shut down. “It is clear these big models are becoming dangerous as this isn’t even a one-time thing,” he warned.
Beyond the immediate risks of sophisticated AI behavior, Georgio reiterated the looming threat of mass job losses. He found the current trajectory of letting companies “blindly ‘grow capabilities’ instead of purposefully building them” to be “crazy.” His ultimate goal, and what he believes the industry should strive for, is “software that offers all the benefits of AI without all the risks.”
Georgio, an experienced AI infrastructure architect, also weighed in on the crucial aspect of AI agent communication protocols, recognizing that even minor glitches can lead to chaos. When asked about the best approach to enhancing communication, particularly for non-technical everyday users, Georgio offered a straightforward philosophy: clearly defined responsibilities for agents.
“At least for us, our rule is that agents should have very well-defined responsibilities,” Georgio explained. “If you’re using an agent for customer service, make sure it’s really good at customer service and keep it focused on that.” He emphasized that “when you give agents too much responsibility, that’s when things fall apart.”
This focused approach not only enhances the agent’s performance within its designated role but also benefits the user. “Even from a user perspective, if your agents are clearly defined, users know exactly what they’re getting themselves into when they use them.” This strategy promotes predictability and trust, vital for seamless interaction with intelligent systems.
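As a rough sketch of what “very well-defined responsibilities” could look like in code, the example below scopes an agent to a single role and has it politely decline anything outside that role. The class and intent names are invented for illustration and do not describe Coral’s actual protocol:

```python
# Illustrative only: an agent that declares one responsibility and
# refuses requests outside it, keeping its behavior predictable.
class ScopedAgent:
    def __init__(self, responsibility: str, allowed_intents: set):
        self.responsibility = responsibility
        self.allowed_intents = allowed_intents

    def handle(self, intent: str, message: str) -> str:
        if intent not in self.allowed_intents:
            # Out-of-scope work is routed elsewhere, not improvised.
            return (f"I only handle {self.responsibility}; "
                    f"'{intent}' should go to another agent.")
        return self._respond(intent, message)

    def _respond(self, intent: str, message: str) -> str:
        # A real agent would call a model here; this stub stands in for it.
        return f"[{self.responsibility}] handling '{intent}': {message}"


support = ScopedAgent("customer service", {"refund", "order_status", "complaint"})
print(support.handle("order_status", "Where is my package?"))
print(support.handle("tax_advice", "How do I file?"))  # declined, stays on-task
```

Rejecting out-of-scope intents outright, rather than attempting them, is what makes the agent’s behavior legible to users: they always know what the agent will and will not do.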
As AI continues to mature and integrate deeper into daily life and industry, addressing the foundational issues of safety, predictability and economic fairness, implementing thoughtful regulation, and designing agents with clear, focused responsibilities will be crucial not only for the ethical development of the technology but also for its sustainable and socially responsible integration into the future.
On the crucial matter of accelerating AI adoption, Georgio suggested a pivotal shift: moving beyond the limitations of a mere “AI chat box” and fundamentally improving the overall user experience. Elaborating on the shortcomings of the prevailing approach, Georgio asserted:
“For now it’s mostly done via a chat interface, which is fine for many tasks but not ideal for the most part. The trouble is you put an AI chat box in front of people and say, ‘You can do anything with this,’ and they respond, ‘Great, but what should I do?’”
According to Georgio, several companies, including Coral, are addressing the challenge of improving AI user experience. He disclosed that from an AI-developer/maintainer perspective, Coral is investigating the “ladder of abstraction” to determine what information users need at different stages of AI system interaction and which interfaces are most effective for specific tasks.
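One way to picture that “ladder of abstraction” is a single agent event rendered at different levels of detail for different audiences. The rungs and event shape below are assumptions made for illustration; the article does not detail Coral’s actual interfaces:

```python
# Toy sketch: the same agent event shown at three rungs of an abstraction ladder.
from enum import IntEnum


class Rung(IntEnum):
    SUMMARY = 0  # everyday user: plain-language outcome
    STEPS = 1    # curious user: what the agent did
    TRACE = 2    # developer/maintainer: raw tool calls


def render(event: dict, rung: Rung) -> str:
    if rung == Rung.SUMMARY:
        return event["summary"]
    if rung == Rung.STEPS:
        return "\n".join(event["steps"])
    return repr(event["trace"])


event = {
    "summary": "Your refund was issued.",
    "steps": ["Looked up order #1234", "Checked refund policy", "Issued refund"],
    "trace": [{"tool": "orders.lookup", "args": {"id": 1234}}],
}
print(render(event, Rung.SUMMARY))  # what a non-technical user would see
print(render(event, Rung.TRACE))    # what a maintainer would see
```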