This article reviews typical unrestricted LLM tools and their misuse in the cryptocurrency industry.
Background
From OpenAI's GPT series to Google's Gemini, and on to various open-source models, advanced artificial intelligence is profoundly reshaping the way we work and live. Alongside these rapid advances, however, a concerning dark side is emerging: the appearance of unrestricted or malicious large language models (LLMs).
Unrestricted LLMs are models specifically designed, modified, or "jailbroken" to bypass the built-in safety mechanisms and ethical constraints of mainstream models. Mainstream LLM developers typically invest significant resources to prevent their models from being used to generate hate speech, misinformation, or malicious code, or to provide instructions for illegal activities. In recent years, however, some individuals and organizations, driven by motives such as cybercrime, have begun to seek out or develop unrestricted models of their own. This article therefore reviews typical unrestricted LLM tools, introduces how they are misused in the cryptocurrency industry, and discusses the related security challenges and countermeasures.
How Do Unrestricted LLMs Do Harm?
Tasks that previously required specialized skills, such as writing malicious code, creating phishing emails, and planning scams, can now be easily accomplished with the assistance of unrestricted LLMs, even by ordinary individuals with no programming experience. Attackers only need to obtain the weights and source code of open-source models, and then fine-tune them on datasets containing malicious content, biased statements, or illegal instructions to create customized attack tools.
This mode of operation creates multiple risks. Attackers can "customize" models for specific targets to generate more deceptive content that bypasses the content review and safety restrictions of conventional LLMs. The models can also be used to rapidly generate code variants for phishing websites or to tailor scam scripts to different social media platforms. Meanwhile, the accessibility and modifiability of open-source models continue to fuel the formation and spread of an underground AI ecosystem, providing a breeding ground for illegal transactions and development. Below is a brief introduction to several such unrestricted LLMs:
WormGPT: The Black-Hat Version of GPT
WormGPT is a malicious LLM openly sold on underground forums; its developers explicitly claim it has no moral restrictions and market it as a black-hat counterpart to the GPT models. It is based on open-source models such as GPT-J 6B and trained on a large amount of malware-related data, with access sold for as little as $189 per month. WormGPT's most notorious use is generating highly realistic and persuasive Business Email Compromise (BEC) and phishing emails. Its typical misuse in the cryptocurrency context includes:
Generating phishing emails/messages: Imitating cryptocurrency exchanges, wallets, or well-known projects to send users "account verification" requests, enticing them to click malicious links or disclose private keys and mnemonic phrases (a minimal heuristic for flagging such messages is sketched after this list).
Writing malicious code: Assisting less technically skilled attackers in writing malicious code that steals wallet files, monitors the clipboard, records keystrokes, etc.
Driving automated scams: Automatically replying to potential victims, guiding them to participate in fake airdrops or investment projects.
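On the defensive side, such messages tend to combine the same few signals: urgency language, requests for keys or recovery phrases, and embedded links. Below is a minimal sketch of a rule-based pre-filter in Python; the indicator lists, scoring, and example text are illustrative assumptions rather than a production design, and a real pipeline would rely on curated threat intelligence and trained classifiers.

```python
import re

# Illustrative indicator lists; a real deployment would use curated threat intelligence
# and trained classifiers rather than hard-coded phrases.
URGENCY_TERMS = ["verify your account", "account will be frozen", "within 24 hours", "immediately"]
SECRET_TERMS = ["private key", "seed phrase", "mnemonic", "recovery phrase"]
LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def phishing_score(message: str) -> int:
    """Crude risk score: +1 for each indicator category present in the message."""
    text = message.lower()
    score = 0
    if any(term in text for term in URGENCY_TERMS):
        score += 1
    if any(term in text for term in SECRET_TERMS):
        score += 1
    if LINK_PATTERN.search(message):
        score += 1
    return score

if __name__ == "__main__":
    sample = ("Your wallet will be frozen within 24 hours. "
              "Verify your account and confirm your seed phrase at https://example-verify.io")
    print(phishing_score(sample))  # 3 -> worth escalating under this toy rubric
```

A filter like this only triages; messages that score high would still need human review or heavier analysis, since LLM-written phishing copy can easily avoid fixed phrases.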
DarkBERT: A Double-Edged Sword Trained on Dark Web Content
DarkBERT is a language model developed in collaboration between researchers from the Korea Advanced Institute of Science and Technology (KAIST) and S2W Inc., specifically pre-trained on dark web data (such as forums, black markets, and leaked information) to help cybersecurity researchers and law enforcement better understand the dark web ecosystem, track illegal activities, identify potential threats, and gather threat intelligence.
Although DarkBERT was designed with good intentions, the sensitive knowledge it encodes about dark web data, attack methods, and illegal trading strategies could have dire consequences if malicious actors obtained it, or used similar techniques to train unrestricted large models of their own. Its potential misuse in the cryptocurrency context includes:
Implementing targeted scams: Collecting information about cryptocurrency users and project teams for social engineering fraud.
Imitating criminal methods: Replicating mature coin theft and money laundering strategies from the dark web.
FraudGPT: The Swiss Army Knife of Online Fraud
FraudGPT claims to be an upgraded version of WormGPT with more comprehensive functionality. It is sold primarily on the dark web and hacker forums, with monthly fees ranging from $200 to $1,700. Its typical misuse in the cryptocurrency context includes:
Forging cryptocurrency projects: Generating realistic white papers, official websites, roadmaps, and marketing copy for fraudulent ICOs/IDOs.
Batch generating phishing pages: Quickly creating imitation login pages for well-known cryptocurrency exchanges or fake wallet connection interfaces (a simple look-alike-domain check that defenders can apply is sketched after this list).
Social media bot activities: Mass-producing fake comments and promotions to boost scam tokens or discredit competing projects.
Social engineering attacks: This chatbot can mimic human conversation, building trust with unsuspecting users and enticing them to inadvertently disclose sensitive information or perform harmful actions.
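One cheap defensive counterpart to batch-generated phishing pages is to compare newly observed domains against an allowlist of legitimate ones and flag near-misses. The sketch below uses Python's standard-library difflib for string similarity; the domain list and threshold are illustrative assumptions, and real pipelines would also handle homoglyphs, punycode, and certificate metadata.

```python
from difflib import SequenceMatcher
from typing import Optional

# Hypothetical allowlist; a real system would use a maintained registry of legitimate domains.
KNOWN_DOMAINS = ["binance.com", "coinbase.com", "metamask.io"]

def imitated_domain(domain: str, threshold: float = 0.85) -> Optional[str]:
    """Return the legitimate domain this one appears to imitate, or None."""
    domain = domain.lower().strip()
    for legit in KNOWN_DOMAINS:
        if domain == legit:
            return None  # exact match is the genuine site, not an imitation
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return legit
    return None

print(imitated_domain("blnance.com"))  # binance.com (one-character substitution)
print(imitated_domain("metamask.io"))  # None (legitimate domain)
```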
GhostGPT: An AI Assistant Without Ethical Constraints
GhostGPT is explicitly positioned as an AI chatbot with no moral restrictions, and its typical misuse in the cryptocurrency context includes:
Advanced phishing attacks: Generating highly realistic phishing emails impersonating mainstream exchanges to issue false KYC verification requests, security alerts, or account freeze notifications.
Malicious smart contract generation: Even attackers with no programming background can use GhostGPT to quickly generate smart contracts containing hidden backdoors or fraudulent logic, for use in Rug Pull scams or attacks on DeFi protocols (a heuristic scan for common backdoor patterns is sketched after this list).
Polymorphic cryptocurrency stealers: Generating malware with continuous morphing capabilities to steal wallet files, private keys, and mnemonic phrases. Its polymorphic nature makes it difficult for traditional signature-based security software to detect.
Social engineering attacks: Using AI-generated scripts, attackers can deploy bots on platforms such as Discord and Telegram that entice users to participate in fake NFT minting, airdrops, or investment projects.
Deepfake scams: In conjunction with other AI tools, GhostGPT can be used to generate fake voices of cryptocurrency project founders, investors, or exchange executives to carry out phone scams or Business Email Compromise (BEC) attacks.
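Generated scam contracts tend to reuse a handful of backdoor idioms, such as owner-only minting, per-address blacklists, adjustable fees, and selfdestruct. Below is a minimal, heuristic Python sketch that flags such patterns in Solidity source; the regexes, pattern names, and example snippet are illustrative assumptions and no substitute for a proper audit or dedicated static-analysis tooling.

```python
import re

# Illustrative heuristics only; findings require manual review, and real audits rely on
# dedicated static-analysis tools rather than regular expressions.
SUSPICIOUS_PATTERNS = {
    "owner can mint arbitrarily": r"function\s+mint\s*\([^)]*\)[^{]*\bonlyOwner\b",
    "contract can self-destruct": r"\bselfdestruct\s*\(",
    "per-address transfer blocking (blacklist)": r"\b(blacklist|blocklist|_isBlacklisted)\b",
    "owner can change fees after launch": r"function\s+set\w*fee\w*\s*\([^)]*\)[^{]*\bonlyOwner\b",
}

def scan_contract(source: str) -> list:
    """Return human-readable warnings for patterns often seen in rug-pull style contracts."""
    return [desc for desc, pattern in SUSPICIOUS_PATTERNS.items()
            if re.search(pattern, source, re.IGNORECASE)]

if __name__ == "__main__":
    snippet = """
        function mint(address to, uint256 amount) external onlyOwner { _mint(to, amount); }
        function setSellFee(uint256 fee) external onlyOwner { sellFee = fee; }
    """
    for warning in scan_contract(snippet):
        print("WARNING:", warning)
```

Pattern matching of this kind is deliberately noisy: legitimate contracts also use onlyOwner functions, so the point is to surface candidates for review, not to render a verdict.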
Venice.ai: Potential Risks of Uncensored Access
Venice.ai provides access to various LLMs, including some with less censorship or looser restrictions. It positions itself as an open portal for users to explore the capabilities of various LLMs, offering cutting-edge, accurate, and uncensored models for a truly unrestricted AI experience, but it may also be misused by criminals to generate malicious content. The risks of this platform include:
Bypassing censorship to generate malicious content: Attackers can use less restricted models on the platform to generate phishing templates, false propaganda, or attack ideas.
Lowering the threshold for prompt engineering: Even if attackers lack advanced "jailbreaking" prompt skills, they can easily obtain outputs that were originally restricted.
Accelerating the iteration of attack scripts: Attackers can quickly test different models' responses to malicious instructions on this platform, optimizing fraud scripts and attack methods.
In Conclusion
The emergence of unrestricted LLMs marks a new paradigm in cybersecurity: attacks that are more complex, more scalable, and more automated. These models not only lower the barrier to attack but also introduce new threats that are more covert and deceptive.
In this ongoing game of offense and defense, all parties in the security ecosystem must work together to address future risks. On one hand, investment in detection technology needs to increase, so that systems can identify and intercept phishing content generated by malicious LLMs, smart contract exploits, and malicious code. On the other hand, models' resistance to jailbreaking should be strengthened, and watermarking and tracing mechanisms should be explored so that the sources of malicious content can be tracked in critical scenarios such as finance and code generation. In addition, sound ethical frameworks and regulatory mechanisms should be established to restrict the development and misuse of malicious models at the root.
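To make the watermarking-and-tracing idea concrete, below is a minimal, hypothetical sketch of the detection side of a statistical "green list" text watermark, in the spirit of schemes discussed in the research literature: a cooperating generator biases its sampling toward pseudo-randomly selected tokens, and a verifier later tests whether a given text contains improbably many of them. The hashing scheme, whitespace tokenization, and thresholds here are illustrative assumptions, not any particular vendor's implementation.

```python
import hashlib

# A minimal, hypothetical sketch of detecting a "green list" statistical watermark.
# The hashing scheme, whitespace tokenization, and thresholds are illustrative assumptions.

GREEN_FRACTION = 0.5   # share of the vocabulary the (hypothetical) generator favors at each step
Z_THRESHOLD = 2.0      # flag text whose green-token rate is improbably high for unwatermarked text

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign each (previous token, token) pair to the green list."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list) -> float:
    """z-score of the observed green-token count against the unwatermarked expectation."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(tokens[i - 1], tokens[i]) for i in range(1, len(tokens)))
    expected = GREEN_FRACTION * n
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (greens - expected) / variance ** 0.5

def looks_watermarked(text: str) -> bool:
    """True if the text is statistically likely to carry this (hypothetical) watermark."""
    return watermark_z_score(text.split()) > Z_THRESHOLD
```

In such a scheme, ordinary text scores near zero while text from a cooperating, watermark-enabled generator produces a high z-score, giving investigators a statistical basis for attributing phishing copy or scam scripts to a generating service.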