
Haotian | CryptoInsight | May 27, 2025 01:11
After reading @CetusProtocol's security "review" report on the hacker attack, you will notice an intriguing phenomenon: the technical details are disclosed quite transparently, and the emergency response could be called textbook level. Yet on the most critical, soul-searching question of all, "why was it hacked", the report dodges the heart of the matter:
The report spends considerable space explaining the faulty overflow check in the `checked_shlw` function of the `integer-mate` library (the bound should have been ≤ 2^192 but was effectively ≤ 2^256), and categorizes this as a "semantic misunderstanding". This narrative, while technically accurate, cleverly shifts the focus toward external responsibility, as if Cetus were merely another innocent victim of this flaw.
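To make the "semantic misunderstanding" concrete, here is a minimal Python sketch of what such a bug looks like. This is an illustration of the class of flaw the report describes, not the actual Move source: a `checked_shlw`-style function that shifts a u256 value left by 64 bits, where the buggy version checks against a bound no u256 value can ever exceed.

```python
MASK_256 = (1 << 256) - 1  # simulate u256 wrap-around in Python

def checked_shlw_buggy(n: int):
    """Shift left by 64 with a broken overflow bound (illustrative).

    The bound compares against the full u256 range, so no u256 input
    ever trips it; values >= 2**192 silently truncate on the shift.
    """
    if n > (1 << 256) - 1:      # always False for a u256 input
        return 0, True
    return (n << 64) & MASK_256, False

def checked_shlw_fixed(n: int):
    """Correct bound: n must fit in 192 bits so n << 64 fits in 256."""
    if n >= (1 << 192):
        return 0, True          # overflow flagged
    return (n << 64) & MASK_256, False

n = 1 << 192                    # an "astronomical" attacker-chosen value
val, overflow = checked_shlw_buggy(n)
# val == 0, overflow == False: the shift silently truncated to zero,
# which is how a huge claimed deposit can collapse to almost nothing.
```

The fixed version rejects the same input outright, which is exactly the one-line difference the report hangs its entire explanation on.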
The question is: since `integer-mate` is an open-source, widely used math library, how did Cetus still make the ridiculous mistake of handing out a sky-high liquidity share for a single token?
Analyzing the attack path reveals that, to pull off a perfect attack, the hacker had to satisfy four conditions simultaneously: a faulty overflow check, large bit-shift operations, round-up rules, and the absence of any economic-rationality verification.
Cetus was "negligent" on every one of these triggering conditions: it accepted astronomical user inputs like 2^200, used dangerous large bit-shift operations, fully trusted the external library's checking mechanism, and, most fatally, when the system computed the absurd result of "one token for a sky-high liquidity share", it executed it without any economic sanity check.
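The last condition, economic-rationality verification, is the cheapest to add. A minimal sketch, assuming a hypothetical guard function and an illustrative `max_ratio` ceiling (a real protocol would derive its bound from pool state rather than a constant):

```python
def check_liquidity_grant(amount_in: int, liquidity_out: int,
                          max_ratio: int = 10**6) -> int:
    """Reject liquidity grants wildly out of proportion to the deposit.

    Hypothetical guard for illustration: `max_ratio` caps liquidity
    per deposited token, so "1 token for a sky-high share" cannot
    pass no matter what the math library returned upstream.
    """
    if amount_in <= 0:
        raise ValueError("deposit must be positive")
    if liquidity_out > amount_in * max_ratio:
        raise ValueError(
            f"economically implausible: {liquidity_out} liquidity "
            f"for {amount_in} tokens")
    return liquidity_out
```

Even a crude ratio check like this sits downstream of every overflow, shift, and rounding step, which is why it catches the combined failure that no single low-level check did.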
So, the points that Cetus should truly reflect on are as follows:
1) Why was no proper security testing done on a general-purpose external library? `integer-mate` may be open source, popular, and widely used, but Cetus used it to manage billions of dollars in assets without fully understanding where the library's security boundaries lie, or whether suitable fallbacks exist if the library fails. Clearly, Cetus lacked even the most basic awareness of supply-chain security;
2) Why were astronomical inputs allowed in without any boundaries? DeFi protocols should pursue decentralization, but the more open a mature financial system is, the clearer its boundaries must be.
When the system accepted the astronomical numbers the attacker had carefully constructed, the team had evidently never asked whether such a liquidity requirement could possibly be reasonable. Even the world's largest hedge fund could never need such an exaggerated liquidity share. Clearly, the Cetus team lacked risk-management talent with financial intuition;
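Input boundaries are equally simple to enforce at the entry point. A sketch under stated assumptions: the ceiling below is an invented, illustrative constant chosen to be far above any real position yet far below values like 2^200, not a figure from the Cetus protocol itself.

```python
# Assumed, illustrative ceiling: generous for any real market,
# but rejects attacker-scale magnitudes like 2**200 outright.
MAX_REASONABLE_AMOUNT = 10**30

def validate_amount(amount: int) -> int:
    """Hypothetical entry-point guard on user-supplied amounts."""
    if not (0 < amount <= MAX_REASONABLE_AMOUNT):
        raise ValueError("amount outside protocol bounds")
    return amount
```

The point is not the particular constant but that a bound exists at all: the rest of the math pipeline then never sees inputs it was not designed for.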
3) Why did multiple rounds of security audits fail to catch the problem in advance? That question inadvertently exposes a fatal cognitive misconception: the project team outsourced its security responsibility to audit firms and treated the audit as a get-out-of-jail-free card. The cruel reality is that security auditors are good at finding code bugs, but who among them thinks to test whether the system's computed exchange ratios make any economic sense?
Verification that crosses the boundaries of mathematics, cryptography, and economics is the biggest blind spot in modern DeFi security. The audit firm says, "This is a design flaw in the economic model, not a problem in the code logic"; the project team complains that "the audit found no problems"; and users only know that their money is gone!
You see, what this ultimately exposes is the systemic security weakness of the DeFi industry: teams with purely technical backgrounds severely lack basic 'financial risk awareness'.
However, from Cetus' report, it is evident that the team has not reflected properly.
Rather than merely blaming the technical flaws behind this attack, I believe that, starting with Cetus, all DeFi teams should break out of purely technical thinking and genuinely cultivate the security-risk awareness of a "financial engineer".
For example: bring in financial risk-control experts to fill the technical team's knowledge gaps; establish a multi-party audit mechanism that covers not only code audits but also the necessary economic-model audits; and cultivate a "financial sense" by simulating various attack scenarios and their countermeasures, staying alert to anomalous operations.
This reminds me of my earlier experience in the security industry, and of a consensus shared by industry security experts @evilcos, @chiachih-wu, @yajinzhou, and @mikelee205:
As the industry matures, technical bugs at the code level will gradually decrease; the biggest challenge will be the "awareness bugs" of unclear boundaries and unclear responsibilities in business logic.
Audit firms can only ensure the code is free of bugs; drawing the right "logical boundaries" requires the project team to deeply understand the nature of its business and to control those boundaries itself. (This is the root cause of the many finger-pointing incidents in which projects were hacked even after passing security audits.)
The future of DeFi belongs to teams with strong code technology and profound understanding of business logic!