Stop the one-time airdrop!
Author: KERMAN KOHLI
Translation: DeepTechFlow
Recently, Starkware launched its highly anticipated airdrop. Like most airdrops, it has sparked plenty of controversy.
So why does this keep happening, over and over again? You may hear some of the following arguments:
Team members just want to sell and cash out tens of billions of dollars
The team doesn't know a better way and hasn't received the right advice
Whales should be given higher priority because they bring total value locked (TVL)
Airdrops are to make participation in cryptocurrencies more democratic
Without people farming it, the protocol gets no real usage or stress testing
Mismatched airdrop incentives continue to produce strange side effects
These points are not wrong, but none of them are completely correct. Let's delve into some of these points to ensure we have a comprehensive understanding of the current issues.
When conducting an airdrop, you must make trade-offs among three factors:
Capital efficiency
Decentralization
Retention rate
You often find that airdrops perform well in one dimension, but rarely achieve a good balance in two or all three dimensions.
Capital efficiency refers to the criteria used to decide how many tokens each participant receives. The more capital-efficient your allocation, the more it resembles liquidity mining (one token for every dollar deposited), which favors whales.
Decentralization refers to who receives your tokens and on what criteria. Recent airdrops have used fairly arbitrary criteria to maximize the number of token recipients. This is generally a good thing: it helps you avoid legal trouble, and making people wealthy earns you reputation.
Retention rate measures how many users stay after the airdrop. In a sense, it's a measure of how aligned users are with your intentions: the lower the retention rate, the less aligned they are. As an industry benchmark, a 10% retention rate means only 1 in 10 addresses is a real user!
Setting retention rate aside, let's take a closer look at the first two factors: capital efficiency and decentralization.
Capital efficiency
To understand the first point about capital efficiency, let's introduce a new term: the "sybil coefficient." It roughly measures how much extra benefit you get by spreading one dollar of capital across a number of accounts.
Where you fall on this spectrum ultimately determines how wasteful your airdrop becomes. If your sybil coefficient is 1, you are technically running a liquidity mining program, which will anger many users.
However, when you have a project like Celestia, where the sybil coefficient skyrockets to 143, you see extremely wasteful behavior and rampant farming.
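The article never pins down a formula, so here is one consistent reading, sketched in Python; all names and numbers below are my own illustration, not the author's. Treat the sybil coefficient as the multiple a farmer gains by splitting the same capital across many wallets: a purely pro-rata allocation yields a coefficient of 1 (liquidity mining), while a flat per-address allocation makes the coefficient grow with the number of wallets.

```python
def allocation(deposit_usd: float, flat_tokens: float, tokens_per_usd: float) -> float:
    """Tokens one wallet receives: a flat amount per qualifying address
    plus a pro-rata component on the capital it deposited."""
    return flat_tokens + tokens_per_usd * deposit_usd

def sybil_coefficient(capital_usd: float, n_wallets: int,
                      flat_tokens: float, tokens_per_usd: float) -> float:
    """Multiple gained by splitting the same capital across n_wallets
    instead of using a single wallet."""
    single = allocation(capital_usd, flat_tokens, tokens_per_usd)
    split = n_wallets * allocation(capital_usd / n_wallets, flat_tokens, tokens_per_usd)
    return split / single

# Purely pro-rata allocation (liquidity mining): splitting gains nothing.
print(sybil_coefficient(10_000, 100, flat_tokens=0.0, tokens_per_usd=1.0))   # 1.0

# Flat per-address allocation: splitting across 100 wallets pays 100x.
print(sybil_coefficient(10_000, 100, flat_tokens=500.0, tokens_per_usd=0.0)) # 100.0
```

Under this reading, Celestia's coefficient of 143 would mean that splitting capital across wallets paid roughly 143 times better than keeping it in one.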
Decentralization
This brings us to the second point, decentralization: the people you ultimately want to help are the "little guys", genuine users willing to use your product early even though they aren't wealthy. If your sybil coefficient is close to 1, you are hardly giving the "little guys" anything; most of the airdrop goes to the "whales."
Now, the airdrop debate is heating up. There are three types of users here:
"Little guy A," who just wants to make some quick money and leave (maybe using a few wallets in the process)
"Little guy B," who wants to stay after receiving the airdrop and likes your product
"Professional airdrop abusers who behave like many little guys," who are definitely there to take most of your incentives and then move on to the next project.
The third type of person is the worst, the first type is somewhat acceptable, and the second type is the best. Differentiating between these three is a major challenge in the airdrop issue.
So, how do you solve this problem? While I don't have a specific solution, I have a philosophical approach to how to address this issue, which I have been thinking about and observing personally over the past few years: project-relative segmentation.
Let me explain what I mean. Zooming out, think about the meta-problem: you have all these users, and you need to be able to segment them into groups based on some value judgment. The value here is specific to the observer's environment, so it will vary from project to project. Trying to apply some "magical airdrop filter" is never enough. By exploring the data, you can start to understand the true nature of your users and begin to execute your airdrop in a data-driven manner.
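To make "project-relative segmentation" concrete, here is a minimal sketch of how a team might bucket addresses with pandas. The columns, thresholds, and segment names are hypothetical placeholders; a real project would derive them from its own data, which is exactly the author's point.

```python
import pandas as pd

# Hypothetical per-address activity pulled from your own indexer.
users = pd.DataFrame({
    "address":        ["0xaaa", "0xbbb", "0xccc", "0xddd"],
    "days_active":    [310, 12, 95, 3],
    "distinct_weeks": [40, 2, 18, 1],
    "volume_usd":     [4_500, 90_000, 1_200, 50],
})

def segment(row: pd.Series) -> str:
    # The values a project cares about differ per project; these
    # cutoffs are placeholders a team would tune on its own data.
    if row.distinct_weeks >= 10 and row.days_active >= 60:
        return "genuine_regular"       # "little guy B"
    if row.volume_usd > 50_000 and row.distinct_weeks < 5:
        return "mercenary_capital"     # whale passing through
    return "drive_by"                  # "little guy A" / likely farmer

users["segment"] = users.apply(segment, axis=1)
print(users[["address", "segment"]])
```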
Why isn't anyone doing this? This is another article I will write in the future, but the very short summary is that it is a difficult problem that requires data expertise, time, and money. Not many teams are willing or able to do this.
Retention rate
The last dimension I want to discuss is retention. Before we do, it's best to define what retention rate means. I'll summarize it as: retention rate = number of recipients still holding the airdrop / total number of airdrop recipients.
Most airdrops make a typical mistake, which is to make it a one-time thing.
To prove this point, some data is needed! Fortunately, OP has actually executed multiple rounds of airdrops. I hoped to find a simple Dune dashboard with the retention data I wanted, but no such luck, so I decided to collect the data myself.
I didn't want to overcomplicate things; I just wanted to answer one simple question: how does the percentage of users with a non-zero OP balance change from one airdrop to the next?
I visited this website and obtained the list of addresses that participated in the OP airdrops. Then I wrote a small script to retrieve the OP balance of each address on the list (using some of our internal RPC endpoints) and did some data processing.
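The author doesn't publish his script, but the workflow he describes can be approximated with web3.py. A minimal sketch, assuming the recipient list is a plain text file with one address per line; the RPC URL is a placeholder, and 0x4200000000000000000000000000000000000042 is the OP token predeploy on OP Mainnet.

```python
from web3 import Web3

RPC_URL = "https://your-op-mainnet-rpc.example"  # placeholder for an RPC endpoint
OP_TOKEN = "0x4200000000000000000000000000000000000042"  # OP token predeploy
ERC20_ABI = [{
    "name": "balanceOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "account", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
op = w3.eth.contract(address=Web3.to_checksum_address(OP_TOKEN), abi=ERC20_ABI)

# Assumed input format: one recipient address per line.
with open("airdrop_recipients.txt") as f:
    recipients = [line.strip() for line in f if line.strip()]

balances = {
    addr: op.functions.balanceOf(Web3.to_checksum_address(addr)).call()
    for addr in recipients
}

# Retention as defined above: share of recipients still holding any OP.
holders = sum(1 for bal in balances.values() if bal > 0)
print(f"retention: {holders / len(balances):.1%}")
```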
Before we dig into the analysis, one important note: each OP airdrop is independent of the previous one. There are no rewards or mechanisms tied to retaining tokens from an earlier airdrop.
Airdrop 1
Given to 248,699 recipients based on the criteria provided here. In short, users received tokens for the following actions:
OP mainnet users (92,000 addresses)
Repeat OP mainnet users (19,000 addresses)
DAO voters (84,000 addresses)
Multi-signature signers (19,500 addresses)
Gitcoin donors on L1 (24,000 addresses)
Users priced out of Ethereum (74,000 addresses)
After analyzing all these users and their OP balances, I obtained the following distribution. A balance of 0 indicates the user has sold, since unclaimed OP tokens are sent directly to eligible addresses; details can be found on this website.
In any case, compared with previous airdrops I've observed, where over 90% of recipients typically end up with a zero balance, this first airdrop was surprisingly good: only 40% of addresses had a balance of 0, which is remarkable.
Then I wanted to understand how each criterion affected the likelihood that users retained their tokens. The one issue with this approach is that an address may belong to multiple categories, which can distort the data. So don't take the numbers at face value; treat them as a rough indicator:
Among one-time OP mainnet users, the proportion with a balance of 0 is the highest, followed by users priced out of Ethereum; clearly these are not the best user groups. Multi-signature signers have the lowest proportion, which I think is a good signal, since it isn't obvious for airdrop farmers to set up a multisig just to farm an airdrop!
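For anyone reproducing this per-criterion view, here's a rough sketch of the counting approach, with the overlap caveat baked in as comments. The membership data and balances are placeholders, not the author's dataset.

```python
from collections import defaultdict

# Placeholder inputs: category membership per address (an address can
# appear in several categories) and the balances fetched earlier.
categories = {
    "0xaaa": ["op_user", "dao_voter"],
    "0xbbb": ["op_user"],
    "0xccc": ["multisig_signer"],
}
balances = {"0xaaa": 0, "0xbbb": 312, "0xccc": 5_000}

zero, total = defaultdict(int), defaultdict(int)
for addr, cats in categories.items():
    for cat in cats:
        total[cat] += 1                 # an address counts once per category,
        if balances.get(addr, 0) == 0:  # so overlapping categories distort
            zero[cat] += 1              # the picture -- a rough indicator only

for cat in total:
    print(f"{cat}: {zero[cat] / total[cat]:.0%} of addresses hold 0 OP")
```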
Airdrop 2
This airdrop was distributed to 307,000 addresses, but in my opinion, this airdrop was not well thought out. The criteria were set as follows:
Governance delegation rewards based on the amount of OP delegated and the delegation time.
Partial gas refunds for active OP users who spent a certain amount on gas fees.
Multiplier rewards determined by additional attributes related to governance and usage.
Intuitively, these don't feel like good criteria to me, since governance voting is easily gamed by bots and fairly predictable. As we'll see below, my intuition wasn't far off, and I was still surprised by just how low the actual retention rate was!
Close to 90% of addresses hold a 0 OP balance! This is the kind of airdrop retention statistic people are used to seeing. I'd love to dig deeper into this one, but I'm more inclined to move on to the remaining airdrops.
Airdrop 3
This is easily the best-executed airdrop from the OP team, with more complex criteria than before. It went to approximately 31,000 addresses, so it's smaller in scale but more effective. Here are the details (source here):
Cumulative OP delegated per day (e.g., delegating 20 OP for 100 days = 20 × 100 = 2,000 OP-days delegated; see the sketch after this list).
Delegates who voted on-chain in OP governance during the snapshot window (0:00 UTC, January 20, 2023 to 0:00 UTC, July 20, 2023).
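The delegation criterion is just delegated balance summed over time. A toy version of the "OP-days" arithmetic, assuming you already have a per-day series of delegated amounts:

```python
def op_days(daily_delegated: list[float]) -> float:
    """Cumulative OP-days: sum of OP delegated on each day of the window.
    E.g. 20 OP held for 100 days -> 20 * 100 = 2,000 OP-days."""
    return sum(daily_delegated)

print(op_days([20.0] * 100))  # 2000.0
```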
A key detail here is that the on-chain voting criterion covers a window after the previous round of airdrops, so users who participated in the first round may have thought, "Okay, I've done what's required for the airdrop, time to move on to the next thing." This is great for analysis, because it lets us see exactly these retention statistics!
Only 22% of airdrop recipients have a token balance of 0! To me, this indicates far less waste than in any previous airdrop. It supports my argument that retention is crucial, and that the extra data from multi-round airdrops is more useful than people think.
Airdrop 4
This airdrop went to a total of 23,000 addresses with more interesting criteria. I initially expected its retention rate to be high, but after some thought I have a hypothesis for why it might be lower than expected:
You created NFTs with transfer activity on the Superchain: total gas spent on transfer transactions of NFTs created by your address on OP chains (OP Mainnet, Base, Zora), measured over the 365 days before the airdrop deadline (January 10, 2023 to January 10, 2024).
You created sought-after NFTs on Ethereum mainnet: total gas spent on Ethereum L1 transfer transactions of NFTs created by your address over the same 365-day window before the airdrop deadline.
You would think that creating NFT contracts would be a good indicator, right? Unfortunately, that's not the case. The data shows the opposite.
While the situation is not as bad as airdrop 2, we have taken a step back in terms of retention rate compared to airdrop 3.
My hypothesis is that these numbers would improve significantly with additional filtering for NFT contracts flagged as spam, or some other measure of "legitimacy"; the criteria are simply too broad. Additionally, since tokens were airdropped directly to these addresses (no claim required), fraudulent NFT creators end up thinking, "Wow, this is free money. Time to sell."
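The extra filtering hypothesized here might look something like the sketch below. The spam set and the "organic demand" threshold are invented placeholders, not a real Optimism data source.

```python
# Hypothetical spam filter: only count gas from NFT contracts that are
# not flagged as spam and show some minimal organic demand.
SPAM_CONTRACTS: set[str] = {"0xspam_contract_1", "0xspam_contract_2"}  # placeholder

def eligible_gas(transfers: list[dict]) -> int:
    """transfers: [{'contract': str, 'to': str, 'gas_used': int}, ...]"""
    receivers: dict[str, set[str]] = {}
    for t in transfers:
        receivers.setdefault(t["contract"], set()).add(t["to"])
    return sum(
        t["gas_used"]
        for t in transfers
        if t["contract"] not in SPAM_CONTRACTS
        and len(receivers[t["contract"]]) >= 25  # placeholder "legitimacy" bar
    )
```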
Conclusion
As I wrote this article and collected the data myself, I managed to prove or disprove some of my own assumptions, which turned out to be very valuable. Specifically, the quality of your airdrop is directly related to your selection criteria. Anyone attempting to build a generic "airdrop score" or use fancy machine-learning models will fail due to inaccurate data or a flood of false positives. Machine learning is great until you try to understand how it arrived at its answers.
While writing the scripts for this article, I also pulled the data for the Starkware airdrop, which was an interesting exercise in its own right; I'll cover it in the next article. The key lessons teams should take away:
Stop the one-time airdrops! It's shooting yourself in the foot. You want to deploy incentives the way you'd run A/B tests: iterate heavily and let past rounds guide your future goals.
Build your criteria on past airdrops and your efficiency will improve. In particular, give more tokens to addresses that kept earlier tokens in the same wallet, and make it clear to users that they should stick to one wallet and only switch when absolutely necessary (see the sketch after this list).
Get better data to ensure smarter and higher-quality airdrop segmentation. Bad data = bad results. As seen in the article above, the lower the "predictability" of the criteria, the better the retention rate results.
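As a closing illustration of the "iterate like A/B testing" idea, here's a hypothetical allocation rule for a follow-up round that rewards addresses that held tokens from earlier rounds. The multipliers are invented, not taken from any real airdrop.

```python
def next_round_allocation(base_tokens: float, held_last_round: bool,
                          rounds_participated: int) -> float:
    """Hypothetical iterative allocation: boost addresses that kept
    tokens from earlier rounds and reward repeat participation."""
    multiplier = 1.0
    if held_last_round:
        multiplier *= 2.0                              # placeholder retention bonus
    multiplier *= 1.0 + 0.1 * rounds_participated      # placeholder loyalty bonus
    return base_tokens * multiplier

print(next_round_allocation(100, held_last_round=True, rounds_participated=3))  # 260.0
```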