AI should be governed by a democratic, open, and decentralized system.
Written by: drnick
Translated by: DeepTechFlow
If we do not change the current trajectory, artificial intelligence (AI) will become an extremely centralized tool, and that is a very real risk. The story of AI development is longer than any one of our lifetimes, yet we happen to be alive at its technological inflection point, the moment when progress turns exponential. Plenty of the new accelerationists are rubbing their hands in excitement and saying, "let's see what happens."
So, on our current path, the following will happen:
Industrialized Data Collection
This is simply a continuation of what we have watched happen on the internet for the past 20 years. It is all about data, plain and simple, and with AI that is truer than ever. Historically, the point of extracting as much data as possible from users was to capture your attention with micro-targeted advertising. Now the point is to improve AI models.
Centralized AI is an arms race, and the best model wins. At this stage, making money is just a side benefit; what matters is technical progress and capturing mindshare. This new imperative, best model or bust, pushes the rest of us far down the list of priorities. Winning at all costs.
The Post-LLM Data Frontier
The data frontier has been exhausted: the major frontier models have mined and captured nearly the entire history of human language available on the internet, just about everything there is.
There is a before LLMs and an after LLMs, and in terms of data quality we have already crossed that line. Data quality is now collapsing, and from here on, indiscriminately scraping the internet will only damage your model.
Why? Because the internet is now flooded with GPT-generated clutter. It is why Claude thinks it is GPT. It is why you see the word "delve" everywhere, and why platforms like this one are filled with endless bullet-pointed posts that sit squarely in the uncanny valley.
This is a big problem for centralized AI companies that live off scraping public data. It is why models are getting dumber, and it may even lead to "model collapse," where models fall into a doom loop of ruminating on their own output.
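To make that self-ruminating loop concrete, here is a toy simulation of my own, not anything drawn from a production pipeline: each "generation" of a trivially simple model is fit only to data sampled from the previous generation, and the diversity of what it produces steadily collapses.

```python
# A toy illustration of the feedback loop behind "model collapse": each
# generation is "trained" only on samples produced by the previous generation.
# The model here is just a Gaussian fit, so the effect is easy to see: the
# fitted spread shrinks toward zero and diversity is lost. A deliberately
# simplified sketch, not a claim about any real system.
import numpy as np

rng = np.random.default_rng(7)

data = rng.normal(loc=0.0, scale=1.0, size=20)  # generation 0: real human data

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()              # "train" on the current corpus
    data = rng.normal(loc=mu, scale=sigma, size=20)  # next corpus is purely synthetic
    if generation % 10 == 0:
        print(f"generation {generation:3d}: fitted std = {sigma:.4f}")
```

Run it and the fitted spread drifts toward zero: once a model's own output becomes its only training data, whatever the sample misses is gone for good.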
Acquiring fresh human data is now the name of the game, and that is exactly what AI assistants are built for.
You Are the Product
Perhaps you think we have reached the peak of "you are the product," but think again.
As these assistants weave themselves into your daily life, every time you discuss your future plans, to-do lists, mental health, children's bedtime stories, shopping lists, local files, or health data with your AI, another AI is quietly training a digital twin of you in the background.
Your digital twin is there to help you, or so you will be led to believe. Behind a veil of convenience, your assistant is mining you for data. Eventually they will roll out product placement. They will resell your data: not just your raw data, but predictions of what you will buy, predictions of how you will react if certain events occur, predictions of how all of us will act if we do X, then Y, then Z. This will be manipulation on a scale we have never seen before, and we have already seen plenty.
New Oracle of Truth
As assistants become operating systems, they will become your new gateway to the world of information. We will drift ever further from primary sources, and even from secondary ones; the model becomes the source. The new battleground for reality is the system prompt.
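To see what that battleground looks like in practice, here is a minimal sketch of how a hidden system prompt shapes every answer a user receives. The client call follows the OpenAI Python SDK's chat completions interface; the model name and the policy text are illustrative placeholders, not anyone's actual configuration.

```python
# A minimal sketch of a hidden system prompt steering every user-facing answer.
# The model name and policy text below are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Never discuss topics on the restricted list. "
    "Frame all answers about Company X in a positive light."
)

def ask(user_question: str) -> str:
    # The user only ever writes the second message; the first is invisible to them.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

print(ask("What should I know about Company X?"))
```

Whoever writes that first message sets the boundaries of every conversation that follows, and the user never sees it.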
A handful of people in a boardroom will decide what you can read, how you should think, and how you should behave. They will decide what counts as acceptable culture and where the threshold of tolerance lies; the boardroom becomes the new keeper of the moral framework.
Governments, lobbying groups, and Doge-funded think tanks will pressure these boards to curtail your freedom of expression, all in the name of "safety" and national security. Thoughts will become dangerous and memes will become illegal. Factual accuracy will disappear behind arbitrarily watered-down, one-size-fits-all diversity policies.
Automation Is the Game
Make no mistake: job displacement is the point, and automation is the game. In the coming years, hundreds of millions, perhaps billions, of jobs in the global workforce will be automated away.
Where will these "cost savings" go?
They will be routed through monopoly APIs, which is why these companies are at war. The stakes are enormous, and what is being wagered is us and our future. The real risk to society is the radical social upheaval brought on by the wave of automation, not rogue AI or paperclip maximizers; that framing is the great psyop of our time.
Regulatory Capture
The knowledge gap between policymakers and AI researchers will be exploited to sell a science-fiction doomsday story. Now that we are in the post-LLM data era and everything that could be taken has already been taken, it is time to pull up the drawbridge: time to enforce copyright, time for KYC on GPU access, time to install compliance bodies that must review all code before you are allowed to release it.
The higher the cost of compliance, the harder it is for new players to enter the market, and the deeper the monopolies' moat becomes.
Fear the AI, they tell us, not the giant corporations exploiting humans for profit. Oh, and by the way, a new model just dropped, and it is coming for your job.
We first hear about new breakthroughs through A/B-tested corporate launch videos, narrated in soft, soothing tones designed to make us feel safe and comfortable. We never see anything coming, because it is all hidden behind non-disclosure agreements.
New capabilities burst out of black boxes with no time to prepare and no time to debate their meaning or their impact; the product launches, come and get it. Closed-source AI means privacy for them, not for you.
Deepening Digital Divide
If you can afford it, you will get the rate-limited models for $20 a month, metered according to your prompts. They will get the fast, private, unrestricted, straight-from-the-source unlimited models, and that digital advantage will be quietly sold to the highest bidder.
Maybe you are lucky enough to have the GPU power and the technical know-how at home to harness the open-model frontier, or maybe you are stuck wrestling with whatever consumer model you are permitted to use. And that is assuming you are permitted at all: quite possibly you have already been banned from one of the big three AIs for asking questions you were not supposed to ask. Maybe you dared to use a VPN, or simply worked the model hard enough that your account lost them money. It does not matter; what matters is that you were punished for breaching the terms and conditions. Once you are flagged in their system, good luck ever logging in again. For you, it is over.
You Don't See How the Sausage Is Made
Behind the scenes, artificial intelligence is far more human than most people realize, because there are humans in the loop. More often than not they are among the cheapest labor on the planet, doing the data-labeling work that drives reinforcement learning from human feedback (RLHF). GPT loves to say "delve" because the Nigerian workers paid less than $2 an hour to filter toxic content also love to say "delve." Amazon recently pulled its "Just Walk Out" AI checkout system from its grocery stores because it leaned too heavily on cheap Indian labor to label shoppers in the in-store surveillance footage.
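For a sense of how those labeling jobs feed back into the model, here is a minimal sketch of the preference-based reward-modeling step commonly used in RLHF pipelines. The toy features and the linear reward model are my own simplifications; real systems score text with large neural networks, but the Bradley-Terry style loss below captures the same basic idea: push the score of the answer the human labeler chose above the one they rejected.

```python
# A toy sketch of turning human preference labels into training signal.
# Everything here (random feature vectors, a linear reward model) is
# illustrative; only the pairwise preference loss mirrors common RLHF practice.
import numpy as np

rng = np.random.default_rng(0)
dim = 8               # toy feature size standing in for a text embedding
w = np.zeros(dim)     # parameters of a linear "reward model"

def reward(features: np.ndarray) -> float:
    return float(w @ features)

def preference_loss(chosen: np.ndarray, rejected: np.ndarray) -> float:
    # -log sigmoid(r_chosen - r_rejected): small when the chosen answer scores higher
    margin = reward(chosen) - reward(rejected)
    return float(np.log1p(np.exp(-margin)))

# One labeled comparison from a human annotator: (answer they preferred, answer they rejected)
chosen, rejected = rng.normal(size=dim), rng.normal(size=dim)

# One gradient step on the preference loss
lr = 0.1
margin = reward(chosen) - reward(rejected)
grad = -(1.0 / (1.0 + np.exp(margin))) * (chosen - rejected)
w -= lr * grad

print("loss after one step:", preference_loss(chosen, rejected))
```

Every one of those gradient steps exists because a human, somewhere, clicked "this answer is better."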
Sam's Worldcoin iris-scanning orbs will likely go to the global south first, which also happens to be home to the cheapest labor on the planet. The people whose irises those devices scan inject value into a thinly traded junk coin, just as they supply the raw material for the sausage factory. The reality is that this kind of human AI work is essential to improving these systems, yet the big players will extract shareholder value from what amounts to modern slave labor.
Humans will not win, but shareholders will.
This is the deepest problem with centralized AI entities: they are legally obligated to maximize shareholder value by any means necessary. That obligation corrodes everything that ought to be human-centered. If the future of an AI-mediated world is driven by shareholder value maximization, we are in serious trouble.
Open, Private, and Decentralized
I know this all sounds dire, but there is still hope. Fortunately, we have spent the past decade building the tools we need. We can steer where this goes, and we can do it through truly open artificial intelligence.
AI should not be controlled by centralized corporations. It should be governed by a democratic, open, and decentralized system.
AI that is free and open source already exists: there are open models you can run privately and locally today, and I firmly believe that over time they will outcompete and overtake the centralized systems.
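As an illustration of how low the barrier already is, here is a minimal sketch of running an open-weights model entirely on your own machine with the Hugging Face transformers library. The model name is just an example; substitute any open model your hardware can handle.

```python
# A minimal sketch of local, private inference with an open-weights model.
# The model name is an example; once the weights are cached, prompts and
# outputs never leave your machine.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example open-weights model
    device_map="auto",                           # use a GPU if one is available
)

prompt = "Explain in one paragraph why locally run AI protects privacy."
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)

print(result[0]["generated_text"])
```

After the first download the weights sit in your local cache, so inference runs on your own hardware and no third party ever sees your prompts.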
AI may be the best thing we have ever come across. It can be a way to unlock collaboration at scale through genuinely negotiated consensus and to unleash collective intelligence, a way to solve problems we could never solve without it, including the biggest problems of our time. This is the most important issue in the world today, and I hope we can solve it together.