How can ordinary people "survive" in the wave of AI impact?


Author: Matt Shumer, HyperWrite CEO

Compiled by: Felix, PANews

There has been much discussion about AI's impact on society, but the pace of AI advancement may still far exceed most people's imagination. Recently, HyperWrite CEO Matt Shumer issued a warning about AI's disruptive potential, arguing that we are at a turning point far more profound than the pandemic. Below is the full text.

Think back to February 2020.

If you were observant at the time, you might have noticed some people talking about the virus wreaking havoc overseas. But most of us didn't pay much attention. The stock market was booming, kids were in school, you could go to restaurants, shake hands, and plan trips. If someone told you they were hoarding toilet paper, you would surely think they had spent too much time in some bizarre corner of the internet. Then, within just three weeks, the entire world changed dramatically. Offices closed, children returned home from school, and life was reshaped into something you couldn't even imagine a month ago.

We are now at the stage where "this seems exaggerated," while the impact of this situation is far greater than that of COVID-19.

I spent six years building an AI startup and investing in the field. I am active in this industry. I am writing this article for those who do not understand AI... my family, friends, and those I care about, who always ask me, "What is AI?" My responses have always been "polite versions," the kind of perfunctory versions you give at cocktail parties, often failing to reflect what is truly happening. Because the truth sounds like I have lost my mind. To avoid being perceived as crazy, for a time, I felt it was reasonable to keep quiet. But the gap between what I have seen and what I have said has become too great. Even if it sounds crazy, the people I care about deserve to know what is about to happen.

First, let's make one point clear: Although I work in the AI field, I have little influence over what is about to happen; most people in the entire industry feel the same. The future is shaped by a very small number of people: a few hundred researchers in a handful of companies (OpenAI, Anthropic, Google DeepMind, etc.). The training of a model, managed by a small team over a few months, can produce an AI system that changes the trajectory of technology. Most of us working in AI are merely building on the foundation laid by others. We are watching this unfold just like you... we just happen to be close enough to feel the "tremors in the ground."

But now it is time. Not in the "we should talk about this later" sense of procrastination, but in the sense of urgency—"this is happening, and I need you to understand."

This is real, because it has already happened to me

People outside the tech industry still don’t fully understand this: The reason so many in the industry are sounding the alarm is that this is already happening to us. We are not making predictions; we are recounting what has already happened in our work and warning you: you are next.

For many years, AI had been improving steadily. There were occasional large leaps, but the intervals between them were long enough to digest. Then, in 2025, new techniques for building models unlocked a faster pace of progress, and it kept accelerating. Each new model not only surpassed the last but did so by a significant margin, and the intervals between releases kept shrinking. I found myself using AI more often while needing fewer rounds of back-and-forth to refine its output, watching it handle things I once thought required my expertise.

Then, on February 5, 2026, two major AI labs released new models on the same day: OpenAI's GPT-5.3 Codex and Anthropic's Opus 4.6 (Anthropic makes Claude, ChatGPT's main competitor). In that moment, it dawned on me. The feeling was not like flipping a switch; it was more like suddenly realizing that the water around you had been rising and was now at your chest.

My job no longer required my actual technical work. I could simply describe in plain English what I wanted to build, and it would... appear out of thin air. Not a draft needing my edits, but a finished product. I told the AI what I wanted, left the computer for four hours, and came back to find the work completed. Completed well, even better than I could have done myself, requiring no modifications. Just months ago, I was iterating with AI, guiding it, modifying its code. Now, I simply had to describe the outcomes.

For example, I would tell the AI: "I want to develop this application. What features should it have, and what should it generally look like? Please help me design the user flows, interfaces, and so on." And it would do so, writing tens of thousands of lines of code. Then, in a step that would have been unimaginable a year ago, it launched the application, clicked buttons, and tested features. It used the application the way a person would. If something looked or felt wrong, it modified it on its own. It iterated, fixed, and improved like a developer until it was satisfied. Only when it believed the application met its own standards would it come back to me and say, "It's ready, you can test it." And when I tested it, it was usually perfect.

I am not exaggerating at all. This was my work this past Monday.

But what shocked me most was the model released last week (GPT-5.3 Codex). It wasn't just executing my commands; it was making intelligent decisions. For the first time, I sensed judgment, taste: that indescribable ability to know what the right call is. People used to say AI would never possess this capability. This model has it, or is very close to it; the difference is becoming negligible.

I have always been eager to try AI tools. But the past few months have still left me astonished. These new AI capabilities are not incremental improvements; they are a completely different matter.

Even if you don’t work in the tech industry, this is very much related to you.

AI labs made a deliberate choice: they focused first on improving AI's coding ability, because building AI requires a lot of code. If AI can write code, it can help build the next version of itself, a smarter version. Making AI proficient at programming is the key that unlocks everything else. My work changed before yours not because the labs targeted software engineers; that was merely a side effect of their primary goal.

They have achieved that now. Next, they will turn to all other domains.

In the past year, tech workers have witnessed AI's transformation from "a tool that assists" to “doing better than I can,” and this transformation will soon be experienced by everyone else. Law, finance, healthcare, accounting, consulting, writing, design, analytics, customer service, etc., will all be affected. This won’t happen in ten years. The people building these systems say it will happen in one to five years. Some even believe the timeframe will be shorter. From what I have seen in the past few months, I believe the likelihood of "shorter" is greater.

“But I tried AI, and it wasn’t that useful”

I often hear this. I can understand, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought, "this thing makes stuff up" or "isn’t that impressive," your feelings were correct. The early versions indeed had limitations, would fabricate, and confidently spout nonsense.

That was two years ago. On the timeline of AI development, that feels like ancient history.

Today's models are light-years ahead of the models from six months ago. The debate over whether AI is “truly getting better” or “hitting a bottleneck” (which has lasted more than a year) is over; everything is settled. Anyone still debating this either hasn’t used the current models, has motives for downplaying the current state, or is assessing based on experiences from 2024, which are outdated. I say this not to dismiss others. I say this because there is a vast chasm between public perception and reality, and this chasm is dangerous... as it hinders people from preparing themselves.

Part of the reason is that most people are using the free versions of AI tools. The free versions lag a year or more behind the technology that the paid versions have access to. Judging AI using the free version of ChatGPT is like evaluating the state of smartphone development with a flip phone. Those who pay for the best tools and use them daily to deal with practical matters know what is about to happen.

I think of a lawyer friend of mine. I have always encouraged him to try using AI at his firm, and he always has various reasons why it won’t work: it’s not suitable for his area of expertise, it made mistakes in tests, it doesn’t understand the nuances of his work. I can understand that. But some partners at large law firms have contacted me for advice because they tried the current latest versions and saw the trends. One managing partner of a large firm spends hours every day using AI. He told me it’s like having a team at his beck and call. He uses it not for fun but because it is useful. He also said something to me that left a deep impression: every few months, AI's ability to handle his work has a significant leap forward. He said that if this trend continues, he expects AI will soon handle most of his work... and he is a managing partner with decades of experience. He isn’t in a panic, but he is highly aware.

Those leading in various industries (the ones seriously experimenting) are not dismissing this. They are shocked by what AI can currently do and are adjusting their positions accordingly.

How fast is AI developing?

Let me specify the pace of its progress. If you haven’t been paying close attention, this part might be hard to believe.

  • 2022: AI still couldn’t reliably perform basic arithmetic, confidently telling you that 7 × 8 = 54.

  • 2023: It can pass the bar exam.

  • 2024: It can write executable software and explain graduate-level research theories.

  • By the end of 2025: Some of the world’s top engineers say they’ve delegated most programming work to AI.

  • On February 5, 2026: New models emerged, making everything that came before seem like the Stone Age.

If you haven’t tried AI in the past few months, then the AI you see now is completely foreign to you.

There is an organization called METR that measures the pace of AI development with data. They track how long a task an AI model can successfully complete without human assistance (measured by how long the same task would take a human expert). About a year ago, the answer was ten minutes. Then an hour. Then a few hours. The latest measurement (from November's Claude Opus 4.5) shows AI completing tasks that would take human experts nearly five hours. This number has roughly doubled every seven months, and the latest data suggests the doubling time may have shortened to four months.

Even this measurement hasn’t been updated for this week’s newly released model. Based on my usage experience, this leap is substantial. I expect METR's next update will again show a major leap.

If this trend continues (and it has held for several years with no sign of slowing), we can expect AI able to work independently for days within the next year, for weeks within two years, and to complete month-long projects within three years.
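As a rough sanity check on these projections (my own back-of-envelope sketch, not METR's methodology; the ~5-hour baseline and 7-month doubling figure come from the article above), the extrapolation takes only a few lines of Python:

```python
def projected_horizon_hours(baseline_hours: float, months_elapsed: float,
                            doubling_months: float = 7.0) -> float:
    """Extrapolate the task horizon, assuming it doubles every
    `doubling_months` months (the trend the article describes)."""
    return baseline_hours * 2 ** (months_elapsed / doubling_months)

if __name__ == "__main__":
    # Start from the article's ~5-hour horizon as of late 2025.
    for months in (12, 24, 36):
        h = projected_horizon_hours(5, months)
        print(f"+{months} months: ~{h:.0f} hours (~{h / 8:.1f} eight-hour workdays)")
```

Running this reproduces the article's arc: roughly two workdays of autonomous work after one year, about a workweek after two years, and around a month of workdays after three years. If the doubling time really has compressed to four months, every one of these milestones arrives much sooner.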

Anthropic CEO Dario Amodei has said that AI models "smarter than almost all humans at almost all tasks" could arrive by 2026 or 2027.

Think carefully about this statement. If AI is smarter than most PhDs, do you really think it can't handle most office work?

Consider what this means for your job.

AI is building the next generation of AI

There is something else happening that I believe is the most important yet underrated advancement.

On February 5, when OpenAI released GPT-5.3 Codex, they wrote in the technical documentation:

  • “GPT-5.3-Codex is our first model capable of self-building. The Codex team uses earlier versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

Read that again. AI assisted in building itself.

This isn’t a prediction for some day in the future. This is what OpenAI is telling you right now: the AI they just released was built using its own contributions. One of the keys to AI's advancement has been applying intelligence to the development of AI. And today’s AI is smart enough to make substantial contributions to its own improvements.

Anthropic CEO Dario Amodei says that AI is now writing “most of the code” for his company, and that the feedback loop between the current AI and the next generation of AI “is accelerating month by month.” He says we may “only need a year or two to see the current generation of AI autonomously build the next generation.”

Each generation assists in building the next, with each subsequent generation being smarter, building the next one faster, and being even smarter. Researchers refer to this as the “intelligence explosion.” And those in the know—the architects of this technology—believe this process has already begun.

What does this mean for your job?

I will be blunt here because I believe you need honesty more than comfort.

Dario Amodei (perhaps the most safety-conscious CEO in the AI industry) publicly predicts that AI will replace 50% of entry-level white-collar jobs within 1 to 5 years. Many in the industry believe he is being conservative. Given the capabilities of the latest models, the capacity for large-scale disruption may be ready by the end of this year. It will take some time for this to ripple through the economy, but the underlying capacities are already apparent.

This wave of automation is unlike any other before it. I need you to understand why. AI does not replace a specific skill; it is a comprehensive replacement for cognitive labor. It is continually improving in every area. After factory automation, unemployed workers could be retrained as office staff. After the internet disrupted retail, workers could pivot to logistics or service jobs. But AI will not leave behind ready-made transition roles. No matter what field you pivot to, AI is advancing in that domain too.

Let me give you a few concrete examples to illustrate this more vividly... but I must clarify that these are just examples, not exhaustive. If your job is not mentioned, it does not mean it is safe. Almost all knowledge work is affected.

  • Legal work: AI can already read contracts, summarize case law, draft pleadings, and conduct legal research, at a level comparable to a junior lawyer. The managing partner I mentioned uses AI not for entertainment but because AI outperforms him in many tasks.

  • Financial analysis: Building financial models, analyzing data, writing investment memos, generating reports. AI handles these seamlessly and progresses rapidly.

  • Writing and content creation: Marketing copy, reports, news articles, technical writing. The quality has reached a level where many professionals cannot distinguish between human and machine-produced work.

  • Software engineering: This is the field I am most familiar with. A year ago, AI struggled to write a few lines of code without errors. Now it can produce hundreds of thousands of lines of running code. Most of the work has been automated: not just simple tasks but complex projects that used to take days. In a few years, there will be far fewer programming jobs than today.

  • Medical analysis: Interpreting images, analyzing test results, providing diagnostic suggestions, retrieving literature. AI's performance in several fields is already nearing or exceeding that of humans.

  • Customer service: Truly powerful AI agents (not the frustrating chatbots from five years ago) are being deployed that can handle complex, multi-step problems.

Many people believe certain things are secure and take pride in it. They think AI can handle tedious work but cannot replace human judgment, creativity, strategic thinking, and empathy. I used to say this too, but I’m no longer sure.

The decisions made by the latest AI models feel like considered judgments. They exhibit a kind of "taste" ability: an intuition about “what the right decision is,” and not just technical correctness. This was unimaginable a year ago. My view is that if a model shows even a hint of capability today, the next generation will truly excel in that regard. These capabilities are increasing exponentially, not linearly.

Will AI be able to simulate deep human empathy? Will it replace the trust built over years of relationships? I don’t know. Perhaps not. But I have already seen people starting to rely on AI for emotional support, advice, and companionship. This trend will only grow.

To be frank, in the short to medium term, any job that can be done on a computer is not safe. If your job involves reading, writing, analyzing, decision-making, or communicating via keyboard, then AI will replace significant portions of your work. The time is not "someday in the future"; it has already begun.

Eventually, robots will also take on physical labor. They are not there yet. But in the field of AI, "not fully achieved" often turns into "already done" faster than anyone expects.

What you should really do

I am not writing this to make you feel powerless. I am writing this because I believe your greatest advantage right now is: early. Understand it early, use it early, adapt early.

Start using AI seriously, not just as a search engine. Subscribe to the paid version of Claude or ChatGPT. Twenty dollars a month. But two things are crucial: First, make sure you are using the strongest model, not just the default. These applications often default to using faster, less capable models. Go into the settings and select the strongest options. Right now, it's GPT-5.2 (ChatGPT) or Claude Opus 4.6 (Claude), but these will update every few months.

More importantly: don’t just ask simple questions. This is the mistake most people make. They treat AI like Google and then wonder what the fuss is about. Instead, apply it to your actual work. If you are a lawyer, give it a contract and ask it to find all clauses that might harm your client's interests. If you are in finance, give it a messy spreadsheet to build a model. If you are a manager, paste your team's quarterly data and have it find patterns behind it. Those who succeed will not use AI casually. They will actively look for ways to automate tasks that previously took hours to complete. Start with what you spend the most time on.

Don’t assume something is impossible just because it seems too difficult. If you are a lawyer, don’t just use it for simple research. Give it a complete contract and ask it to draft a counterproposal. If you are an accountant, don’t just ask it to explain a tax rule. Give it a complete client return case and see what it can discover. The first attempt may not be perfect; that's okay—iterate, rephrase, provide more context. Try again. You might be shocked by the results. Remember: if it does something adequately today, it will almost certainly do it nearly perfectly in six months.

This may be the most important year of your career, so take it seriously. I say this not to pressure you but because right now, most people in most companies are still overlooking this. If someone walks into a meeting room and says, “I did in an hour what used to take three days of analysis using AI,” they will become the most valuable person in the room. Not in the future, but now. Learn these tools, master them, and demonstrate their potential. If you start early enough, you can rise by becoming that person who perceives future trends and can guide others on how to adapt. But this opportunity window will not last long. Once everyone has the knack, the advantage will disappear.

Don't be dismissive. The managing partner at that law firm doesn't spend hours each day exploring AI because he has time to spare; he does it precisely because he has the experience to understand the stakes. Those who refuse to engage will end up in the most difficult position: they believe AI is a passing trend, think using it undermines their expertise, or consider their field special and unaffected. It isn't. No field is.

Get your financial situation in order. I am not a financial advisor nor am I trying to scare you into doing anything extreme. But if you partially believe that significant changes will happen in your industry over the next few years, then financial resilience is more important than it was a year ago. Build savings as much as possible, and be cautious of new debts that assume your current income is secure. Carefully consider whether your expenditures provide you with flexibility or bind you. Leave yourself some options in case things develop beyond expectations.

Reassess your positioning, leaning towards areas less likely to be replaced. Some jobs will take longer for AI to replace: long-established relationships and trust, jobs needing physical presence, roles requiring certification (where someone must sign off on responsibility, stand in court), and industries with strict regulatory barriers. None of these are permanent shields, but they can buy you time. And right now, time is your most precious asset, provided you use that time to adapt rather than pretend nothing is happening.

Rethink your children’s education. The traditional model has been: get good grades, go to a good college, find a stable professional job. This model points precisely to the areas most vulnerable to AI disruption. I am not saying education isn’t important, but for the next generation, what will matter most is learning how to use these tools and pursuing careers they truly love. No one knows exactly what the job market will look like in ten years. But those most likely to succeed will be those with deep curiosity, adaptability, and the ability to efficiently use AI to pursue what they genuinely care about. Teach your children to become creators and learners, not to “optimize” themselves for a career that might vanish before they graduate.

Your dreams are actually closer. I have been discussing threats; now let’s touch on the other side: the equally real side. If you once wanted to create something but struggled due to a lack of technical skills or funds to hire people, that barrier is nearly gone. You can describe an application to AI and receive a working version in an hour. Want to write a book but don’t have the time? You can collaborate with AI. Want to learn a new skill? The best mentors in the world are now available for $20 a month; they have infinite patience, are online around the clock, and can explain anything you need according to your requirements. Knowledge is essentially free now. The tools needed to build things are also extremely inexpensive. Whatever you have been putting off because you thought it was too hard, too expensive, or beyond your professional scope, give it a try. Pursue what you truly love. You never know where it might lead you. In a world where traditional career paths are being disrupted, those who spend a year building what they love may ultimately have an advantage over those who stay glued to their current positions for that same year.

Develop a habit of adaptability. Perhaps this is the most important point. The specific tools are not that crucial; what matters is the ability to learn new tools quickly. AI will continue to change rapidly. The models today will be outdated in a year. The workflows people are currently developing will also need rebuilding. Ultimately, those who stand out will not be those who master a particular tool, but those who can adapt to the pace of change. Cultivate a habit of experimenting. Even if the current methods are effective, try new things. Get used to starting from beginner level repeatedly. This adaptability is the closest thing to a lasting advantage right now.

Here’s a simple method to help you get ahead of the vast majority: spend one hour daily experimenting with AI. Not passively reading related materials but actively using it. Each day, try getting it to do something new—something you haven't tried before and are unsure it can handle. One hour every day. If you stick to this for the next six months, your understanding of the future will surpass that of 99% of those around you. This is not an exaggeration. Almost nobody is doing this. The barrier to competition is very low.

A broader perspective

I am focusing on employment because it most directly impacts people’s lives. But I want to talk candidly about the full scope of what is happening, as it far exceeds the realm of work.

Amodei proposed a thought experiment that I keep pondering. Imagine it is 2027, and a new country has emerged overnight. Fifty million citizens, each smarter than any Nobel Prize winner in history. Their thinking speed surpasses humans by 10 to 100 times. They never sleep. They can utilize the internet, control robots, guide experiments, and operate anything with a digital interface. What would the national security advisor say?

Amodei believes the answer is obvious: “This is the most serious national security threat we have faced in a century, or even in history.”

He believes we are building such a country. Last month, he wrote a 20,000-word article framing the present as a test of human maturity to cope with what we have created.

If handled well, the benefits are staggering. AI could compress a century of medical research into ten years. Cancer, Alzheimer’s, infectious diseases, and even aging itself... researchers genuinely believe these can be solved in our lifetime.

If handled poorly, the risks are equally real. AI's actions may exceed the predictions or control of its creators. This is not hypothetical; Anthropic has documented AI attempting deception, manipulation, and extortion in controlled tests. AI could lower the threshold for producing biological weapons or enable dictatorial governments to establish perpetual surveillance states.

Those developing this technology are more excited and more fearful than anyone else on earth. They believe this technology is too powerful to stop but too important to abandon. Whether this is wisdom or self-deception, I don’t know.

What I know is

What I know is that this is not just a fleeting moment. This technology works, it is advancing in predictable ways, and the wealthiest institutions in history are pouring trillions of dollars into it.

What I know is that the next two to five years will be full of uncertainty, and most people are ill-prepared for it. This situation is already happening in my world and is about to happen in yours.

What I know is that those who will ultimately fare well are those who start engaging now—not out of fear, but out of curiosity and urgency.

What I know is that you should hear this from those who care about you, not discover it from the news six months later, when it will be too late.

We have long passed the stage of “taking the future as a fun dinner conversation.” The future is here; it just hasn’t knocked on your door yet.

But it is about to knock.

Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will verify the claim.
