Original Source: 财经思享汇
Author: Amigo

Moore's Law needs little introduction. Gordon Moore, co-founder of Intel, observed in the 1960s that the number of transistors on a chip doubles roughly every two years. In effect, the law predicts growing processing power and falling computing costs, and it has profoundly shaped technological progress over the past few decades.
From PCs to smartphones, tablets, and even VR and AR, every electronic product built on transistors and chips has followed this law. In other words, since the PC era, Moore's Law has set the pace of progress in every field that depends on chips.
Human society has changed at a remarkable pace over the past few decades, and Moore's Law played a significant part in that. Yet in the field of AI, even the pace set by Moore's Law has been left far behind.
AI computing power is growing at an unprecedented rate. Since 2012, the compute used to train the largest AI models has doubled roughly every three to four months, a cadence that over the past decade has far outstripped the two-year doubling of Moore's Law.
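To make the gap concrete, here is a minimal back-of-the-envelope sketch comparing the two doubling cadences. The 3.4-month figure follows OpenAI's widely cited "AI and Compute" estimate; both constants should be read as rough assumptions rather than measurements:

```python
# Back-of-the-envelope comparison of two exponential growth regimes:
# Moore's Law (doubling every ~24 months) vs. large-scale AI training
# compute (assumed here to double every ~3.4 months).

MOORE_DOUBLING_MONTHS = 24.0   # classic Moore's Law cadence
AI_DOUBLING_MONTHS = 3.4       # assumed AI-compute doubling cadence

def growth_factor(months: float, doubling_months: float) -> float:
    """Multiplicative growth after `months`, given a doubling period."""
    return 2.0 ** (months / doubling_months)

for years in (1, 2, 5, 10):
    months = years * 12
    print(f"{years:>2} yr: Moore's Law x{growth_factor(months, MOORE_DOUBLING_MONTHS):>8,.1f}"
          f"   AI compute x{growth_factor(months, AI_DOUBLING_MONTHS):>18,.1f}")
```

Under these assumptions, a decade of Moore's Law yields a 32x gain, while the 3.4-month cadence yields a gain on the order of tens of billions, which is why the gap is described as unprecedented.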
Just recently, NVIDIA unveiled its most powerful AI chip to date, the H200, which the company says delivers roughly 60% to 90% higher performance than its predecessor, the H100, while remaining compatible with it. Enterprises currently running H100s can therefore upgrade to the H200 with little friction, nearly doubling the hardware performance behind their AI workloads. Combined with ongoing optimization of core AI algorithms, the pace of progress in AI is unmatched by any previous technological explosion in human history.
The Era of AI
"The iPhone moment of AI has arrived." "AI will be immortal and shake the world. Its birth is like the creation of the universe." "Artificial intelligence may be the greatest event in human history, or it may be the last event."
Since the end of last year, astonishing statements about artificial intelligence have been spreading across social networks. When these business and technology pioneers first made such proclamations, artificial intelligence still seemed remote from ordinary people's lives, and many remained skeptical: "Is this real? Will AI fade away after the hype, like the metaverse, Web3, and XR?"
Recent developments, however, show that AI can no longer be ignored; its effects are now visible across politics, the economy, and everyday life.
First, on October 30, U.S. President Biden signed a sweeping executive order on AI regulation, aiming to ensure that the United States maintains its leadership in the technology. It directs government agencies to establish standards that protect data privacy, strengthen cybersecurity, prevent discrimination, and enhance fairness. The order also emphasizes monitoring the competitive landscape of the fast-growing industry and requires private companies to report their AI system training and testing methods to the federal government. Biden called AI "the most important technology of our time."
Just one day later, the first global AI Safety Summit was held at Bletchley Park, the site of Britain's codebreaking center during World War II. Delegates from 28 countries and regions, including the UK, the EU, the US, and China, jointly signed the Bletchley Declaration, committing to a "human-centered, trustworthy, and responsible" model of AI development. The declaration specifically flagged the potential security risks of "frontier" AI, particularly in areas such as cybersecurity, biotechnology, and misinformation.
A week after that, at OpenAI's first developer conference, the company announced major upgrades: the GPT-4 Turbo model with far longer context, cheaper tokens, and better performance and scalability; a new Assistants API; multimodal capabilities; text-to-speech; and, most notably, the GPT Store, which many industry insiders read as a way for anyone to make money by building their own GPT.
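For readers curious what the new Assistants API looks like in practice, here is a minimal sketch based on the interface announced at the conference. It assumes the official `openai` Python client (v1.x) and an `OPENAI_API_KEY` in the environment; the assistant name, instructions, and model string are illustrative, and the beta interface may have changed since:

```python
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Define a persistent assistant (name and instructions are illustrative)
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a patient math tutor. Show your reasoning step by step.",
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model announced at DevDay
)

# Conversations live in threads: add a user message, then run the assistant
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Solve 3x + 11 = 14 and show your work.",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Runs are asynchronous; poll until the assistant has finished
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Print the conversation (messages are returned newest first)
for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, ":", message.content[0].text.value)
```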
Hardware built specifically for AI has emerged as well. The AI Pin, made by Humane, a startup founded by former Apple executives and backed by OpenAI CEO Sam Altman, has drawn global attention since its launch. The device has no screen and is controlled only by voice and gesture, yet packs powerful AI capabilities; dubbed the "iPhone of the AI era," it can handle many of a smartphone's tasks and positions itself as a smartphone competitor.
As AI races ahead, imagine a world where every keystroke and every swipe of the screen is an interaction with an extraordinary intelligence, one that not only reads between the lines of your words but anticipates thoughts you have not yet formed. It seems people are stepping into a better world with artificial intelligence.
However, the fear brought about by artificial intelligence has never been far away.
Creating It, Becoming It?
As early as 2014, a chilling warning came from Stephen Hawking, who declared, "The development of full artificial intelligence could spell the end of the human race." That same year, Elon Musk's words were just as dramatic and urgent: he warned that artificial intelligence is probably "our biggest existential threat," and that building it is like "summoning the demon" we cannot control. With these words, two giants of the era sketched a terrifying future dominated by artificial intelligence.
Fire can indeed burn down an entire city, but it is also the foundation of modern civilization.
On the morning of November 29, 1814, the printing workshop of The Times in London was filled with anxious anticipation. Workers paced nervously, as the order from their boss, Mr. John Walter, was to wait—a crucial message was about to arrive from post-war Europe. As the minutes ticked by, the workers' anxiety grew. The slow pace of manual printing was normal in the newspaper industry at the time, but today's delay seemed to portend something unusual.
At six o'clock in the morning, Mr. Walter entered the workshop with a freshly printed copy of The Times and revealed a startling fact to the astonished workers: the paper had been printed on a steam-powered press secretly installed in another building. In the wave of the Industrial Revolution, the machine symbolized a huge leap in productivity, but also, for the workers, the specter of unemployment. Their fears were not unfounded: no human hand could match the machine's efficiency.
This story is not just a microcosm of the Industrial Revolution but an eternal theme in the course of technological development: whenever new technology emerges, people greet it with fear and resistance. From ancient Greek doubts about writing to modern anxieties about the internet and artificial intelligence, every technological advance disturbs the tranquility of the old world.
However, history's answer is that this fear is also a catalyst for progress. It prompts us to reflect, adjust, and ultimately embrace new technology in a more mature form. Looking back now, we may scoff at past fears, but it cannot be denied that this fear has shaped today's society and defines the possibilities of tomorrow.
Of course, as AI technology develops, its integration with society has created plenty of problems. The recent viral AI-generated clip of comedian Guo Degang performing crosstalk in English, for instance, left many people worried that their voices could be harvested and used by criminals for telecom fraud. With AI advancing rapidly in image, voice, and video generation, deepfakes have undergone a qualitative change, and many people have already become their victims.
However, regulating AI is not an easy task. The technology, especially advanced methods like deep learning, is highly complex and hard for non-specialists to understand. It is also evolving faster than existing regulations and standards can keep up. It entangles data privacy with ethics, which makes unified standards hard to establish. International cooperation is lacking: countries and regions differ sharply in their regulatory standards and laws, with no unified international framework. Even the question of how to regulate is a major challenge, because the "black box" dilemma, the opacity of AI decision-making, makes regulation hard to implement.
Take AI's most mysterious aspect, the "black box." The term refers to the opacity of an AI system's decision-making process, especially in complex machine learning models.
In these systems, even when the outputs or decisions are highly accurate, outside observers, including the developers themselves, struggle to understand or explain how the model arrived at them. This lack of transparency and interpretability raises problems of trust, fairness, accountability, and ethics, especially in high-stakes areas such as medical diagnosis, financial services, and judicial decisions.
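A toy illustration of the problem, sketched with scikit-learn (the dataset and model are entirely synthetic and illustrative): the network below predicts accurately, yet its thousands of learned weights explain nothing by inspection, and the best an outsider can do is a behavioral probe such as permutation importance.

```python
# The "black box" problem in miniature: an accurate model whose raw
# parameters are uninterpretable, probed from the outside.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data: 8 features, only 3 actually informative
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                      random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))  # accurate...

# ...but the learned parameters are thousands of opaque numbers:
print("parameter count:", sum(w.size for w in model.coefs_))

# Permutation importance asks: how much does accuracy drop when one
# feature is shuffled? A behavioral probe, not a true explanation.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")
```

Probes like this reveal which inputs the model is sensitive to, but not why it decides as it does; that gap between behavior and explanation is exactly the regulatory difficulty described above.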
So, when even the smartest scientists do not understand AI, can AI still be trusted?
Turing Award winner Joseph Sifakis once posed a thought-provoking question: "When discussing the credibility of artificial intelligence, can we base it on objective scientific standards rather than getting caught up in endless subjective debates?"
In response, Huang Tiejun, director of the Beijing Academy of Artificial Intelligence (BAAI) and a professor at Peking University's School of Computer Science, argued that humans and AIs alike are intelligent entities that are difficult to fully understand and trust. He stressed that once AI surpasses human intelligence and full artificial general intelligence (AGI) emerges, human-centrality will collapse, and the question will become whether AGI trusts humans rather than the reverse. In a society of intelligent entities, he believes, the trustworthy ones can persist more sustainably. His conclusion: we cannot guarantee the trustworthiness of other intelligent entities, but we can strive to guarantee our own.
When complex black-box systems cannot yet be deciphered, "alignment" may be the best available answer. Alignment is the process of ensuring that AI systems act in accordance with human values and ethics. As models grow larger, more powerful, and capable of more complex tasks, their decisions can have significant real-world consequences; the purpose of value alignment is to ensure those consequences are positive and serve the overall interests of human society. (One concrete recipe used in practice is sketched at the end of this section.)
Shane Legg, co-founder and Chief AGI Scientist of Google DeepMind, argues that AGI-level systems must be given a deep understanding of the world and of ethics, along with robust reasoning: an AGI should analyze a situation in depth rather than act on its first reaction, so that it reliably follows human ethics. That requires extensive training and ongoing review, with sociologists, ethicists, and others jointly deciding which principles it should follow. Scientists at OpenAI have gone a step further and proposed "superalignment" beyond mere alignment.
In a conversation with MIT scientist Lex Fridman, Musk repeated his long-standing call for the regulation and supervision of artificial intelligence. AI, he believes, is a powerful force, and powerful forces must come with great responsibility. There should be an objective third-party body, like a referee, overseeing the leading AI companies; even without enforcement powers, it could at least voice concerns publicly. After Geoffrey Hinton left Google, for example, he spoke out strongly about the risks of AI, but now that he is outside Google, who takes on that responsibility?
There is also the question of what fair rules such a third-party overseer would apply. While humans have not yet worked out AI's operating principles, this looks like a hard problem, and Musk raised the same point: "I don't know what the fair rules are, but before supervision, you have to start with insight." Even the recently signed Bletchley Declaration, for all its 28 signatories, only advances the global process of AI risk management; no actual regulations are yet in place. For now, regulators in each country can keep their methods adaptable by reviewing them continuously, in plain terms, taking it step by step.
Meanwhile, the most enthusiastic pioneers of the technology have already raised their torches and are not worried about AI getting out of control. Ilya Sutskever, OpenAI's chief scientist, who once suggested that large neural networks may already be slightly conscious, is even mentally prepared to become part of AI: "Once you solve the challenge of AI getting out of control, then what? In a world with more intelligent artificial intelligence, is there still room for humans to survive? There is a possibility, crazy by today's standards but not so crazy by future standards, that many people will choose to become part of artificial intelligence. This may be the way humans try to keep up with the times. At first, only the boldest and most adventurous will try. Perhaps others will follow, or perhaps not."
NVIDIA CEO Jensen Huang's words may help people understand Ilya Sutskever, the creator of ChatGPT: "When you invent a new technology, you have to accept crazy ideas. My mindset is always looking for something weird, and neural networks changing the idea of computer science is a very weird idea."
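The alignment sketch promised above: one widely used concrete recipe is to train a reward model from human preference comparisons (the approach popularized by RLHF) and then optimize the AI against it. Below is a minimal, illustrative version of the pairwise preference loss in PyTorch; the scores and shapes are toy assumptions, not any lab's actual implementation:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss for reward-model training: push the
    score of the human-preferred response above the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scores a reward model assigned to 4 (chosen, rejected) pairs
chosen = torch.tensor([1.2, 0.3, 2.1, 0.8])
rejected = torch.tensor([0.4, 0.9, 1.0, -0.2])
print(preference_loss(chosen, rejected))
```

The intuition: whenever human raters prefer one response over another, the loss pushes the reward model to score the preferred response higher, turning "what humans value" into a trainable signal.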
Just as the apple that struck Newton seeded a strange and crazy idea, only the strangest and craziest people can create the technology that breaks through the walls of an era.