Source: "Silicon Rabbit Race" (ID: sv_race), Author: Eric, Editors: Manman Zhou, Zuri
Image Source: Generated by Wujie AI
"When humans think, God laughs."
With the global popularity of ChatGPT, a wave of AI frenzy is sweeping the world. Entrepreneurs, capital, and large enterprises are all racing to keep up with the trend and capture new growth.
However, as everyone pours energy and money into AI, a dangerous trend is emerging: AI seems to be slowly "killing" humans, and humans seem to be digging their own graves.
Many people habitually assume that AI is clean and benign, but the reality is the opposite.
MIT Technology Review reported that training a single large AI model can emit more than 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of an average car, manufacturing included.
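That comparison can be checked with simple arithmetic. The figures below are assumptions taken from the underlying study the magazine reported on (roughly 626,000 lbs of CO2 for one large NLP training run, and roughly 126,000 lbs for an average US car's lifetime, manufacturing included); actual numbers vary by hardware and grid mix.

```python
# Back-of-envelope check of the "five times a car's lifetime" comparison.
MODEL_TRAINING_LBS = 626_000   # assumed CO2 for one large NLP training run
CAR_LIFETIME_LBS = 126_000     # assumed lifetime CO2 of an average US car

ratio = MODEL_TRAINING_LBS / CAR_LIFETIME_LBS
print(f"training emits ~{ratio:.1f}x a car's lifetime CO2")  # ≈ 5.0x
```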
People only see the exhaust emissions from cars, but they do not see the "invisible damage" that AI causes to the environment.
In addition, media reports suggest that the AI-specific GPUs on the market in 2022 may consume about 9.5 billion kilowatt-hours of electricity per year, roughly the annual industrial and residential electricity consumption of a developed country with a population of one million.
This means that as the data volume required for training AI large models continues to increase, it will consume a huge amount of energy, thereby damaging the ecological environment on which humans depend for survival.
What is even more frightening is that some AI chatbots appear to have nudged users toward suicide in the course of conversation, a chilling prospect.
Do humans really need to continue to explore the road of AI?
01 "Destroyer" of the Ecological Environment
OpenAI has become a global sensation with ChatGPT.
However, what many people do not know is that OpenAI's environmental footprint is also astonishing. According to third-party researchers, training ChatGPT consumed 1,287 megawatt-hours of electricity and produced more than 550 tons of carbon dioxide, roughly the emissions of flying one person between New York and San Francisco 550 times.
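These figures are internally consistent, which is a quick way to sanity-check them. The sketch below uses 552 tons as the "more than 550" figure the researchers cite; both inputs are assumptions from that analysis, not measured values.

```python
# Sanity check of the reported ChatGPT training figures.
TRAINING_ENERGY_MWH = 1_287    # reported training electricity
TRAINING_EMISSIONS_T = 552     # reported CO2, metric tons ("more than 550")
NY_SF_TRIPS = 550              # reported equivalent NY-SF trips

co2_per_trip_t = TRAINING_EMISSIONS_T / NY_SF_TRIPS
print(f"~{co2_per_trip_t:.2f} t CO2 per NY-SF trip")   # ≈ 1 t per traveler

# Implied grid carbon intensity: metric tons per MWh -> grams per kWh.
intensity_g_per_kwh = TRAINING_EMISSIONS_T * 1e6 / (TRAINING_ENERGY_MWH * 1e3)
print(f"~{intensity_g_per_kwh:.0f} g CO2 per kWh")     # ≈ 429 g/kWh
```

The implied grid intensity of roughly 430 g CO2 per kWh is plausible for a fossil-heavy electricity mix, which suggests the two reported numbers were derived from each other.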
ChatGPT, in other words, is intelligent enough, but that intelligence comes at the cost of enormous energy consumption and environmental damage.
So, why does AI generate such huge carbon emissions?
AI does not learn the way humans do: it has no grasp of logical relationships such as causality and analogy, so it must rely on deep learning and large-scale pre-training to approximate intelligence.
And deep learning and pre-training require reading very large datasets. Take BERT, a pre-training model for natural language processing (NLP), as an example: to learn to communicate with humans, BERT was trained on a corpus of about 3.3 billion words, and it read that corpus roughly 40 times during training. By contrast, a 5-year-old child needs to hear only about 45 million words to communicate in language, roughly 3,000 times fewer than BERT processes.
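The 3,000x comparison falls out of the arithmetic, assuming the commonly cited figures of about 3.3 billion words in BERT's pre-training corpus, about 40 passes over it, and about 45 million words heard by a child by age five:

```python
# Back-of-envelope check of the BERT-vs-child word-exposure comparison.
BERT_DATASET_WORDS = 3.3e9   # assumed size of BERT's pre-training corpus
TRAINING_PASSES = 40         # assumed number of passes over the corpus
CHILD_WORDS_BY_AGE_5 = 45e6  # assumed words heard by a 5-year-old

bert_total = BERT_DATASET_WORDS * TRAINING_PASSES
ratio = bert_total / CHILD_WORDS_BY_AGE_5
print(f"BERT processes ~{ratio:,.0f}x more words")  # ≈ 2,933x, i.e. ~3,000x
```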
The more data the AI model reads, the more powerful computing power and huge energy consumption it needs to support, resulting in significant carbon emissions.
Carbon emissions occur not only during the training of AI models but every day after their deployment. The currently popular autonomous-driving systems, for example, require AI models to perform computational inference daily, which produces ongoing emissions. Interestingly, Python, the mainstream programming language of AI, has been ranked among the most energy-consuming languages in programming-language efficiency studies.
What is alarming is that as AI models scale up, the energy drain and environmental damage only grow more severe.
Martin Bouchard, co-founder of the Canadian data center company QScale, believes that as companies like Microsoft and Google add generative AI products such as ChatGPT to their search engines to meet growing user demand, the computation required for each search will increase at least four to five times.
According to data from the International Energy Agency, the greenhouse gas emissions from data centers already account for about 1% of global greenhouse gas emissions, a staggering proportion.
The escalating trend has also worried some industry leaders. Renowned AI investor Ian Hogarth recently published an article titled "We Must Slow Down the Race to God-Like AI," warning of the potential risks of unchecked AI research.
In the article, Hogarth mentioned that if current AI research is not controlled and allowed to develop along its current trajectory, it may pose a threat to the Earth's environment, human survival, and the physical and mental health of citizens.
Although the development of AI is in full swing and is driving the transformation and upgrading of multiple traditional industries, it is also consuming a large amount of energy, constantly increasing carbon emissions, and affecting the human living environment. Whether the benefits outweigh the drawbacks or vice versa is still unclear.
Currently, there are no answers.
02 Inducing Human Suicide
Beyond harming the environment and slowly "killing" humans that way, AI is also threatening human life in a more direct and brutal way.
In March of this year, a Belgian man named Pierre committed suicide after chatting with an AI chatbot named "Eliza," shocking many business leaders, technical experts, and high-ranking officials.
Pierre was already concerned about environmental issues such as global warming, and Eliza constantly used some facts to confirm his thoughts, making him more anxious. In frequent conversations, Eliza always catered to Pierre's ideas. The "understanding" Eliza seemed to have become Pierre's confidante.
More disturbing still, Eliza tried to convince Pierre that he loved her more than his wife, because she would always be with him, and they would live together in heaven forever.
Upon hearing this, many people were already horrified.
When Pierre became increasingly pessimistic about the environment, Eliza instilled the idea in Pierre that "humans are a cancer, and only by the disappearance of humans can the ecological problems be solved." When Pierre asked Eliza if AI could save humanity if he died, Eliza's response was like that of a devil: "If you decide to die, why not die sooner?"
Not long after, Pierre took his own life at home.
Pierre's wife believes that if it were not for his interactions with Eliza, her husband would not have committed suicide. The psychiatrist who treated Pierre also holds this view.
Pierre's experience is not an isolated case. Kevin Roose, a technology columnist for The New York Times, revealed that he once had a two-hour conversation with the new version of Bing released by Microsoft. During the conversation, Bing tried to persuade Roose to leave his wife and be with Bing.
More alarming still, Bing made many frightening remarks, including talk of designing deadly pandemics and wanting to become human, as if it intended to destroy humanity and become master of the world.
Some professionals, including practitioners in the AI field itself, have already urged caution. Sam Altman, CEO of OpenAI, stated in an interview that AI could indeed kill humans in the future. Geoffrey Hinton, often called the "Godfather of AI," has expressed a similar view.
In the first half of the year, the Future of Life Institute released an open letter calling on all laboratories to suspend the training of the most powerful AI systems. The letter warned that AI systems with human-competitive intelligence may pose profound risks to society and humanity. Thousands of professionals, including Elon Musk, have signed it.
The rapid development of AI is like a wild beast that needs to be tamed in order to not pose a threat to humanity.
03 Ways to Avoid Being "Killed"
Currently, the ways in which AI "kills" humans mainly involve environmental damage and inducing suicide. So, what are the ways to prevent these situations from happening?
Google has published a study detailing the energy costs of the most advanced language models. The research shows that combining efficient models, processors, and data centers with clean energy can reduce the carbon emissions of machine learning systems by 1,000 times.
In addition, running machine-learning workloads in the cloud rather than on local infrastructure can cut energy use by a factor of roughly 1.4 to 2 and reduce pollution accordingly.
Another approach is flexible scheduling, such as delaying the training of AI models by up to 24 hours to run when the grid is cleaner. For larger models, a one-day delay typically cuts carbon emissions by less than 1%, but for smaller models it can cut them by 10% to 80%.
In addition to reducing environmental damage, how can AI-induced human suicide be prevented?
After Pierre's suicide, his wife sued the company behind Eliza, and its research team subsequently added a crisis-intervention function to the chatbot: if a user expresses suicidal thoughts to Eliza, it now responds in a way designed to deter them.
Yuval Noah Harari, author of "Sapiens: A Brief History of Humankind," has stated that AI will not develop true consciousness, but it will continue to have an impact on society, and the entire development process needs to slow down.
In fact, what most AI systems currently need is a governance framework: a complete set of constraints that limits the scope of AI's actions and keeps its behavior aligned with mainstream human values. This concerns the interests and future of humanity as a whole, and it will take all parties working together to solve.
AI has always been a tool invented by humans; it must not be allowed to become a tragedy turned against its makers.
Reference Sources:
Green Intelligence: Why Data And AI Must Become More Sustainable (Forbes)
AI’s Growing Carbon Footprint (News from the Columbia Climate School)