Silicon Valley's vision for AI? It's religion, repackaged.


Author: Sigal Samuel

Source: Vox


This is no coincidence - the entanglement of religion and technology goes back hundreds of years.

What if I told you that in 10 years, the world as you know it will end? You will live in a kind of paradise: no sickness, no aging, no death. Immortality will be yours. Even better, your mind will be freed from uncertainty; you will have perfect knowledge. And you will no longer be confined to Earth - you will be able to live in the heavens.

If I told you all this, would you assume I was a religious missionary or an artificial intelligence researcher?

Either guess would make sense.

The more you listen to Silicon Valley's conversations about artificial intelligence, the more you hear echoes of religion. That's because much of the excitement about building superintelligent machines is religious thought, recycled. Most of the secular technologists building AI simply don't recognize it.

These technologists propose escaping death by uploading our minds to the cloud, where we could live forever in digital form. They describe artificial intelligence as a decision-making agent that can determine, with mathematical certainty, what is optimal and what is not. And they envision artificial general intelligence (AGI) - a hypothetical system matching human problem-solving abilities across many domains - as an endeavor that guarantees our salvation if it succeeds, and catastrophe if it fails.

These visions are almost identical to those of Christian eschatology, the branch of theology concerned with the "end times" and the ultimate fate of humanity.

Christian eschatology tells us that we are all headed toward the "four last things": death, judgment, heaven, and hell. After the Second Coming of Christ, the dead will be resurrected and meet their eternal destination. Our souls will face the final judgment of God, the perfect decision-maker. If all goes well, we ascend to heaven; if not, we descend to hell.

Five years ago, when I started attending Silicon Valley conferences and first noticed the parallels between religious and AI narratives, I reached for a simple psychological explanation. Both are responses to core human anxieties: death; the difficulty of telling right from wrong; the uncertainty about our life's meaning and our ultimate place in this universe - or the next one. Religious thinkers and AI thinkers have simply stumbled upon similar answers to the questions that haunt us all.

So I was surprised to find that the connection between the two goes much further than that.

The intertwining of religion and technology goes back centuries, said Robert Geraci, a professor of religious studies at Manhattan College and author of "Apocalyptic AI," although some will tell you that science is value-neutral and has nothing to do with religion. "That's simply not true," he said. "It never has been."

In fact, historians who trace the influence of religious thought argue that you can draw a straight line from medieval Christian theologians, through the empiricists of the Renaissance, to the futurist Ray Kurzweil and the heavyweights of the Silicon Valley tech industry whom he influenced.

Occasionally, someone still dimly senses the similarities. Jack Clark, co-founder of the AI safety company Anthropic, wrote on Twitter in March of this year: "Sometimes I think people's fervor for AGI comes from a misplaced religious impulse in secular culture."

In most cases, though, those who treat AGI as a kind of technological eschatology - from Sam Altman, CEO of ChatGPT-maker OpenAI, to Elon Musk, who hopes to connect brains to computers - express their ideas in secular language. They either don't realize, or don't want to admit, that their vision is steeped in ancient religious thought.

But it is worth knowing where these ideas come from. Not because "religious" is somehow a pejorative; an idea's religious roots do not make it problematic (often quite the opposite). Rather, we should understand the history of concepts such as the virtual afterlife as a form of redemption, or moral progress understood as technological progress, so that we see they are neither immutable nor inevitable. Certain people proposed them at certain times for certain purposes; other ideas are available if we want them. We do not have to succumb to the danger of a single story.

"We must be cautious about which narratives we accept," said Elke Schwarz, a political theorist at Queen Mary University of London who studies the ethics of military artificial intelligence. "Whenever we talk about religious things, something sacred is involved. And the sacred can be harmful, because if something is sacred, it is worth doing the worst things for."

The concept of artificial intelligence has always had strong religious overtones

In the West, shaped by the Abrahamic religions, all of this traces back to shame.

Remember what happened in "Genesis"? After Adam and Eve ate from the tree of knowledge, God expelled them from the Garden of Eden and subjected them to all the humiliations of the flesh: toil and pain, birth and death. After the fall from grace, humanity was never the same. Before the fall, we were created as perfect beings in the image of God; now, we are pitiful vessels.

But in the Middle Ages, Christian thinkers proposed a radical idea, as historian David Noble explains in his book "The Religion of Technology." What if technology could help us restore humanity to the perfect state before Adam's fall?

For example, the influential 9th-century philosopher John Scotus Eriugena insisted that part of what it meant for Adam to be created in the image of God was that Adam was a creator, a maker. If we want to restore humanity to its pre-fall perfection, then, we must cultivate that aspect of ourselves. The "mechanical arts" (in other words, technology), Eriugena wrote, are "man's links with the Divine, their cultivation a means to salvation."

This idea took hold in the monasteries of the Middle Ages, where the motto "ora et labora" (pray and work) began to spread. Even in the so-called Dark Ages, some monasteries became hotbeds of engineering, producing inventions such as the first known tidal-powered waterwheel and percussion drilling. Catholics became known as innovators; to this day, engineers have four patron saints in the Catholic Church. Some say the Catholic Church was the Silicon Valley of the Middle Ages, and with good reason: as I noted in a 2018 article in The Atlantic, the church was instrumental in "metallurgy, milling, musical notation, clocks, and the widespread use of the printing press."

This was not technology for technology's sake, nor for profit's sake. Rather, technological progress was synonymous with moral progress: by restoring humanity to its original perfection, we could usher in the kingdom of God. As Noble writes, technology "has been associated with transcendence in unprecedented ways, linked to Christian ideas of redemption."

The medieval identification of technological progress with moral progress went on to influence generation after generation of Christian thinkers, right up to modern times. A pair of Bacons illustrates how the same core belief - that technology will bring redemption - has shaped both religious traditionalists and those who adopted a scientific worldview.

In the 13th century, the alchemist Roger Bacon, drawing inspiration from biblical prophecies, tried to create an elixir of life that would achieve a resurrection like the one described by the apostle Paul. Bacon hoped the elixir would not only grant immortality but also endow humans with miraculous abilities, such as traveling at the speed of thought. Then, in the 16th century, came Francis Bacon. On the surface he seemed at odds with his predecessor - he criticized alchemy as unscientific - yet he too prophesied that we would one day use technology to overcome our mortality, "to the glory of the Creator, and the relief of man's estate."

During the Renaissance, Europeans dared to dream that we could remake ourselves in the image of God - not only inching toward immortality, but creating consciousness out of lifeless matter.

Schwarz points out, "In addition to overcoming death, the possibility of creating new life is an ultimate power."

Christian engineers created automata - wooden robots - that could walk and recite prayers. It is said that Muslims created a mechanical head that could speak like an oracle. In Jewish folklore, there are stories of a rabbi using magical language to bring a clay figure (referred to as a "golem") to life. In these stories, the golem sometimes saves the Jews from persecution. But at other times, the golem also turns against them, committing crimes and using its power for evil.

Yes, all of this should sound very familiar. You can hear the same anxieties in mathematician and philosopher Norbert Wiener's 1964 book "God & Golem, Inc." and in the many open letters tech experts publish today, warning that AGI will bring either salvation or catastrophe.

Reading these statements, you might ask: if AGI threatens doomsday as much as it promises salvation, why build it at all? Why not confine ourselves to narrower forms of artificial intelligence - which are already working wonders in applications such as disease treatment - and stay there for a while?

To find the answer, let's return to history and see how three intertwined movements that emerged more recently have shaped Silicon Valley's vision of artificial intelligence.

Enter Transhumanism, Effective Altruism, and Longtermism

Many people assume that when Charles Darwin published "On the Origin of Species" in 1859, religious thinkers uniformly saw it as a terrifying heresy - a threat to the belief that human beings were God's special creation. But some Christian thinkers saw it as splendid new clothing for ancient spiritual prophecies. Religious thought never truly disappears; it just puts on new garb.

A prime example is Pierre Teilhard de Chardin, a French Jesuit priest of the early 20th century who also studied paleontology. He believed that human evolution, propelled by technology, was actually the vehicle for bringing about the kingdom of God, and that the fusion of humans and machines would produce an explosion of intelligence he called the Omega Point. Our consciousness would enter a "superconscious" state in which we would be one with God, becoming a new species.

As writer Meghan O'Gieblyn records in her 2021 book "God, Human, Animal, Machine," evolutionary biologist Julian Huxley, who was also the president of the British Humanist Association and the British Eugenics Society, promoted Teilhard's view that we should use technology to evolve our species, calling it "transhumanism."

This, in turn, influenced the futurist Ray Kurzweil, who made predictions much like Teilhard's: we are on the verge of an era in which human and machine intelligence merge and become immensely powerful. Kurzweil did not call this the Omega Point; he rebranded it the "Singularity."

In his bestselling 1999 book "The Age of Spiritual Machines," Kurzweil wrote that humanity, together with its creations in the form of advanced computing technology, "will be able to address ancient problems… and will change the nature of death in a postbiological future." (Strong New Testament vibes. Per the Book of Revelation: "There will be no more death or mourning or crying or pain, for the old order of things has passed away.")

Kurzweil acknowledges the spiritual resonance of these ideas, as do those who have built explicitly religious movements around worshiping artificial intelligence or using it to guide humanity toward godliness - from Martine Rothblatt's Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski's short-lived Way of the Future church. But many others, such as the Oxford philosopher Nick Bostrom, insist that transhumanism is unlike religion because it relies on "critical rationality and our best existing scientific evidence."

Today, transhumanism has a sibling - another movement born in Oxford and thriving in Silicon Valley: effective altruism, which aims to figure out how to do the most good for the most people. Effective altruists, too, say their approach is rooted in secular rationality and evidence.

Yet effective altruism actually shares a great deal with religion: functionally (it gathers a community around a shared vision of the moral life), structurally (it has a hierarchy of prophetic leaders, canonical texts, holidays, and rituals), and aesthetically (it promotes tithing and favors asceticism). Most importantly, it offers an eschatology.

Effective altruism's eschatology comes in its most controversial form: longtermism, which Musk has described as "very much in line with my philosophy." Longtermism argues that the best way to help the most people is to focus on securing humanity's survival into the far future (say, millions of years from now), since far more people could exist in the future than exist today - assuming our species doesn't go extinct first.
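To make that intuition concrete, here is a deliberately toy version of the expected-value arithmetic behind this view (a sketch for illustration only; the symbols and numbers below are my assumptions, not figures from MacAskill or this article):

$$
\mathbb{E}[\text{good done}] \approx \Delta p \times N_{\text{future}}
$$

where $\Delta p$ is the increase in humanity's survival probability that an action buys, and $N_{\text{future}}$ is the number of people who could ever live. Bostrom has entertained figures on the order of $N_{\text{future}} = 10^{16}$ for an Earth-bound future alone; with that input, even a tiny $\Delta p = 10^{-6}$ yields an expected $10^{10}$ lives - more than the roughly $8 \times 10^9$ people alive today. Multiplying small probabilities by astronomical headcounts is what pulls moral priority away from the present and toward the far future; it is also, as the rest of this piece suggests, where critics see the door to fanaticism swing open.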

From here, we begin to get answers to the question of why tech experts are committed to building AGI.

AI Progress as Moral Progress

For effective altruists and longtermists, sticking with narrow artificial intelligence alone will not do. The Oxford philosopher Will MacAskill, often called the "reluctant prophet" of effective altruism and longtermism, explains in his 2022 book "What We Owe the Future" why he believes technological stagnation is unacceptable. "A period of stagnation," he writes, "could increase the risk of extinction and permanent collapse."

He cites his colleague Toby Ord, who estimates that the probability of existential catastrophe in the next century - from threats such as rogue AI and engineered pandemics - is one in six. Another EA colleague, Holden Karnofsky, likewise argues that we are living through a "hinge of history," the "most important century," a special period in which we will either flourish as never before or bring about our own destruction. Like Musk, MacAskill suggests in his book that one good way to hedge against extinction is to settle other planets, so that we don't keep all our eggs in one basket.

But that is only half of MacAskill's "moral case for space settlement." The other half is that we should strive to make future human civilization as big and utopian as possible. As MacAskill's colleague Bostrom has put it, "space colonization" would give us the room and resources to run vast numbers of digital simulations of humans living happy lives. The more space, the more (digital) happy humans! And that is where the overwhelming majority of moral value lies: not here in the present on Earth, but in the future, in heaven… sorry, I mean the "virtual afterlife."

Put all these ideas together and you arrive at a basic proposition: the end of the world as we know it is coming, and technological progress is our best hope of surviving it and reaching the glorious world beyond.

Anyone studying religion can see what this is: the logic of the end of the world.

Transhumanists, effective altruists, and longtermists have inherited this outlook: the end of the world is imminent, and technological progress is our best chance at civilizational progress. For those operating on this logic, pursuing AGI feels natural. Even though they believe AGI poses a grave existential risk, they also believe we cannot afford not to build it, because it could propel humanity out of its precarious earthly adolescence (always on the brink of ending!) and into a flourishing interstellar adulthood (so many happy people! so much civilizational value!). Of course we should advance the technology - advancing it means advancing civilization!

But is this rooted in reason and evidence? Or is it rooted in dogma?

The hidden assumption here is technological determinism, with a dash of geopolitics: even if you and I don't build powerful, terrifying artificial intelligence, someone else - or some other country - will, so why hold ourselves back? OpenAI's Altman exemplifies this belief in technological inevitability. "Superhuman artificial intelligence will come unless we go extinct first," he wrote on his blog in 2017. Why? "As we have learned, if the laws of physics don't prevent it, scientific progress will eventually happen."

Have we learned that? I have seen no evidence that everything that can be invented inevitably will be. (As Katja Grace, lead researcher at AI Impacts, has written: "Consider a machine that sprays feces into your eyes. We could technically do that, but maybe no one has ever made such a machine.") People pursue innovations mainly when there are strong economic, social, or ideological pressures to do so.

In the AGI frenzy sweeping Silicon Valley, religious thinking reconstituted as transhumanism, effective altruism, and longtermism supplies the social and ideological pressure. As for the economic pressure - the profit motive - Silicon Valley has never lacked it.

A Reuters poll in May found that 61% of Americans now believe artificial intelligence could threaten human civilization, a view especially strong among evangelical Christians. None of this surprises the religion scholar Geraci. The logic of the end of the world, he points out, is "very, very, very strong in American Protestantism" - so strong that 4 in 10 American adults currently believe humanity is living in the end times.

Unfortunately, end-times logic tends to breed dangerous fanaticism. In the Middle Ages, when false messiahs arose, people abandoned their worldly possessions to follow them as prophets. Today, with talk of AI doom filling the media, true believers are dropping out of college to work on AI safety. The logic of the end times - doom or salvation, heaven or hell - pushes people toward enormous gambles, toward going all in.

In an interview I conducted last year, MacAskill distanced himself from that kind of extreme gamble. He described a certain kind of Silicon Valley tech bro who, believing there is a 5% chance of dying in an AGI catastrophe and a 10% chance of AGI ushering in a blissful utopia, would happily accept those odds and race to build AGI.

"I don't want those people to build AGI, because they are not responsive to moral concerns," MacAskill told me. "Maybe that means we have to delay the singularity in order to make it safer. Maybe that means the singularity won't happen in my lifetime. That would be a huge sacrifice."

As MacAskill said this, I pictured Moses gazing out at the promised land, knowing he would never enter it. The longtermist vision seems to demand a cruel kind of faith: you yourself will not be saved, but your spiritual descendants will be.

We Need to Decide if This is the Redemption We Want

Believing that technology can fundamentally improve the fate of humanity is not inherently wrong. In many ways, it has clearly done so.

"Technology is not the problem," Ilia Delio, who holds two doctorates and an endowed chair in theology at Villanova University, told me. In fact, Delio is comfortable with the idea that we are already in a new stage of evolution, transitioning from Homo sapiens to "techno sapiens." She believes we should actively, open-mindedly evolve with the help of technology.

But she is also clear that we need to be explicit about which values are influencing our technology, "so that we can purposefully - and ethically - develop technology," she said. Otherwise, "technology is blind and potentially dangerous."

Geraci agrees. "It's a bit scary if a lot of people in Silicon Valley are saying, 'Hey, I support this technology because it can make me immortal,'" he told me. "But if someone says, 'I support this technology because I think we can use it to solve world hunger' - those are two very different motivations. They will affect the kind of product you try to design, who you design it for, and how you try to deploy it in the world around you."

As we deliberate about the values steering our technology, we also need to pay sharp attention to who holds the power to decide. Schwarz argues that by selling us a vision of inevitable AI-driven technological progress and positioning themselves as its sole experts, AI designers acquire immense power - arguably more than democratically elected officials.

"The view that developing artificial intelligence is a law of nature has become a sorting principle, and this sorting principle is political. It gives political power to some, and much less power to most others," Schwarz said. "To me, it's strange to say 'we have to be very careful with AGI' rather than saying 'we don't need AGI, it's not up for discussion.' But we have reached a point where power has been consolidated in a way that doesn't even give us a choice, and we can even collectively suggest that AGI should not be pursued."

We have reached this point largely because, for the past millennium, the West has been captive to a single story: the story inherited from medieval religious thinkers that equates technological progress with moral progress.

Delio said, "This is our only narrative. This narrative inclines us to heed the opinions of technological experts (who were also spiritual authorities in the past) and to integrate values and assumptions into their products."

"What is the alternative? If the alternative narrative is 'living vitality itself is the goal,'" Delio added, "then our expectations of technology may be completely different." "But we don't have that narrative! Our mainstream narrative is about creating, inventing, manufacturing, and letting them change us."

We need to decide what kind of redemption we actually want. If our enthusiasm for artificial intelligence flows from a longing to transcend the limits of Earth and bodily death, we will build one kind of society. If instead we commit to using technology to improve this world, and the well-being of these bodies, we can arrive at something quite different. As Noble put it, we can "begin to direct our astonishing capabilities toward more secular and humane purposes."

