Miss Jia's conversation with Kevin Kelly: Judgments about AI that I have never written in a book

Source: ChainCatcher (链捕手)

Author: Miss Jia

Editor: Tian Siqi

"Slower than it looks; LLMs tend average; Not replacing humans; New, not substitutions; Cloud first, then AI; Must change your org; Just beginning… these seven predictions are not complete. Can you give us some speculations that you have never mentioned anywhere else?" I asked KK.

KK took off his glasses and paused for at least 30 seconds, then asked me questions of his own for a few long rounds, until he suddenly cut me off.

"Then my prediction is here. My prediction is, in 10 years now, training data won't be important," KK said.

Kevin Kelly, known as "KK" to technology enthusiasts, has become a symbol of the era with his stubble and graying hair. He has written books such as "Out of Control," "What Technology Wants," and "The Inevitable," and is known as the "spiritual father of Silicon Valley." Thirty years ago, he foresaw trends such as cloud computing, virtual reality, and the Internet of Things. On June 16, 2024, he came to Suzhou to participate in a technology lecture jointly held by the Suzhou Science and Technology Business School and the Shanghai Advanced Institute of Finance at Shanghai Jiao Tong University. The conversation above took place in an exclusive interview in the meeting room after his lecture; it was originally scheduled for 20 minutes, but he extended it to nearly an hour.

In this article, Miss Jia has an in-depth conversation with Kevin Kelly, ranging from recent developments to AI innovation and the nature of being human. Apart from differences on a few individual judgments, KK and "Jia Zigongnian" hold similar views: the "progress bar" of AI changing the world has only just begun.

1. Recent Situation: "That occupies all my time"

You must have 1000 hours. Maybe I've trained for 800 hours, but not yet 1000 hours.

Miss Jia: News comes and goes, and the world's attitude towards AI has changed a lot, especially people's views on AI 2.0, AGI, or large models. How much time have you devoted to tracking cutting-edge AI developments recently?

Kevin Kelly: That occupies all my time, all I do is keep reading articles about artificial intelligence.

Miss Jia: Whose articles do you like the most?

Kevin Kelly: Like you said, there are new articles every day, maybe every hour, about some new discoveries about language models.

Just last week, there was a paper from Anthropic about weighting features and how to manipulate them, which relates to the idea of the AI black box. They said we can actually see a little bit of the mechanism behind it, which is interesting.

Miss Jia: Do you usually use AI applications like Midjourney, Pika, Runway, etc.?

Kevin Kelly: I make a picture with AI every day, and I have been doing so for a year.

Miss Jia: Are you now a native of the AI industry?

Kevin Kelly: Still on the way. You must have 1000 hours. Maybe I've trained for 800 hours, but not yet 1000 hours.

Miss Jia: You are a philosopher in the field of technology. Has your technological philosophy been iterated or changed in the recent wave of AI?

Kevin Kelly: That's a good question. My technological philosophy has not changed. If anything, each new phenomenon keeps confirming and strengthening it.

So far, I have not seen any event that could change my view of technology. My theory of technology keeps evolving, but nothing I have seen in AI has changed my underlying philosophy of technology.

2. AI View: "What I'm really worried about: the weaponization of artificial intelligence"

What are the best and worst decisions made by OpenAI?

Miss Jia: There is a line of small print at the top of your official website: "OVER THE LONG TERM, THE FUTURE IS DECIDED BY OPTIMISTS." The recent series of advances in AI and the rapid iteration wave, as well as the identity crisis of humanity you just mentioned, do they worry you?

Kevin Kelly: Overall, I'm not particularly worried. There are some things I do care about, but I believe we will solve them. There are also problems we don't yet know how to solve. Take climate change: we at least know what to do about it. But there are some problems in the field of AI that we don't know how to solve, and those may cause us trouble in the future, such as the weaponization of AI: should we allow robot soldiers to emerge? Should AI be given the ability to kill? We don't know, and it is genuinely hard to decide. So what I'm really worried about is the weaponization of artificial intelligence.

Of course, I also care about whether AI is open source or closed source, whether it is public or owned by companies. My idea is that it should be public.

Miss Jia: Do you think AI should be open source?

Kevin Kelly: Yes, the source code should be open to the public, and this is another thing I care about.

Miss Jia: You are still an optimist.

Kevin Kelly: I am very optimistic. I believe we will eventually solve these AI-related problems; we just don't know how yet. In other words, the outcome is certain; only the path is unclear. That is why I am very optimistic. Of course, there are also things some people worry about that I don't: I'm not worried about unemployment, and I'm not worried that artificial intelligence will pose a threat to us.

Miss Jia: You have fans all over the world and must count many great scientists among your friends. Do they agree with your views on AI, or are there more opposing views?

Kevin Kelly: This topic is indeed interesting, and there is a huge division now. There are two camps on the topic of super AI, with some very outstanding scientists feeling concerned, and another group of outstanding scientists not feeling concerned, which is interesting. I am in the camp that is not concerned about AI.

Miss Jia: So far, what are the best and worst decisions made by OpenAI?

Kevin Kelly: The worst decision is that OpenAI did not open up its large models; that was a very bad decision. Another of the worst decisions was (temporarily) firing the founder Sam Altman.

The best decision is that OpenAI has consistently maintained rapid development, rapid iteration, and continuous innovation. That speed of development is what allowed it to rehire Sam, and the company is firm about not being overly cautious in development but genuinely trying to grow fast.

3. Boundaries: "AI is good at hill climbing, not hill making"

You can generate the famous image of an astronaut riding a horse in Midjourney or DALL-E, but you can't get a horse riding an astronaut, because that is beyond the scope of what the model has learned.

Miss Jia: You mentioned two types of creativity, Type 1 and Type 2, and you drew a fun picture, saying AI is good at hill climbing, not hill making. What is the difference between the two?

Kevin Kelly: The creativity of large language models is really only one kind of creativity: operating within the known range. They are filling in and exploring everything within the space of what we already know. They are not inventing entirely new domains.

Breakthroughs basically involve creating new territory, rather than finding solutions within existing terrain.

Right now they are mainly looking for answers within the range we already know. You can generate the famous image of an astronaut riding a horse in Midjourney or DALL-E, but you can't get a horse riding an astronaut, because that is beyond the scope of what was learned.

Miss Jia: Are you a fan of the Scaling law?

Kevin Kelly: Somewhat. To make it easier for "Jia Zigongnian" readers to understand, let me explain first. The scaling law says there is a mathematical relationship describing how, as a model grows larger, its loss decreases and how close it gets to optimal performance.

We don't know whether this can be extended indefinitely. Can it keep scaling forever? Will the curve eventually flatten out? So far, I think the evidence suggests it will keep tending towards a straight line. This is different from the internet.

Of course, that evidence does not come from the scaling law itself; the scaling law itself is a hypothesis.
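(Editor's note: as an illustrative sketch only, not a formula KK himself cites. Published neural scaling laws, such as DeepMind's Chinchilla analysis, are usually written as a power law in model size N and training tokens D:

L(N, D) = E + A / N^α + B / D^β

where L is the loss, E is the irreducible loss, and A, B, α, β are constants fitted to experiments. Whether this curve keeps falling or eventually flattens is exactly the open question KK is pointing at.)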

Miss Jia: There is a popular view in the AI industry recently - everything is about the dataset. Over time, the effectiveness of AI is not so much about algorithms or other methods, but about the dataset.

Kevin Kelly: There is a paper that says the quality and impact of the data are more significant than the algorithm. I believe this is very likely.

I predict that we will see AI companies promoting their AI on the basis of its training data. Someone will say: our advantage isn't some special algorithm, we simply trained with the best data. We used high-quality books and other high-quality materials to train it; we didn't train it on Reddit.

It's like education. If you have a child, how would you educate them? What do you plan to let them read? Are you going to let them read Twitter, or let them read the classics? Some people will say: our artificial intelligence only reads the classics. It reads the highest-quality books and the highest-quality scientific journals. It doesn't read Reddit, Twitter, or Weibo. It reads good things; it has received the best training. Some people will use this idea of carefully curated training data as a selling point. Yesterday, Getty Images announced that they will release an AI image generator trained only on the Getty image library.

4. Speculation: "In 10 years now, training data won't be important"

Miss Jia: Your fame largely comes from your reputation as a prophet, yet you just put the words "No predictions" up on the screen in big letters. But you also mentioned seven judgments:

Slower than it looks

LLMs tend average

Not replacing humans

New, not substitutions

Cloud first, then AI

Must change your org

Just beginning

These judgments are not complete. Can you give us some speculations that you have never mentioned elsewhere?

Kevin Kelly: (pauses for a long time) Generally, if I have an idea, I will definitely tell others. Let's continue the conversation first, and then I will try to come up with one.

(Continues to pause) Regarding artificial intelligence, I don't know much about AI in China. You are obviously reading the papers too; what do you think is happening in the field of artificial intelligence in China at the moment?

Miss Jia: I think the similarities between China and the United States are much greater than people imagine.

Kevin Kelly: Similarities? How so?

Miss Jia: Talent, for example. China has many young talents, in universities or in startup companies. They are very similar to the young talents I have met in the United States and in some other countries, because AI is so cutting-edge, so new.

I majored in mathematics. When you compare AI with mathematics, the length of their histories is very different. Many friends around me think artificial intelligence is too complex and difficult to understand. But the history of AI is only a little over half a century, and if you just want an overview of the field, its history, and how it is subdivided, reading two or three books is enough to give you the basics. In terms of the discipline itself, everyone's starting point is similar. China may not have big names like Musk or Altman, but when you look at the young talents, the overall foundation is very similar.

The second dimension is data. Here China may have some advantages.

Kevin Kelly: Who has access to the data? Can a young startup company access this data?

Miss Jia: I think we are just getting started. The government is trying to build basic infrastructure to allow people to access the data they want in a good way.

Kevin Kelly: What is a good way?

Miss Jia: Data markets. You know, data has been written into basic policy as a production factor, alongside capital, labor, technology, and land; in China these are called "factors of production."

Kevin Kelly: Do your entrepreneurs have no difficulty accessing data?

Miss Jia: Not without difficulty. But they can, just as in other countries, perhaps even more easily. They do face many challenges, but I think the biggest challenge is not policy or permissions but the datasets themselves: datasets in different languages are different.

Kevin Kelly: Then here is my prediction. My prediction is that, 10 years from now, training data will no longer be important.

All large language models currently rely on data-intensive methods to scale, but they lack other types of cognition and intelligence. A human toddler can distinguish a cat from a dog after seeing perhaps 12 examples; the toddler doesn't need 12 million samples to know the difference.

I think in 10 years we won't need millions of data samples to get reasoning ability. This is a huge advantage for startups, because they won't need to have all this data. This is my speculation.

5. Essence: "Consciousness makes humans unique, but we will also give consciousness to AI"

Miss Jia: Can you give me some insights into your views on the essence of humans and artificial intelligence?

Kevin Kelly: The problem is that we don't know what the essence of being human is either, and the way to find the answer is to create artificial intelligence. We will succeed.

We used to think, oh, creativity is what makes us different, but now we have changed our minds, because artificial intelligence also has creativity. Then we say, well, now consciousness is what makes us different; but we will give consciousness to AI too.

Miss Jia: How far will this "giving" process continue?

Kevin Kelly: Driven by technology and AI, we will continually redefine ourselves. The more important question is not who we are, but who we want to become. What do we hope humans will become? That is a more powerful question.

Because we get to have a little bit of choice. This is exciting; for me, this is the ultimate charm of AI: it illuminates the fog around who we are, and it also inspires us to decide what kind of people we want to become.

Miss Jia: Accelerated computing is pushing into scientific no-man's-land. What is the limit of this path?

Kevin Kelly: Just as we don't have a theory about intelligence, we also don't have a theory about humans.

We cannot predict where AI is going because we have no theory of AI. We have no theory that says: if you do this, then this will happen; if you run all these computations, you will get this result… We have no such theory yet. This is very unusual.

In physics we have theories: if you build a big enough collider, you will find that particle. We don't have such a theory in the field of intelligence. But what's exciting is that, together with AI, we will discover what it means to be human.

Miss Jia: I like your answer.

Kevin Kelly: I like your questions.

Right: Kevin Kelly; Left: Jia Zigongnian founder & CEO Zhang Yijia (Image source: photo by "Jia Zigongnian")
