Author: Lynn Yang, Silicon Release
One evening last week, while having dinner, I listened to a podcast on Spotify.
The guest was Vinod Khosla, founder of Khosla Ventures, a top-tier technology venture capital firm in Silicon Valley, and the first investor in OpenAI.
Obviously, it's not every day that you get a window into the thinking of OpenAI's first investor.
So I've compiled Khosla's core viewpoints to share with you. Here are 7 core viewpoints on AI from Khosla:
(1)
Background: The host asked Khosla about the AI use cases he mainly uses in his daily life as the first investor in OpenAI.
Khosla: Mainly two: ChatGPT and Tesla's self-driving.
As for Tesla, I've driven with it many times, and it's amazing. It feels like fully autonomous driving. You know, a few nights ago I landed at 3 am. I thought, "I'm too tired, I'm not going to be a safe driver." So I just said, "Take me home." The experience was amazing.
These are my main uses of AI, and I use both of them many times a day.
About ChatGPT, I use it to plan my spring garden.
I just told ChatGPT, "I want plants that grow in zone 9a (referring to zone 9a of the USDA plant hardiness zone classification). I want the height for each bed, because I'm layering them."
Then I also said, "I want some flowers that bloom in spring, some that bloom in early summer, some that bloom in late summer, and some that bloom in autumn."
This is really a design task. I had ChatGPT help me arrange 20 plants, and it gave me all the information: watering needs, climate zone, height, and light conditions (full shade, partial shade, bright shade).
So ChatGPT did an amazing job. This would have taken me 3-4 hours. So yes, I designed the garden myself; I didn't hire a designer. And I can assure you: my garden is blooming now, believe it or not.
(2)
Background: The host asked Khosla, as the first investor in OpenAI, how he views Apple's announcement of its AI strategy and collaboration with OpenAI, and what impact this collaboration will have on the AI startup ecosystem in the coming years.
Khosla: I think Apple needed to do something about AI, especially as Siri's reputation has started to deteriorate.
First, Apple's smart move is to keep things open, allowing users to access any LLM. But Apple has indeed chosen to embed the model directly into iOS, which makes Elon Musk uneasy; he has threatened to ban Apple devices at his companies.
So I think the more important point here is that Apple is actually showing us something fundamental: how do we interact with computers?
I think over time, Siri will evolve into the beginning of a real human interface. From that perspective, this is big news, because we are seeing the start of that shift. It's exciting.
From OpenAI's perspective, the collaboration has staked out OpenAI's best competitive position: direct interaction with users. In fact, many companies want that business.
On the other hand, I do think Apple should have considered one question carefully: where does Apple think the best AI will be in 1-2 years?
So in many ways, Apple's collaboration with OpenAI is a validation of OpenAI and a very important milestone in how humans interact with machines.
(3)
Background: The host noted that Apple's case shows a small model can do a lot, and asked: what is the future positioning of large models? If everyone wants a small model, will the future be like being able to talk to many people, some with an IQ of 50, some with an IQ of 100, and some with an IQ of 10,000? The key question then becomes where you want to spend your money: on asking a question of the person with an IQ of 10,000, or of the person with an IQ of maybe only 70 who knows the contents of your email? This involves balancing product direction against the cost of model computation. Will the future be that kind of competition?
Khosla: Small models and large models are different and cannot replace each other.
Also, I don't quite agree with the IQ framing of the future. What I actually think will happen is that the cost of computation will become very low.
I'd bet that in a year, the cost of computation will be 1/5 to 1/10 of today's. So my advice to all our startups is: ignore your computation costs, because any assumption you make, any dollar you spend optimizing software, will be worthless within a year.
The reason is that every owner of a large model is trying to reduce the cost of computation. And as engineers at OpenAI, Google, and the cloud computing companies work to bring down the cost of expensive AI chips, computation will soon become very cheap.
So forget about it, and let the competition among the large models on the market, such as Google's Gemini and OpenAI's models, drive the cost down to a negligible level. In fact, once it drops to 10% or less of today's cost, it stops mattering.
In addition, training one large model to be superior to another costs an order of magnitude more. This is why I think open-source models aren't viable: the training cost is too high. But once you have paid for the training, you want the model to be used as widely as possible, for two reasons:
First, you want to get the maximum return from it, and the model with the lowest cost will get the maximum return.
Second, and more importantly, there is a lot of data available for you to train the next generation of models.
So for various reasons, you want to maximize usage. And if you're playing the long game, as I think you should, the AI model game plays out over a 5-year time frame, not one year. Within that time frame, costs will decrease.
Today, Nvidia extracts quite a bit of revenue from everyone, but each model will run on multiple types of GPUs or compute, and what model companies need most is data generation. So I believe that in the next few years, revenue will not be the key metric for model companies.
Of course, you don't want to lose more money than you can afford. But you also don't want to maximize profit, because you're trying to get a lot of user usage, gather a lot of data from that usage, and learn to become a better model.
I do think models have a lot of intelligence left to gain, whether in reasoning, probabilistic thinking, or some kind of pattern matching; there is still a lot of room for these models to get better.
So I think we will see amazing progress almost every year. Some companies execute better than others, and that's the main difference between them: OpenAI is outstanding at execution; Google has outstanding technology, but its execution is less sharp.
(4)
Background: The host asked Khosla: over a five-year time frame, some people in the tech industry genuinely believe that all the value of AI will go to the existing large companies, and that even so, it has been commoditized. What does he think the landscape will look like in five years? And which AI topics is he most interested in that the existing large companies don't cover?
Khosla: So I don't believe building base models and trying to compete with OpenAI and Google is a good position to be in.
Large LLMs will belong to the large players who can run very large clusters, and who can pay for proprietary content and data, whether that means paying Reddit or a company with access to every scientific article.
So the biggest players do have an advantage.
But on the other hand, we recently announced an investment in Symbolica. They take a very different approach to modeling, one that doesn't rely on huge amounts of data or computation. It's a high-risk, high-upside investment. If Symbolica succeeds, the payoff is dramatic.
So I think that even at the model level, there are other approaches. If I called my friend Josh Tenenbaum at MIT, he would say the biggest contribution is probabilistic programming, because human thinking is probabilistic, unlike pattern matching. That's an important factor.
So I think the foundational technology is far from finished. We increasingly use Transformer models, but there are other models waiting to be developed. It's just that everyone is afraid to invest in anything other than Transformers. We're not.
You know, I'm very drawn to esoteric things. In fact, Symbolica is built on a branch of mathematics called category theory, which most people have never heard of.
So we made that big bet about 15 to 18 months ago. I also think investing in GPU cloud providers is foolish: people buy GPUs to build clouds, but they will lose to Amazon's scale and efficiency, and to Microsoft.
Both companies are making custom chips, so in a few years they won't have to pay the Nvidia tax. Yes, there's also AMD, and there's a lot to be done in the chip field. But at the next level, the application level, there's a huge opportunity.
(5)
Background: In what follows, Khosla talks about the specific opportunities he sees as AI's huge application potential, with many examples.
Khosla: One important prediction of mine is that in the future, almost all expertise will be free.
By that logic, whether you're talking about primary care, teachers, structural engineers, or oncologists, there are hundreds, even thousands, of fields, and each one will produce a very successful company.
Recently, we invested in a company building an AI structural engineer. Of course, we also invested in something very popular: Devin. Everyone knows Devin; they're building an AI programmer, not a tool like Copilot for programmers, but an actual programmer. But we just invested in a company building AI structural engineers; they're called Hedral.
Oddly enough, think about how many structural engineers there are and how much we spend on structural engineering. You give a building's structure to a structural engineer, and two months later you get a design back, plus one round of changes. An AI structural engineer can give you 5 revisions in 5 hours and save months on a construction project. So this is a very good niche example, but that niche could be a market worth hundreds of billions of dollars.
So, my point is: …