NingNing
NingNing | January 27, 2025 04:31
Let's take a look at how Shaw understands the impact of DeepSeek on AI agents. The attached image shows DeepSeek's understanding of Shaw. Here is my understanding:

1️⃣ An LLM is a mathematical (more precisely, statistical) research problem, while an AI agent is an engineering problem. New LLM models will enhance AI agents rather than disrupt or destroy them.

2️⃣ AI agents are an alternative path toward AGI, if the AGI we mean is not the professional problem solver that OpenAI defines, but an intelligent agent that can autonomously survive, grow, and replicate in its environment. The biggest difference between an AI agent and a large model is that the agent can extract semantic information from the environment and output semantic information back into it, changing the environment or debugging itself in order to adapt and maintain itself.

3️⃣ The methodology for reaching AGI with AI agents is to construct a recursive loop: the agent retrieves real-time information through the embedding knowledge base, the LLM weight matrix, and API calls, outputs it as semantic information, feeds that output back into the LLM as corpus, and repeats the loop indefinitely (a minimal sketch of this loop follows below).

4️⃣ AI16Z's ElizaOS naturally sits inside a feedback loop made of open social media and an open-source developer community, which gives it an inherent advantage on the path to AGI.
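To make point 3 concrete, here is a minimal sketch of the recursive loop, assuming hypothetical interfaces. The names below (KnowledgeBase, LLM, ExternalApi, agentStep, runAgentLoop) are illustrations only, not the actual ElizaOS or DeepSeek APIs: each pass pulls real-time information via an API call, retrieves related memories from an embedding knowledge base, asks the LLM for a semantic output, and appends that output back into the corpus so the next iteration builds on it.

```typescript
// Hypothetical interfaces for the loop in point 3 (not real ElizaOS/DeepSeek APIs).

interface KnowledgeBase {
  // Retrieve passages semantically similar to the query (embedding search).
  retrieve(query: string, topK: number): Promise<string[]>;
  // Append new semantic output so later iterations can retrieve it.
  append(text: string): Promise<void>;
}

interface LLM {
  // Generate a completion from the assembled prompt (the "weight matrix" step).
  complete(prompt: string): Promise<string>;
}

interface ExternalApi {
  // Pull real-time information from the environment (e.g. a social feed).
  fetchLatest(): Promise<string>;
}

// One pass of the loop: environment + memory -> LLM -> semantic output -> memory.
async function agentStep(
  kb: KnowledgeBase,
  llm: LLM,
  api: ExternalApi,
  goal: string,
): Promise<string> {
  const liveContext = await api.fetchLatest();   // real-time info via API call
  const memories = await kb.retrieve(goal, 5);   // embedding knowledge base lookup

  const prompt = [
    `Goal: ${goal}`,
    `Live context: ${liveContext}`,
    `Relevant memories:\n${memories.join("\n")}`,
    `Respond with an action or message for the environment.`,
  ].join("\n\n");

  const output = await llm.complete(prompt);     // semantic output
  await kb.append(output);                       // feed the output back as corpus
  return output;
}

// Repeat indefinitely (bounded here): each iteration's output becomes
// part of the next iteration's retrievable context.
async function runAgentLoop(
  kb: KnowledgeBase,
  llm: LLM,
  api: ExternalApi,
  goal: string,
  iterations: number,
): Promise<void> {
  for (let i = 0; i < iterations; i++) {
    const output = await agentStep(kb, llm, api, goal);
    console.log(`[iteration ${i}] ${output}`);
  }
}
```

The closed loop is the point: because every output is written back into the knowledge base, the agent's own past behavior becomes retrievable context, which is what distinguishes this architecture from a single stateless LLM call.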