"Current AI tools are powerful, but they have not yet established a social structure and lack the 'connectivity' platform pivot."
Written by: You Xin
From Facebook to TikTok, consumer products have historically driven social evolution by connecting people. However, in the new AI-driven cycle, 'task completion' is replacing 'relationship building' as the main product line. Products like ChatGPT, Runway, and Midjourney represent new entry points that not only reshape content generation but also change user payment structures and product monetization paths.
The five partners at a16z who focus on consumer sector investments revealed in discussions that while current AI tools are powerful, they have not yet established a social structure and lack the 'connectivity' platform pivot.
The absence of blockbuster consumer products reflects a gap between platforms and models. A truly AI-native social system has yet to emerge, and this void may give rise to the next generation of super applications.
Meanwhile, product forms such as AI avatars, voice agents, and digital personalities are taking shape, with implications that go beyond companionship or tools, instead constructing new mechanisms of expression and psychological relationships. The core competitiveness of future platforms may shift towards model capabilities, product evolution speed, and the level of cognitive system integration.
AI is rewriting the 2C business model
Over the past twenty years, representative products have emerged in the consumer sector every few years, from Facebook and Twitter to Instagram, Snapchat, WhatsApp, Tinder, and TikTok, each driving an evolution of social paradigms. In recent years, this rhythm seems to have stagnated, raising an important question: Has innovation truly paused, or is our definition of 'consumer products' facing reconstruction?
In this new cycle, ChatGPT is considered one of the most representative consumer products. Although it is not a traditional social network, it has profoundly changed people's relationships with information, content, and even tools. Tools like Midjourney, ElevenLabs, Blockade Labs, Kling, and VEO have rapidly gained popularity in audio, video, and image fields, but most have not yet established a connection structure between people and lack social graph attributes.
Most current AI innovations are still led by model researchers, possessing technical depth but lacking experience in building end products. With the proliferation of APIs and open-source mechanisms, underlying capabilities are being released, and new blockbuster consumer products may emerge as a result.
The development of the consumer internet over the past twenty years, with the success of Google, Facebook, and Uber, is rooted in three underlying waves: the internet, mobile devices, and cloud computing. The current evolution comes from the leap in model capabilities, where the pace of technology is no longer characterized by functional updates but is driven by remotely upgraded models.
The main line of consumer products has also shifted from 'connecting people' to 'completing tasks.' Google was once a tool for information retrieval, and ChatGPT is gradually taking over that role. Tools like Dropbox and Box, while not establishing social graphs, still possess broad penetration on the consumer side. Despite the continuous rise in content generation demand, the connection structure of the AI era has yet to be established, and this void may be the direction for the next breakthrough.
The moat of traditional social platforms is facing reevaluation. In the context of the rise of AI, platform dominance may be shifting from building relationship graphs to building capabilities in models and task systems. Whether technology-driven companies like OpenAI are becoming the next generation of platform companies is worth noting.
From a business model perspective, the monetization capability of AI products far exceeds that of previous consumer tools. In the past, even leading applications had low average user revenue. Now, top users can pay up to $200 per month, exceeding the limits of most traditional tech platforms. This means companies can bypass advertising and lengthy monetization paths, directly obtaining stable income through subscriptions. The previous overemphasis on network effects and moats was essentially due to weak product monetization capabilities. Today, as long as the tool is valuable enough, users are naturally willing to pay.
This change has brought about a structural shift. The traditional 'weak business model' forced founders to build narratives around user stickiness, lifecycle value, and other metrics, while AI products, with their direct charging capabilities, can close the commercial logic loop from the outset.
Although models like Claude, ChatGPT, and Gemini appear similar in functionality, the actual user experience shows significant differences. This preference difference has given rise to independent user groups. The market has not seen a price war; instead, there is a trend of leading products continuously raising prices, indicating that a differentiated competitive structure has gradually been established.
AI is also redefining 'retention rate.' In traditional subscription products, user retention directly determines revenue retention. Now, users may keep using the basic service while upgrading their subscriptions for more frequent calls, larger credit allotments, or higher-quality models. Revenue retention can therefore run significantly higher than user retention, which is unprecedented.
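The gap between revenue retention and user retention can be made concrete with a small calculation. The cohort sizes, prices, and upgrade rate below are illustrative assumptions, not figures from the discussion:

```python
# Illustrative cohort: 1,000 users paying $20/month at the start of the year.
start_users = 1000
start_arpu = 20.0
start_revenue = start_users * start_arpu  # $20,000/month

# A year later: 30% of users churned, but a third of those who stayed
# upgraded to a $200/month tier for heavier usage and better models.
retained_users = int(start_users * 0.70)   # 700 users remain
upgraded = retained_users // 3             # 233 users on the $200 tier
base = retained_users - upgraded           # 467 users stay on $20
end_revenue = upgraded * 200 + base * 20

user_retention = retained_users / start_users
revenue_retention = end_revenue / start_revenue

print(f"user retention:    {user_retention:.0%}")     # 70%
print(f"revenue retention: {revenue_retention:.0%}")  # ~280%
```

Even with nearly a third of users gone, revenue almost triples: upgrades within the surviving cohort dominate the churn, which is the decoupling the passage describes.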
The pricing model of AI products is undergoing a fundamental transformation. Traditional consumer subscriptions typically cost around $50 per year, but now many users are willing to pay $200 per month or even more. The acceptability of this price structure stems from a fundamental change in the actual value experienced by users.
AI products can be accepted at high premiums because they are no longer just 'assistive improvements' but truly 'complete tasks for users.' For example, research tools that once required ten hours of manual organization can now generate reports in just a few minutes. Even if used only a few times a year, the service has a reasonable payment expectation.
In the field of video generation, Runway's Gen-3 model is seen as representing the experiential evolution of a new generation of AI tools. It can generate videos in various styles through natural language prompts, supporting voice and action customization. Some users create exclusive videos with friends' names using this tool, while creators generate complete animated works to upload to social platforms. This 'seconds to generate, instant use' interactive experience is unprecedented.
From a consumer structure perspective, future user spending will be highly concentrated in three categories: food, rent, and software. As a universal tool, software's penetration speed is continuously increasing, and its spending share is rising, beginning to consume budget space that originally belonged to other categories.
A true AI social network has yet to emerge
Entertainment, creation, and even interpersonal relationships are gradually being mediated by AI tools. Many tasks that previously relied on offline communication or social interaction can now be achieved through subscription models, from video generation to writing assistance, even replacing some emotional expressions.
In this trend, the connection mechanism between people is also facing the necessity of rethinking. Although users remain active on traditional platforms like Instagram and Twitter, a truly new generation of connection methods has yet to appear.
The essence of social products has always revolved around 'status updates.' From text to images, and then to short videos, the medium continues to evolve, but the underlying logic remains 'What am I doing?'—aimed at establishing a sense of presence and obtaining feedback. This structure forms the foundation of the previous generation of social platforms.
The open question now is whether AI can give rise to a completely new way of connecting. Model interactions have deeply penetrated users' lives: daily conversations with AI tools carry highly personalized emotions and needs. A system built on this long-term input could come to understand a user better than a search engine ever did, and if that understanding were systematically extracted and externalized as a 'digital self,' the logic of connection between people might be reconstructed.
Some early phenomena have begun to emerge. For example, on TikTok, personality tests, comic generation, and content imitation based on AI feedback have started to appear. These behaviors are no longer merely content generation but a form of social expression through 'digital mapping.' Users not only generate but also actively share, triggering imitation and interaction, showing a high interest in 'digital self-expression.'
However, all of this remains confined within the structure of old platforms. Whether on TikTok or Facebook, despite smarter content, the information flow structure and interaction logic have hardly changed. Platforms have not truly evolved due to the explosion of models; they have merely become hosting containers for generated content.
The leap in generative capabilities has yet to find a matching platform paradigm. A large amount of content lacks structured presentation and interactive organization, instead being dissolved into information noise by the existing content architecture of the platforms. Old platforms serve the function of content hosting rather than being the engines for reconstructing social paradigms.
Current platforms resemble 'old systems in new skins.' Although short videos, Reels, and other formats appear modern and youthful, the underlying logic still does not escape the constraints of information flow pushing and like distribution paradigms.
An unresolved core question is: What will the first truly 'AI-native' social product look like?
It should not be a model-generated image collage or a visual refresh of information flow, but a system capable of carrying real emotional fluctuations, triggering connections and resonance. The essence of social interaction has never been about perfect performance but about uncertainty—awkwardness, failure, and humor constitute the tension structure of emotions. Today, many AI tools output the 'most ideal user version,' always positive and always smooth, yet making real social experiences singular and hollow.
Products currently referred to as 'AI social' are essentially still model-based replicas of old logic. A common practice is to reuse the interface structure of old platforms, using models as content sources, but without bringing fundamental changes to product paradigms and interaction structures. Truly breakthrough products should reconstruct platform systems from the underlying logic of 'AI + human.'
Technical limitations remain a significant obstacle. Almost all blockbuster consumer products have emerged on mobile platforms, while the deployment of large models on mobile devices still faces challenges. Real-time response, multimodal generation, and other capabilities place extremely high demands on edge computing power. Before breakthroughs in model compression and computational efficiency, 'AI-native' social products will still struggle to be fully realized.
The individual matching mechanism is another direction that has yet to be fully activated. Although social platforms possess vast amounts of user data, there has always been a lack of systematic advancement in the 'actively recommending suitable connections' phase. If a dynamic matching system can be built based on user behavior, intent, and language interaction patterns in the future, the underlying logic of social interaction will be reshaped.
AI can not only capture 'who you are' but also depict 'what you know,' 'how you think,' and 'what you can bring.' This capability is no longer limited to static label-like 'identity profiles' but forms a dynamic, semantically rich 'personality model.' Traditional platforms like LinkedIn construct static self-indexes, while AI has the ability to generate a knowledge-driven living personality interface.
In the future, people may even communicate directly with a 'synthetic self,' gaining experiences, judgments, and values from digital personalities. This is no longer an optimization of information flow structures but fundamentally reconstructs the mechanisms of personality expression and social connection itself.
In the AI era, there are no moats, only speed
While social interaction has yet to see its paradigm leap, the user diffusion path of AI tools is undergoing a reversal. Unlike the past internet logic of taking off on the consumer side and gradually penetrating the business side, AI tools now show a reverse propagation model: enterprises adopt them first, followed by consumer diffusion across multiple scenarios.
Taking voice generation tools as an example, early users were primarily concentrated in niche circles such as geeks, creators, and game developers, with applications including voice cloning, dubbing videos, and game mods. However, the real driving force behind growth comes from the large-scale systematic adoption by enterprise clients, applied in various fields such as entertainment production, media content, and voice synthesis. Many companies have embedded these tools into their workflows, achieving enterprise penetration earlier than expected.
This path is no longer an isolated case. Multiple AI products exhibit similar trajectories: initially gaining attention through viral spread on the consumer side, followed by B2B clients becoming the main drivers of monetization and scaling. Unlike traditional consumer products that struggle to transition to the enterprise side, many companies are now identifying AI tools through communities like Reddit, X, and newsletters, actively piloting them, with consumer enthusiasm becoming an information entry point for enterprise AI deployment.
This logic is being productized and engineered into systematic strategies. Some companies have established mechanisms where, upon detecting multiple employees from the same organization registering and using a particular tool, they proactively trigger B2B sales processes through payment data or domain ownership. The migration from consumer to enterprise is no longer an incidental event but a replicable business path.
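One way such a consumer-to-enterprise trigger could work is sketched below. The signup records, the free-mail filter, and the threshold are all illustrative assumptions, not a description of any specific company's system:

```python
from collections import Counter

# Hypothetical self-serve signup log: (user_email, plan) pairs.
signups = [
    ("ana@acme.com", "pro"), ("bob@acme.com", "free"), ("cy@acme.com", "pro"),
    ("dee@gmail.com", "pro"), ("eli@nova.io", "free"), ("fay@acme.com", "free"),
]

FREE_MAIL = {"gmail.com", "outlook.com", "yahoo.com"}  # ignore personal domains
THRESHOLD = 3  # this many colleagues on one corporate domain triggers outreach

def b2b_leads(signups, threshold=THRESHOLD):
    """Return corporate domains with enough individual signups to suggest
    organization-wide adoption worth a proactive enterprise sales touch."""
    domains = Counter(
        email.split("@")[1] for email, _ in signups
        if email.split("@")[1] not in FREE_MAIL
    )
    return {d: n for d, n in domains.items() if n >= threshold}

print(b2b_leads(signups))  # → {'acme.com': 4}
```

Four `acme.com` addresses cross the threshold, so that domain surfaces as an enterprise lead, while the lone `nova.io` signup and the personal Gmail address do not.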
This "bottom-up" diffusion mechanism also raises a larger question: Are these hot AI products the foundational platforms of the future, or merely transitional products like MySpace and Friendster?
Current assessments tend to be cautiously optimistic. AI tools have the potential to evolve into long-term platforms, but they must navigate the technical pressures brought about by continuous evolution at the model level. For instance, the new generation of multimodal models not only supports role-playing, text-image collaboration, and real-time audio generation but also rapidly enhances the depth of expression and interaction methods. Even in the relatively stable text domain, there remains significant room for model optimization. As long as continuous iteration is possible, whether through self-development or efficient integration, tool-based products can potentially remain at the forefront without being quickly replaced.
"Don't fall behind" has become the most practical competitive proposition today. In an increasingly segmented market, image generation is no longer judged solely by "who is the strongest," but rather by "who is most suitable for illustrators, photographers, and lightweight users," leading to precise positioning competition. As long as updates are continuous and users remain engaged, products can achieve long-term sustainability.
Similar professional differentiation is also emerging in video tools. Different products excel in different content forms; some focus on e-commerce advertising, others emphasize narrative pacing, and some specialize in structural editing. The market capacity is large enough to support the coexistence of various positions, with clarity and stability in structural positioning being key.
The discussion around whether the concept of "moat" still applies in the AI era is undergoing fundamental changes. Traditional logic emphasizes network effects, platform binding, and process integration, but many projects once thought to have "deep moats" ultimately failed to become winners. Instead, it is those small teams that frequently experiment and iterate on the margins, continuously evolving their models and products, that eventually enter the center of the main track.
Currently, the most noteworthy "moat" is speed: first, the speed of distribution, meaning who can enter the user's view the earliest; second, the speed of iteration, meaning who can launch new features the fastest and stimulate usage inertia. In an era of scarce attention and highly fragmented cognition, those who appear first and continue to change are more likely to lead to revenue, channels, and market scale accumulation. "Continuous updates" are replacing "steady-state defense," becoming a more realistic strategy in the AI era.
"Speed brings mental occupation, and mental occupation drives revenue closure," has become one of the most important growth logics today. Capital resources can feed back into R&D, enhancing technological advantages and ultimately forming a snowball effect. This mechanism aligns more closely with the cyclical dynamics of AI products and adapts better to rapidly evolving market demands.
"Dynamic leadership" is replacing "static barriers" as the essence of the new generation of moats. The standard for assessing whether an AI product can exist long-term is no longer static market share but whether it can consistently appear at the forefront of technology or user cognition.
The traditional notion of "network effects" has not yet fully manifested in AI scenarios. Most products are still in the "content creation" stage and have not formed a closed-loop ecosystem of "generation-consumption-interaction." User relationships have not yet solidified into a structural network, and platforms with social-level network effects are still in the making.
However, in some vertical categories, new barrier structures have begun to emerge. For example, in voice synthesis, certain products have established process bindings in multiple enterprise scenarios, building a dual barrier of "efficiency + quality" through frequent iterations and high-quality outputs. This mechanism may become one of the realistic paths for constructing product moats today.
In practice, some voice platforms have begun to show early signs of network effects. By continuously expanding their databases through user-uploaded corpora and character voice samples, platform models receive ongoing training feedback, forming user dependency and a positive content cycle. For a targeted voice need like "elderly wizard," a mainstream platform can provide over twenty high-quality versions, while a generic product may offer only two or three, reflecting the gap in training depth and content breadth.
This sedimentation path has begun to establish a new type of user stickiness and platform dependency mechanism in the specific scenario of voice generation. Although it has not yet reached platform-level scale, it has shown signs of forming a closed loop.
Whether voice can become the underlying interactive interface for AI is also transitioning from technical imagination to product reality. As the most primitive form of human interaction, voice has experienced multiple rounds of failed attempts over the past few decades, from VoiceXML to voice assistants, and has never become an efficient human-computer interaction channel. It wasn't until the rise of generative models that voice finally gained the technical foundation to support a "universal interactive entry."
The path for voice AI to land is also rapidly penetrating from consumer applications to enterprise scenarios. Although initial concepts revolved around AI coaches, psychological assistants, and companionship products, the industries that are currently adopting it the fastest are those that have a natural reliance on voice, such as financial services and customer support. High turnover rates in customer service, poor service consistency, and heavy compliance costs are beginning to reveal the systemic value of AI voice controllability and automation advantages.
Some tools have already emerged, such as Granola, which is starting to enter enterprise usage scenarios. While there is not yet a "universal voice product," the path has been preliminarily opened.
More notably, AI voice is entering critical scenarios with high trust costs and high-value information transmission. This includes sales conversion, customer management, partnership negotiations, and internal cultural communication, all of which rely on high-quality dialogue and judgment transmission. Generative voice models have demonstrated more consistent, uninterrupted, and controllable execution capabilities in these complex dialogue scenarios than humans.
As such systems continue to evolve, enterprises will have to reassess their fundamental understanding of "who is the most important conversationalist in the organization."
Behind all these trends, a new structural judgment is taking shape: the moat in the AI era no longer comes from user numbers or ecological binding but from the depth of model training, the speed of product evolution, and the breadth of system integration. Companies with early accumulation, continuous updates, and high-frequency delivery capabilities are reshaping technological barriers with an "engineering rhythm." The new generation of product infrastructure may be gradually taking shape in these seemingly vertical small tracks.
The AI Avatar That Knows You Best
The evolution of voice technology is just the prologue; the concept of AI avatars is gradually moving out of the laboratory and into the path of productization. More and more teams are beginning to think: In what scenarios will people establish long-term interactions with their "synthetic selves"?
The core of AI avatars is no longer about "amplifying top influence" but about empowering every ordinary person to express and extend themselves. In reality, there are many individuals with unique knowledge, experience, and personal charm who have long been unseen due to barriers to expression and media. The proliferation of AI cloning has provided these individuals with the infrastructure for "being recorded, being called upon, and being inherited."
Knowledgeable personality agents are one of the typical paths that have been realized. For example, in a voice course system, the instructor's voice is constructed as an interactive character, combined with retrieval-enhanced generation technology, allowing users to ask any questions related to the course, with the system generating answers in real-time based on a vast corpus. The course is no longer just a passive playback of content but an active participation of knowledge personalities, transforming what originally required hours of viewing into a personalized Q&A experience completed in minutes.
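The retrieval-augmented pattern behind such a course agent can be sketched minimally. A real system would use embedding search and a generative model speaking in the instructor's cloned voice; the word-overlap scoring and tiny canned corpus here are stand-ins for illustration:

```python
# Minimal retrieval step of a RAG-style course Q&A: pick the transcript
# passages most relevant to the question, then hand them to a generator
# (an LLM in a real system) as grounding context.

corpus = [
    "Lesson 1: diversification spreads risk across uncorrelated assets.",
    "Lesson 2: compounding means returns earn returns over time.",
    "Lesson 3: rebalancing restores target portfolio weights periodically.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Score each passage by word overlap with the question (a stand-in
    for embedding similarity) and return the top-k passages."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

question = "What does compounding mean?"
context = retrieve(question, corpus)
# In production, the retrieved context plus the question would be sent to
# the voice model; here we only show the grounding step.
print(context[0])
```

The point of the pattern is that the instructor's "personality" answers from the actual course corpus rather than from the model's general knowledge, which is what turns hours of passive playback into minutes of targeted Q&A.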
This marks the elevation of digital personalities from the "content expression layer" to the "cognitive interaction entry." When AI avatars can continuously present a familiar, ideal, and even transcendent interpersonal experience in terms of semantics, rhythm, and emotional structure, the trust and reliance users establish will transcend the tool level and enter the realm of "psychological relationships."
This evolutionary path also drives updates in cognitive concepts. Future digital interactions may diverge into two core forms: one is the extended personality built around real individuals (such as mentors, idols, and extensions of friends and family), and the other is the "virtual ideal other" generated based on user preferences and idealized settings. Although the latter has never existed in reality, it can form highly effective companionship and feedback relationships.
In the creator field, this trend is also beginning to emerge. Some individuals with publicly available corpora are being "cloned" into callable digital personality assets, which may participate in content production, social interaction, and commercial licensing as part of personal IP, reshaping "individual boundaries" and "modes of expression."
This has given rise to "AI celebrities." One type consists of completely fictional image idols, fully constructed by generative models in terms of images, voices, and behaviors; the other type consists of multiple digital avatars of real stars interacting with users in different personality states across various platforms. These "AI cultural personalities" have been extensively tested in social networks, evaluated based on image realism, behavioral consistency, and semantic modeling depth.
In the content ecosystem, AI tools have lowered the barriers to creation but have not changed the scarcity of high-quality content. Compelling content still depends on the creator's aesthetic judgment, emotional tension, and sustained expressiveness. AI plays more of a role as an "enabler of realization" rather than a "substitute for creative motivation."
The group of "creators liberated by tools" is emerging. They may not have traditional artistic backgrounds but have used AI tools to release their expressive intentions. AI provides an entry point, not the end of the channel; whether they can stand out still depends on individual capabilities, thematic uniqueness, and narrative structure.
This mode of expression has already been reflected in content products. For example, video content in the form of "virtual street interviews" essentially involves structured interactions with AI-generated characters. The characters can be elves, wizards, or fantastical creatures, and the platform can generate entire dialogues and scenes with one click, automating the entire process from character setting and language logic to video rendering. This mechanism has gained high attention across multiple platforms and indicates that the product form of narrative AI is taking shape.
A similar trend is also seen in the music field, but model outputs still face challenges in expressiveness and stability. The biggest issue with AI music currently is its tendency toward "averageness." Models naturally tend to fit the center, while truly impactful artistic content often arises from "non-average" cultural conflicts, emotional extremes, and resonances with the times.
This is not due to insufficient model capabilities but rather because the algorithmic objectives do not cover the tension logic of art. Art is not about "accuracy" but about "new meanings in conflict." This also prompts a rethinking of whether AI can participate in generating culturally deep content rather than merely serving as an accelerator for repetitive expressions.
This discussion ultimately centers on the value of "AI companionship." The layer of relationship between AI and humans may be one of the earliest mature and commercially viable scenarios.
In the early stages of companion products, many users expressed that even simulated responses created a psychological safe zone. AI does not need to truly "understand"; as long as it can construct a subjective experience of "being heard," it can alleviate loneliness, anxiety, and social fatigue. For some groups, this simulated interaction is even a prerequisite mechanism for rebuilding real social skills.
AI relationships are not merely enhancers of comfort zones. On the contrary, the most valuable companionship may stem from the cognitive challenges it brings. If AI can moderately pose questions, guide conflicts, and challenge ingrained perceptions, it may become a facilitator of psychological growth rather than a confirmer. This confrontational interaction logic is the direction truly worth developing in future AI avatar systems.
This trend also reveals a new functional positioning for technology: moving from interactive tools to "psychological infrastructure." When AI can participate in emotional regulation, relationship support, and cognitive updates, what it carries is no longer just text or voice capabilities, but an extension mechanism of social behavior.
The ultimate proposition of AI companionship is not to simulate relationships but to provide dialogue scenarios that are difficult to construct in human experience. In various contexts such as family, education, psychology, and culture, the value boundaries of AI avatars are being expanded—not just as responders but also as conversationalists and relationship builders.
The next step for AI terminals is social interaction itself
After AI avatars, virtual companions, and voice agents, industry attention is further returning to the hardware and platform level—does the future form of human-computer interaction hold the potential for disruptive reconstruction?
a16z believes that, on one hand, the position of smartphones as the main interactive platform remains highly solid, with over 7 billion smartphones deployed globally, making their penetration rate, ecosystem stickiness, and usage habits difficult to shake in the short term. On the other hand, new possibilities are brewing in personal devices and continuously interactive devices.
One path is the "evolution within the phone": models are moving towards localized deployment, with significant room for optimization around privacy protection, intent recognition, and system integration. Another path is the development of new device forms, such as "always-on" headphones, glasses, and brooch devices, focusing on seamless activation, voice-driven interaction, and proactive engagement.
The truly decisive variable may still be breakthroughs in model capabilities rather than changes in hardware appearance. Hardware forms provide boundary carriers for model capabilities, while model capabilities define the upper limit of device value.
AI should not just be an input box on a webpage but should exist as "being with you." This view is increasingly becoming an industry consensus. Many early attempts have begun to explore the path of "presence-based AI": AI can see user behavior, hear real-time voice, understand the interaction environment, and actively intervene in decision-making processes. Transitioning from a suggestion provider to a behavioral participant is one of the key leap directions for AI implementation.
Some devices are already capable of recording user behavior and language data in real-time for retrospective analysis and behavior pattern recognition. There are also products attempting to actively read user screen information and provide operational suggestions or even execute actions directly. AI is no longer a reactive tool but a part of the life process.
A further question is: Can AI help users understand themselves? In daily life lacking external feedback systems, most people have a limited systematic understanding of their abilities, cognitive biases, and behavioral habits. An AI avatar that accompanies users for a long enough time and understands their paths could potentially become an intelligent mechanism for guiding cognitive awakening and potential release.
For example, it could point out to users: "If you invest 5 hours a week in a certain activity, you will have an 80% chance of becoming a professional in that field in three years"; or recommend networking resources that best match their interest structures and behavioral patterns, thereby building a more precise social map.
The core of such intelligent relationship systems lies in: AI is no longer an intermittently used functional tool but is structurally embedded in users' lives. It accompanies work, assists growth, and provides feedback, forming a continuous "digital companion" relationship.
On the device side, headphones are seen as the most likely terminal form to carry such AI assistants. Headphone devices represented by AirPods are naturally worn, have smooth voice channels, and possess the dual advantages of low interaction resistance and long-term wear. However, their social cognition in public scenarios remains limited—the cultural presumption that "wearing headphones = unwelcoming to communication" still affects the path of device proliferation.
The evolution of device forms is not just a technical issue but also a redefinition of social context.
As continuous recording becomes the industry default, new social habits are being rebuilt. The era of "default recording" is quietly unfolding among a generation of young users.
Although continuous recording brings privacy anxieties and ethical reflections, people are gradually forming a cultural tacit understanding of "recording as background." For instance, in some mixed work and social scenarios in San Francisco, "recording presence" has gradually internalized as a default setting; whereas in areas like New York, such cultural tolerance has not yet formed. The differences in acceptance and adaptation speeds of technological experiments between cities are becoming micro-variables in the rhythm of AI product implementation.
When recording behavior shifts from a tool choice to a social background, the real norm reconstruction will revolve around "boundary setting" and "value building."
We are currently in the "early stage of synchronous construction of technological paths and social norms"—with many blanks, few consensuses, and unclear definitions. But this is the most critical period for raising questions, setting boundaries, and shaping order.
Whether it is AI avatars, voice agents, digital personalities, virtual companions, or hardware forms, social acceptance, and cultural friction points, the entire ecosystem remains in its most primitive and undefined state. This means that in the coming years, many assumptions will be falsified, and some paths will rapidly amplify, but more importantly, it is crucial to continuously raise real questions during this stage and build a more sustainable answer structure.