Article Author: a16z New Media
_Article Translation: Block unicorn_
As investors, our responsibility is to delve into every corner of the tech industry to grasp future development trends. Therefore, every December, we invite our investment teams to share a significant concept they believe tech companies will tackle in the coming year.
Today, we will share insights from the Infrastructure, Growth, Bio+Health, and Speedrun teams. Stay tuned for contributions from other teams tomorrow.
Infrastructure
Jennifer Li: How Startups Navigate the Chaos of Multimodal Data
Unstructured, multimodal data has long been the biggest bottleneck for enterprises and their greatest untapped treasure. Every company is mired in a sea of PDFs, screenshots, videos, logs, emails, and semi-structured data. Models are becoming increasingly intelligent, but the input data is becoming more chaotic, leading to failures in RAG systems, agents failing in subtle and costly ways, and critical workflows still heavily relying on manual quality checks. The constraint faced by AI companies today is data entropy: in the world of unstructured data, freshness, structure, and authenticity are continuously degrading, with 80% of enterprise knowledge now residing in this unstructured data.
For this reason, bringing structure to unstructured data is a generational opportunity. Enterprises need a continuous way to clean, structure, validate, and manage their multimodal data so that downstream AI workloads can actually perform. Use cases are everywhere: contract analysis, onboarding, claims processing, compliance, customer service, procurement, engineering search, sales enablement, analytics pipelines, and every agent workflow that relies on reliable context. Startups that build platforms to extract structure from documents, images, and videos, resolve conflicts, repair pipelines, and keep data fresh and retrievable hold the keys to the kingdom of enterprise knowledge and processes.
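To make the idea concrete, here is a minimal sketch of the kind of structure-extraction step such a platform performs, turning a semi-structured email into a clean record that a RAG pipeline can index. The email text, field names, and regex patterns are all invented for illustration; a real system would use layout-aware models rather than hand-written patterns.

```python
import json
import re

# Hypothetical semi-structured input: an invoice notification email.
RAW_EMAIL = """\
From: vendor@example.com
Subject: Invoice INV-2041
Amount due: $1,250.00 by 2026-03-15 for contract C-77.
"""

def extract_invoice(text: str) -> dict:
    """Pull structured fields out of free-form text via regex patterns."""
    patterns = {
        "invoice_id": r"Invoice\s+(INV-\d+)",
        "amount_usd": r"\$([\d,]+\.\d{2})",
        "due_date": r"by\s+(\d{4}-\d{2}-\d{2})",
        "contract_id": r"contract\s+(C-\d+)",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text)
        # Missing fields stay None so downstream validation can flag them.
        record[field] = match.group(1) if match else None
    return record

print(json.dumps(extract_invoice(RAW_EMAIL), indent=2))
```

The interesting work in production is everything around this step: validating the extracted values, reconciling conflicts across sources, and keeping records fresh as documents change.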
Joel de la Garza: AI Revitalizes Cybersecurity Recruitment
For much of the past decade, the biggest challenge facing chief information security officers (CISOs) has been recruitment. From 2013 to 2021, the number of unfilled cybersecurity positions grew from fewer than 1 million to 3 million. Security teams hired large numbers of technically skilled engineers to perform tedious tier-one security tasks, such as reviewing logs, work no one wants to do. The root of the problem is that security teams bought products capable of detecting everything, creating this drudgery in the first place: someone has to review all of that output. The result is an artificial labor shortage, and a vicious cycle.
By 2026, AI will break this cycle and fill the recruitment gap by automating many repetitive tasks for cybersecurity teams. Anyone who has worked in a large security team knows that half of the work could be easily solved through automation, but when the workload piles up, it becomes difficult to determine which tasks need automation. Native AI tools that can help security teams address these issues will ultimately enable them to free up time to do what they truly want to do: hunt down bad actors, build new systems, and patch vulnerabilities.
Malika Aubakirova: Native Agent Infrastructure Will Become Standard
By 2026, the biggest infrastructure impact will not come from external companies but from within enterprises. We are shifting from predictable, low-concurrency "human-speed" traffic to recursive, bursty, and large-scale "agent-speed" workloads.
Today's enterprise backends are designed for a 1:1 ratio of human operations to system responses. They are not architected for the recursive fan-out of a single agent goal triggering 5,000 subtasks, database queries, and internal API calls at millisecond intervals. When an agent attempts to refactor a codebase or triage security logs, it does not look like a user. To a traditional database or rate limiter, it looks like a DDoS attack.
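The mismatch is easy to demonstrate. Below is a minimal token-bucket rate limiter of the kind commonly sized for human-speed traffic (the 10 requests/second rate and burst of 20 are illustrative numbers, not from the article); a single agent fanning out thousands of sub-calls instantly exhausts it, which is why the backend reads the agent as an attack.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, sized for human-speed traffic."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A limiter tuned for humans: 10 requests/sec with a burst of 20.
bucket = TokenBucket(rate_per_sec=10, burst=20)

# One agent "goal" fanning out 5,000 sub-calls in an instant:
# nearly all are rejected, exactly the DDoS-like signature described above.
allowed = sum(bucket.allow() for _ in range(5000))
print(f"{allowed} of 5000 agent calls admitted")
```

Only about 20 of the 5,000 calls get through; the agent-native alternative is to provision for bursts as the default rather than the anomaly.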
Building systems for agents in 2026 means redesigning the control plane. We will witness the rise of "agent-native" infrastructure. Next-generation infrastructure must treat "thundering herd" effects as the default state. Cold start times must be reduced, latency fluctuations must be significantly lowered, and concurrency limits must be multiplied. The bottleneck lies in coordination: achieving routing, locking, state management, and policy execution in large-scale parallel execution. Only those platforms that can handle the ensuing flood of tool execution will ultimately prevail.
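One small piece of that coordination problem can be sketched with standard tooling: gating a large parallel fan-out behind a concurrency limit so the backend sees bounded load. This is an illustrative toy (the task counts and limits are made up), not a description of any particular agent platform.

```python
import asyncio

async def subtask(i: int, limiter: asyncio.Semaphore, stats: list):
    # The semaphore is the coordination point: it caps how many of the
    # fanned-out tasks touch the backend at the same time.
    async with limiter:
        stats[0] += 1                       # tasks currently in flight
        stats[1] = max(stats[1], stats[0])  # record peak concurrency
        await asyncio.sleep(0)              # stand-in for a backend call
        stats[0] -= 1
    return i

async def fan_out(n: int, max_concurrency: int) -> int:
    """Run n subtasks, never allowing more than max_concurrency at once."""
    limiter = asyncio.Semaphore(max_concurrency)
    stats = [0, 0]  # [in-flight, peak]
    await asyncio.gather(*(subtask(i, limiter, stats) for i in range(n)))
    return stats[1]

# 1,000 subtasks, but the backend never sees more than 50 concurrent calls.
peak = asyncio.run(fan_out(1000, max_concurrency=50))
print(f"peak concurrency: {peak}")
```

Real agent-native control planes extend this idea beyond a single process: distributed locks, routing, and policy checks applied to every tool call in the fan-out.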
Justine Moore: Creative Tools Move Towards Multimodal
We now have building blocks for storytelling with AI: generative voice, music, images, and video. However, for any content beyond one-off snippets, obtaining the desired output is often time-consuming and frustrating—even impossible—especially when you want to approach a traditional director-level of control.
Why can't we feed a model a 30-second video and have it continue the scene with new characters created from reference images and sounds? Or reshoot a video so we can observe the scene from different angles, or have actions match a reference video?
2026 will be the year AI creative tools become truly multimodal: you can hand the model any form of reference content and use it to create new content or edit existing scenes. We have already seen early products like Kling O1 and Runway Aleph. But there is still much work to do, with innovation needed at both the model and application layers.
Content creation is one of the most powerful applications of AI, and I expect to see a surge of successful products emerge, covering a variety of use cases and customer segments, from meme creators to Hollywood directors.
Jason Cui: The AI-Native Data Stack Continues to Evolve
In the past year, we have seen the integration of the "modern data stack" as data companies shift from focusing on specialized areas like data ingestion, transformation, and computation to bundled unified platforms. For example: the merger of Fivetran/dbt and the continued rise of unified platforms like Databricks.
Although the entire ecosystem has clearly matured, we are still in the early stages of a truly AI-native data architecture. We are excited about how AI will continue to transform multiple aspects of the data stack and are beginning to realize that data and AI infrastructure are becoming inseparable.
Here are some directions we are optimistic about:
How data will flow into high-performance vector databases alongside traditional structured data
How AI agents will solve the "context problem": continuously accessing the correct business data context and semantic layer to build powerful applications, such as interacting with data and ensuring these applications always have the correct business definitions across multiple record systems
How traditional business intelligence tools and spreadsheets will change as data workflows become more agentified and automated
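The first two directions above come together in hybrid retrieval: semantic search over vectors, constrained by structured business metadata. Here is a toy sketch with made-up 3-dimensional embeddings and an invented `region` field; a real system would use model-generated embeddings and a vector database rather than in-memory lists.

```python
import math

# Toy corpus: each record carries both a vector and structured metadata.
DOCS = [
    {"id": 1, "region": "EMEA", "vec": [0.9, 0.1, 0.0], "text": "Q3 churn report"},
    {"id": 2, "region": "AMER", "vec": [0.8, 0.2, 0.1], "text": "Q3 churn summary"},
    {"id": 3, "region": "AMER", "vec": [0.0, 0.1, 0.9], "text": "Office lease renewal"},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, region, k=1):
    """Filter on structured metadata first, then rank by vector similarity."""
    candidates = [d for d in DOCS if d["region"] == region]
    return sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]

hits = search(query_vec=[1.0, 0.0, 0.0], region="AMER")
print(hits[0]["text"])
```

The "context problem" for agents is largely about keeping the metadata side of this query correct: the semantic layer must agree with the business definitions in the systems of record.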
Yoko Li: A Year of Immersion in Video
By 2026, video will no longer be content we passively watch but rather a space we can truly immerse ourselves in. Video models will ultimately be able to understand time, remember what they have previously shown, respond to our actions, and maintain a reliable consistency akin to the real world. These systems will no longer just generate a few seconds of fragmented imagery but will be able to sustain characters, objects, and physical effects long enough for actions to have meaning and show their consequences. This transformation will turn video into a medium that can continuously evolve: a space where robots can practice, games can evolve, designers can prototype, and agents can learn in practice. What is ultimately presented will no longer resemble a video clip but rather a vibrant environment, one that begins to bridge the gap between perception and action. For the first time, we will feel we can be present in the videos we generate.
Growth
Sarah Wang: Systems of Record Lose Their Dominance
By 2026, the true disruptive change in enterprise software will be that systems of record finally lose their dominance. AI is narrowing the gap between intent and execution: models can now directly read, write, and reason over operational data, transforming IT service management (ITSM) and customer relationship management (CRM) systems from passive databases into autonomous workflow engines. As advances in reasoning models and agent workflows compound, these systems will not only respond but also predict, coordinate, and execute end-to-end processes. Interfaces will give way to dynamic agent layers, while the traditional system of record retreats into the background as a generic persistence layer; its strategic advantage will be ceded to whoever controls the agent execution environment employees use daily.
Alex Immerman: AI in Vertical Industries Evolves from Information Retrieval and Reasoning to Multi-Party Collaboration
AI is driving unprecedented growth in vertical industry software. Healthcare, legal, and real estate companies have reached over $100 million in annual recurring revenue (ARR) in just a few years; the finance and accounting sectors are close behind. This evolution began with information retrieval: finding, extracting, and summarizing the right information. 2025 brought reasoning capabilities: Hebbia analyzes financial statements and builds models, Basis reconciles trial balances across different systems, and EliseAI diagnoses maintenance issues and dispatches the right vendors.
2026 will unlock multi-party collaboration modes. Vertical industry software benefits from domain-specific interfaces, data, and integrations. However, the nature of work in vertical industries is inherently collaborative among multiple parties. If agents are to represent the workforce, they need to collaborate. From buyers and sellers to tenants, consultants, and vendors, each party has different permissions, workflows, and compliance requirements, which only vertical industry software can understand.
Today, each party uses AI in isolation, so context is lost at every handoff. An AI analyzing procurement agreements does not talk to the CFO's model to adjust assumptions. A maintenance AI has no idea what promises field workers made to tenants. The multi-party shift is about coordination across stakeholders: routing tasks to functional experts, maintaining context, and synchronizing changes. Counterparty AIs will negotiate within set parameters and flag asymmetries for human review. A senior partner's markups will train systems across the whole firm. Tasks executed by AI will complete at higher success rates.
As the value of multi-party collaboration and multi-agent collaboration increases, the cost of switching will also rise. We will see the network effects that AI applications have historically failed to achieve: the collaboration layer will become a moat.
Stephenie Zhang: Designed for Agents, Not Humans
By 2026, people will begin to interact with the web through agents. What has been optimized for human consumption will no longer be equally important for agent consumption.
For years, we have focused on optimizing predictable human behavior: ranking high in Google search results, appearing prominently in Amazon search results, and starting with concise "TL;DR" summaries. In high school, I took a journalism class where the teacher taught us to write news using "5W1H," and feature articles should start with an engaging lead to attract readers. Perhaps human readers might miss those highly valuable, insightful discussions hidden on the fifth page, but AI will not.
This transformation is also reflected in software. Applications were designed for human eyes and clicks, so optimization meant good user interfaces and intuitive workflows. As AI takes over retrieval and interpretation, visual design matters less for understanding. Engineers no longer stare at Grafana dashboards; AI site reliability engineers (SREs) interpret telemetry and post their analysis to Slack. Sales teams no longer comb through customer relationship management (CRM) systems; AI automatically extracts patterns and summaries.
We are no longer designing content for humans but for AI. The new optimization goal is no longer visual hierarchy but machine readability—this will change the way we create and the tools we use.
Santiago Rodriguez: The End of the "Screen Time" KPI in AI Applications
For the past 15 years, screen time has been the best proxy for the value that consumer and enterprise applications deliver. We have lived in a paradigm where Netflix streaming hours, clicks in electronic health record interfaces (to demonstrate "meaningful use"), and even time spent in ChatGPT serve as key performance indicators. As we move toward outcome-based pricing models that align vendor and user incentives, screen-time reporting will be the first thing we abandon.
We have already seen this in practice. When I run DeepResearch queries on ChatGPT, I can derive immense value even with almost zero screen time. When Abridge magically captures doctor-patient conversations and automatically executes follow-ups, doctors hardly need to look at the screen. When Cursor develops a complete end-to-end application, engineers are planning the next feature development cycle. And when Hebbia writes presentations based on hundreds of public documents, investment bankers can finally get a good night's sleep.
This brings a unique challenge: the single-user pricing model for applications will require more complex return on investment (ROI) measurement methods. The proliferation of AI applications will enhance doctor satisfaction, developer efficiency, financial analyst well-being, and consumer happiness. Companies that can articulate ROI in the simplest terms will continue to outpace their competitors.
Bio + Health
Julie Yoo: Healthy Monthly Active Users (MAU)
By 2026, a new healthcare customer segment will come into focus: "healthy monthly active users."
Traditional healthcare systems primarily serve three user groups: (a) "sick monthly active users": a group with fluctuating needs and high costs; (b) "sick daily active users": for example, patients requiring long-term intensive care; and (c) "healthy yearly active users": relatively healthy individuals who rarely seek medical care. Healthy yearly active users risk transitioning into sick monthly or daily active users, and preventive care can slow that transition. However, our reimbursement system rewards treatment rather than prevention, so proactive health checks and monitoring services have not been prioritized, and insurance rarely covers them.
Now, the healthy monthly active user segment is emerging: they are not sick but wish to regularly monitor and understand their health status—and they may represent the largest segment of consumers. We anticipate that a number of companies—including AI-native startups and upgraded versions of existing enterprises—will begin to offer regular services to cater to this user group.
As AI lowers the cost of healthcare services, new preventive-focused health insurance products emerge, and consumers become increasingly willing to pay out-of-pocket for subscription models, "healthy monthly active users" represent the next highly promising customer segment in the healthcare technology field: they are continuously engaged, data-driven, and focused on prevention.
Speedrun (the name of an internal investment team at a16z)
Jon Lai: World Models Shine in the Narrative Domain
By 2026, AI-driven world models will fundamentally change the way narratives are told through interactive virtual worlds and digital economies. Technologies like Marble (World Labs) and Genie 3 (DeepMind) are already capable of generating complete 3D environments based on text prompts, allowing users to explore them as if in a game. As creators adopt these tools, entirely new forms of narrative will emerge, potentially evolving into a "generative Minecraft," where players can co-create vast and ever-evolving universes. These worlds can combine game mechanics with natural language programming, allowing players to issue commands like "create a brush that turns anything I touch pink."
Such models will blur the lines between players and creators, making users co-creators of a dynamically shared reality. This evolution may give rise to interconnected generative multiverses, allowing different genres like fantasy, horror, and adventure to coexist. In these virtual worlds, the digital economy will thrive, with creators earning income by building assets, mentoring newcomers, or developing new interactive tools. Beyond entertainment, these generative worlds will also serve as rich simulation environments for training AI agents, robots, and even artificial general intelligence (AGI). Thus, the rise of world models not only signifies the emergence of a new genre of games but also heralds the arrival of a new creative medium and economic frontier.
Josh Lu: "My Year"
2026 will be "My Year": products will no longer be mass-produced but tailored to you.
We have seen this trend everywhere.
In education, startups like Alphaschool are building AI tutors that can adapt to each student's learning pace and interests, allowing every child to receive an education that matches their learning rhythm and preferences. Such a level of attention would be impossible without spending tens of thousands of dollars on tutoring for each student.
In health, AI is designing daily supplement combinations, workout plans, and meal plans tailored to your physiological characteristics. No coaches or labs needed.
Even in media, AI allows creators to recombine news, shows, and stories to create a personalized information stream that perfectly aligns with your interests and preferences.
The biggest companies of the last century succeeded because they found the average consumer.
The biggest companies of the next century will win by finding individuals among the average consumers.
In 2026, the world will no longer optimize for everyone but will begin to optimize for you.
Emily Bennett: The First AI-Native University
I anticipate that in 2026 we will witness the birth of the first AI-native university, an institution built from the ground up around AI systems.
In recent years, universities have been trying to apply AI to grading, tutoring, and course scheduling. But what is emerging now is a deeper level of AI, an adaptive academic system capable of real-time learning and self-optimization.
Imagine an institution where courses, consultations, research collaborations, and even building operations are continuously adjusted based on data feedback loops. The course schedule will self-optimize. Reading lists will be updated every night and automatically rewritten as new research emerges. Learning paths will be adjusted in real-time to accommodate each student's learning pace and circumstances.
We have already seen some signs. Arizona State University (ASU) has partnered with OpenAI for a university-wide collaboration that has spawned hundreds of AI-driven projects covering teaching and administrative management. The State University of New York (SUNY) has now incorporated AI literacy into its general education requirements. These are all foundational steps for deeper deployment.
In an AI-native university, professors will become architects of learning, responsible for data management, model tuning, and guiding students on how to question machine reasoning.
Assessment methods will also change. Detection tools and plagiarism bans will be replaced by AI awareness assessments, where students' grading criteria will no longer be whether they used AI but how they used it. Transparency and strategic use will replace prohibition.
As industries strive to recruit talent capable of designing, managing, and collaborating with AI systems, this new university will become a training ground, producing graduates proficient in AI system coordination to support the rapidly changing labor market.
This AI-native university will become the talent engine of the new economy.
That's all for today; we look forward to seeing you in the next part.