YC CEO Garry Tan: Personal AI Will Free Individuals from Predatory Institutions
Garry Tan argues that the goal of personal AI is to let individuals accomplish impactful work with AI augmentation on their own, unbound by the large companies, platforms, and bureaucratic systems he calls "predatory institutions." In his view, it starts with the "freedom to write one's own prompts and control one's own data": the power struggle of the next decade will again be a contest between individuals and giants.
In the path he envisions, personal AI gives everyone an "intelligent external brain," akin to a privatized AI agent: users control their own data and prompt libraries, and AI assists with complex decision-making, creative labor, and high-leverage output, rather than funneling every behavior, preference, and workflow to centralized platforms. This aligns with the "augment, don't replace" school of human-machine collaboration, and resonates with recent critiques of privacy erosion and surveillance capitalism.
He concluded with the declaration that "2034 will not be like 1984," a clear nod to warnings from figures such as Microsoft President Brad Smith that, without regulation, AI could produce a "1984-style surveillance society." One path has AI wielded by a few governments and platforms to build pervasive surveillance and control; the other makes AI a controllable, de-platformed tool for data and decision-making. The former concentrates power in the "eyes"; the latter concentrates it in the "external brain held in hand." Tan clearly bets on the latter and calls it the "new battleground" of the next decade.
Source: Public Information
ABAB AI Insight
Judged by his track record, Garry Tan has long stood on the "individual vs. institution" side of this narrative. As president of YC, his main business is backing one-person companies and small teams that use software and the internet to amplify individual capability into organization-level output. He has also publicly criticized large platforms and governments for centralized control over data and speech, arguing that the point of entrepreneurship is to give individuals more means of production, not merely to "help giants create a more respectable job." Publicly defining personal AI as a weapon against "predatory institutions" is consistent with his support for small, AI-powered teams and one-person companies: recent YC batches show noticeably more projects in personal AI assistants, personal knowledge bases, and personal agents, a bet on this structure rather than on raw computing power or large-model infrastructure.
In terms of capital pathways, "freedom to write one's own prompts and own one's data" is not just a slogan; it is a direct challenge to existing SaaS and large-model platform business models. The mainstream commercialization path concentrates user data, interactions, and workflows into a few cloud models and products, feeds that data back into models and behavioral profiles, and then builds lock-in through subscriptions, pay-per-use pricing, and upstream-downstream restrictions. For Tan's personal AI to hold structurally, products must turn data, prompts, and individual workflows into user-side assets: privatized vector databases in local or personal clouds, transferable prompt libraries, and edge-side or self-hosted models, so that the marginal benefit of model improvements is not entirely internalized by platforms. Capital will redraw its lines here: platform companies will keep profiting from models and infrastructure, while personal AI tools shift from "selling tools" to "selling control," such as one-time local agents and exportable personal knowledge systems, rather than locking everything into a "monthly subscription + data hosting" black box.
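The "user-side asset" idea above can be made concrete with a toy sketch. Everything here is illustrative and assumed, not any actual product: a local JSON file holds the user's prompts and notes, a naive bag-of-words similarity stands in for a real embedding-based vector store, and the whole asset is exportable as plain text, so nothing depends on a hosted platform.

```python
import json
import math
from collections import Counter
from pathlib import Path

class PersonalKnowledgeStore:
    """Toy user-side store (hypothetical): prompts and notes live in a
    local JSON file the user can export, back up, or move between tools."""

    def __init__(self, path="personal_store.json"):
        self.path = Path(path)
        self.entries = []  # each entry: {"text": ..., "tags": [...]}
        if self.path.exists():
            self.entries = json.loads(self.path.read_text())

    def add(self, text, tags=()):
        self.entries.append({"text": text, "tags": list(tags)})
        self.path.write_text(json.dumps(self.entries, indent=2))

    @staticmethod
    def _vec(text):
        # Bag-of-words stand-in for a real embedding model.
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def search(self, query, k=3):
        q = self._vec(query)
        scored = [(self._cosine(q, self._vec(e["text"])), e)
                  for e in self.entries]
        scored.sort(key=lambda pair: -pair[0])
        return [e for score, e in scored[:k] if score > 0]

    def export(self):
        """The whole asset is one portable JSON string: no lock-in."""
        return json.dumps(self.entries)
```

The point of the sketch is the shape, not the retrieval quality: the data never has to leave a file the user owns, and `export` makes the asset transferable, which is exactly the property a subscription-plus-hosting model does not offer.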
By analogy, the "new battleground" he describes resembles several historical forks: the open protocols of the early internet vs. closed platforms; the "personal computer" of the PC era vs. the "terminal" of the mainframe era; the fight over whether smartphone users can control their own devices and data. Today's AI debates oscillate the same way, between fully managed cloud models and user-controlled AI with edge-side or local augmentation. One path continues down surveillance capitalism and the big-data economy, turning individuals into free fuel for model training and behavior prediction; the other, like open-source software, cryptocurrencies, and privacy-preserving computation, tries to shift some power back to endpoints and individuals: running open-source models locally, users controlling their own embeddings and prompt libraries, and collaborating with cloud services in verifiable ways.
Structurally, he proposes a "reconstruction of pricing power and control." The core is not technological replacement but how power is distributed: if AI infrastructure and data are fully controlled by a few platforms and state apparatuses, then who can access models, whose content is amplified, and whose accounts get banned all become centralized political and commercial decisions. That is the realistic path to the "technological 1984" warned of by figures like Brad Smith: every request passes through a center that both keeps the ledger and makes the rulings. The personal AI route instead decentralizes some intelligence and data sovereignty to individuals, turning prompts, knowledge, preferences, and past decisions into personal assets, with platforms collaborating only within agreed boundaries. Mechanically, it is the move from "renting a brain on someone else's server" to "bringing your own brain to the network," shifting pricing power from platforms charging users "on demand" to users choosing platforms "based on contribution."
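The "bring your own brain, collaborate within boundaries" mechanic can be sketched in a few lines. This is a hypothetical illustration, not a real protocol: the local agent decides what crosses the boundary by redacting sensitive tokens before calling an injected transport function, so the cloud side sees only what the user's own code allows.

```python
import re

# Illustrative pattern only: SSN-like "123-45-6789" strings.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text):
    """Strip obviously sensitive tokens before anything leaves the device."""
    return SENSITIVE.sub("[REDACTED]", text)

def ask_cloud(query, send):
    """`send` is an injected transport (hypothetical cloud call).
    The user-side agent, not the platform, controls the outbound payload."""
    outbound = redact(query)
    return send(outbound)
```

Inverting the dependency this way, where the platform is a parameter the agent calls rather than a host the user lives inside, is the small-scale version of the power shift the paragraph describes.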
ABAB News · Cognitive Law
AI is not scary; what is scary is that your intelligence and data are all recorded in someone else's ledger.
True personal AI is not about having one more assistant, but about having one less institution that can cut off your power at any time.
What determines the future social form is not the scale of model parameters, but who writes the prompts and who keeps the accounts.