The Architecture of Total Capture
When you hesitate before typing that last sentence, it reveals more about you than anything you actually wrote. The delete key you pressed four times. The three drafts you attempted at an opening line before you settled on one. The twelve seconds you paused while composing a question about your disagreement with your company's layoff strategy. You think these are private moments of thought. But they're actually data points.
Privacy debates currently focus on the wrong thing. Arguments center on who can access purchase histories, location data, browsing patterns. These are still important debates but they're missing the new frontier entirely. Who has access to information about you is a breach of privacy. But who is observing how you think? That's a breach of cognitive security.
Cognitive privacy is the right to mental self-determination. The freedom to think, wonder, question, struggle, and form ideas without those processes being observed, recorded, or manipulated by external systems. Current privacy laws attempt to protect data items (what you said, what you purchased, where you went). They don't protect the cognitive processes themselves (how you thought about it, what you struggled with, what you almost did but reconsidered).
This gap has become a national security vulnerability, a developmental crisis for children, and a fiduciary liability for enterprises. Traditional privacy asks who has your data. Cognitive privacy asks who is watching you think.
A 2024 paper in Neuron by Magee, Ienca, and Farahany introduced the term "cognitive biometrics." The researchers define cognitive biometric data as "neural data, as well as other data collected from a given individual or group of individuals through other biometric and biosensor data" that can "be processed and used to infer mental states." This includes not just brain-computer interfaces but heart rate variability, eye-tracking patterns, and behavioral data from everyday devices.
These pieces of data on their own mean next to nothing. A heart rate of 105 is only one data point. But cognitive biometric data becomes an inferential data point when given semantic value by advanced algorithms. Raw data like EEG signals, heart rate variability, or eye-tracking movements have no intrinsic meaning in isolation. But when analyzed by an algorithm, they can show patterns corresponding to mental states or intentions. Like ink on paper that conveys meaning through specific arrangements, these physiological traces become readable maps of cognition.
The Neuron paper documents the research that demonstrates algorithmic ability to predict highly personal traits using EEG, eye-tracking, and heart rate data with remarkable accuracy. These include sexual orientation, personality traits, drug use history, and mental health conditions. Combined with contextual information like location, visual field, and time of day, cognitive biometric data is getting better at revealing responses to environmental stimuli that users themselves may not consciously recognize.
Brittan Heller termed this capability "biometric psychography": the extraction of psychological profiles from physiological data. Researchers have used these techniques to uncover proxies for PIN numbers and bank account details, romantic attractions, and skill levels in various tasks. Your body betrays your mind.
Four developments have created what I call "the architecture of total capture." Each is concerning individually. Together, they represent something unprecedented in human history. Picture the comprehensive observation and extraction of human thought processes at population scale to influence your decision making and beliefs.
1. Pre-Upload Scanning
Major social media platforms can now analyze content while it's being created, even if it's never posted. When someone drafts a message or video and deletes it, that deletion isn't private. It’s analyzed. When a user starts typing a search query and abandons it, that hesitation is recorded. If an employee composes an email, reconsiders the phrasing, and revises it three times before sending, those revision patterns can become behavioral data.
The content users chose not to share reveals more about their vulnerabilities, insecurities, and manipulability than content they did share. A published post shows what someone is willing to defend publicly. A deleted draft shows what they're actually thinking. This has been documented. Keyboard tracking during composition is technically straightforward. Platforms that capture this data gain insight into the formation of thought itself, not just its final expression.
2. Biometric Sentiment Tracking
Eye-tracking, micro-expression analysis, and emotional response measurement now occur during content consumption. This creates real-time psychological profiles. It moves beyond what users watch toward how they react while watching. Consumer wearables (smartwatches, VR headsets, earbuds) collect cognitive biometrics continuously. The Neuron paper found that over one in five Americans regularly use fitness wearables, with the global market projected to reach $290 billion by 2032. These devices monitor heart rate and other physiological functions that can infer cognitive states without any direct neural measurement.
The researchers reviewed privacy policies of seventeen BCI, XR, and fitness wearable brands. All BCI and fitness wearable companies indicated they collect cognitive biometric data from users in at least some circumstances. Five of six XR companies either explicitly collect this data or maintain privacy policies vague enough to permit it. Only one company, Magic Leap, explicitly guarantees that biometric information is processed on-device and never collected by the company.
Heart rate variability while reading a news article. Pupil dilation during a product presentation. Stress response to a particular political claim. These physiological traces of cognition are captured, aggregated, and monetized. The platforms know which content makes users anxious, which ideas excite them, which arguments trigger defensive responses. They know this before users know it themselves.
3. Analytic Atrophy
When users rely on AI for synthesis (making sense of information) rather than retrieval (looking up facts) they outsource the cognitive processes that would normally build analytical capability. I call this "analytic atrophy." The progressive degradation of independent analytical capacity through habitual cognitive offloading.
There's a meaningful difference between using a calculator and using AI. When someone uses a calculator, they still have to understand the equation. They know what numbers to punch in and why. They direct the tool. But when someone asks a chatbot to explain a complex topic, they bypass the neural work required to actually understand that topic themselves. They're not offloading the calculation. They're offloading the synthesis.
Gerlich's 2025 study found a significant negative correlation between frequent AI tool usage and critical thinking abilities. Younger participants showed the strongest dependence and the lowest critical thinking scores. The relationship was non-linear. Moderate use showed minimal impact, but heavy reliance produced measurable cognitive decline. This suggests that in high-stakes environments like intelligence analysis, medical diagnosis, legal reasoning the inability to synthesize information independently creates operational vulnerability. The analyst who cannot reason without AI assistance cannot detect when AI-generated conclusions are inaccurate or compromised.
4. Cognitive Prompt Injection
In AI security, "prompt injection" describes attacks where adversaries insert instructions that language models treat as authoritative. The system cannot reliably distinguish between trusted instructions and untrusted input, causing outputs aligned with the attacker's objectives rather than the user's intent.
The same vulnerability exists at population scale. Just as an LLM can be tricked by hidden instructions embedded in seemingly innocent content, a population conditioned by algorithmic recommendation can be "injected" with adversarial narratives that bypass critical filters. Actors who understand what content algorithms reward can feed them material that achieves their own objectives while appearing as organic engagement. The algorithm becomes the distribution mechanism for influence operations.
Attribution is nearly impossible. The influence appears organic. The target population perceives no external manipulation because the cognitive process of "forming an opinion" feels autonomous even when algorithmically shaped. The infrastructure that extracts cognitive data for commercial purposes creates the attack surface for adversarial manipulation of collective decision-making.
Consider a law firm using AI to research case strategy. Lawyers are allowed to ask questions, explore hypotheses, and draft and discard arguments. All of this is protected by attorney-client privilege.
If these lawyers use an AI tool to brainstorm their strategy, these cognitive processes flow to AI systems as data. Even if the policy claims that the data won’t be used to train the model, the data is still provided to that company operating the software. This is a fiduciary breach of confidentiality. Not through data theft, but through architectural design. And oftentimes it’s unintentional.
A review of industry practices reveals how normalized this exposure has become. A white paper by the Neurorights Foundation found that all thirty neurotechnology companies reviewed retained broad rights over the neural data they collected. Most companies provide little information about how collected data is stored. Of the companies whose privacy policies Magee and colleagues reviewed, only Apple explicitly states that they encrypt biometric data in a way that prevents even their own employees from accessing it.
The remaining companies vary widely in their guarantees. Meta's privacy policy offers virtually no insight into their data storage practices. While six companies claim to use encryption for some types of data, only one clearly indicates this includes biometric data. None mention that encrypted data are inaccessible to the company and its employees. These limited disclosures provide no assurance that sensitive insights derived from cognitive biometric data will be kept confidential.
The same exposure affects every protected relationship:
Attorney-Client: Communications are privileged. But research queries, draft arguments, and abandoned strategies? Exposed through the AI system's observation of how lawyers think.
Physician-Patient: Medical records are protected. But diagnostic reasoning, differential considerations, the physician's uncertainty while working through a difficult case? Visible to whatever system processed those cognitive steps.
Financial Advisor-Client: Investment decisions are confidential. But the analysis process, the risk assessment methodology, the advisor's actual reasoning patterns? Captured as data.
Corporate Board: Deliberations are protected. But strategic analysis, scenario modeling, competitive assessments conducted through AI tools? Available to systems the board doesn't control.
Organizations deploying AI for analytical work are externalizing cognitive processes that should remain internal. The tool doesn't just assist with thinking. It observes the thinking.
The OODA Loop Under Observation
Military strategist John Boyd's OODA loop (Observe, Orient, Decide, Act) describes the decision cycle that determines competitive advantage. The organization that completes its OODA loop faster than its adversary can act inside the adversary's decision cycle, creating confusion and gaining initiative. When analytical processes run through AI systems that may be compromised, or whose training data may be accessible to adversaries, the Orient and Decide phases become observable.
An adversary that can see how an organization makes sense of data gains the subtle ability to shape what conclusions the organizations make. They don't need to access the decision. They influence the decision-making process itself.
Cognitive dependency compounds this vulnerability. Habitual reliance on AI for analysis means organizations may lose the human capacity to audit whether AI-generated conclusions are sound. When the tool is compromised, the organization lacks independent means to detect the compromise. The most dangerous AI failure won’t be if the system crashes. It's when it produces wrong conclusions that nobody in the organization has the independent capacity or awareness to detect.
Defending cognitive privacy requires architectural changes over policy updates. The Neuron paper proposes a "privacy floor" for cognitive biometrics built on four principles that should guide enterprise deployment: informed consent, data minimization, data rights, and data security.
Edge Processing: The paper's most actionable recommendation. Raw cognitive biometric data should be processed on-device rather than transmitted to company servers. When data doesn't leave the device, it can't be aggregated into profiles. Apple Vision Pro and Magic Leap 2 already process eye-tracking data entirely on the edge. This should be the default standard, not the exception.
Zero-Digital Environments: Designate rooms and times where no devices are allowed. Critical thinking and sensitive ideation happen on paper or with people. Ideas get digitized only after formation, not during it. Don't let the algorithm watch you struggle.
Ephemeral Processing: Data retention policies must default to immediate deletion. Every interaction that persists becomes a profile component. Organizations need fiduciary-grade AI — systems that guarantee input will never become training data. The paper calls for "end-to-end encryption" as default when edge processing isn't feasible, ensuring data remains inaccessible to unauthorized parties including the company itself.
Human Synthesis: AI is for retrieval only, not synthesis. The work of connecting dots, drawing conclusions, and making sense of conflicting information must remain human tasks. That cognitive struggle is what builds analytical capacity. Bypass the struggle, bypass the growth.
The Neuron researchers note that current laws are both overspecified and underinclusive. These laws target specific technologies like neural interfaces while missing the broader category of data capable of inferring mental states. Colorado's 2024 law was the first law to explicitly expand the definition of sensitive data but it’s not perfect. For instance, it protects "neural data" but excludes eye-tracking data from VR headsets that can reveal equally sensitive information. It. Without protecting the full scope of privacy, it leaves people and organizations exposed.
Organizations that understand cognitive privacy first will retain the capacity to act on their own behalf. They'll maintain a workforce that can think without algorithmic dependency, earn the trust of clients requiring genuine confidentiality, and build resilience against population-scale influence operations.
Organizations that treat cognitive processes as extractable resources will create something else. Analytic atrophy. A workforce dependent on tools they cannot audit. Fiduciary exposure. Clients' cognitive processes will be visible to external systems. And last, strategic vulnerability. Decision-making may become influenced or manipulable on potentially compromised infrastructure.
The capacity to think independently, in private, without comprehensive observation is not a luxury but the precondition for human agency in AI-mediated environments. If our decision-making processes are observable, they are exploitable. The algorithm doesn't have to control what you decide. It only has to observe how you decide. In an era of total capture, the most dangerous vulnerability is no longer a leaked secret or a hacked server. It is the human mind. To defend it, we must first defend the privacy of the thought that precedes our actions.
Timothy Cook is the author of Unautomatable: The Human Capacities That Make Learning Meaningful (MIT Press, peer review) and Director of The Cognitive Privacy Project. He is a writes the "Algorithmic Mind" column for Psychology Today and is a founding team member of the Coaching Ethics and AI Forum. His research on the necessity of developing human skills is shared on Connected Classroom.

You form an opinion. It feels like yours. But the article was surfaced by an algorithm. The pattern was amplified because it triggered engagement. You weren't hacked. You were nudged. At scale.