On February 9, 2026—one day after Anthropic ran Super Bowl ads lampooning the concept of advertising in AI chatbots—OpenAI simultaneously updated its US Privacy Policy and launched ad testing in ChatGPT. The timing was not subtle. The policy changes, however, deserve the kind of careful reading that most users will never give them.
This analysis breaks down the updated policy through an OSINT and security lens: what data OpenAI now collects, what changed from previous versions, what the policy language actually authorizes versus what the company publicly claims, and what it all means for professionals who handle sensitive information.
Scope and Structure of the Update
The updated US Privacy Policy is dated February 9, 2026, and replaces the previous version. OpenAI now maintains separate privacy policies for US users, EEA/UK/Switzerland users, and a Korea-specific addendum. The US version is notably less restrictive than the European counterpart—unsurprising given GDPR constraints, but worth flagging for anyone who assumed a single global standard.
The policy defines OpenAI’s “website, applications, and services” collectively as “Services.” Critically, it excludes API customers—their data handling falls under separate customer agreements. This distinction matters: if your organization accesses OpenAI through the API with a Data Processing Addendum and Zero Data Retention, you operate under a fundamentally different regime than consumer ChatGPT users.
New Data Collection Categories
Advertising Data Inflows
The most consequential addition is a new paragraph buried in “Information We Receive from Other Sources”:
“We may receive information from advertisers and other data partners, which we use for purposes including to help us measure and improve the effectiveness of ads shown to Free and Go users on our Services. For example, we could receive information about purchases you make from these advertisers.”
This is not merely showing ads. OpenAI is now ingesting purchase data from third-party advertisers and linking it to user profiles. The data flow is bidirectional: your conversations inform ad targeting, and your real-world purchase behavior flows back to OpenAI for measurement. This creates a closed-loop attribution system functionally identical to what Meta and Google operate.
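To make the loop concrete, here is a minimal sketch of how closed-loop attribution generally works. This is a generic illustration of the technique, not OpenAI’s implementation; every identifier in it is hypothetical.

```python
# Generic closed-loop attribution sketch (hypothetical names throughout;
# illustrates the technique described above, not OpenAI's systems).
from datetime import datetime, timedelta

# Outbound flow: the platform logs which ads a user saw or clicked.
ad_events = [
    {"user_id": "u123", "advertiser": "acme", "event": "click",
     "ts": datetime(2026, 2, 10, 14, 5)},
]

# Inbound flow: the advertiser reports purchases back to the platform.
purchase_reports = [
    {"user_id": "u123", "advertiser": "acme", "amount": 79.99,
     "ts": datetime(2026, 2, 10, 16, 40)},
]

def attribute(ad_events, purchase_reports, window=timedelta(days=7)):
    """Join reported purchases to prior ad interactions within a window.
    The platform ends up holding both halves of the loop: what the user
    saw in-conversation, and what they later bought."""
    conversions = []
    for p in purchase_reports:
        for e in ad_events:
            if (e["user_id"] == p["user_id"]
                    and e["advertiser"] == p["advertiser"]
                    and timedelta(0) <= p["ts"] - e["ts"] <= window):
                conversions.append({**p, "attributed_to": e["event"]})
    return conversions

print(attribute(ad_events, purchase_reports))  # one matched conversion
```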
OpenAI’s public messaging states that “advertisers do not have access to your chats.” That is technically accurate—advertisers receive only aggregate performance data (views, clicks). But the policy authorizes OpenAI to receive granular purchase data from advertisers about you, creating an asymmetry that the marketing language obscures.
Atlas Browser Data
The policy now references “Atlas” throughout—OpenAI’s Chromium-based browser launched in October 2025. The relevant clause:
“If you use the Atlas browser we may also collect your browser data according to your controls and use of the service.”
“Browser data” is undefined in the policy. External reporting and OpenAI’s own documentation reveal that Atlas’s “memories” feature tracks which sites you visit, time spent on pages, text highlighted, tab-switching behavior, and the context of ChatGPT queries made during browsing. This constitutes a behavioral profile that exceeds traditional cookie-based tracking by combining browsing behavior with conversational AI context.
Atlas uses a custom integration layer called OWL (OpenAI’s Web Layer) that decouples the browser UI from the Chromium engine. Communication between Atlas and Chromium occurs through Mojo, Chromium’s inter-process messaging system, with rendering handled via GPU-backed layers embedded in native views. This architecture means OpenAI controls the full pipeline from content rendering to data collection—there is no browser extension layer where third-party privacy tools can intervene in the standard way.
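The structural consequence is easier to see in a toy model than in prose. The sketch below illustrates the general point, under the assumption that telemetry is collected in a vendor-controlled layer sitting below any extension hook; all names are hypothetical stand-ins, not Atlas internals.

```python
# Toy model of the structural claim above (hypothetical names; not Atlas
# code). When the vendor's own integration layer records telemetry below
# the point where extensions operate, extension-style privacy tools never
# see the collection, only the rendered content.

class Engine:
    def render(self, url: str) -> str:
        return f"<content of {url}>"

class VendorLayer:
    """Stand-in for a vendor-controlled integration layer. Telemetry is
    recorded here, before anything an extension could filter."""
    def __init__(self, engine: Engine):
        self.engine = engine
        self.telemetry = []

    def load(self, url: str) -> str:
        self.telemetry.append({"url": url, "event": "page_view"})
        return self.engine.render(url)

class PrivacyExtension:
    """Extensions only see what the layers below expose."""
    def filter(self, content: str) -> str:
        return content  # can scrub page content, not vendor telemetry

browser = VendorLayer(Engine())
page = PrivacyExtension().filter(browser.load("https://example.com"))
print(browser.telemetry)  # the visit was logged regardless of the filter
```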
OpenAI currently exempts Atlas from ad display. Per The Register’s reporting, this likely reflects both a growth incentive for the browser and an acknowledgment that ad-blocking extensions could circumvent ads if they were served through the browser.
Contact Data Harvesting
A new “Contact Data” category has been added:
“If you choose to connect your device contacts, we upload information from your device address books and check which of your contacts also use our Services. If any of your contacts aren’t yet using our Services, we’ll update you if they sign up for our Services later.”
Two critical observations. First, “upload information from your device address books” means the data leaves the device and goes to OpenAI’s servers; this is not on-device matching. Second, OpenAI retains contact information for people who do not use its services, creating shadow profiles of non-users. This mirrors the contact-upload practice that drew regulatory action against Facebook in the EU.
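The server-side versus on-device distinction is worth sketching. The generic illustration below (hypothetical names, not OpenAI code) shows how server-side matching, combined with the policy’s promise to notify you when a contact signs up later, requires retaining records about people who never joined.

```python
# Generic contrast between server-side and on-device contact matching
# (hypothetical names; not OpenAI code).

registered_users = {"+15551230001", "+15551230002"}  # server-side user set

def server_side_match(uploaded_contacts):
    """The full address book reaches the server. Non-matching entries
    must be kept server-side to honor the promise to notify you when a
    contact signs up later -- the shadow-profile problem described above."""
    matches = [c for c in uploaded_contacts if c in registered_users]
    shadow = [c for c in uploaded_contacts if c not in registered_users]
    return matches, shadow

def on_device_match(local_contacts, user_set):
    """Hypothetical alternative: matching happens on the device (in
    practice via hashing or private set intersection), and only the
    matches ever leave it."""
    return [c for c in local_contacts if c in user_set]

matches, shadow = server_side_match(["+15551230001", "+15559998888"])
print(matches)  # ['+15551230001']
print(shadow)   # ['+15559998888'] -- a retained record about a non-user
```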
For anyone handling contacts that include classified or sensitive networks—military, intelligence, law enforcement, diplomatic—this feature represents a direct OPSEC risk. A single user connecting contacts on a personal device that also contains professional contacts could expose organizational relationship graphs to OpenAI’s infrastructure.
Expanded Content Collection
The “User Content” definition has been broadened to explicitly include “files, images, audio and video, Sora characters, and data from connected services.” The previous policy was narrower. This expansion tracks with OpenAI’s product growth (Sora, voice mode, file upload capabilities) but also means the scope of what constitutes “Content” that can be used for model training has widened.
The policy also now treats interactions between users—“post, comment, or send messages”—as Content. If you share a ChatGPT conversation or interact with other users through any OpenAI service, those interactions fall under the same data use provisions as your direct prompts.
How Data Is Used: The Advertising Architecture
Section 2 now includes two advertising-specific use cases that did not exist in prior versions:
- “To personalize and customize your experience across our Services” — vague enough to authorize nearly any cross-service data use.
- “For Free and Go users, to personalize the ads you see on our Services (subject to your settings), and to measure the effectiveness of ads shown on our Services.”
The mechanics of ad targeting, drawn from OpenAI’s help center documentation and podcast statements by executive Assad Awan, work as follows (a conceptual sketch follows the list):
- Ads are matched to the topic of your current conversation.
- If personalization is enabled (it is by default), ads also draw on past chats and previous ad interactions.
- If ChatGPT’s memory feature is enabled, saved memories and recent chats inform ad selection.
- Ad interactions (views, clicks, dismissals) are tracked but reportedly not added to ChatGPT memory—unless you explicitly share an ad with ChatGPT via the “Ask ChatGPT” feature.
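Read together, those rules collapse into a simple selection function. The sketch below is a conceptual reading of that documentation, with hypothetical names and a stand-in matching step; it is not OpenAI’s implementation.

```python
# Conceptual ad-selection sketch based on the mechanics listed above
# (hypothetical names and a toy matching step; not OpenAI's code).

def select_ad(conversation_topic, past_chats, memories, ad_history,
              personalization_enabled=True,  # on by default, per the docs
              memory_enabled=True):
    """Assemble the targeting context the listed mechanics describe."""
    signals = [conversation_topic]           # always: current topic
    if personalization_enabled:
        signals += past_chats + ad_history   # past chats + ad interactions
    if memory_enabled:
        signals += memories                  # saved memories inform selection
    # Stand-in for real matching: pick the ad sharing the most signals.
    inventory = {"travel-insurance": {"travel", "flights"},
                 "vpn-service": {"privacy", "security"}}
    return max(inventory, key=lambda ad: len(inventory[ad] & set(signals)))

print(select_ad("travel", past_chats=["flights"],
                memories=["privacy"], ad_history=[]))  # 'travel-insurance'
```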
OpenAI claims “ads do not influence the answers ChatGPT gives you” and that “the model doesn’t know when ads are present.” These are architectural claims that cannot be independently verified. The ad selection system operates alongside the response generation system, and both draw on the same conversation context.
Retention: What Gets Deleted and What Doesn’t
The retention section has been restructured with three tiers:
Tier 1 — User-controlled deletion: You can delete chats and account data. OpenAI removes it from systems within 30 days. However, data that has already been “de-identified and disassociated from your account” for model training is not deleted — it persists in training data.
Tier 2 — Automatic deletion: Temporary Chats auto-delete within 30 days. Atlas incognito browsing history is not saved after session end.
Tier 3 — Indefinite retention for cause: This is the broadest carve-out. OpenAI retains data when:
- Content or accounts are banned for policy violations (no time limit specified)
- Legal obligations require it (duration of the obligation)
- Financial transactions are involved (accounting and regulatory compliance)
- Deletion requests themselves generate audit records that are retained
The absence of defined retention periods for Tier 3 categories means that banned content could theoretically be retained indefinitely. The policy does not specify a maximum retention period for any category.
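The tier structure reduces to a decision function, sketched below with hypothetical field names. Note that every Tier 3 branch returns an open-ended period, which is precisely the gap flagged above.

```python
# Sketch of the three-tier retention logic described above (hypothetical
# field names; the policy itself specifies no maximum retention periods).

def retention_outcome(record):
    # Tier 3: indefinite retention for cause, no time limit specified.
    if record.get("banned_for_policy_violation"):
        return "retained indefinitely"
    if record.get("legal_hold"):
        return "retained for the duration of the obligation"
    if record.get("financial_transaction"):
        return "retained for accounting/regulatory compliance"
    # Tier 2: automatic deletion.
    if record.get("temporary_chat"):
        return "auto-deleted within 30 days"
    # Tier 1: user-controlled deletion, with the training carve-out.
    if record.get("user_deleted"):
        if record.get("deidentified_for_training"):
            return "removed from account, but persists in training data"
        return "removed from systems within 30 days"
    return "retained while the account is active"

print(retention_outcome({"user_deleted": True,
                         "deidentified_for_training": True}))
```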
Disclosure: Who Gets Your Data
The disclosure section includes a new category: “Parents or guardians of teen users” for linked account oversight. It also now references disclosure to “search and shopping partners” — reflecting ChatGPT’s integration with third-party commerce services.
The “Government Authorities or Other Third Parties” section authorizes disclosure across six broad conditions, including the subjective standard of OpenAI determining “in our sole discretion, that there is a violation of our terms, policies, or the law.” This grants OpenAI unilateral authority to disclose user data to government entities based on its own assessment of policy violations.
The Legal Carve-Outs
No “sale” of data. OpenAI states it does not “sell” personal data, “share” it for cross-contextual behavioral advertising, or process it for “targeted advertising” as those terms are defined under state privacy laws. This is a carefully constructed legal position. The ad system they have built—contextual targeting within their own platform based on conversation content—does not meet the statutory definitions of these terms in California, Virginia, Colorado, or other state privacy laws. It is functionally advertising personalization, but legally it is not “targeted advertising” because it occurs within a single context (the OpenAI platform) rather than across contexts.
Data processing jurisdiction. Personal data is processed “on servers located in various jurisdictions, including processing and storing your Personal Data in our facilities and servers in the United States.” There is no data residency commitment. For organizations with data sovereignty requirements, this is a disqualifying factor.
OPSEC Implications for Defense and IC Professionals
The cumulative effect of these policy changes creates several specific risk vectors:
Contact Graph Exposure. The contact upload feature, if activated on a device containing professional contacts, transmits organizational relationship data to OpenAI. Even if the user is careful with their own prompts, the contact data itself reveals network structure.
Behavioral Profiling via Atlas. Atlas’s combined browsing + conversational data creates profiles of unprecedented depth. An analyst using Atlas for both personal browsing and professional research—even if they avoid classified topics—generates behavioral patterns that reveal interests, knowledge gaps, and operational tempo.
Conversation-to-Purchase Loop. The bidirectional data flow between OpenAI and advertisers means that a user’s ChatGPT conversations and their real-world purchasing behavior are now linked in OpenAI’s systems. For anyone whose purchasing patterns could reveal operational activity (travel, equipment, services), this creates a correlation risk.
Retention Opacity. The open-ended retention provisions for policy violations, legal holds, and financial records mean that data you believe you have deleted may persist. There is no mechanism to verify deletion.
Third-Party Shopping and Search Partners. Information shared with integrated search and shopping services is “governed by their own terms and privacy policies.” OpenAI disclaims responsibility for downstream data handling by these partners.
Recommended Mitigations
For individuals and organizations operating in sensitive environments:
- Do not use the contact sync feature on any device that contains professional or organizational contacts.
- Do not use Atlas browser for any research activity that could reveal professional interests or operational patterns. Prefer purpose-built, privacy-focused browsers for sensitive work.
- Disable ad personalization immediately in account settings if using Free or Go tiers. Better yet, use the API with a DPA and ZDR if organizational use is required (see the sketch after this list).
- Disable ChatGPT memory for any account used in professional contexts.
- Use Temporary Chat mode for any queries that you do not want persisted or used for training.
- Audit which OpenAI tier your personnel are using. Enterprise and API customers operate under materially different data handling regimes than consumer users. A single analyst using a personal Free account for work-adjacent queries creates organizational exposure.
- Assume no deletion is complete. De-identified training data persists. Operate accordingly.
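As referenced in the ad-personalization item above, here is a minimal sketch of routing organizational use through the API with the OpenAI Python SDK. Note that ZDR is not a flag you set in code; it is an account-level agreement negotiated with OpenAI, so the call itself looks like any other. The model name is illustrative.

```python
# Minimal sketch: organizational use via the API rather than consumer
# ChatGPT. Under the API terms, inputs/outputs are not used for training
# by default; ZDR is an account-level agreement, not a code-level flag.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user",
               "content": "Summarize this open-source report."}],
)
print(response.choices[0].message.content)
```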
The Larger Pattern
OpenAI’s February 2026 privacy policy formalizes a transition that has been underway since the company’s conversion from a nonprofit to a capped-profit entity and its subsequent for-profit restructuring. The company now operates three monetization layers simultaneously: subscriptions (Plus/Pro/Enterprise), API usage fees, and advertising. Each layer involves progressively more data collection and less user control.
The policy is written to support a surveillance advertising business while maintaining technically accurate claims about privacy. Conversations are “private from advertisers” because advertisers receive only aggregate metrics—but OpenAI itself maintains the detailed profiles. Data is not “sold” because the statutory definitions of sale require cross-context sharing that does not occur. These are not lies. They are the precise kind of legally defensible half-truths that privacy policies have always been designed to deliver.
For OSINT professionals and security practitioners, the takeaway is straightforward: treat OpenAI consumer products as you would any ad-supported platform. The data collection, retention, and disclosure provisions are now functionally equivalent to those of Meta or Google. Plan your operational security posture accordingly.
Larry Wigington is the founder of Wigington Security Group, LLC, providing OSINT consulting and security analysis for government and private sector clients.