The company’s Advanced Account Security program, launched in partnership with Yubico, makes phishing-resistant login available to users handling sensitive personal, professional and cybersecurity work.
OpenAI is adding a stronger security layer to ChatGPT accounts, introducing an optional program that moves users away from conventional passwords and toward passkeys and physical security keys, including YubiKeys, in an effort to reduce the risk of phishing and account takeover attacks.
The program, called Advanced Account Security, was announced on April 30 and is aimed at people who face heightened digital risk, including journalists, elected officials, political dissidents, researchers and highly security-conscious users. It is also available more broadly to users who want stronger protection for accounts that may contain sensitive conversations, work files, coding projects or links to connected tools.
The launch reflects a changing understanding of what an AI account represents. For many people, ChatGPT is no longer just a place to ask casual questions. It may contain drafts of confidential documents, business strategies, legal questions, health-related concerns, software code, research notes, political communications or personal reflections. As AI systems become embedded in daily work, a compromised account can expose more than a password. It can reveal context, intent, relationships and unfinished decisions.
OpenAI’s new setting attempts to address that risk by making phishing-resistant authentication the default for enrolled users. Once Advanced Account Security is enabled, users must rely on passkeys or physical security keys, and password-based login is disabled. The goal is to reduce dependence on credentials that can be stolen through fake login pages, intercepted messages, reused passwords or social engineering campaigns.
The change places OpenAI alongside a broader movement among major technology companies to reduce reliance on passwords, which remain one of the most common points of failure in consumer and enterprise security. Passkeys use cryptographic credentials stored on a device or security key and are typically protected by a biometric check, a device PIN or a physical touch on a hardware key. In practical terms, that makes them far harder to steal than a password typed into a website.
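Conceptually, a passkey is a credential bound to one site: the secret stays on the device, and login is a challenge-response exchange rather than a typed password. The sketch below illustrates that shape only; real passkeys use asymmetric signatures under the WebAuthn standard, and HMAC stands in here purely so the example runs on the Python standard library. All names and the origin are hypothetical.

```python
# Conceptual sketch of the passkey model -- NOT the real WebAuthn protocol.
# Real passkeys use public-key signatures; HMAC is a stand-in simplification.
import hashlib
import hmac
import secrets

class Device:
    """Holds per-origin credentials that never leave the device."""
    def __init__(self):
        self._keys = {}  # origin -> secret credential

    def register(self, origin: str) -> bytes:
        secret = secrets.token_bytes(32)
        self._keys[origin] = secret
        # In real WebAuthn the device returns a *public* key; sharing the
        # secret here is part of the HMAC simplification.
        return secret

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # Credentials are looked up by origin, so a look-alike phishing
        # domain has no matching credential on the device.
        secret = self._keys[origin]
        return hmac.new(secret, origin.encode() + challenge,
                        hashlib.sha256).digest()

class Server:
    def __init__(self, origin: str):
        self.origin = origin
        self._creds = {}  # user -> stored credential

    def enroll(self, user: str, device: Device):
        self._creds[user] = device.register(self.origin)

    def login(self, user: str, device: Device) -> bool:
        challenge = secrets.token_bytes(16)  # fresh per attempt
        response = device.sign(self.origin, challenge)
        expected = hmac.new(self._creds[user],
                            self.origin.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

server = Server("https://chatgpt.example")  # hypothetical origin
device = Device()
server.enroll("alice", device)
print(server.login("alice", device))  # True: challenge-response succeeds
```

Because the server issues a fresh random challenge each time, a captured response cannot be replayed, which is part of what makes this model stronger than a static password.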
OpenAI said Advanced Account Security applies to ChatGPT accounts and also protects Codex when accessed through the same login. That detail matters because Codex is used by developers to work with software projects, and those accounts may contain code, credentials, architecture notes or other sensitive technical material. For users involved in cybersecurity research or software development, an account takeover could create risks that extend beyond personal privacy.
The program also tightens account recovery. In a typical online account system, a user who loses access can recover it through email, a text message or customer support. Those channels help legitimate users regain access, but they also create openings for attackers. An attacker who compromises a user’s email inbox or phone number may be able to trigger recovery flows and take control of the account, and one who successfully manipulates a support process can achieve the same result through social engineering.
Advanced Account Security closes much of that route. For enrolled users, OpenAI disables email and SMS recovery and requires stronger recovery methods such as backup passkeys, security keys and recovery keys. The trade-off is significant: OpenAI Support will not be able to help recover accounts enrolled in the program if the user loses access to the required recovery methods.
That design choice is central to the program. It improves protection by removing a human support channel that attackers might exploit, but it also shifts more responsibility to the user. Someone who enables the feature must manage backup credentials carefully, store recovery keys securely and understand that losing all recovery options could mean losing the account. The security gain is real, but so is the operational burden.
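Recovery keys in schemes like this are typically long, high-entropy random strings that the user stores offline while the service keeps only a hash, so a database leak does not expose usable keys. A minimal sketch of that pattern; the format (eight groups of four hex characters) and the hashing scheme are illustrative assumptions, not OpenAI’s actual implementation:

```python
# Sketch of a typical recovery-key scheme: the service stores only a hash,
# so a leaked database does not reveal usable keys. The format and hashing
# shown here are illustrative, not OpenAI's actual scheme.
import hashlib
import secrets

def generate_recovery_key() -> str:
    raw = secrets.token_hex(16)  # 128 bits of entropy
    # Group into chunks of four for easier manual transcription.
    return "-".join(raw[i:i + 4] for i in range(0, len(raw), 4))

def stored_form(key: str) -> str:
    # Server keeps only this digest; at recovery time it hashes the
    # submitted key and compares digests.
    return hashlib.sha256(key.replace("-", "").encode()).hexdigest()

key = generate_recovery_key()  # the user writes this down offline
record = stored_form(key)      # only this is kept server-side
print(stored_form(key) == record)  # True: the key verifies against the record
```

The operational burden the article describes follows directly from this design: if the user loses the written-down key and all backup authenticators, nothing recoverable remains on the server side.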
OpenAI is also shortening sign-in sessions for enrolled accounts, reducing the window of exposure if a device or active session is compromised. Users will receive an alert whenever their account is signed in to, and can review and manage active sessions across devices. These measures do not eliminate risk, but they give users more visibility and faster warning when something unusual occurs.
Another major part of the program is automatic exclusion from model training. OpenAI already provides ways for users to control whether their conversations may be used to improve models. With Advanced Account Security enabled, that preference becomes automatic: conversations from those accounts will not be used to train OpenAI’s models. For people working with sensitive information, the pairing of stronger account security and stronger privacy defaults is likely to be one of the program’s most important features.
The partnership with Yubico is intended to make physical security keys easier to adopt. OpenAI said it will offer preferred pricing on a customized bundle of YubiKeys, including the YubiKey C Nano and YubiKey C NFC. The C Nano is designed to stay in a laptop for low-friction daily authentication, while the C NFC can serve as a backup key and work across laptops and mobile devices. Users will also be able to use other FIDO-compliant security keys or software-based passkeys.
Hardware keys are widely regarded by security professionals as one of the strongest defenses against phishing because they authenticate the legitimate website rather than simply sending a code that a user can be tricked into sharing. In a traditional phishing attack, a victim may enter a password and one-time code into a fake site controlled by an attacker. With a properly implemented security key, authentication is tied cryptographically to the real service, making that style of attack far less effective.
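The relay attack described above can be made concrete. A one-time code typed into a fake site still works when the attacker forwards it to the real service; an origin-bound credential does not, because the response incorporates the origin the client actually saw. The sketch below uses HMAC as a stand-in for the asymmetric signature (a deliberate simplification), and the origins are hypothetical:

```python
# Why origin binding defeats the relay attack that works against one-time
# codes: the client mixes the origin it actually sees into its response, so
# a response produced on a phishing domain fails verification at the real
# service. HMAC stands in for the real asymmetric signature.
import hashlib
import hmac
import secrets

credential = secrets.token_bytes(32)    # established at registration
REAL = "https://chatgpt.example"        # hypothetical legitimate origin
FAKE = "https://chatgpt-login.example"  # hypothetical phishing origin

def client_respond(seen_origin: str, challenge: bytes) -> bytes:
    # The browser/authenticator binds the response to the origin it sees.
    return hmac.new(credential, seen_origin.encode() + challenge,
                    hashlib.sha256).digest()

def server_verify(response: bytes, challenge: bytes) -> bool:
    expected = hmac.new(credential, REAL.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

challenge = secrets.token_bytes(16)

# Victim on the real site: origins match, login succeeds.
print(server_verify(client_respond(REAL, challenge), challenge))  # True

# Victim lured to the fake site; the attacker relays the response to the
# real service. It fails -- whereas a relayed one-time code would succeed.
print(server_verify(client_respond(FAKE, challenge), challenge))  # False
```

In real WebAuthn the protection is even stronger: the device simply has no credential registered for the phishing origin, so it cannot produce a response there at all.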
The program is not mandatory for most users. It is opt-in and available through the Security section of ChatGPT accounts on the web. But OpenAI is making it a requirement for a narrower group: individual members of its Trusted Access for Cyber program who access the company’s most capable and permissive cybersecurity models will need to enable Advanced Account Security beginning June 1, 2026. Organizations with trusted access can instead attest that they use phishing-resistant authentication through enterprise single sign-on.
That requirement shows how OpenAI is treating high-risk AI access as a cybersecurity issue in its own right. As frontier AI models become more capable in coding, vulnerability analysis and cyber defense, the accounts that can access them become more valuable targets. Locking down those accounts is not only about protecting the individual user. It is also about reducing the chance that powerful tools are accessed through stolen credentials.
For everyday users, the new feature may feel more demanding than standard multi-factor authentication. It requires planning, backup methods and careful storage of recovery materials. It may also be less forgiving than the familiar account recovery processes people expect from consumer software. OpenAI is effectively asking users to choose between convenience and a higher level of protection.
The timing is notable. Phishing attacks have become more sophisticated, and AI tools themselves can help attackers draft convincing messages, impersonate trusted contacts and scale social engineering attempts. At the same time, AI accounts are becoming more valuable because they may hold a detailed record of a person’s work and thinking. That combination makes stronger login security more urgent.
OpenAI’s move does not solve every security problem associated with AI services. A compromised device, malicious browser extension, careless file sharing habit or insecure third-party integration can still create exposure. Organizations will still need clear policies on what employees can enter into AI tools, how data is retained and how access is monitored. Individuals will still need to practice caution with links, downloads and unknown prompts.
But Advanced Account Security is an important step in treating AI accounts as high-value digital assets rather than ordinary app logins. It brings stronger authentication, tighter recovery, session visibility and privacy defaults into a single setting. By partnering with Yubico, OpenAI is also trying to reduce the friction that has kept hardware keys mostly in the hands of security professionals and large enterprises.
The broader message is clear: as AI becomes a routine part of work and personal life, account security must rise to match the sensitivity of what users share with these systems. For OpenAI, the launch of Advanced Account Security is both a product update and a statement about the role ChatGPT now plays. The more people rely on AI for confidential work, the more a secure login becomes not a convenience feature, but a core safeguard.