The Silent Takeover: When Your Microsoft 365 Tenant Gets Compromised
You wake up to a nightmare. Someone's inside your Microsoft 365 environment. They've created a global admin account you didn't authorize. They've disabled every alert. They've set up rules to siphon emails—and money—to external RSS feeds. And when you go looking for answers, Microsoft's logs only go back 30 days. The creation event? That was 31 days ago.
This isn't hypothetical. It's exactly what happened to a sysadmin who posted their story on Reddit. Hundreds of thousands of dollars redirected. Two legitimate global admins with MFA enabled, completely unaware. A breach that stayed hidden until the financial damage was done.
What's terrifying isn't just the attack itself—it's how perfectly it exploited the gaps in what most organizations consider "good enough" security. MFA? Check. Limited admins? Check. But still, complete compromise.
In this deep dive, we're going to dissect exactly how this happens, why your current defenses probably aren't enough, and—most importantly—what you can do right now to prevent it. This isn't theoretical security advice. This is the hard-won knowledge from people who've been through the fire.
How Attackers Bypass Your "Secure" Microsoft 365 Setup
Let's start with the obvious question: How did they get in with MFA enabled on all accounts?
The Reddit post mentions only two global administrators with MFA. That's actually a decent starting point. But here's what most organizations miss: MFA protects authentication, not authorization. Once someone's in, what they can do depends entirely on what permissions they have—or can get.
Attack vectors I've seen in real investigations:
1. Service Principal or App Registration Compromise
This is the sneakiest path. An attacker doesn't need to compromise a user account at all. If your organization has granted high privileges to an application registration or service principal (common in DevOps scenarios), and that app's credentials leak? Game over. The attacker can use those credentials to authenticate as the application, which might have permissions to create new users or modify roles.
And here's the kicker: This activity often doesn't show up in the same audit logs as user activity. You need to be looking at service principal sign-ins specifically.
2. Consent Phishing
An employee gets a convincing-looking prompt to grant permissions to a "Microsoft Teams extension" or "productivity tool." They click accept. Suddenly, that third-party app has permissions to read mail, send as users, or access directory data. From there, privilege escalation to global admin is often possible through existing vulnerabilities or misconfigurations.
3. Session Token Theft
MFA happens once per session (or less frequently with conditional access policies). Steal the session cookie or token, and the attacker has access without needing to re-authenticate. This is increasingly common with malicious browser extensions or compromised devices.
The common thread? The initial breach often doesn't look like a traditional "account takeover." It looks like legitimate activity—until it's not.
The 30-Day Log Retention Trap (And How to Escape It)
"Microsoft logs only go back 30 days."
That sentence from the Reddit post should send chills down your spine. Because it's essentially true by default. On the free tier, Azure AD (Entra ID) retains audit logs and sign-in logs for only 7 days. Premium P1 and P2 licenses extend that to 30 days, and only some security reports in P2 go further.
So when our Reddit admin discovered the breach, the account creation event from December 23 was already gone. Poof. No forensic trail.
This isn't a Microsoft flaw—it's a design choice. But it's one that assumes you're doing your part in log collection and retention. And most organizations aren't.
Here's what you need:
- Continuous export to a SIEM or log management platform: This is non-negotiable. Microsoft Sentinel, Splunk, Elastic, Datadog—pick one. But get those logs out of Microsoft's volatile storage and into something you control.
- 90-day minimum retention for security logs: Honestly, I recommend 180 days. Storage is cheap compared to not knowing how you were breached.
- API-based collection, not manual exports: Don't rely on someone remembering to download logs weekly. Automate it.
Pro tip: The Microsoft Graph API for audit logs has some quirks. Sometimes it delays. Sometimes it misses things. You need to implement retry logic and validation in your collection pipeline. I've seen organizations think they're collecting everything, only to discover gaps during an incident.
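Here's a minimal sketch of that retry-and-validation idea in Python. The `fetch_page` callable is a hypothetical stand-in for the actual Graph API request (names and return shapes are illustrative, not the real API); the point is the backoff and de-duplication around it.

```python
import time

def collect_with_retry(fetch_page, max_retries=5, base_delay=2):
    """Collect paged audit records, retrying transient failures with
    exponential backoff and de-duplicating by record id.

    fetch_page(cursor) is a hypothetical callable standing in for a
    Graph API call; it returns (records, next_cursor) and may raise
    on transient errors (HTTP 429/5xx in a real pipeline).
    """
    seen, records, cursor = set(), [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # give up after repeated failures
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        for rec in page:
            if rec["id"] not in seen:  # the API can replay records; dedupe
                seen.add(rec["id"])
                records.append(rec)
        if cursor is None:
            return records
```

In a real pipeline you'd also record the timestamp range each run covered, so gaps between runs are detectable rather than silent.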
Email Forwarding Rules: The Silent Money Siphon
Let's talk about the actual attack vector mentioned: "create rules to push emails to rss feeds."
This is brilliant from an attacker's perspective. Not exfiltration through massive downloads. Not ransomware locking files. Just quiet, persistent redirection of specific emails.
Think about what emails get forwarded:
- Invoice notifications
- Wire transfer confirmations
- Vendor payment communications
- Financial report distributions
The attacker sets up a rule that says: "Any email with 'invoice,' 'payment,' or 'wire' in the subject gets forwarded to an external RSS feed address I control." They might even make it more sophisticated—only forward if the amount is over $10,000, or only from specific senders.
And here's the real problem: Users rarely check their forwarding rules. Admins rarely monitor rule creation at scale. It's a perfect blind spot.
Worse, Microsoft's default alerts for forwarding rules are... underwhelming. You might get a notification if a user sets up forwarding to an external domain. But what if the rule only forwards specific messages? Or what if, as in this case, the attacker disables alerts first?
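You can close that blind spot with your own sweep of mailbox rules. A sketch in Python: the dicts are shaped loosely like Graph API message rules, but the field names here are illustrative, not the exact API schema. Note that narrow conditions ("only invoices", "only one sender") make a rule more suspicious, not less, so any external target gets flagged regardless of conditions.

```python
def suspicious_forwarding_rules(rules, internal_domains):
    """Flag inbox rules that forward or redirect mail outside the org.

    rules: list of dicts with optional 'forwardTo'/'redirectTo' lists
    of email addresses (illustrative shape, not the real API schema).
    internal_domains: set of domains considered internal.
    """
    flagged = []
    for rule in rules:
        targets = rule.get("forwardTo", []) + rule.get("redirectTo", [])
        for addr in targets:
            domain = addr.rsplit("@", 1)[-1].lower()
            if domain not in internal_domains:
                # External target: flag it, whatever the rule's conditions
                flagged.append((rule["name"], addr))
    return flagged
```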
Disabling Alerts: How Attackers Cover Their Tracks
This is the most sophisticated part of the attack chain. The breacher didn't just create an account and start stealing. They methodically disabled detection mechanisms.
In Microsoft 365, there are multiple alert systems:
- Azure AD Identity Protection alerts (risky users, risky sign-ins)
- Microsoft Defender for Office 365 alerts (malware, phishing, suspicious email rules)
- Microsoft Defender for Cloud Apps (formerly Cloud App Security) alerts, if you have it enabled
- Custom alert policies you might have created
A global admin can disable or modify most of these. They can whitelist their own IP addresses. They can lower sensitivity thresholds. They can turn off notifications.
And they do this before the main attack. Because why wouldn't they?
This creates a terrifying scenario: Your security tools are working perfectly. They're just configured to ignore the attacker.
Building an Automated Defense: Detection as Code
Traditional security monitoring fails here because it relies on the attacker triggering predefined alerts. But what if the attacker controls the alert definitions?
You need detection that exists outside the attacker's reach. Detection as code.
Here's my approach:
1. Immutable Log Collection
Set up log collection with write-once, append-only storage. AWS S3 with object lock, Azure Blob Storage with immutable storage, or similar. The moment logs leave Microsoft 365, they go to a bucket that no one in your organization can modify or delete—not even global admins. This requires separate cloud credentials with minimal permissions.
Yes, it's a pain to set up. No, it's not optional after a certain scale.
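Immutable storage is a platform feature, but you can layer cheap tamper evidence on top as logs leave the tenant. A sketch of one approach, a hash chain over exported batches (assuming batches are JSON-serializable; this complements object lock, it doesn't replace it):

```python
import hashlib
import json

def chain_batches(batches, seed=b"genesis"):
    """Build a tamper-evident hash chain over exported log batches.

    Each entry's hash covers the previous hash, so modifying or
    removing any earlier batch invalidates every later hash. Store
    the chain (or just the final hash) somewhere outside the tenant
    being monitored.
    """
    prev, chain = hashlib.sha256(seed).hexdigest(), []
    for batch in batches:
        payload = json.dumps(batch, sort_keys=True).encode()
        digest = hashlib.sha256(prev.encode() + payload).hexdigest()
        chain.append(digest)
        prev = digest
    return chain
```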
2. Automated Baseline Monitoring
Write scripts or use tools that continuously check for anomalies:
- New global admin accounts (should be zero in most organizations)
- Changes to conditional access policies
- Disabled security alerts
- Mail forwarding rules to external domains
- Service principal credential additions
These checks should run from a separate environment with its own authentication. If your monitoring system uses the same Azure AD tenant it's monitoring, an attacker can compromise it too.
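The core of those checks is a diff against a known-good baseline kept in version control, outside the monitored tenant. A minimal sketch (the dict keys like `global_admins` and `alerts_enabled` are illustrative, not a Microsoft schema):

```python
def check_baseline(current, baseline):
    """Compare current privileged-role state against a known-good
    baseline. Returns human-readable findings; empty means clean.
    """
    findings = []
    # Any global admin not in the baseline is an incident, full stop
    added = set(current["global_admins"]) - set(baseline["global_admins"])
    for upn in sorted(added):
        findings.append(f"unexpected global admin: {upn}")
    # An alert that should be on but isn't suggests tampering
    for alert in baseline["alerts_enabled"]:
        if alert not in current["alerts_enabled"]:
            findings.append(f"security alert disabled: {alert}")
    return findings
```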
3. Change Control for Critical Settings
Treat security settings in Microsoft 365 like you treat production infrastructure. Changes should require:
- A ticket or request
- Approval from someone other than the requester
- Automated logging of the change
- Post-change verification
Tools like Azure DevOps, GitHub Actions, or even PowerShell scripts with approval workflows can enforce this. The key is that no single admin—even a global admin—can silently change security settings.
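The gate itself is simple to express in code. A sketch of the validation a pipeline might run before applying a change (the request shape here is hypothetical):

```python
def validate_change(request):
    """Enforce four-eyes change control on a settings-change request
    before a pipeline applies it. Returns a list of errors; empty
    means the change may proceed.
    """
    errors = []
    if not request.get("ticket_id"):
        errors.append("missing ticket reference")
    # The whole point: no one approves their own change
    if request.get("approver") in (None, request.get("requester")):
        errors.append("approver must differ from requester")
    return errors
```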
Practical Steps You Can Implement This Week
Enough theory. Here's what to actually do:
Immediate Actions (Day 1)
Review all global admins right now. Go to Azure AD > Roles and administrators > Global Administrator. Every account listed should have a business justification and an owner you can contact. Remove any you don't recognize or can't justify.
Check for hidden admins. Some roles have equivalent permissions. Check Privileged Role Administrator, Exchange Administrator, SharePoint Administrator, and Security Administrator. Attackers often use these instead of Global Admin to fly under the radar.
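That review is easy to script once you export role assignments. A sketch, using the admin-equivalent roles named above (the data shape is illustrative; in practice the assignments would come from the Graph API or PowerShell):

```python
# Roles that can escalate to, or act with, Global Admin-level power
ADMIN_EQUIVALENT_ROLES = {
    "Global Administrator",
    "Privileged Role Administrator",
    "Exchange Administrator",
    "SharePoint Administrator",
    "Security Administrator",
}

def hidden_admins(assignments, known_admins):
    """Given (user, role) assignment pairs, return users holding an
    admin-equivalent role who are not on the approved-admin list."""
    return sorted({user for user, role in assignments
                   if role in ADMIN_EQUIVALENT_ROLES
                   and user not in known_admins})
```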
Audit service principals with directory permissions. This is in Azure AD > App registrations. Look for any with Directory.ReadWrite.All or similar high-privilege permissions. Question every one.
Week 1: Logging and Monitoring Foundation
Set up log export. If you don't have a SIEM, start with Azure Monitor Log Analytics. It's included with many licenses. Configure diagnostic settings to send Azure AD logs to Log Analytics. Do this today—the 30-day clock is always ticking.
Create critical detection queries. In Log Analytics or your SIEM, build these KQL queries and set them to run daily:
```kql
// New global admins
AuditLogs
| where OperationName == "Add member to role"
| where TargetResources has "Global Administrator"

// Mail forwarding rules to external domains
OfficeActivity
| where Operation in ("New-InboxRule", "Set-InboxRule")
| where Parameters has_any ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo")
```
Enable extended audit log retention. With Microsoft 365 E5 (Audit Premium), audit records are kept for one year by default; 10-year retention requires a separate add-on license. Whatever your licensing allows, verify the retention policies are actually in place—longer retention is often not configured by default.
Month 1: Hardening and Automation
Implement privileged identity management (PIM). This is Azure AD's just-in-time elevation system (it requires an Azure AD Premium P2 license). No one should have permanent global admin access. With PIM, they request elevation for a specific time period. Every elevation is logged and can require approval.
Create break-glass accounts. At least two cloud-only accounts with global admin rights, not synchronized from on-premises. These should have long, complex passwords stored in a physical safe. MFA should be enabled but with bypass options (like printed backup codes in the safe). These accounts are for when everything else fails.
Build automated reports. Use PowerShell or the Graph API to generate weekly reports of admin activity, new applications, and permission changes. Send these to multiple people—including someone outside IT. Separation of duties matters.
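The aggregation step of such a report is straightforward. A sketch that turns a week of audit events into an email-ready summary (the `operation`/`actor` field names are illustrative, not the exact audit-log schema):

```python
from collections import Counter

def weekly_admin_report(events):
    """Summarize audit events into counts per operation and per
    actor, ready to drop into a report email body."""
    by_op = Counter(e["operation"] for e in events)
    by_actor = Counter(e["actor"] for e in events)
    lines = ["Admin activity summary:"]
    for op, n in by_op.most_common():
        lines.append(f"  {op}: {n}")
    lines.append("Top actors:")
    for actor, n in by_actor.most_common(3):
        lines.append(f"  {actor}: {n}")
    return "\n".join(lines)
```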
Common Mistakes (And How to Avoid Them)
Mistake 1: Assuming MFA Solves Everything
MFA is necessary but insufficient. You need conditional access policies that consider device compliance, location, and user risk. You need to monitor for token theft. You need to limit what authenticated users can do.
Mistake 2: Syncing On-Premises Admins to Cloud
If you sync your on-premises Active Directory to Azure AD, and your on-premises domain admins become cloud global admins... you've just extended your attack surface dramatically. Compromise one on-premises server, and the attacker gets cloud global admin. Use cloud-only accounts for cloud administration.
Mistake 3: Ignoring Service Accounts and Applications
That PowerShell script running nightly with stored credentials? That CI/CD pipeline service principal? Those often have excessive permissions that never get reviewed. Treat them with the same scrutiny as human accounts.
Mistake 4: No Regular Access Reviews
Azure AD Access Reviews can automatically ask managers to confirm their team members still need access. Set these up quarterly for admin roles. Better yet, use automated user provisioning and deprovisioning scripts to ensure when someone leaves the company, their access is removed immediately—not weeks later.
Mistake 5: Thinking You're Too Small to Target
Attackers automate these breaches. They don't care if you're a 50-person company or 50,000. If your email domain is publicly known, you're being probed. Automated scripts are testing for weak credentials, misconfigured applications, and exposed APIs right now.
When Prevention Fails: Incident Response Essentials
Let's say it happens. You discover a breach. What now?
- Don't panic and start changing everything. You'll destroy forensic evidence. Take a breath.
- Isolate the compromised account(s). Reset passwords, revoke sessions, disable the account. But do it methodically.
- Preserve logs immediately. Export everything you can before the 30-day window expires on any activities.
- Check for persistence mechanisms. New admin accounts, mailbox rules, hidden inbox rules, app registrations, conditional access policies that whitelist attacker IPs.
- Assume all credentials are compromised. This is painful, but necessary. Plan for an organization-wide password reset and MFA re-registration.
- Engage professional help if needed. Sometimes you need external incident response. Experienced consultants or a dedicated IR firm can provide that expertise if you don't have in-house incident response specialists.
And document everything. What you found, when, what you did, in what order. This matters for insurance, legal, and improving your security posture.
The New Reality of Cloud Security
That Reddit post isn't an anomaly. It's the new normal. As more business moves to Microsoft 365, attackers are following. And they're getting sophisticated.
The old perimeter security model is dead. Your "perimeter" is now a set of identities in Azure AD. Protect those identities like your business depends on it—because it does.
This isn't about buying more security products. It's about using what you have effectively. It's about assuming breach and designing your detection accordingly. It's about automation that works even when human-controlled systems are compromised.
Start with the logging. Today. Before you need it. Because when you're staring at a breach with no logs, no trail, and hundreds of thousands gone, "I'll get to it next quarter" isn't going to cut it.
Then build your detection. Automate your reviews. Limit standing privileges. Make the attacker's job harder at every step.
And maybe—just maybe—you'll be the one who catches them instead of the one writing the Reddit post.