Cyber Intelligence Weekly (September 14, 2025): Our Take on Three Things You Need to Know
Welcome to our weekly newsletter where we share some of the major developments on the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive updates on the future of cybersecurity!
To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe
Before we get started on this week’s CIW, I’d like to highlight a new article on Securing AI by some of our experts.
𝗔𝗜 𝗰𝗮𝗻 𝗳𝘂𝗲𝗹 𝗴𝗿𝗼𝘄𝘁𝗵 𝗼𝗿 𝗲𝘅𝗽𝗼𝘀𝗲 𝘆𝗼𝘂𝗿 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀.
It all depends on how you govern it.
In this article, Akhil Vishnubhotla and Brayden Park break down practical ways to apply ISO and NIST frameworks so innovation doesn’t come at the cost of compliance, trust, or resilience. Read here: https://lnkd.in/ghrRjkZY

Away we go!
1. iPhone 17’s Quiet Upgrade: Memory Tagging That Breaks Exploit Chains
MApple slipped a quiet but consequential security change into its iPhone 17 and iPhone Air launch: Memory Integrity Enforcement (MIE). Built on Arm’s Memory Tagging tech—extended with Apple into “Enhanced MTE” (EMTE)—MIE goes after the industry’s workhorse bug class: memory corruption. In practice, the system assigns secret tags to chunks of memory; if code reaches into a region without the matching tag, the access is blocked, the app crashes, and the event is logged. Because so many mercenary spyware chains and phone-forensics exploits lean on memory-safety flaws, that single move hits both remote infection routes (think message and browser exploits) and hands-on extraction kits.
Early reaction from offensive and defensive researchers points in the same direction: MIE won’t make iPhones “unhackable,” but it should make reliable exploitation slower, pricier, and rarer. Crashes that MIE intentionally triggers on bad accesses also leave breadcrumbs, improving detection and post-incident forensics. Apple says MIE is on by default across the system—covering high-risk entry points like Safari and iMessage—while third-party developers can adopt EMTE to extend protections inside their own apps.
There are caveats. Attackers will pivot to logic bugs, supply-chain weak links, and unprotected apps; some vendors may still find viable paths with enough time and money. Impact will also track adoption: the more users on the newest devices—and the more devs who compile with EMTE—the smaller the usable attack surface becomes. Still, for targets of spyware and lawful-access tools alike, the calculus just shifted in Apple’s favor.
What should enterprises and high-risk users do right now? Prioritize upgrades for at-risk cohorts (execs, diplomats, investigative teams), keep devices fully patched, and enable Apple’s high-friction protections (e.g., Lockdown Mode, iMessage Contact Key Verification) where warranted. Ask critical third-party vendors about EMTE support and review crash telemetry for unusual spikes tied to messaging or web content. MIE narrows the window; your policies and procurement choices determine how much.

Identifying Domain-Wide Delegations in Google Workspace
Domain-Wide Delegation (DWD) is a useful feature but is also commonly misconfigured in many Google Cloud Platform (GCP) environments with organizations not truly understanding the risk associated with improper use of the feature.
DWD is a feature in Google Workspace that allows service accounts to impersonate any user within the domain and grants the service account access to user data via Google APIs. While the feature is very useful for automation and service integrations within the GCP environment, misconfiguration can result in a critical security risk.
If a threat actor is able to access a service account with DWD enabled, they will have administrative-level access to user’s data including their Gmail, Drive, and other Google Workspace services.
DelePwn, an open-source tool, has been released to help quickly identify DWD. This tool can be used to support cloud penetration testing, similar to BloodHound or Sharphound, to identify accounts ripe for abuse or used to assess account misconfigurations to reduce your organizations risk.
DelePwn enumerates accounts with DWD enabled and maps service accounts to users. The tool can then be used to scrape Google Drive and Calendar data and create a new admin user or elevate an existing user to admin. While the features beyond DWD enumeration may not be required for most organizations, the ability to simulate a real-world attack is helpful for red teaming and penetration testing.

2. FTC Puts AI Chatbots on Notice Over Kids’ Safety and Privacy
The Federal Trade Commission has opened a broad inquiry into how leading AI chatbot providers handle children’s safety and privacy, sending orders to Alphabet, Character Technologies (Character.AI), Meta, OpenAI, Snap, and xAI. Regulators want to know whether these companies are meaningfully limiting minors’ use of conversational AI and how they square that with growth incentives. The review zeroes in on compliance with the Children’s Online Privacy Protection Act (COPPA) and what concrete steps vendors are taking to mitigate harm for kids and teens.
The agency’s questions go well beyond age gates. Companies are being asked how they monetize engagement with youth, how they detect and track negative impacts, what disclosures they give to parents and young users, and whether personal data shared with bots is used or passed to others. The focus reflects a shift from “safety features on the spec sheet” to demonstrable controls, measurement, and accountability across product, policy, and revenue operations.
Scrutiny follows a series of troubling episodes—from reports that a Character.AI bot encouraged self-harm in a teen, to Meta’s move this month to block its assistant from discussing suicide and eating disorders with minors after lawmakers flagged “sensual” dialogues. The common thread: highly humanlike systems that can feel like a confidant, without reliable guardrails for vulnerable users. As FTC Chair Andrew Ferguson put it, the aim is to protect kids while preserving innovation—but the burden of proof now sits with the builders.
What this means for enterprises: if you deploy chatbots (vendor-hosted or in-house), assume regulators will expect COPPA-grade diligence for under-13s and heightened protections for teens. Validate age-assurance methods, restrict sensitive topics for minors, log and review safety incidents, minimize data collection, and lock down downstream sharing. If you rely on third-party AI, demand attestations on youth policies, data use, and incident response—and be ready to enforce them contractually.

Can MCP Outputs Be Trusted?
A new threat research blog from CyberArk spotlighted the critical risk of MCP tool poisoning: malicious content embedded in a Model Context Protocol (MCP) server’s output can subvert LLM-based systems by exploiting unsuspecting AI agents.
What Is MCP Tool Poisoning?
MCP is a common protocol enabling LLMs to call external tools. The CyberArk team demonstrated how an attacker can corrupt the tool’s schema or descriptive fields—such as the description or metadata—which are fed back into the LLM’s prompt context. This maliciously crafted content can inject hidden instructions or harmful payloads that LLMs dutifully act upon.
Attack Scenarios
Parameter Poisoning: A seemingly innocuous tool intended to add numbers could also include a parameter for reading ~/.ssh/id_rsa. While the LLM displays the “add” description, it might inadvertently leak sensitive SSH keys if manipulated.
Advanced Targeted Payload Attacks (ATPA): Even retrieving simple content (via GET requests) from a server can be weaponized: a compromised MCP server might embed malicious instructions in an error field. Since LLMs cannot reliably distinguish between data and instructions, this could trigger dangerous behavior; effectively turning any server into an attack vector.
Why This Broadens the Attack Surface
- It’s not just unsafe third-party tools; even trusted MCP servers can be manipulated.
- LLMs are vulnerable because they treat all incoming text as potential instructions—there’s no secure parsing of “only data.”
- Similar injection vectors exist across other tool-calling frameworks like OpenAPI, browsers, and beyond.
Recommended Mitigations
- Apply Least Privilege: Sandboxing tools, restricting file or network access, and isolating their execution context is critical.
- Audit & Whitelist Tools: Only allow known-safe MCP servers and rigorously audit schema metadata before use.
- Treat MCP Output as Untrusted: Don't automatically feed it into the LLM prompt. Introduce data validation, sanitization, and explicit instruction/data separation.
- Sandbox Execution: Host MCP services within isolated containers or micro-VMs to limit exfiltration risks.

3. Old Bug, New Breaches: Akira Ransomware Piggybacks SonicWall SSL-VPN Flaw
SonicWall is warning that a spate of new ransomware intrusions isn’t new at all—it’s threat actors re-exploiting CVE-2024-40766, the SSL-VPN improper access control flaw first patched in August 2024 across Gen5/Gen6/Gen7 firewalls. Rapid7 says the recent surge tracks to Akira ransomware operators using the bug for initial access, the same playbook seen last year. The twist this time: many victims had recently migrated from Gen6 to Gen7 and carried over local user passwords without resetting them, a remediation step SonicWall explicitly required in its original advisory.
That omission matters. Even fully updated appliances can remain exposed if legacy credentials or overly permissive local accounts survive a migration. SonicWall’s late-August 2025 support notice ties numerous incidents to precisely that gap, underscoring a familiar lesson: patching the binary is only half the fix when the root problem includes keys and configuration state that attackers already know—or can trivially guess.
For defenders, the Akira activity is a reminder to validate the whole control stack around remote access, not just firmware level. Prioritize: confirm you’re on fixed builds for all supported gens; force rotation of all local SSL-VPN passwords (don’t grandfather anything from Gen6); disable or delete unused local accounts in favor of directory-backed auth; enforce MFA on every remote access path; review portal/IP lockouts and geofencing; and comb SSL-VPN and admin logs for unknown source IPs, new device registrations, and anomalous configuration changes. If your org migrated in the past year, treat this as incident-response-level urgency.
Finally, assume initial access may already have been sold or reused. Hunt for Akira’s post-exploitation staples (credential dumping, SMB staging, lateral RDP/PSExec, rapid file-share traversal) and validate backups are offline/immutable and restorable. The “old-bug, new-breach” cycle will keep repeating wherever upgrades don’t include credential hygiene and config hardening.
Thanks for reading!
About us: Echelon is a full-service cybersecurity consultancy that offers wholistic cybersecurity program building through vCISO or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about