Cyber Intelligence Weekly

Cyber Intelligence Weekly (October 5, 2025): Our Take on Three Things You Need to Know

Welcome to our weekly newsletter where we share some of the major developments on the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive updates on the future of cybersecurity!

To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe

Before we get started on this week’s CIW, I’d like to highlight a great article on business continuity and resilience planning from our very own Alyssa Slayton.

When every minute counts, even one disruption can cost thousands (or millions). Alyssa breaks down practical steps to keep your business running through peak season and into 2026, from testing response plans to securing vendor continuity and communication channels.

Read the full article here: https://lnkd.in/g3vAARFh

Away we go!

1. Clop’s “Business Deal” Extortion Emails Target Oracle Customers

The Clop ransomware group has resurfaced with a new campaign targeting Oracle customers, using extortion emails that strike a calculated balance between menace and businesslike negotiation. The messages, written in fractured English and riddled with spelling mistakes, present the criminals as professionals conducting a transaction rather than ideological attackers. Their pitch is simple: pay us and your data stays private; ignore us and expect financial, reputational, and regulatory fallout.

In these emails, Clop attempts to validate its claims by offering to share samples—up to three files or database rows—so recipients can “verify” that their data was indeed stolen. The communication repeatedly emphasizes that the group is not interested in harming the business, only in receiving a payout. This psychological tactic is designed to frame the extortion as a rational business decision rather than capitulation to crime. The emails close with an ominous reminder that “time is ticking,” threatening to release stolen material within days if negotiations do not begin.

While Oracle has acknowledged that some of its E-Business Suite customers have received these emails, the company has not confirmed whether any systems were actually breached or data exfiltrated. Security leaders note that vulnerabilities patched in Oracle’s July 2025 update may be in play, though details remain unclear. What is certain is that the campaign leverages compromised third-party email accounts, adding credibility to the messages and helping them slip past filters.

This incident highlights two truths about modern ransomware operations. First, groups like Clop are increasingly positioning themselves as predictable “business partners” to pressure victims into payment. Second, attackers are adept at exploiting known but unpatched software flaws while hiding behind hijacked email accounts. For executives, the lesson is clear: patch management and incident response readiness are not optional—they are the only defense against adversaries who view extortion as just another line of business.

Google’s Gemini Flaw: When Prompt Injection Becomes Data Exfiltration

In September 2025, researchers disclosed a trio of vulnerabilities in Google’s Gemini AI that blurred the lines between prompt injection and classic cloud data exposure. The Hacker News Among the issues was a search-injection attack where malicious inputs directed Gemini’s Search Personalization model to reveal or manipulate query logic; a log-to-prompt injection vulnerability potentially letting adversarial logs shape future prompts; and an exfiltration flaw via the Gemini Browsing Tool that could expose location or saved user data. The Hacker News Google swiftly patched the flaws, but the incident underscores how injection attacks in AI systems can lead to far more than errant outputs—they can become pipelines for privacy breaches.

What makes this case particularly instructive is how it turns a known class of AI risk (prompt injection) into a full-blown data leakage vector. The attackers didn’t just make the model hallucinate; they leveraged internal modules and system integration to siphon data. The lesson: when AI is deeply integrated with backend services, logs, memory, and browsing tools, prompt injection can cascade into cloud-side vulnerabilities.

Furthermore, this breach shows that patching core models isn’t enough if surrounding subsystems (logging, browsing, caching) are insufficiently guarded. The Gemini flaws remind us that AI systems are only as secure as their weakest integrated component.

2. Scattered LAPSUS$ Hunters Escalate Salesforce Extortion with New Leak Site

A notorious cybercriminal collective has taken its latest campaign public, launching a dedicated leak site to pressure nearly 40 organizations allegedly impacted by Salesforce breaches. The group, styling themselves as “Scattered Lapsus$ Hunters”—a claimed fusion of ShinyHunters, Scattered Spider, and Lapsus$—has begun publishing stolen samples while issuing a stark ultimatum: negotiate before October 10 or face full exposure.

Among the victims listed are household names across industries—Disney/Hulu, FedEx, Google, Marriott, Toyota, Gap, Walgreens, Cartier, IKEA, and more. Each entry on the site features data samples pulled from Salesforce instances, designed to remove doubt and force corporate leaders into action. To raise the stakes further, the actors have demanded that Salesforce itself pay a ransom, claiming this would halt attacks against its broader customer base and prevent the release of what they allege is more than one billion records of sensitive information.

The campaign traces back to a series of sophisticated voice-phishing operations in which employees were tricked into granting malicious OAuth applications access to their company’s Salesforce environments. Once inside, attackers harvested databases, credentials, and tokens for other platforms such as AWS and Snowflake. According to threat intelligence reports, these operations—tracked separately as UNC6395—appear linked to the same infrastructure and tradecraft that ShinyHunters has used in prior large-scale breaches.

Salesforce, for its part, has confirmed awareness of the extortion attempt but denies any compromise of its core platform or underlying vulnerabilities. Instead, the company stressed that it is working with external experts and customers to investigate the claims. Regardless, the message to enterprises is clear: even if a cloud platform is secure, trust can be broken at the human and integration layer. This campaign is a stark reminder that defending against extortion requires more than patching code—it demands vigilance around employee awareness, third-party integrations, and access governance.

Agentic AI Under Siege: New Threat Model for Autonomous AI Agents Exposed”

Recent research published in April 2025, “Securing Agentic AI: A Comprehensive Threat Model and Mitigation Framework for Generative AI Agents,” draws a sharp line between standard LLM risks and those introduced when AI systems act with autonomy and memory.

The authors argue that as AI agents become more capable—they reason over time, take actions, invoke tools, and maintain persistent internal states—they open new vectors for attacks that existing threat models don’t adequately consider.

The paper identifies nine core threat categories across five domains: cognitive architecture vulnerabilities (e.g. flawed reasoning), temporal persistence (where memory can be manipulated), operational execution (tool use abuse), trust boundary violations (cross-system leakage), and governance circumvention (bypassing controls).

For example, an attacker might subtly poison the memory buffer of an agent so that future decisions drift toward malicious goals, or induce “goal misalignment” over time without immediate detection. Because these attacks may not manifest instantly, they’re inherently stealthy.

To counter this, the authors propose two frameworks: ATFAA (Advanced Threat Framework for Autonomous AI Agents) to catalog and reason about agent risks, and SHIELD, a structured mitigation toolkit that includes runtime monitoring, sandboxing, behavior anomaly detection, and governance policies tailored to agents.

The key takeaway: securing autonomous agents demands more than patching prompt injections or securing data pipelines—it requires defense-in-depth around long-term cognition, memory, and operation.

3. Discord Breach Exposes IDs, Payment Data Through Third-Party Provider

Discord has disclosed a cyber incident that exposed sensitive user information after hackers compromised a third-party customer support provider. The attack, which took place on September 20, targeted users who had interacted with Discord’s customer support or Trust and Safety teams. While the company described the number of impacted users as “limited,” the scope of the stolen data is significant—ranging from names and email addresses to government ID photos and partial payment details.

According to Discord, the attackers gained unauthorized access to the support system, siphoning off entire support ticket histories. These included IP addresses, correspondence with agents, uploaded attachments, and in some cases scans of passports or driver’s licenses. Partial billing information such as the last four digits of credit cards and purchase history was also exposed. Security researchers warned that this type of data amounts to “entire digital identities,” providing everything needed for identity theft, account takeover, and fraud.

The breach appears to have been financially motivated. The attackers reportedly demanded a ransom from Discord to prevent the data from being leaked. A group calling itself the Scattered Lapsus$ Hunters claimed responsibility, although conflicting reports suggest another actor may have exploited a vulnerability in Zendesk, the customer service platform used by Discord. Forensic investigators and law enforcement have been engaged, and Discord says it quickly cut off access for the affected provider and launched a broader review of its security measures.

The incident underscores the systemic risks that arise when third-party platforms hold highly sensitive customer information. Even if Discord’s core infrastructure remained intact, the trust gap emerges at the integration layer—where service providers, identity platforms, and outsourced tools can become weak links in the chain. For the millions who rely on Discord for gaming, crypto communities, and everyday communication, the breach is a reminder that personal data security depends not just on the platforms we see, but on the hidden ecosystem of vendors that stand behind them.

Thanks for reading!

About us: Echelon is a full-service cybersecurity consultancy that offers wholistic cybersecurity program building through vCISO or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about

Are you ready to get started?