Cyber Intelligence Weekly

Cyber Intelligence Weekly (October 12, 2025): Our Take on Three Things You Need to Know

Welcome to our weekly newsletter where we share some of the major developments on the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive updates on the future of cybersecurity!

To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe

Before we get started on this week’s CIW, I’d like to highlight a great article about the future of FedRAMP.

The future of FedRAMP is here, and it’s fast. FedRAMP 20x is streamlining compliance with automation-first validation and continuous monitoring.

Alyssa Slayton explains what’s new and what it means for every player in federal cloud.

🔗 Read the full article: https://lnkd.in/gddYPNa9

Away we go!

1.  ShinyHunters Go Public: Leak Site, Deadlines, and a Push to Pay

ShinyHunters has moved from quiet extortion to public spectacle. After a months-long campaign that began with voice-phishing employees into authorizing a malicious OAuth app against corporate Salesforce tenants, the group has launched a “name-and-shame” site listing more than three dozen household brands and warning that stolen customer data will drop if ransoms aren’t paid by October 10. The same crew (often styling itself “Scattered LAPSUS$ Hunters”) is also claiming links to two other high-impact episodes: a third-party support breach at Discord that exposed sensitive user data, and a Red Hat incident involving a compromised GitLab server said to contain tens of thousands of code repositories and customer engagement reports.

What makes this spree notable isn’t just the victim list—it’s the playbook. ShinyHunters blends social engineering, token abuse, and supply-chain angles to jump trust boundaries: convince an employee to approve a rogue app, inherit API-level Salesforce access, harvest records and cloud tokens (Snowflake, AWS, more), then pivot to broad extortion at scale. The crew’s public posts even dangle a “global settlement” pitch to Salesforce itself: pay once and we’ll drop individual customer shakedowns. Salesforce has flatly rejected that proposition, emphasizing collaboration with law enforcement and affected customers instead of negotiation.

The pressure campaign has widened beyond data dumps to overt harassment. Security researchers and journalists have received malware-laced messages tied to the group’s branding; analysis points to an AsyncRAT payload with screen capture, keylogging, and credential-theft plugins—classic tradecraft for reconnaissance and secondary access. Meanwhile, the extortion site has flickered offline as infrastructure shifts, but the disclosures have already set off knock-on effects: incident response across multiple enterprises, regulatory notice obligations, and heightened scrutiny of OAuth governance, vendor access, and support desk ecosystems.

There’s a broader signal here for security leaders: the soft spots are not exotic zero-days alone (though Oracle E-Business Suite customers are now racing to patch a critical RCE cited in adjacent activity); they’re the seams between identity, SaaS integrations, and third-party operations. Controls that blunt this class of threat are knowable and measurable—least-privilege OAuth scopes, app allow-listing, step-up verification for marketplace installs, aggressive token hygiene, vendor isolation for support workflows, and continuous monitoring for unusual data egress. The extortion economy thrives on convenience turned into access; closing those seams is how you devalue the threat.
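To make those controls concrete, here is a minimal sketch of an audit pass over an inventory of connected OAuth apps. All names, scopes, and thresholds are hypothetical assumptions for illustration, not any vendor’s actual API or schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical inventory record for a connected OAuth app; the field
# names are illustrative, not a real SaaS platform's schema.
@dataclass
class ConnectedApp:
    name: str
    scopes: set
    token_issued: datetime

ALLOW_LIST = {"crm-sync", "support-desk"}        # approved app names (example)
SCOPE_BUDGET = {"read:contacts", "read:cases"}   # least-privilege scope budget
MAX_TOKEN_AGE = timedelta(days=30)               # aggressive token rotation

def audit(apps, now=None):
    """Flag apps that violate allow-listing, the scope budget, or token age."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for app in apps:
        if app.name not in ALLOW_LIST:
            findings.append((app.name, "not on allow-list"))
        if not app.scopes <= SCOPE_BUDGET:
            findings.append((app.name, "excess scopes"))
        if now - app.token_issued > MAX_TOKEN_AGE:
            findings.append((app.name, "stale token"))
    return findings
```

Even a toy check like this captures the point: each finding maps to one of the seams the extortion crews exploit, and each is cheap to measure continuously.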

The Hidden Risk of AI Workloads in Hybrid Clouds

In the 2025 State of Cloud Security Report published by Orca, one of the emerging red flags is how AI workloads exacerbate existing cloud vulnerabilities. The report notes that 84% of organizations now run AI operations in cloud environments, and that creates new classes of risk—especially AI-related CVEs enabling remote code execution, mixed with misconfiguration and inadequate isolation.

The study also observed that many organizations remain immature in their cloud security posture: security is often the top spending priority, yet tooling, visibility, and operational maturity lag far behind.

Meanwhile, the Tenable State of Cloud & AI Security 2025 survey shows that hybrid and multi-cloud deployments are now the norm, with 82% of organizations using hybrid environments. Yet 34% of respondents with AI workloads have already experienced a breach—or believe they have.

These findings underscore how cloud and AI risks compound each other. An AI workload running in a misconfigured cloud environment might expose internal model APIs, inference endpoints, or data stores. Attackers exploiting privileges, lateral movement, or container escape could piggyback on AI service infrastructure to extend their reach. Defenses must evolve accordingly: better workload segmentation, runtime protection (e.g. Web Application Firewalls around AI endpoints), IAM controls tuned for AI services, model input/output validation, and continuous anomaly detection for cloud-AI interaction patterns.
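As an illustration of the input/output validation point above, here is a hedged sketch of a wrapper one might place in front of an inference endpoint. The size limit and secret patterns are toy assumptions; a production deployment would use a vetted secrets-detection library rather than hand-rolled regexes:

```python
import re

MAX_PROMPT_CHARS = 4000
# Illustrative patterns for secrets that should never leave an inference
# endpoint; real systems need far broader, maintained pattern sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # PEM private key header
]

def validate_input(prompt: str) -> str:
    """Reject oversized prompts before they reach the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds size limit")
    return prompt

def filter_output(text: str) -> str:
    """Redact secret-shaped strings from model output before returning it."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text
```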

In sum: AI is not just a new workload—it’s a force multiplier in cloud risk. As organizations deploy LLMs and agents across clouds, they must treat the AI layer as a first-class citizen in the threat model, not as a sidecar afterthought.

2.  AI Code Is Reshaping Software Security, and Not Always for the Better

As a recent Wired report explains, a quiet but troubling shift is happening in software development. As developers lean more heavily on AI-generated “vibe coding” to accelerate their work, they’re inheriting an old security problem in a new and harder-to-detect form. Just as open-source libraries once became the go-to building blocks for rapid development—introducing both innovation and risk—AI-generated snippets are now filling that same role. But unlike open source, where code can often be traced, audited, and attributed, vibe coding brings a lack of transparency that is fundamentally reshaping the software supply chain.

Security researchers are warning that this shift is already creating blind spots. AI models often train on vast troves of public code, including outdated or vulnerable components, which can reintroduce flaws long thought resolved. And because LLMs generate code on demand, even the same prompt from two developers can produce slightly different results—introducing subtle inconsistencies that are difficult to track or standardize. “We’re hitting the point where AI is about to lose its grace period on security,” says Alex Zenla, CTO at Edera. “The vulnerabilities it learned from are finding their way back into production.”

Unlike traditional open source, where developers can at least review commit histories or vet the reputation of contributors, vibe coding obscures provenance. It’s not always clear where a particular function originated or whether it has been reviewed with security in mind. A recent survey from Checkmarx found that over 60% of code in some organizations was generated by AI tools last year, but fewer than one in five had a formal approval list for which tools were allowed. That lack of governance leaves companies exposed to untracked dependencies, unvetted logic, and invisible security debt.

The appeal of vibe coding is obvious: speed, simplicity, and reduced development costs. But the cost of neglecting security can be far higher. Vulnerable AI-generated code can easily ripple through production environments, making exploitation at scale far easier for attackers. The same accessibility that empowers small businesses to build faster also risks creating a new class of soft targets. As AI becomes a standard part of development workflows, organizations will need to rethink their software security lifecycle—not just to keep pace, but to keep control.

Beyond Shadow AI: Embracing Governance-Driven Generative AI

As organizations accelerate the integration of AI technologies to drive efficiency, innovation, and competitiveness, the unmonitored and unauthorized adoption of AI systems by individuals and teams has emerged as a parallel risk vector. Referred to as Shadow AI, this phenomenon poses significant challenges to governance, data protection, identity management, and organizational trust.

Shadow AI refers to the use of generative AI tools—such as large language models (LLMs), code assistants, image generators, and AI-enabled SaaS applications—without formal approval or oversight from IT, security, or risk management functions. These tools are often adopted organically by employees who are motivated by perceived gains in productivity, convenience, or creativity.

In his Forbes analysis, Art Gilliland, CEO of Delinea, articulates the scope and complexity of the Shadow AI problem, positioning it not merely as a compliance issue, but as a core security and operational resilience concern.

Key Risks and Threat Vectors

Gilliland outlines several interrelated risk domains arising from Shadow AI usage:

  • Data Exposure and Leakage: Employees may upload proprietary data, intellectual property, or regulated personal information into AI systems with unclear data retention, sharing, or training policies. These actions may contravene contractual, regulatory, or ethical obligations.
  • Credential Mismanagement: AI tools, particularly when integrated into workflows or automated pipelines, often rely on long-lived credentials or hardcoded secrets. When used without privileged access governance, these machine identities can become attack surfaces for lateral movement or privilege escalation.
  • Visibility and Control Gaps: Shadow AI usage typically bypasses central IT governance processes, such as vendor security reviews, data flow mapping, or identity access controls. This lack of visibility undermines the organization’s ability to enforce security baselines or respond effectively to incidents.
  • Model Behavior and Drift: Publicly accessible AI models are updated frequently by third-party providers. As a result, output behaviors may change in ways that are neither predictable nor auditable, compounding governance challenges over time.

Mitigation Strategies

To address the operational and security challenges posed by Shadow AI, Gilliland recommends a multi-pronged governance approach:

  • Codify AI Usage Policies: Organizations should develop formal, enforceable policies that define acceptable use of AI tools, delineate approval workflows, and establish clear accountability for data shared with external models.
  • Establish Centralized Oversight Mechanisms: Cross-functional governance bodies—comprising representatives from IT, security, legal, and business units—should be empowered to assess the risks of proposed AI tools and to oversee their deployment.
  • Enhance Visibility Through Tool Discovery and Monitoring: Security teams must invest in systems that can detect unauthorized AI tool usage, including browser-based tools, public APIs, or third-party plugins. Visibility should extend to understanding what data is being shared and what outputs are being operationalized.
  • Strengthen Privileged Access and Identity Controls: Role-based access control (RBAC), privileged access management (PAM), and machine identity governance should be extended to cover AI-driven systems and service accounts. Credentials must be rotated regularly, and access should follow the principle of least privilege.
  • Deploy Anomaly Detection for AI-Linked Behaviors: Behavioral analytics systems should monitor AI tool activity for anomalies such as unusual data transfers, excessive model queries, or access pattern deviations, enabling early detection of misuse or compromise.
  • Educate End Users and Promote a Culture of Secure Innovation: Security and compliance teams should invest in awareness programs that inform users of both the utility and the risks of generative AI tools. Clear communication channels should exist for users to request approval of new tools rather than circumvent policy.
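The anomaly-detection recommendation above can be sketched as a toy z-score check over daily AI-tool query volumes. This is a stand-in for real behavioral analytics, and the threshold is an assumption to tune, not a recommendation:

```python
import statistics

def query_anomalies(daily_counts, threshold=3.0):
    """Return indices of days whose AI-tool query volume deviates more than
    `threshold` standard deviations from the mean of the series."""
    mean = statistics.fmean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    if stdev == 0:
        return []  # flat series: nothing to flag
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mean) / stdev > threshold]
```

A spike in queries from one account, flagged early, is often the first visible symptom of either policy circumvention or a compromised machine identity.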

3.  SonicWall Incident Exposes Firewall Configurations for All Cloud Backup Customers

SonicWall has confirmed a significant security incident affecting every customer that used its cloud backup service, revealing that firewall configuration backups were accessed by an unauthorized party. This incident, first disclosed in September, was initially believed to affect only a subset of users. The company’s latest bulletin clarifies that all customers relying on SonicWall’s cloud portal to store firewall configuration files were impacted.

The exposed data includes firewall configuration backups (.EXP files), which contain AES-256 encrypted credentials and network configuration information. While the encryption itself provides some protection, the configurations reveal valuable details that can dramatically lower the barrier to exploitation for attackers. Armed with these files, threat actors could more efficiently identify weak points in network defenses, enabling targeted follow-on attacks.

SonicWall’s remediation guidance is unambiguous: administrators must reset and rotate every credential associated with their affected firewalls and integrations. This includes local user passwords, temporary access codes, shared VPN secrets, LDAP/RADIUS/TACACS+ credentials, API keys, and other critical trust relationships. Even auxiliary elements—such as AWS logging keys and SNMPv3 credentials—are recommended for rotation to eliminate any latent risk. Organizations can check for affected devices through the MySonicWall portal under Product Management → Issue List.
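As a trivial illustration, a rotation effort of this breadth is easiest to keep honest with an explicit checklist. The categories below mirror the guidance described above, but the tracking helper itself is a hypothetical sketch, not a SonicWall tool:

```python
# Credential classes to rotate after a configuration-backup exposure;
# the list mirrors the remediation guidance summarized in this article.
ROTATION_CHECKLIST = [
    "local user passwords",
    "temporary access codes",
    "shared VPN secrets",
    "LDAP/RADIUS/TACACS+ credentials",
    "API keys",
    "AWS logging keys",
    "SNMPv3 credentials",
]

def outstanding(rotated):
    """Return the credential classes still awaiting rotation."""
    done = set(rotated)
    return [c for c in ROTATION_CHECKLIST if c not in done]
```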

While SonicWall has wrapped up its investigation in partnership with Mandiant, this incident underscores a larger truth about cloud-based configuration storage: convenience can amplify exposure. Firewall configuration files are effectively a blueprint to an organization’s defensive posture. In the wrong hands, they offer attackers a shortcut around the hard work of reconnaissance. For organizations affected, this is not just a password reset exercise—it’s a moment to reevaluate credential hygiene, firewall segmentation, and exposure management.

Thanks for reading!

About us: Echelon is a full-service cybersecurity consultancy that offers holistic cybersecurity program building through vCISO services or more targeted solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about

Are you ready to get started?