Cyber Intelligence Weekly

Cyber Intelligence Weekly (April 26, 2026): Our Take on Three Things You Need to Know

Welcome to our weekly newsletter where we share some of the major developments on the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive updates on the future of cybersecurity!

To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe

Before we turn to this week’s edition of Cyber Intelligence Weekly, I want to introduce a new Personal Spotlight Series: The Human Side of Cybersecurity.

This series is grounded in conversation rather than commentary. It centers on CISOs and other cyber leaders who are in the seat—navigating real leadership pressure, complex risk decisions, and the human realities of building and sustaining security programs. Some are earlier in their journey, others further along paths many of you may recognize or aspire toward. What they share isn’t theory. It’s experience—earned through moments of progress, frustration, growth, and reflection. These conversations are for the professionals who show up every day to quietly carry the weight of this industry.

Jamie Giroux (Platinum Equity) — “The goal isn’t compliance. It’s commitment.”

In this episode, I sat down with Jamie Giroux, CISO at Platinum Equity, whose cybersecurity career spans 30 years, beginning long before the industry was even called “cyber.” Jamie has worked across audit, compliance, IT controls, penetration testing, law enforcement, finance, healthcare, and other complex environments. Today, in addition to his CISO role, he is also the author of The Giroux Methodology, a human-centered roadmap focused on emotional intelligence, resilience, and the people side of cybersecurity.

What stood out most in our conversation was Jamie’s deep conviction that cybersecurity has been training the wrong things for too long. He described a defining moment from early in his career when he chose not to renew a client after years of assessments produced no meaningful change. Later, while working in cyber ranges, he saw teams measuring mean time to detect and respond, but almost nobody measuring the human experience of the people under pressure. That realization became central to his work: tools matter, but human behavior, emotional intelligence, and culture determine whether security actually works.

Jamie also shared a deeply personal story about surviving a devastating car accident in 1994, an experience that shaped his views on self-awareness, reactions, resilience, and the importance of choosing responses instead of being ruled by triggers. That lesson carries directly into how he leads security teams today. Leaders set the emotional tone. If they panic, react poorly, or become a “dumpster fire,” their teams will mirror that behavior. If they lead with calm, vulnerability, and awareness, they create trust.

Additional takeaways from the conversation:

  • EQ is a muscle. Emotional intelligence has to be trained intentionally, just like technical skill.
  • Security culture should move from enforcement to enablement. The best security teams do not just say no, they help people succeed safely.
  • AI is overhyped as a cure-all, but data governance is underhyped. Garbage in, garbage out becomes even more dangerous when organizations feed sensitive or poorly governed data into AI systems.
  • Security awareness has not solved the human problem. After decades of phishing tests and warnings, people still struggle because we often treat human problems with technical solutions.
  • Ask about capacity, not just workload. Someone may have manageable tasks but no emotional capacity to carry them well.
  • Burnout is emotional weight, not just too much work. Security professionals carry the stress of knowing systems are imperfect and attackers are moving faster.
  • Normalize saying “I’m tapped.” Teams need to know there is no career penalty for needing support.
  • Learn, don’t blame. Postmortems should focus on growth, not finger-pointing.
  • The CISO role is not primarily technical. It is about communication, influence, leadership, risk translation, business enablement, and keeping teams steady under pressure.
  • Peer support matters. There is no CISO finishing school, so leaders need to lean on other leaders, ask for help, and be willing to be vulnerable.

His billboard message for every new CISO was powerful: The goal isn’t compliance. It’s commitment. That line captures Jamie’s philosophy perfectly. Security leadership is not about checking the box. It is about committing to the people, the mission, and the culture required to make security meaningful.

If there was one thread that defined this conversation, it was this: cybersecurity is ultimately a human discipline. The strongest leaders are not just building controls. They are building capacity, trust, resilience, and commitment.

Watch the Full Interview Here: https://www.youtube.com/watch?v=3NSmSv3QcHw

Echelon Events & Thought Leadership Highlight

Modern attacks don’t follow a straight line.

The decisions you make today across tools, workflows, and detection logic will determine whether the next intrusion is contained or missed entirely.

Join our Offensive and Defensive Security teams on May 13 for a live, end-to-end simulation led by Matt Donato, Devin Jones, and Bryce Hayes that walks both sides of a modern attack, showing exactly how adversaries operate and how defenders can keep up.

You’ll see:

  • How attackers gain access, move laterally, and evade detection
  • How security teams investigate, validate, and respond in real time
  • What actually works when bridging the gap between offense and defense

Purple teaming isn’t theory. It’s how you close real detection gaps and build defenses that hold.

Reserve your spot now and see the full attack chain before you’re forced to respond to it live: https://lnkd.in/eqhrCKgu


Away we go!

1. Vercel Incident Highlights the New Risk Frontier: AI Tools, OAuth Tokens, and Developer Supply Chains

Vercel, one of the most widely used cloud platforms for modern web development, disclosed a security incident that should get the attention of every technology leader. According to the company, attackers gained access through a third-party AI tool installed on an employee device, then leveraged that foothold to compromise the employee’s Google Workspace account and access portions of Vercel’s internal environment. The company said a limited number of customer credentials were exposed and has urged affected users to rotate secrets immediately.

What makes this case especially important is the path of compromise. This was not a traditional server exploit or phishing email. Instead, the attack appears tied to trust relationships between SaaS tools, browser extensions, OAuth permissions, and enterprise identity systems. The third-party platform, Context.ai, later confirmed it had previously experienced unauthorized access to its AWS environment and believes some user OAuth tokens were also compromised. That token access was allegedly used to pivot into Vercel’s workspace environment.

For organizations building in the cloud, this is a reminder that the attack surface now extends far beyond production infrastructure. Developer ecosystems are filled with AI assistants, productivity plug-ins, CI/CD integrations, package managers, browser extensions, and identity connectors. Any one of those tools can become the first domino. Even if core systems remain secure, exposed environment variables and tokens can still provide attackers with access to downstream services, APIs, repositories, and production workloads.

The lesson here is clear: security programs must evolve alongside the modern development stack. Review OAuth permissions, restrict third-party app access, enforce strong identity controls, monitor token use, and rotate secrets regularly. As AI tools become standard in developer workflows, governance around those tools is no longer optional. It is now part of core cyber defense.
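One way to operationalize that review is to periodically audit third-party OAuth grants for stale or overly broad access. The sketch below is a minimal, illustrative example: the grant records, scope names, and thresholds are hypothetical stand-ins for whatever your identity provider's admin report actually exports.

```python
from datetime import datetime, timedelta

# Hypothetical export of third-party OAuth grants (in practice, pulled from
# your IdP or workspace admin console). App names and scopes are illustrative.
GRANTS = [
    {"app": "context-ai-plugin", "scopes": ["gmail.readonly", "drive"], "last_used": "2026-01-10"},
    {"app": "ci-deploy-bot", "scopes": ["repo.read"], "last_used": "2026-04-20"},
]

# Scopes we consider high-risk for a third-party tool (assumption: your
# organization defines its own list).
RISKY_SCOPES = {"gmail.readonly", "drive", "admin.directory"}
STALE_AFTER = timedelta(days=60)

def review(grants, now=None):
    """Return grants that are stale or request risky scopes."""
    now = now or datetime(2026, 4, 26)
    findings = []
    for g in grants:
        last_used = datetime.strptime(g["last_used"], "%Y-%m-%d")
        reasons = []
        if now - last_used > STALE_AFTER:
            reasons.append("stale")
        risky = RISKY_SCOPES.intersection(g["scopes"])
        if risky:
            reasons.append(f"risky scopes: {sorted(risky)}")
        if reasons:
            findings.append((g["app"], reasons))
    return findings

for app, reasons in review(GRANTS):
    print(app, "->", "; ".join(reasons))
```

Even a simple report like this forces the question the Vercel incident raises: which third-party tools hold live tokens into your environment, and when were they last actually used?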

Microsoft SharePoint Zero-Day and Why Hybrid Environments Should Pay Attention

This week’s cloud security focus is the actively exploited Microsoft SharePoint Server vulnerability (CVE-2026-32201), patched in April’s Patch Tuesday release. While SharePoint Server is an on-premises product, many organizations still use it in hybrid environments tied to Microsoft 365, Teams, OneDrive, Power Platform, and Azure identity services. That makes this more than an on-prem problem. In modern enterprises, a compromise of SharePoint can become a gateway to broader cloud-connected data, user trust, and collaboration workflows.

The flaw is a spoofing vulnerability that could allow attackers to present malicious or falsified content inside trusted SharePoint environments. In real-world terms, imagine an employee opening what appears to be a legitimate internal HR page, invoice approval request, or executive communication hosted on SharePoint. If abused successfully, that trust could be leveraged for credential theft, malware delivery, business email compromise, or lateral movement into connected systems.

What to do now:

  • Patch SharePoint Server immediately across all environments
  • Review internet-exposed SharePoint instances and restrict external access where possible
  • Validate MFA and conditional access policies for connected Microsoft 365 identities
  • Hunt for suspicious SharePoint uploads, new pages, unusual admin activity, or unexpected permission changes
  • Confirm backups and recovery procedures for collaboration platforms
  • Segment legacy collaboration servers from broader cloud workloads

Why it matters: The cloud attack surface is no longer limited to SaaS apps or public cloud workloads. Hybrid platforms often become the bridge attackers use to move from one trust zone to another. Security leaders should treat collaboration systems like critical infrastructure, not just productivity tools.

2. New Global Warning: China-Linked Hackers Are Hiding Behind Everyday Devices

A new joint cybersecurity advisory from leading government agencies across the United States, United Kingdom, Europe, Canada, Australia, Japan, and other allies is sounding the alarm on a major shift in Chinese cyber operations. Rather than relying on rented servers or traditional attacker infrastructure, China-nexus threat groups are increasingly routing their campaigns through massive covert networks made up of compromised home routers, IoT devices, cameras, NAS appliances, and small business edge hardware. These hijacked devices allow attackers to blend in with normal internet traffic while masking the true origin of their activity.

The tactic is not theoretical. The advisory points to well-known campaigns tied to groups such as Volt Typhoon and Flax Typhoon, which have used these networks for espionage, persistence, and pre-positioning against critical infrastructure. In one example, the Raptor Train botnet reportedly infected more than 200,000 devices worldwide. Many of the impacted systems were older or end-of-life devices that no longer received security updates. That creates a low-cost and renewable infrastructure model for adversaries: compromise neglected devices, rotate nodes as they are discovered, and maintain plausible deniability.

For defenders, this changes the playbook. Static IP blocklists alone are no longer enough when malicious traffic can come from thousands of residential broadband connections or legitimate-looking devices scattered across the globe. Organizations need stronger identity controls, better visibility into remote access activity, anomaly detection, and a deeper understanding of what normal network behavior looks like. Baselines for VPN traffic, geographic access patterns, device fingerprints, and authentication behavior are becoming essential.
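A geographic-access baseline of the kind described above can be prototyped in a few lines. This is a deliberately simplified sketch with made-up sign-in records; a real implementation would pull from VPN or identity-provider logs and account for travel, VPN exits, and shared egress points.

```python
from collections import defaultdict

# Hypothetical sign-in history and new events: (user, country).
HISTORY = [
    ("alice", "US"), ("alice", "US"), ("alice", "US"),
    ("bob", "UK"), ("bob", "UK"),
]
NEW_EVENTS = [("alice", "US"), ("alice", "RO"), ("bob", "UK")]

def build_baseline(history):
    """Record the set of countries each user has previously signed in from."""
    seen = defaultdict(set)
    for user, country in history:
        seen[user].add(country)
    return seen

def flag_anomalies(baseline, events):
    """Flag sign-ins from countries never before seen for that user."""
    return [(u, c) for u, c in events if c not in baseline.get(u, set())]

baseline = build_baseline(HISTORY)
print(flag_anomalies(baseline, NEW_EVENTS))  # [('alice', 'RO')]
```

The point is not the toy logic but the shift it represents: when attack traffic comes from residential devices, "known bad IP" lists fail, and behavioral deviation from a per-user baseline becomes the signal.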

The practical takeaway is simple: patch internet-facing devices, retire unsupported hardware, enforce multifactor authentication, and move toward zero trust architectures where possible. Security leaders should also evaluate dynamic threat intelligence feeds and monitoring programs that can adapt in real time. The future of cyber defense is not just blocking known bad actors. It is detecting malicious behavior hidden inside what appears to be ordinary traffic.

Open Source AI Tools Become a New Supply Chain Risk

This week’s AI security spotlight is the growing risk around open source AI libraries and developer tooling after recent software supply chain compromises involving popular packages used in modern development environments. One notable example involved a backdoored version of LiteLLM, a widely used framework that helps developers connect applications to multiple large language model providers through one interface. With AI adoption accelerating, these tools are becoming deeply embedded in production systems, internal copilots, and customer-facing applications.

Why does this matter? Because many AI teams move quickly, often pulling packages directly from public repositories into development pipelines. If a compromised package is introduced, attackers may gain access to API keys, model credentials, environment variables, cloud tokens, prompts, or sensitive business data flowing through AI workflows. In some cases, malware can also establish persistence on developer workstations or CI/CD runners.

What to do now:

  • Inventory all AI-related packages, SDKs, plugins, and dependencies in use
  • Pin dependencies to approved versions and use integrity checks where possible
  • Rotate API keys and tokens used by AI applications
  • Monitor outbound traffic from AI workloads for suspicious exfiltration
  • Require code review and security scanning for new AI libraries
  • Separate development, testing, and production credentials for AI systems

Real-world takeaway: Many organizations are focused on prompt injection and model misuse, but the more immediate risk may be the software supply chain around AI itself. Before an attacker targets your model, they may target the tools you used to build it.

3. Alleged Unauthorized Access to Anthropic’s Mythos Model Shows the Real AI Risk Is Operational Security

Anthropic’s highly restricted Mythos model was introduced as a breakthrough system so capable in cybersecurity research that access was limited to a small group of trusted organizations through Project Glasswing. The company has described Mythos as a model able to identify serious software flaws and help develop advanced exploits, which is why its rollout was tightly controlled. But this week, reports surfaced that a small group of unauthorized users gained access anyway, raising a different kind of concern: even the most advanced AI safeguards can be undermined by basic operational security gaps.

According to published accounts, the users allegedly accessed Mythos through a mix of methods that included contractor-related access paths and educated guesses about internal model locations based on naming conventions. Anthropic said it is investigating the claims and stated there is currently no evidence of compromise to its core systems. Even so, the incident underscores a lesson security leaders know well: sophisticated technology can still be exposed through third-party environments, predictable configurations, weak segregation, and inherited trust relationships.

The broader issue goes beyond one company or one model. As AI systems become more valuable, they also become higher-value targets. Organizations are racing to secure models, prompts, data pipelines, APIs, and infrastructure, but governance around vendors, contractors, access provisioning, and shadow environments is just as important. Attackers do not always break the front door. They look for side entrances, forgotten keys, and assumptions nobody thought to test.

For enterprise defenders, this is the takeaway. The next major AI security incident may not involve prompt injection or model misuse. It may come from identity gaps, third-party exposure, or weak asset management around where sensitive AI capabilities live. In the age of frontier models, classic cybersecurity discipline still matters: least privilege, strong vendor controls, continuous monitoring, segmented environments, and rapid detection of abnormal access.

 

Thanks for reading!

About us: Echelon is a full-service cybersecurity consultancy that offers holistic cybersecurity program building through vCISO services or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about

Are you ready to get started?