Cyber Intelligence Weekly (January 11, 2026): Our Take on Three Things You Need to Know
Welcome to our weekly newsletter where we share some of the major developments on the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive updates on the future of cybersecurity!
To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe
Before we get started on this week’s CIW, I’d like to highlight that Echelon has a whole library of hot cybersecurity topics that we are willing to give at professional organization meetings and seminars! We know that finding great speakers is hard, so we are providing our readers with an easy button.
𝗪𝗲 𝘁𝗮𝗸𝗲 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻 𝘀𝗲𝗿𝗶𝗼𝘂𝘀𝗹𝘆. We’re not just practitioners... we’re educators. That’s why we built a speaker catalog of cybersecurity topics we regularly update and deliver to teams and leadership groups. Want us to come speak to your organization?
DM us.
Download the speaker menu here: 📖 https://lnkd.in/ercSaf_k

Away we go!
Away we go!
1. New ChatGPT Health Raises a Familiar Question: Who Really Owns Your Medical Data?
OpenAI’s unveiling of a new ChatGPT Health experience marks a pivotal moment in how artificial intelligence intersects with deeply personal data. The company is encouraging users to link medical records and wellness applications to a health-focused version of its chatbot, pitching stronger encryption, data isolation, and promises that health conversations won’t be used to train its core AI models. The move reflects massive demand—hundreds of millions of people already turn to ChatGPT each week for health-related questions—but it also opens a new front in the ongoing debate over data privacy.
Privacy advocates warn that health data shared with ChatGPT Health exists outside the guardrails of traditional healthcare regulation. Unlike information handled by doctors or hospitals, data uploaded to an AI platform is not protected by HIPAA, leaving users dependent on corporate policies rather than statutory rights. Critics argue this creates a structural risk: once medical records are voluntarily uploaded, users may permanently lose legal protections tied to that information.
Concerns are amplified by recent history. The fallout from the bankruptcy of other businesses—where sensitive genetic data became an asset in a corporate sale—has made consumers acutely aware of how vulnerable personal health information can be when companies change direction or financial circumstances. At the same time, AI providers are increasingly leaning into personalization and, potentially, advertising-driven business models, raising questions about how health data might be monetized or correlated with other user information in the future.
Perhaps the most unsettled issue is transparency. OpenAI has not clearly articulated how health data would be handled in response to law-enforcement requests, nor how users would be notified if their information were disclosed. With ChatGPT Health able to ingest data from platforms like Apple Health, MyFitnessPal, and Weight Watchers—and powered in part by third-party data networks—the stakes extend far beyond convenience. For users, especially those sharing reproductive or highly sensitive health information, the promise of AI-driven insight now comes with a new and largely untested privacy tradeoff.

Cloud API Misconfiguration Drives Major Unauthorized Access Trend
A new cloud security pattern seen across multiple environments over the past week highlights a persistent misconfiguration problem that’s opening doors for attackers: improperly scoped API permissions and absent MFA at service endpoints. In a panel discussion published this week by ISMG editors, cloud security leaders underscored that the absence of multifactor authentication (MFA) and lax API policy controls continue to fuel high-impact breaches — even in mature cloud estates.
The panel’s insights resonate with recent breach investigations where attackers gained access to cloud consoles not through exotic zero-days but by abusing over-permissive API keys, stale service credentials, and abandoned access tokens. In many cases, lack of MFA or conditional access policies allowed these cloud API sessions to be hijacked without traditional indicators like malware or phishing. This elevates a chronic operational weakness into an obvious strategic risk: control plane abuse is now a top entry vector for cloud compromise.
More troubling is that these failures are not isolated to one provider — they span SaaS applications, IaaS consoles, and serverless environments, exposing data, workloads, and governance controls alike. The result is that attackers can quietly enumerate permissions, escalate privileges, and exfiltrate data long before threat detection teams identify a breach event.
For security leaders, the path forward must include robust conditional access policies, mandatory MFA across all API access, and continuous verification of token scope and usage patterns. Investments in Cloud Access Security Brokers (CASBs) or Cloud Security Posture Management (CSPM) tools need to be coupled with disciplined IAM audits and automated remediation. At the scale cloud environments now operate, reactive cloud incident response isn’t enough — proactive policy enforcement and least-privilege automation are essential to limit the next wave of API-driven compromises.

2. A Five-Year-Old Flaw Still Opening Doors: Fortinet Firewalls Under Renewed Attack
Fortinet has issued a fresh warning that a long-standing firewall vulnerability—first disclosed more than five years ago—is once again being actively exploited in the wild. The flaw, tracked as CVE-2020-12812, affects FortiGate devices running specific SSL VPN and LDAP configurations. While the issue itself is not new, recent attacker activity has shown that thousands of internet-facing firewalls remain exposed, creating an easy foothold for threat actors seeking initial access.
At its core, the vulnerability stems from how FortiGate handles authentication when local users with two-factor authentication are tied back to LDAP directories. Under certain conditions, differences in username case sensitivity can cause the firewall to bypass local authentication controls entirely. By simply altering the capitalization of a username, an attacker may be authenticated directly against LDAP—without triggering two-factor authentication—if secondary LDAP groups are misconfigured. In practical terms, this means VPN or even administrative access can be granted without the protections organizations believe are in place.
Security researchers estimate that more than 10,000 Fortinet devices are still vulnerable, despite patches and configuration fixes being available since 2020. The flaw has a well-documented history of abuse, having been leveraged by ransomware operators and state-aligned threat groups alike. That continued success underscores a persistent industry problem: legacy vulnerabilities tied to configuration drift are often overlooked, even in perimeter technologies assumed to be “set and forget.”
Fortinet has urged organizations to treat any suspected exploitation as a full compromise, recommending immediate credential resets and configuration reviews. The broader lesson is a familiar one for defenders—patching alone is not enough. Authentication logic, directory integrations, and fallback access paths all need regular scrutiny. When a five-year-old bug can still open the door to modern attacks, it’s a reminder that perimeter security failures are often less about zero-days and more about unfinished housekeeping.

“ZombieAgent” Prompt Injection Flaw Shows New Risk in AI Assistants
In the last few days security researchers have raised the alarm about a novel set of prompt injection attack techniques targeting ChatGPT and similar AI assistant systems. Known collectively under the name ZombieAgent, this class of exploits leverages new “memory” and connector features in AI agents to embed hidden instructions that the model executes without user awareness. What makes this particularly concerning for defenders is the zero-click nature of some variants — attackers don’t need to bait a user into clicking a link or opening a file; they simply hide malicious prompts in content that the AI will process as part of its normal execution flow.
Unlike traditional prompt injection that relies on user interaction, ZombieAgent chains can persist malicious context in the AI’s memory and connectors, enabling follow-on data exfiltration or unwanted actions across connected services. Analysts from Radware and other firms demonstrated how the technique can be used to escalate from seemingly benign text to powerful instructions acting on behalf of the compromised agent. Although patches have been rolled out in recent days, experts warn that this attack pattern will evolve alongside agent workflows.
For enterprise defenders, the implications are clear: as AI assistants become deeper parts of workflows — integrated with email, document stores, and business apps — these vectors can bypass perimeter controls and traditional user-initiated security checkpoints. It’s not enough to secure the model itself; organizations must also harden the inputs, memory persistence logic, and connector interfaces that link AI agents to sensitive data and systems. Early adopters of AI within production environments should treat prompt and memory injection as an active attack surface, not a theoretical risk.
This development reinforces that AI security is not just about model accuracy or bias mitigation — it’s about trust boundaries, execution context, and interface design. Security teams need updated adversarial testing strategies, ongoing threat modeling for memory features, and monitoring that understands when an AI agent’s outputs deviate from expected patterns.

3. The First Major Crypto Hack of 2026 and What It Reveals
Truebit has become the first major crypto breach victim of 2026, after attackers drained more than $26 million worth of digital assets from the platform in a single incident. The company disclosed that malicious actors exploited a vulnerable smart contract, prompting an urgent warning for users to avoid interacting with the affected code. Blockchain investigators later confirmed that roughly 8,500 ETH had been siphoned off, underscoring how quickly well-resourced attackers can extract value once a flaw is discovered.
The theft is notable not just for its size, but for what it represents. Truebit provides computational infrastructure used by other tokenized systems, meaning the attack targeted a foundational layer rather than an end-user wallet or exchange. That mirrors a broader shift in crypto crime, where attackers increasingly aim for high-leverage infrastructure points that allow for rapid, large-scale theft. It also extends a troubling trend from recent years, during which billions in cryptocurrency have been stolen annually despite improved tooling and monitoring across the ecosystem.
At the same time the Truebit incident surfaced, Chainalysis released new findings highlighting the growing professionalism of crypto crime. According to the firm, illicit crypto addresses received more than $150 billion in 2025 alone—a staggering increase driven largely by sanctioned entities and state-aligned actors. Cryptocurrency, particularly stablecoins, has become an efficient vehicle for bypassing sanctions, moving value across borders, and avoiding reliance on traditional banking systems.
Chainalysis also pointed to the evolution of organized money-laundering networks, particularly those operating across loosely regulated jurisdictions. Even as platforms such as Huione face sanctions or shutdowns, these networks have proven highly adaptable, shifting activity to alternative services with little disruption. The lesson for security leaders is clear: crypto crime is no longer opportunistic or amateur. It is organized, well-funded, and resilient—and incidents like the Truebit breach are increasingly symptoms of a mature, global criminal economy rather than isolated technical failures.
Thanks for reading!
About us: Echelon is a full-service cybersecurity consultancy that offers wholistic cybersecurity program building through vCISO or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about