Cyber Intelligence Weekly (January 25, 2026): Our Take on Three Things You Need to Know
Welcome to our weekly newsletter, where we share major developments shaping the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive updates on the future of cybersecurity!
To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe
Before we turn to this week’s edition of Cyber Intelligence Weekly, I want to introduce a new CIW Spotlight Series: The Human Side of Cybersecurity.
This series is grounded in conversation rather than commentary. It centers on CISOs and other cyber leaders who are in the seat—navigating real leadership pressure, complex risk decisions, and the human realities of building and sustaining security programs. Some are earlier in their journey; others are further along paths many of you may recognize or aspire toward. What they share isn’t theory. It’s experience—earned through moments of progress, frustration, growth, and reflection. These conversations are for the professionals who show up every day and quietly carry the weight of this industry.
As part of the series, I sat down with Corey Kaemming, CISO of Valvoline, to talk about his path to cybersecurity leadership.
From server racks to boardrooms. What Valvoline’s CISO learned on the way up:
- Compliance can be a springboard. It forced security thinking, fast.
- Leadership = counseling. 50% tech, 50% human.
- Hire for creativity. Awareness interns from marketing? Yes.
- Trust your team on tech; fight for budget.
- Billboard to new CISOs: You belong here.
- Stop worshipping tools. They’re means, not the end.
- Career tip: Be broad, fail small, build real relationships.
🎥 Watch the full conversation here: https://www.youtube.com/watch?v=EEk-Eqqp

RSA 2026, Come Meet With Us!
Also, to highlight upcoming events, we hope to meet you at RSAC 2026! If you are heading out to RSA and want to meet up with the Echelon team, come see us at our exclusive happy hour at the prestigious Olympic Club, a quiet oasis from the hustle and bustle of RSA.
Join Echelon Risk + Cyber for our annual RSA Happy Hour at The Olympic Club on Monday, March 23, from 3:00–5:00 PM PT, just before the official RSA Welcome Reception.
Good people, great conversations, easy networking, no pressure.
📍 The Olympic Club, San Francisco
👉 Reserve your spot here: https://lnkd.in/enj3Q9D3

Away we go!
1. Cellebrite Under Scrutiny After Report Links Tools to Activist Phone Extractions in Jordan
A new report from The Citizen Lab alleges that Jordanian authorities used Cellebrite’s mobile forensic tools to extract data from the phones of activists and human rights defenders — specifically in cases tied to speech and organizing critical of Israel’s war in Gaza. The researchers document seven cases between late 2023 and mid-2025, including forensic analysis of seized devices and court records that reference forensic “technical reports” used in prosecutions under Jordan’s updated cybercrime law.
What makes the report particularly unsettling isn’t just that phones were accessed — it’s the scale and intrusiveness of what mobile extraction can reveal. Citizen Lab points out that forensic tooling like Cellebrite’s UFED ecosystem can pull everything from chats and photos to location trails, saved passwords, Wi-Fi history, and deleted artifacts, turning a single seized device into a map of someone’s relationships, movements, and private life. The report also describes coercive unlock scenarios, including cases where authorities allegedly used biometrics or obtained passcodes without consent.
Cellebrite’s public response, as reflected in the report, doesn’t dispute the specifics and leans on policy language: customers must certify lawful authority; the company claims it vets customers; it says it takes misuse allegations seriously. Citizen Lab argues that’s not enough and presses for concrete accountability controls — including a recommendation to watermark extractions (or otherwise uniquely identify customer use) to make abuse easier to investigate and harder to deny.
The uncomfortable takeaway for security and privacy leaders is that this isn’t a “Jordan-only” story — it’s a governance story. When powerful forensic capability meets weak oversight, the risk isn’t hypothetical. If you’re a vendor selling sensitive security tech, “we have an ethics policy” can’t be the end of the conversation; you need technical guardrails and auditability. And if you’re an organization trying to protect people — employees, journalists, executives, activists — you should treat device seizure as a realistic threat model in certain geographies and build practical playbooks (strong passcodes, rapid credential rotation, least-data-on-device, and clear escalation paths when a device is detained).

Here’s What Cloud Security Holds for the Year Ahead
Looking ahead to 2026, cloud security priorities are rapidly evolving in response to AI adoption, hybrid work models, and escalating threat sophistication. Industry experts emphasize that cloud security is no longer just about perimeter controls or compliance checkboxes; it must center on business resilience and risk quantification that aligns with executive priorities. Boards are increasingly asking for cybersecurity metrics expressed in financial and operational terms — such as potential monetary losses and time-to-recovery estimates — forcing security teams to adopt risk language that resonates with enterprise strategy.
Key trends include the need for AI-enhanced threat hunting and dynamic security posture management that can operate in real time across distributed cloud environments; the integration of attack surface management tools to uncover unknown risks; and a stronger focus on identity and access governance as foundational controls.
With cloud workloads becoming more automated and dependent on machine identity, teams must shore up identity boundaries, least-privilege access, and Zero Trust policies to reduce the blast radius of common attack vectors. This future-facing view also underscores the importance of integrating cloud security with developer and DevOps workflows so that security becomes embedded into CI/CD pipelines and deployment lifecycles rather than retrofitted after the fact.
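As one toy illustration of embedding least-privilege review into a CI/CD pipeline (not any particular product's feature), here is a minimal sketch of a pre-deploy check that flags IAM-style policy statements granting wildcard actions or resources. The policy shape follows the common AWS-style JSON layout; the structure and field names are assumptions you would adapt to your own cloud provider's format.

```python
# Minimal sketch: flag Allow statements that use "*" in Action or Resource,
# so a CI step can fail the build before an overly broad policy ships.
# Assumes an AWS-style policy document shape; adapt for other clouds.

def overly_broad_statements(policy):
    """Return the Allow statements that grant wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue  # Deny statements are not the concern here
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a single string or a list in AWS-style JSON.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        too_broad_action = any(a == "*" or a.endswith(":*") for a in actions)
        too_broad_resource = "*" in resources
        if too_broad_action or too_broad_resource:
            findings.append(stmt)
    return findings
```

A CI job could load each policy file in the repo, call this check, and fail the pipeline with the offending statements printed, so the fix happens at review time rather than after deployment.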
In essence, the year ahead will demand cloud security practices that are predictive, automated, and business-centric, enabling security leaders to speak the language of risk and resilience while adapting to cloud’s accelerating pace of change.

2. KONNI Goes After Blockchain Engineers With AI-Assisted Malware
North Korea–aligned operators tied to the KONNI cluster (also tracked as Opal Sleet / TA406) are actively targeting blockchain engineers with a phishing chain that looks almost boring—until you follow the execution path. Check Point says the lure arrives as a Discord-hosted ZIP that drops a decoy document plus a malicious Windows shortcut (LNK). Clicking the shortcut quietly launches PowerShell, stages additional components, and sets up persistence via a scheduled task that re-runs hourly while trying to blend in as a OneDrive startup task.
The part that should make every engineering leader sit up: the backdoor itself shows strong signs of AI-assisted development. Researchers point to unusually polished structure and “tutorial-style” comments embedded in the malware—exactly the kind of artifacts you’d expect when code is generated with a large language model and then minimally adapted. It’s also not a smash-and-grab implant: it performs anti-analysis checks, fingerprints the host, then phones home to a C2 that can return additional PowerShell for asynchronous execution.
Why does this matter beyond “yet another phishing campaign”? Because dev environments are a force-multiplier. If an adversary compromises the workstation of someone who touches CI/CD, cloud consoles, API keys, signing certs, or wallet infrastructure, the blast radius can jump from one device to many environments fast. That’s why this campaign’s “blockchain project documentation” theming is so effective: it’s tuned to the workflows and curiosity of engineers, not generic office staff.
If you’re defending engineering teams, treat this as a reminder to harden the basics in places we often underinvest: restrict LNK execution where possible, tighten macro/script policies, monitor scheduled task creation (especially anything masquerading as OneDrive), and reduce credential sprawl (short-lived tokens, scoped cloud roles, vaulting, and aggressive key rotation). Most importantly: assume adversaries will weaponize public PoCs and AI-assisted code quickly—your detection and containment have to be faster than their iteration loop.
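To make the “monitor scheduled task creation, especially anything masquerading as OneDrive” advice concrete, here is a minimal triage sketch. It is a hedged illustration, not a detection rule from the Check Point report: the regexes and task-record shape are assumptions, and in practice the input could come from `schtasks /query /fo CSV /v` output or Windows event logs.

```python
# Minimal sketch: flag scheduled tasks with OneDrive-themed names whose
# action runs a script host (PowerShell, cmd, etc.) instead of the real
# OneDrive binary. Name patterns here are illustrative assumptions.
import re

LEGIT_ONEDRIVE = re.compile(r"\\OneDrive(Setup)?\.exe\b", re.IGNORECASE)
SCRIPT_HOSTS = re.compile(r"\b(powershell|pwsh|cmd|wscript|cscript|mshta)\b",
                          re.IGNORECASE)

def suspicious_onedrive_tasks(tasks):
    """tasks: iterable of (task_name, command_line) tuples.
    Returns the tasks that look like OneDrive-masquerading script launchers."""
    hits = []
    for name, command in tasks:
        if "onedrive" not in name.lower():
            continue  # only OneDrive-themed task names are in scope here
        if LEGIT_ONEDRIVE.search(command):
            continue  # action points at the genuine OneDrive executable
        if SCRIPT_HOSTS.search(command):
            hits.append((name, command))  # OneDrive name, script-host action
    return hits
```

Anything this flags deserves a human look at the task's full action, creation time, and creating user, since a legitimate admin script could trip the same heuristic.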

AI and the Software Vulnerability Lifecycle: Transforming Discovery, Patching, and Exploitation
A recent analysis from Georgetown’s Center for Security and Emerging Technology (CSET) examines how artificial intelligence—especially large language models (LLMs)—is reshaping the software vulnerability lifecycle by accelerating automation across discovery, remediation, and even exploitation phases. The report highlights that AI tools are increasingly being integrated with traditional security tooling such as static analysis and fuzzers, enabling defenders to shift vulnerability testing earlier into the development pipeline and identify likely “hot spots” for defects. While mathematical limits persist for fully automated discovery due to program complexity, AI excels at pattern generalization, meaning it can efficiently uncover variants of known vulnerability types by correlating broad datasets and developer insights.
Automated patching is also advancing: AI-generated code can propose and sometimes implement fixes much faster than human teams, reducing exposure windows. However, the challenges of ensuring correctness, avoiding regression, and avoiding “patch the exploit but not the root cause” remain. Human validation and contextual reasoning are still essential in complex codebases.
On the tooling side, AI’s role in exploitation underscores the dual-use tension: models capable of analyzing code structures can also assist in generating exploit scripts that test defenses or, if misused, empower attackers. Current models can generate reliable exploit code for higher-level languages and known vulnerability patterns with minimal human tweaks, though lower-level memory corruption exploitation remains more nuanced.
For cybersecurity practitioners and software leaders, the takeaway is that AI is not a silver bullet but a force multiplier: properly integrated, it accelerates secure development and remediation workflows, but it also demands rigorous governance, validation, and oversight to avoid amplifying risk.

3. New Kimwolf Botnet and the Hidden Risk of Unmanaged IoT Behind the Firewall
Our favorite cyber investigative journalist, Brian Krebs, recently published a story about a new IoT botnet dubbed Kimwolf, and it’s a reminder that “edge devices” aren’t just a home problem anymore. Researchers say it’s already spread to millions of devices and is being used for large-scale DDoS and traffic relaying (think: ad fraud, credential stuffing/account takeover attempts, and scraping). The more sobering detail: Kimwolf doesn’t stop at the infected device—it can scan the local network of that device looking for other vulnerable IoT targets to pull into the swarm.
What makes Kimwolf different is how it grew so quickly: it leaned on the ecosystem of residential proxy services—networks of consumer devices that route other people’s internet traffic. According to the reporting, the botnet exploited proxy endpoints (notably those associated with IPIDEA) to push malicious commands “downstream” into internal networks behind those endpoints, then programmatically hunt for easy-to-compromise devices. The most common victims appear to be unofficial Android TV streaming boxes (AOSP-based, not Play Protect certified), many of which ship with proxy software pre-installed and weak—or nonexistent—security controls.
If you’re thinking, “That’s sketchy consumer gear, why would it be in my environment?”—that’s the point. Infoblox reported seeing evidence of Kimwolf-related activity across a meaningful slice of customer environments, and other researchers flagged proxy endpoints showing up inside universities, utilities, healthcare, finance, and government networks. A DNS query tied to Kimwolf doesn’t automatically prove compromise of internal systems, but it does suggest something inside the perimeter was used as a staging point to probe laterally—and lateral movement only needs one forgotten device with a default password to turn into a real incident.
The security takeaway is straightforward: this is an exposure-management problem wearing a botnet costume. Organizations need to treat residential proxy presence as a high-risk indicator, hunt for unmanaged/consumer devices on corporate networks (especially streaming boxes and “smart” endpoints), and tighten segmentation so a single infected node can’t enumerate your internal address space. If you can’t explain why a proxy endpoint exists inside your network, you should assume an adversary will eventually try to use it as a pivot.
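A practical starting point for that hunt is a triage pass over DNS resolver logs: which internal clients are querying domains tied to the botnet or to residential-proxy services? The sketch below assumes a simple (client, queried-name) log shape, and the indicator suffixes are pure placeholders; swap in vetted IOCs from your threat-intel feed, not these example strings.

```python
# Minimal sketch: group internal clients by the indicator domains they
# queried. Suffix matching catches both the apex domain and subdomains.
# The indicator list below is illustrative, not real IOC data.
from collections import defaultdict

INDICATOR_SUFFIXES = ("kimwolf-c2.example", "proxy-pool.example")  # placeholders

def flag_dns_clients(records, suffixes=INDICATOR_SUFFIXES):
    """records: iterable of (client_ip, queried_name).
    Returns {client_ip: [matching names]} for clients that hit an indicator."""
    hits = defaultdict(list)
    for client, qname in records:
        name = qname.rstrip(".").lower()  # normalize trailing dot and case
        if any(name == s or name.endswith("." + s) for s in suffixes):
            hits[client].append(name)
    return dict(hits)
```

As the article notes, a match doesn’t prove the internal system is compromised, but it tells you exactly which hosts to pull into segmentation review and deeper forensics first.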
Thanks for reading!
About us: Echelon is a full-service cybersecurity consultancy that offers holistic cybersecurity program building through vCISO or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about