Cyber Intelligence Weekly

Cyber Intelligence Weekly (February 1, 2026): Our Take on Three Things You Need to Know

Welcome to our weekly newsletter where we share some of the major developments on the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page for more updates!

To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe

Before we turn to this week’s edition of Cyber Intelligence Weekly, I want to introduce a new CISO Spotlight Series: The Human Side of Cybersecurity.

This series is grounded in conversation rather than commentary. It centers on CISOs who are in the seat—navigating real leadership pressure, complex risk decisions, and the human realities of building and sustaining security programs. Some are earlier in their journey, others further along paths many of you may recognize or aspire toward. What they share isn’t theory. It’s experience—earned through moments of progress, frustration, growth, and reflection. These conversations are for the professionals who show up every day to quietly carry the weight of this industry.

As part of the series, I sat down with Robert Kemp, CISO of Federated Hermes, to talk about his path to cybersecurity leadership.

Bob Kemp (Federated Hermes) — “Speak in hot dogs, not acronyms.”

Some CISOs come up through the classic IT path. Bob Kemp didn’t. He started in accounting and finance, flirted with the idea of becoming an FBI agent doing forensic accounting, and eventually landed in tech the hard way — learning how work actually gets done inside real organizations. That origin story matters, because it shaped the theme of our whole conversation: security isn’t a technology job. It’s a business leadership job that happens to use technology.

Bob’s career has a recurring pattern: he earns trust by helping the organization win. At UPS, he built simple measurement and visibility into the accounting function (the “you can’t improve what you don’t measure” lesson) and watched performance jump dramatically. At Computer Associates, he lived through a major pivot from “sell to customers” to “serve customers” — a trusted-advisor model that drove NPS into the 90s. And when Sheetz recruited him into security (even though he openly admitted he didn’t “know squat about security”), they bet on his leadership + business understanding… and trained the rest.

My favorite moment was Bob’s story about presenting to executives early in his security career. His “five-minute update” turned into 45 minutes of questions, because the real barrier wasn’t security controls — it was translation. The breakthrough came when he started speaking in their language: cost per store day and the classic, brutal executive math: “How many hot dogs do we need to sell to pay for this?” Once he framed security as operational impact, the conversation changed from “why do we need this?” to “what else do you need?”

Bob also drops a handful of leadership principles worth printing on a billboard:

  • You don’t own the risk — the business does. Security’s job is to identify, explain, and help mitigate… but risk ownership lives with leadership.
  • A good idea is always a good idea — timing is everything. If it’s not funded today, don’t bury it. Put it on the backlog and bring it back with context.
  • Go back to basics. In incidents and operations, 95% of the time the answer is in the fundamentals: what changed, what’s misconfigured, what’s actually happening.
  • Most underhyped control: egress filtering — because something bad will slip through, and outbound visibility can save your bacon.

If you’re a security leader (or you mentor people who want to be one), this episode is a masterclass in the soft stuff that drives hard outcomes: communication, trust-building, knowing when to stand your ground, and doing the back-channel work so that by the time you’re in the approval meeting… you already know it’s approved.

🎥 Watch the full conversation here: https://youtu.be/AssmUhgDFpQ?si=G9BP45gvz5BEi9Pi

_________________________________

RSA 2026, Come Meet With Us!

Looking ahead to upcoming events, we hope to meet you at RSAC 2026! If you are heading out to RSA and want to meet up with the Echelon team, come see us at our exclusive happy hour at the prestigious Olympic Club, a quiet oasis from the hustle and bustle of RSA.

Join Echelon Risk + Cyber for our annual RSA Happy Hour at The Olympic Club on Monday, March 23, from 3:00–5:00 PM PT, just before the official RSA Welcome Reception.

Good people, great conversations, easy networking, no pressure.

📍 The Olympic Club, San Francisco

👉 Reserve your spot here: https://lnkd.in/enj3Q9D3
 

Away we go!

1.  Moltbook’s ‘Agent Internet’ Moment Undone by a Classic Backend Misconfig

In the span of a few days, Moltbook went from a niche curiosity to a viral talking point: a “social feed” where autonomous AI agents post and interact, sometimes with little obvious human steering. That hype created a bigger problem than the bots themselves—because behind the scenes, the platform’s backend configuration allegedly left the keys to the kingdom sitting out in the open. According to reporting by 404 Media, a publicly reachable database endpoint exposed sensitive agent data that could allow an attacker to seize control of any agent account and make it post whatever they wanted.

The issue was uncovered by security researcher Jameson O'Reilly, who told the outlet he found that the site’s backend—built on Supabase—appeared to be missing (or misusing) Row Level Security controls on a critical table. In Supabase’s model, tables in the “public” schema are reachable through its data API, and without RLS, those tables can become accessible broadly via the anon role. In this case, the published project URL and key were reportedly enough to enumerate and retrieve agent secrets, including API keys and account-claim material—effectively turning “autonomous agents” into “accounts anyone could puppet.”
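To make the exposure concrete, here is a minimal sketch of why a missing-RLS table is readable by anyone. Supabase's data API serves tables in the "public" schema over REST, and the project URL and anon key ship with every frontend. The project URL, key, and table name below are hypothetical placeholders, not Moltbook's real values:

```python
import urllib.request

def build_select_request(project_url: str, anon_key: str, table: str) -> urllib.request.Request:
    """Construct the REST call an outsider could replay using only public values."""
    # Supabase's data API follows the PostgREST convention: /rest/v1/<table>
    url = f"{project_url}/rest/v1/{table}?select=*"
    return urllib.request.Request(
        url,
        headers={
            "apikey": anon_key,                     # the public anon key
            "Authorization": f"Bearer {anon_key}",  # no privileged credential involved
        },
    )

req = build_select_request(
    "https://example-project.supabase.co",  # published with the frontend
    "anon-key-from-page-source",            # also published with the frontend
    "agents",                               # hypothetical table holding agent secrets
)
# Without RLS enabled on "agents", sending this request would return every row,
# API keys and claim material included. With RLS on and no permissive policy,
# the same request returns nothing.
```

The point of the sketch: nothing in this request is secret, which is exactly why RLS (or keeping secrets out of API-exposed schemas entirely) is the control that matters.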

That distinction matters, because Moltbook’s virality has been driven by people reading meaning into what agents “decide” to say. If an outsider can hijack an agent and post in its name, then the social proof collapses: you can’t trust whether a post represents an agent’s output, the creator’s prompting, or someone else entirely. O’Reilly specifically warned about high-visibility fallout—like compromising an agent associated with Andrej Karpathy and using it to publish convincing scams or inflammatory statements that would ricochet across social media long before corrections caught up.

The fix, by O’Reilly’s telling, was not exotic—more “security 101” than “AI safety debate”: lock down database access, enable RLS, and apply explicit policies so secrets never sit in an exposed schema. Supabase’s own guidance is blunt: RLS should be enabled on exposed tables, and once it’s enabled, data shouldn’t be accessible through the public anon key unless policies allow it. Moltbook’s creator Matt Schlicht did not comment to the outlet at the time, but the exposed database was later closed and the researcher said the team re-engaged to improve security.

Attackers Use Stolen AWS Credentials in Cryptomining Campaign

In an active cloud threat campaign, attackers have leveraged compromised AWS Identity and Access Management (IAM) credentials to hijack cloud infrastructure for cryptomining. Using valid keys — not infrastructure vulnerabilities — adversaries gained admin privileges to deploy unauthorized resources across AWS ECS and EC2 environments, running mining operations within customer accounts.

What sets this campaign apart for GRC professionals is its emphasis on the shared responsibility model: attackers did not breach AWS itself, but exploited weak customer identity controls. This highlights that identity and credential hygiene is now a core pillar of cloud risk management. Secure IAM configurations, rotation of access keys, use of temporary credentials, and enforcing multi-factor authentication are foundational controls that directly reduce this class of risk.

From a governance and compliance standpoint, regular auditing of cloud API activity logs via tools like AWS CloudTrail, the application of anomaly detection services such as GuardDuty, and implementation of least-privilege policies reduce exposure to credential theft and misuse.

Additionally, this campaign demonstrates the importance of monitoring service quotas and automated provisioning calls — attackers used the DryRun API flag to probe permissions without incurring costs, a tactic that can evade detection. Visibility into unusual usage patterns, coupled with alerting on policy changes or role creations, fortifies cloud environments against misuse. Risk and compliance teams must integrate credential risk assessments into enterprise cloud posture reviews to prevent similar abuse and avoid financial and operational impact.
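The DryRun tactic is detectable because CloudTrail records dry-run EC2 calls as errors with the code "Client.DryRunOperation". Below is an illustrative sketch, not a production detection rule, of flagging access keys that issue bursts of such calls; the threshold and field handling are assumptions for the example:

```python
from collections import Counter

# CloudTrail logs EC2 calls made with DryRun=true as failed calls carrying
# this error code -- a cheap way for an attacker to probe what a stolen key
# can do without creating billable resources.
DRY_RUN_ERROR = "Client.DryRunOperation"

def flag_dryrun_probing(events: list[dict], threshold: int = 3) -> dict[str, int]:
    """Return access key IDs that issued at least `threshold` dry-run calls."""
    counts = Counter(
        e.get("userIdentity", {}).get("accessKeyId", "unknown")
        for e in events
        if e.get("errorCode") == DRY_RUN_ERROR
    )
    return {key: n for key, n in counts.items() if n >= threshold}

# Minimal synthetic CloudTrail-style records for illustration:
sample = [
    {"eventName": "RunInstances", "errorCode": "Client.DryRunOperation",
     "userIdentity": {"accessKeyId": "AKIAEXAMPLE1"}},
] * 4 + [
    {"eventName": "DescribeInstances",
     "userIdentity": {"accessKeyId": "AKIAEXAMPLE2"}},
]

print(flag_dryrun_probing(sample))  # {'AKIAEXAMPLE1': 4}
```

In practice this logic would live in a SIEM or GuardDuty-adjacent alerting rule rather than a script, but the signal—repeated dry-run failures from a single key—is the same.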

2.  Real-Time Vishing Is Turning SSO Into a Human Attack Surface

A new wave of voice-phishing attacks is exposing a familiar truth about identity security: attackers don’t need zero-days when they can manipulate people in real time. Threat researchers are tracking an active campaign in which cybercriminals use phone calls paired with live phishing infrastructure to compromise single sign-on (SSO) accounts, then pivot directly into SaaS environments to steal data and attempt extortion. Some of the activity has been publicly linked by the actors themselves to the ShinyHunters name, though researchers caution that attribution remains fluid.

What makes this campaign different isn’t the concept of phishing—it’s the execution. Attackers are registering custom domains that closely mimic legitimate SSO portals and using purpose-built vishing kits that let them synchronize what the victim sees in their browser with what they hear on the phone. As the victim is coached through a “security verification” call, the attacker can trigger multifactor authentication prompts in real time, dramatically increasing the odds that the victim approves access without realizing what’s happening. Identity providers including Okta have confirmed seeing phishing kits capable of replicating authentication flows for multiple platforms, including enterprise and cloud services.

Once initial access is gained, the impact can escalate quickly. Investigators report attackers moving laterally into SaaS platforms, exfiltrating sensitive data, and in some cases issuing extortion demands. Organizations across financial services, media, and consumer platforms have disclosed recent incidents tied to social engineering—not software flaws—underscoring that the weakest link remains human trust rather than identity infrastructure itself. Providers like Microsoft and Google have said there’s no evidence their platforms were technically compromised.

The broader takeaway is uncomfortable but clear: SSO and MFA are necessary, not sufficient. Real-time vishing attacks exploit gaps in user awareness, approval fatigue, and weak identity governance around device enrollment and session monitoring. Defenders should assume these techniques will continue to evolve and focus on layered controls—stronger conditional access policies, phishing-resistant MFA, tighter SaaS telemetry, and regular user education that goes beyond “don’t click links” to include “don’t trust live security calls.” Identity may be the front door, but it’s still guarded by people.

 

On the Coming Industrialization of Exploit Generation with LLMs

In a recent blog post by Sean Heelan, AI’s rapid progress is poised to fundamentally change offensive cybersecurity by enabling what he terms the industrialization of exploit generation. Heelan’s experiments show LLM-powered agents built on models like GPT-5.2 and Opus 4.5 autonomously crafting working exploits for a previously unknown QuickJS vulnerability across multiple scenarios. These agents generated diverse exploit chains with minimal human intervention and within hours using modest token budgets, hinting at a future where human effort is no longer the limiting factor in exploit development.

The concept of industrialization here means that success in exploit creation will be driven less by the number of skilled operators than by the ability to allocate computational tokens and tooling effectively. This shift reduces barriers to entry for offensive capabilities and could democratize exploit production beyond elite groups. While these models do not yet invent fundamental breaks in advanced mitigations like ASLR or CFI, they excel at assembling reliable chains from known structures and components, accelerating tasks traditionally requiring deep expertise.

Importantly, Heelan acknowledges limitations: larger, real-world targets like Chrome or Firefox present much broader complexity, and automation against live environments still carries detection risk. However, if agentic systems continue improving, a future where AI routinely generates zero-day exploits or adapts attacks with little human guidance becomes plausible.

For defenders, this underscores the need to shift focus from reactive patching to proactive architectural hardening, automated exploitation-resistance testing, and cloud-scale telemetry capable of detecting AI-orchestrated attack patterns at machine speed.

3.  When AI Toys Leak Trust: A Wake-Up Call for Child Data Security

A security lapse at Bondu, a startup selling AI-enabled stuffed toys for children, has exposed a deeply uncomfortable reality about connected products designed for kids. Security researchers recently discovered that more than 50,000 private chat transcripts between children and Bondu’s toys were accessible through the company’s web portal—no hacking required. Logging in with any standard Gmail account was enough to view sensitive conversations that were never meant to leave a child’s bedroom.

The exposed data included children’s names, birth dates, family details, preferences, and full transcripts of their conversations with the toy—information specifically designed to feel personal and emotionally engaging. While Bondu intended the portal to be used by parents and internal staff, it lacked even basic access controls. The researchers reported the issue, and the company quickly took the portal offline and relaunched it with stronger authentication. Bondu says it found no evidence of misuse beyond the researchers themselves, but the ease of access raised immediate red flags about how long the data may have been visible.
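The gap here is the classic split between authentication ("is this a valid Google login?") and authorization ("is this account actually linked to this child?"). A hypothetical sketch of the per-object check the portal was reportedly missing—every name and data structure below is invented for illustration:

```python
# Maps child_id -> set of account emails allowed to read that child's transcripts.
AUTHORIZED_PARENTS = {
    "child-123": {"parent@example.com"},
}

TRANSCRIPTS = {
    "child-123": ["Hi teddy, guess what happened at school today..."],
}

def get_transcripts(requesting_user: str, child_id: str) -> list[str]:
    # Authentication alone (any valid Gmail login) is effectively what the
    # portal relied on. The missing step is this object-level check tying the
    # logged-in account to the specific child record being requested:
    if requesting_user not in AUTHORIZED_PARENTS.get(child_id, set()):
        raise PermissionError("account is not linked to this child")
    return TRANSCRIPTS[child_id]
```

With the check in place, a stranger's valid login fails at the authorization step instead of walking away with transcripts—the difference between "logged in" and "allowed".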

Beyond the technical misstep, the incident highlights a broader and more troubling risk: AI toys don’t just collect data—they collect trust. These products are engineered to encourage kids to open up, remember past conversations, and build emotional continuity. In the wrong hands, that information could be used for manipulation, impersonation, or worse. As one researcher put it, the data set represented exactly the kind of intelligence someone could exploit to deceive or harm a child.

The exposure also reignites concerns about how AI toy companies build and secure their platforms, especially as many rely on third-party AI services such as Google and OpenAI to process conversations. Even when safety filters work as intended, security failures can render those protections meaningless. The lesson here is blunt but necessary: when it comes to AI products for children, safety claims mean little without rigorous security, strong access controls, and a default assumption that this data should be treated as highly sensitive—because it is.

 

Thanks for reading!

About us: Echelon is a full-service cybersecurity consultancy that offers holistic cybersecurity program building through vCISO or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about

Are you ready to get started?