Cyber Intelligence Weekly

Cyber Intelligence Weekly (November 23, 2025): Our Take on Three Things You Need to Know

Welcome to our weekly newsletter, where we share the major developments shaping the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to receive these updates regularly!

To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe

Before we get started on this week’s CIW, I’d like to highlight our new thought leadership article by @Alyssa Slayton on the FedRAMP 20x program.

𝗙𝗲𝗱𝗥𝗔𝗠𝗣 𝗶𝘀𝗻’𝘁 𝗷𝘂𝘀𝘁 𝗰𝗵𝗮𝗻𝗴𝗶𝗻𝗴. 𝗜𝘁’𝘀 𝘀𝗽𝗲𝗲𝗱𝗶𝗻𝗴 𝘂𝗽.

FedRAMP 20x introduces automation, faster paths for cloud services, and real-time monitoring. If you work anywhere near the federal cloud, this shift will touch your world. In this article, Alyssa Slayton breaks down what’s changing, who it impacts, and how to get ready.

💡 Read Alyssa’s article to get ahead: https://lnkd.in/g2g7M9Ku

Away we go!

 

1. How a Routine Database Change Triggered Cloudflare’s Worst Outage in Years

When Cloudflare went dark on November 18, the outage didn’t just take down a CDN—it temporarily dimmed a sizeable portion of the modern Internet. Around 11:20 UTC, users across the globe began seeing 5xx errors on sites that rely on Cloudflare’s edge. What looked, at first glance, like the fingerprints of yet another hyper-scale DDoS campaign turned out to be something far more mundane—and far more unsettling. A routine permissions update inside a backend ClickHouse database caused Cloudflare’s Bot Management system to generate a malformed internal configuration file. That file doubled in size, breached a hard-coded memory limit in Cloudflare’s core proxy engine, and brought requests to a halt across one of the most widely used cloud networks in the world.

Cloudflare’s engineers discovered the issue only after chasing down several misleading clues, including the eerie coincidence of the company’s own status page going offline at the same moment. Once the culprit was isolated, the fix was straightforward: stop distributing the enlarged configuration file, inject a known-good version, and restart affected services. Traffic began flowing normally again by mid-afternoon. The deeper concern, however, wasn’t the bug itself—it was how something so small inside a single cloud provider’s internal logic could knock over authentication systems, dashboard access, bot scoring engines, CDN performance, Workers KV, and a long list of downstream services that tens of thousands of businesses depend on.
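The failure pattern suggests a general defensive habit worth sketching: treat machine-generated configuration as untrusted input, validate it against the same hard limits its consumers enforce, and keep a last-known-good copy to roll back to. The snippet below is a minimal illustration of that pattern only; the file names, the feature-count cap, and the loader are assumptions, not Cloudflare's actual pipeline.

```python
import json
import shutil
from pathlib import Path

MAX_FEATURES = 200   # illustrative hard cap, mirroring a consumer-side memory limit
ACTIVE = Path("bot_features.json")               # config the proxy actually reads (hypothetical name)
LAST_GOOD = Path("bot_features.last_good.json")  # most recent config that passed validation

def publish_config(candidate: Path) -> bool:
    """Validate a freshly generated config before making it live; fall back on failure."""
    try:
        features = json.loads(candidate.read_text())
        if not isinstance(features, list):
            raise ValueError("expected a list of features")
        if len(features) > MAX_FEATURES:
            raise ValueError(f"feature count {len(features)} exceeds limit {MAX_FEATURES}")
    except ValueError as exc:            # json.JSONDecodeError is a subclass of ValueError
        print(f"Rejecting new config, keeping last known good: {exc}")
        shutil.copy(LAST_GOOD, ACTIVE)   # fail safe on bad input instead of crashing consumers
        return False
    shutil.copy(candidate, ACTIVE)       # candidate passed its checks: promote it
    shutil.copy(candidate, LAST_GOOD)    # and remember it as the new rollback point
    return True
```

The point is less the code than the order of operations: the limit that would crash the consumer gets checked at publish time, and a rejected file never propagates.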

Incidents like this lay bare the fragility baked into Internet-scale architectures. Organizations have spent the past decade consolidating core functions—security, caching, DNS, identity, traffic optimization—into a handful of hyperscale platforms. That consolidation has brought enormous efficiency and resilience… right up until one of those platforms experiences a systemic failure. Cloudflare did not suffer a cyberattack, no adversary infiltrated the platform, and no hardware failed. A small database permissions change triggered a cascade that stretched across continents. If a single mis-sized “feature file” can cause a global outage, it raises an uncomfortable but necessary question for defenders: How many of our critical workloads are now single points of failure sitting behind someone else’s abstraction layer?

To their credit, Cloudflare was transparent, apologetic, and aggressive in identifying follow-up actions. But the lesson for enterprises is not Cloudflare-specific—it’s structural. Critical business services increasingly depend on edge providers, API gateways, authentication services, and cloud-layer automation systems that share common dependencies under the hood. Outages travel fast in an interconnected world. A misconfiguration that used to cause a bad hour for a single engineering team now disrupts hospitals, banks, retailers, SaaS platforms, and local governments in one sweep. Cloudflare’s outage is a reminder that resilience is no longer about redundant servers—it’s about multi-provider failover, architectural diversity, and deeply understanding where your upstream risk truly lives.

PowerUserAccess vs. AdministratorAccess in AWS: An Attacker’s Perspective

When securing AWS environments, understanding the differences between PowerUserAccess and AdministratorAccess is essential—especially from an attacker’s point of view. Both policies grant significant privileges, but they differ in their scope and potential for exploitation.

AdministratorAccess offers full control over AWS resources, allowing the user to manage services, IAM policies, and security configurations without restriction. From an attacker’s perspective, gaining AdministratorAccess is the ultimate goal, as it provides unrestricted power to create, modify, and delete resources, disable logging, or create backdoors.
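The scoping difference shows up directly in the policy documents themselves. The snippets below are close paraphrases of the two AWS managed policies, expressed as Python dicts for readability; treat them as illustrative and check the live versions in the IAM console for the authoritative text.

```python
# Close paraphrase of the AWS managed AdministratorAccess policy: every action on every resource.
ADMINISTRATOR_ACCESS = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}

# Approximate shape of PowerUserAccess: everything *except* identity and account management,
# plus a few narrow carve-outs (service-linked roles and read-only lookups).
POWER_USER_ACCESS = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": ["iam:*", "organizations:*", "account:*"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:DeleteServiceLinkedRole",
                "iam:ListRoles",
                "organizations:DescribeOrganization",
            ],
            "Resource": "*",
        },
    ],
}
```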

PowerUserAccess, on the other hand, is somewhat more limited. It grants full access to AWS services but excludes IAM, AWS Organizations, and account-level management (aside from a handful of read-only and service-linked-role exceptions). This means that while attackers with PowerUserAccess can still perform destructive actions—such as manipulating EC2 instances, altering S3 buckets, or pushing malicious code into Lambda functions—they cannot modify IAM policies or create new privileged users directly. Even so, attackers can still escalate privileges from PowerUserAccess. For example, they could:

  • Push malicious code into existing Lambda functions that already execute under privileged IAM roles.
  • Assume higher-privileged roles via STS (Security Token Service) wherever a role’s trust policy already trusts their principal, as sketched below.
  • Abuse service-specific permissions (for example, running commands on compute that already carries a privileged role) to reach IAM capabilities indirectly.
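To make the role-assumption path concrete, here is a minimal sketch of the check an attacker (or a defender emulating one) could run with PowerUserAccess-level credentials. It assumes boto3 is installed and credentials are already configured; the target role ARN is a hypothetical placeholder, and the call only succeeds if that role's trust policy already trusts the caller.

```python
import boto3
from botocore.exceptions import ClientError

def try_assume(role_arn: str) -> None:
    """Attempt to assume a role. PowerUserAccess permits sts:AssumeRole,
    so the only gate left is the target role's trust policy."""
    sts = boto3.client("sts")
    try:
        creds = sts.assume_role(
            RoleArn=role_arn,
            RoleSessionName="escalation-check",  # arbitrary session label
        )["Credentials"]
    except ClientError as exc:
        print(f"{role_arn}: denied ({exc.response['Error']['Code']})")
        return
    # Success means the caller now holds whatever the role grants,
    # potentially including the iam:* actions PowerUserAccess itself withholds.
    assumed = boto3.client(
        "sts",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(f"{role_arn}: assumed, now acting as {assumed.get_caller_identity()['Arn']}")

# Hypothetical target; candidates can be enumerated with iam:ListRoles,
# which the PowerUserAccess carve-outs still allow.
try_assume("arn:aws:iam::123456789012:role/AdminMaintenanceRole")
```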

While AdministratorAccess is the most dangerous, PowerUserAccess still presents a significant risk. Attackers can exploit service-level gaps to escalate privileges or gain persistence. Organizations should treat both policies with caution, applying least-privilege principles, monitoring for suspicious activity, and regularly reviewing permissions to minimize the risk of privilege escalation.

Mitigation Tip: Regularly audit AWS policies, use Service Control Policies (SCPs) to enforce guardrails, and limit the use of broad privileges like PowerUserAccess and AdministratorAccess to essential personnel only.
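One way to start that audit is to enumerate every principal with one of these two managed policies directly attached. The sketch below is a minimal example using boto3; it assumes credentials with IAM read permissions and only covers directly attached managed policies, not inline policies or permissions inherited through other paths.

```python
import boto3

# ARNs of the two AWS managed policies discussed above.
BROAD_POLICIES = [
    "arn:aws:iam::aws:policy/AdministratorAccess",
    "arn:aws:iam::aws:policy/PowerUserAccess",
]

def report_broad_attachments() -> None:
    """Print every user, group, and role with AdministratorAccess or PowerUserAccess attached."""
    iam = boto3.client("iam")
    for policy_arn in BROAD_POLICIES:
        paginator = iam.get_paginator("list_entities_for_policy")
        for page in paginator.paginate(PolicyArn=policy_arn):
            for user in page["PolicyUsers"]:
                print(f"{policy_arn}: user {user['UserName']}")
            for group in page["PolicyGroups"]:
                print(f"{policy_arn}: group {group['GroupName']}")
            for role in page["PolicyRoles"]:
                print(f"{policy_arn}: role {role['RoleName']}")

if __name__ == "__main__":
    report_broad_attachments()
```

Each hit is a question to answer: does this principal truly need the policy, or can it be replaced with something narrower?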

2. Azure Withstands Historic 15.7 Tbps Barrage From Aisuru IoT Army

When Microsoft quietly confirmed that Azure had just absorbed the largest cloud DDoS attack ever recorded, it marked another escalation in a year defined by outsized botnet activity. On October 24, Azure’s defenses faced a staggering 15.7 terabit-per-second flood hammering a single public endpoint in Australia. The traffic wasn’t subtle—it arrived as a wall of UDP packets, nearly 3.64 billion per second at the peak, coming from more than half a million compromised devices spread across the globe. Despite the scale, Azure’s global protection network held, filtering waves of junk traffic before they could destabilize customer workloads.
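Those two peak figures also hint at the traffic's shape. Assuming both numbers describe the same instant (a simplification), dividing bandwidth by packet rate puts the average packet at roughly 540 bytes, a profile consistent with a high-rate UDP flood rather than tiny SYN-sized packets.

```python
# Back-of-the-envelope check on the reported peaks
# (assumes both figures were measured at the same instant).
peak_bits_per_second = 15.7e12     # 15.7 Tbps
peak_packets_per_second = 3.64e9   # 3.64 billion packets per second

avg_packet_bytes = peak_bits_per_second / 8 / peak_packets_per_second
print(f"average packet size ~ {avg_packet_bytes:.0f} bytes")  # ~ 539 bytes
```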

Behind the barrage was the Aisuru botnet, a rapidly growing TurboMirai-style IoT swarm that has been busy redefining what “large” means in the DDoS world. Aisuru is made up of everyday hardware—home routers, cameras, consumer CPE gear—that’s been swept up through ongoing exploitation campaigns. Its operators have turned these devices into a DDoS-for-hire engine that favors high-bandwidth, direct-path attacks. Because the compromised devices rarely spoof traffic and often sit on fast residential fiber, attribution has become easier even as the attack volumes continue to climb.

What makes Aisuru especially concerning is how quickly it has scaled this year. Netscout recently reported that the same botnet launched attacks exceeding 20 Tbps in October, primarily against gaming platforms and broadband carriers. Some of those floods topped 4 gigapackets per second—enough to knock over router line cards at major ISPs and disrupt customer connectivity. Cloudflare, for its part, tied Aisuru to a record 22.2 Tbps attack back in September. The pattern is clear: Aisuru’s operators are expanding, refining, and diversifying their playbook.

The botnet’s capabilities now stretch beyond basic volumetric traffic. It can generate UDP, TCP, and GRE floods; churn through dozens of TCP flag combinations; and even issue HTTPS-based attacks using residential proxies baked into the botnet itself. And like other modern TurboMirai variants, it’s not just about DDoS—operators also use it for credential abuse, automated scraping, and other opportunistic activities. With the holiday season approaching and global traffic volumes rising, Microsoft’s warning carries weight: organizations need to ensure their internet-facing services are prepared for DDoS pressure that is no longer the exception, but the norm.

Serious Bugs Found in Major AI Inference Frameworks

Researchers recently uncovered dozens of vulnerabilities across leading AI inference platforms used by Meta, Microsoft, NVIDIA, and other major vendors. These issues stem from insecure model-loading logic, unsafe parsing mechanisms, unchecked memory operations, and deserialization flaws in libraries that power production-scale inference. Many affected frameworks run with elevated privileges or broad access to GPUs, system resources, and shared memory.

The vulnerabilities are particularly concerning because AI model artifacts—weights, tokenizers, serialized files—are often treated as static and trustworthy. However, the research demonstrates that model files can be weaponized like malicious executables. A hostile actor could craft a poisoned model file that, when loaded, triggers remote code execution, privilege escalation, or inference runtime crashes. This makes the model-loading process a new form of supply-chain risk.
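As one hedged illustration of why that matters (not drawn from the specific frameworks in the research), Python pickle-based checkpoint formats can execute arbitrary code during deserialization, which is why loaders that restrict deserialization, or tensor-only formats, are the safer default. The snippet assumes a PyTorch-based pipeline and hypothetical file names; other stacks have analogous options.

```python
import torch                                # assumes a PyTorch-based pipeline
from safetensors.torch import load_file     # tensor-only format with no executable payload

UNTRUSTED_CHECKPOINT = "downloaded_model.pt"   # hypothetical artifact from an external source

# Risky: a plain torch.load on a pickle-based file can run attacker-controlled code at load time.
# state = torch.load(UNTRUSTED_CHECKPOINT)

# Safer: restrict deserialization to plain tensors and containers.
state = torch.load(UNTRUSTED_CHECKPOINT, weights_only=True)

# Safer still: require tensor-only formats such as safetensors for anything from outside.
state = load_file("downloaded_model.safetensors")  # hypothetical filename
```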

Moreover, inference servers are increasingly embedded into cloud applications, MLOps pipelines, and edge devices. A compromised inference runtime can allow attackers to pivot into internal networks, steal proprietary model IP, disrupt operations, or manipulate outputs without detection. The research highlights that AI supply chains remain immature compared to traditional software, lacking robust validation, sandboxing, and fuzz-testing.

Organizations deploying models—especially from untrusted or open-source sources—face increasing pressure to implement strict intake controls. Inference environments must be isolated, hardened, and monitored like any other high-risk compute pipeline. Vendors must also secure tokenizer libraries, memory handlers, op kernels, and custom CUDA extensions, all of which were shown to be exploitable.
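A minimal intake gate along those lines might verify an artifact's digest against an internal allowlist and flag pickle-bearing archives before anything attempts to load them. The sketch below is illustrative only; the allowlist, file layout, and opcode heuristic are assumptions, and it uses nothing beyond the Python standard library.

```python
import hashlib
import pickletools
import zipfile
from pathlib import Path

# Hypothetical allowlist, e.g. exported from an internal model registry.
APPROVED_SHA256 = {
    "9f2c...": "sentiment-classifier-v3",
}

# Pickle opcodes that construct arbitrary objects (and can therefore run code on load).
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def pickle_opcodes(blob: bytes) -> set:
    """Names of pickle opcodes present in a blob, or an empty set if it is not parseable pickle."""
    try:
        return {op.name for op, _, _ in pickletools.genops(blob)}
    except Exception:
        return set()

def admit(path: Path) -> bool:
    """Return True only if the artifact is allowlisted and contains no object-constructing pickle."""
    if sha256(path) not in APPROVED_SHA256:
        print(f"{path}: digest not on the allowlist, rejecting")
        return False
    # Zip-based checkpoint formats typically carry their pickle stream as an embedded .pkl entry.
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".pkl"):
                    risky = pickle_opcodes(zf.read(name)) & SUSPICIOUS_OPCODES
                    if risky:
                        print(f"{path}: {name} contains opcodes {sorted(risky)}, rejecting")
                        return False
    print(f"{path}: admitted")
    return True
```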

This research is a wake-up call: AI workloads are not “just data science.” They are production systems processing untrusted binary content, often with minimal isolation. The vulnerabilities expose a systemic gap in AI engineering culture, where performance and experimentation still overshadow security and runtime isolation best practices.

3. FCC Scraps Cyber Rules for Telcos—Even After Chinese Espionage Breaches

The FCC’s latest decision has reopened an old debate in Washington: how much responsibility should America’s telecom carriers bear for protecting the networks that foreign intelligence services actively target? This week, the Commission formally reversed a ruling it issued earlier this year that required U.S. carriers to build and certify annual cybersecurity risk-management plans. The original mandate—passed in January after the Salt Typhoon espionage campaign breached multiple U.S. carriers—was designed to tie baseline cybersecurity practices to the 1994 Communications Assistance for Law Enforcement Act (CALEA), the law governing lawful intercept capabilities. Now, under new leadership, the FCC has declared that framework “unlawful,” “ineffective,” and too rigid for industry operations.

The reversal arrives even though the underlying threat has not diminished. Salt Typhoon, a Chinese state-sponsored group, gained access to sensitive carrier systems throughout 2024, including infrastructure used to process federal court–authorized wiretap requests. Investigators warned at the time that the intrusions weren’t smash-and-grab incidents—they were slow-moving, deeply embedded reconnaissance operations against Verizon, AT&T, Lumen, T-Mobile, Charter, Consolidated, and Windstream. In other words, the exact kind of campaign that thrives in environments where security practices vary widely and oversight depends on self-reporting.

While the FCC insists carriers have taken voluntary steps since the breach, the lone dissenting vote on the Commission, Anna Gomez, argued that voluntary action is not a strategy—especially when foreign intelligence units continue probing U.S. telecom infrastructure today. Her point is hard to dismiss. Telecom networks sit at the center of national power: emergency communications, federal investigations, military coordination, and ordinary Americans’ daily lives. Treating cybersecurity as optional or self-regulated minimizes what Salt Typhoon revealed—that hostile governments see U.S. carriers not just as targets of opportunity but as long-term intelligence footholds.

For cybersecurity leaders, the policy shift underscores a broader trend: critical infrastructure security is increasingly being pushed down to private operators without corresponding regulatory guardrails. Whether telcos will voluntarily maintain the level of rigor the earlier rule required remains to be seen. But the stakes are clear. When adversaries are willing to burrow into the systems that handle lawful intercept requests, hoping carriers will simply “continue on the right path” may not be enough to meet the moment.

Thanks for reading!

About us: Echelon is a full-service cybersecurity consultancy offering holistic cybersecurity program building through vCISO services, along with more targeted solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about

Are you ready to get started?