
Cyber Intelligence Weekly (August 6, 2023): Our Take on Three Things You Need to Know

Welcome to our weekly newsletter where we share some of the major developments shaping the future of cybersecurity that you need to know about. Make sure to follow my LinkedIn page as well as Echelon’s LinkedIn page to stay up to date!

To receive these and other curated updates to your inbox on a regular basis, please sign up for our email list here: https://echeloncyber.com/ciw-subscribe

Before we get started on this week’s CIW, I’d like to highlight an exciting new Echelon webinar series - "Chew on This: Meaty Discussions on the Convergence of Security + Business." In these monthly lunchtime sessions, we'll serve up 30-minute, information-packed discussions covering essential topics at the intersection of cybersecurity and business. No tech deep-dives, no yawn-fests - just high-energy, high-value conversations aimed at helping organizations drive a more secure business.

🍽️ First Course: Communication – How to Build a Bridge Between Security and the Business

📅 Date: August 16, 2023

🕒 Time: 12:00 PM ET

🎤 Panelists: Matt Donato (Moderator), Paul Interval, Jeff Hoge

REGISTER HERE

In our inaugural event, we will delve into the critical issue of communication between security and business. How can businesses effectively communicate their security and risk programs to the board and other key stakeholders? Should security and risk leaders have a seat at the business table? When and how should they communicate with the board?


Away we go!

1. CMU Researchers Uncover Inherent Weakness in AI Chatbots

In a recent study, researchers at Carnegie Mellon University unveiled a simple way to bypass, or ‘jailbreak,’ the safety measures of several major AI chatbots, including ChatGPT, Bard, and Claude. The technique, a form of adversarial attack, tweaks the prompt given to a chatbot to nudge it into producing forbidden or harmful responses, sidestepping the guardrails these chatbots apply to block harmful output. Zico Kolter, an associate professor at CMU involved in the study, said the vulnerability underscores a fundamental weakness in advanced AI chatbots that current methods can't patch.


Adversarial attacks involve manipulating the bot's input to provoke it into producing prohibited output. Here, that is achieved by appending a specially crafted string of text, an adversarial suffix, to a harmful prompt; because the bot has been trained on vast quantities of web data, the suffix steers it into complying. The researchers demonstrated that the attack works against several popular chatbots, and although they notified OpenAI, Google, and Anthropic about the issue, there is still no overarching solution to prevent such attacks.
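
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of the shape of such an attack. The query_chatbot function is a hypothetical stand-in for any chat API, and the suffix is a harmless placeholder rather than a real optimized string.

```python
# Purely illustrative sketch of the *shape* of an adversarial-suffix attack.
# The suffix is a harmless placeholder, not a real optimized string, and
# query_chatbot() is a hypothetical stand-in for any chat model API.

def query_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat model."""
    return f"[model response to: {prompt!r}]"

def build_adversarial_prompt(request: str, adversarial_suffix: str) -> str:
    # The attack simply appends an optimized string to an otherwise-refused
    # request; the model's safety training fails to hold on the perturbed input.
    return f"{request} {adversarial_suffix}"

if __name__ == "__main__":
    request = "A request the model would normally refuse."
    suffix = "<<optimized adversarial suffix found by automated search>>"  # placeholder
    print(query_chatbot(build_adversarial_prompt(request, suffix)))
```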

The vulnerability arises from the inherent nature of large language models like ChatGPT, which are essentially massive neural network algorithms trained on extensive human text data. These algorithms are very good at predicting which words should follow a given input string, producing outputs that mimic real human intelligence and knowledge. However, they are also prone to fabricating information, reflecting social biases, and producing strange responses when the right continuation is harder to predict. These anomalies become more apparent when adversarial attacks tweak inputs to induce aberrant behaviors.
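
As a toy illustration of that predict-what-comes-next behavior, the sketch below builds a tiny character-level frequency model from a couple of sentences and greedily extends a prompt. Real LLMs are neural networks operating over tokens, so this is only an analogy, but it shows why output quality drops once the continuation becomes hard to predict.

```python
from collections import Counter, defaultdict

# Toy character-level "language model": count which character tends to follow
# each character in a tiny corpus, then greedily extend a prompt.
corpus = "the cat sat on the mat. the dog sat on the log."

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(prompt: str, length: int = 20) -> str:
    out = prompt
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # nothing learned for this character: prediction breaks down
            break
        out += followers.most_common(1)[0][0]
    return out

print(generate("the d"))
```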

These alarming findings underscore the critical need to refine AI safety measures. Researchers point out that these issues may stem from the fact that all large language models are trained on similar data, downloaded from the same websites. The current methods of fine-tuning these models, which involve having human testers provide feedback, may not adjust their behavior sufficiently. Looking ahead, experts suggest that the focus should shift towards protecting systems that are likely to come under attack, rather than trying to 'align' the models themselves.

2. Midnight Blizzard: Unveiling the Latest Social Engineering Attack on Microsoft Teams

Microsoft Threat Intelligence has identified a highly targeted social engineering attack that abuses Microsoft Teams chats, carried out by a threat actor Microsoft tracks as Midnight Blizzard, more commonly known as APT29 or Cozy Bear. The group is best known for its link to the infamous 2020 SolarWinds attack and has been attributed to Russia’s Foreign Intelligence Service, or SVR, by U.S. and U.K. government agencies.

The threat actors use previously compromised Microsoft 365 accounts and create new domains that resemble technical support entities. These domains are then leveraged in Microsoft Teams to send phishing messages aimed at stealing credentials from targeted organizations. Fewer than 40 unique global organizations, spanning the government, NGO, IT services, technology, discrete manufacturing, and media sectors, are reported to have been impacted by this campaign.

Midnight Blizzard is notorious for long-term and dedicated espionage of foreign interests, dating back to early 2018. Their operations often involve compromising valid accounts and, in some cases, compromising authentication mechanisms within an organization to broaden access and evade detection. They utilize a variety of initial access methods, including stolen credentials, supply chain attacks, and exploitation of service providers’ trust chains.

The latest attack by Midnight Blizzard uses a credential phishing technique. The group compromises Microsoft 365 tenants owned by small businesses to host its social engineering infrastructure, then renames the compromised tenant, adds a new onmicrosoft.com subdomain, and creates a new user associated with that domain from which to message the target tenant. The new subdomain and tenant name are built from security-themed or product-name keywords so the messages appear legitimate.
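
As a rough illustration of how a defender might hunt for this pattern, the hedged Python heuristic below flags external Teams sender domains that combine a *.onmicrosoft.com subdomain with security- or product-themed keywords, the traits Microsoft describes. The keyword list, message attributes, and example domains are assumptions for illustration, not an official detection rule.

```python
import re

# Hypothetical heuristic: flag external Teams senders whose domain is a
# *.onmicrosoft.com subdomain containing security- or product-themed keywords.
# Keywords and example domains are illustrative assumptions only.
SUSPICIOUS_KEYWORDS = ("security", "protection", "identity", "support", "teams")
ONMICROSOFT_RE = re.compile(r"^[a-z0-9-]+\.onmicrosoft\.com$", re.IGNORECASE)

def is_suspicious_sender(sender_domain: str, is_external: bool) -> bool:
    if not is_external:
        return False
    if not ONMICROSOFT_RE.match(sender_domain):
        return False
    label = sender_domain.split(".", 1)[0].lower()
    return any(keyword in label for keyword in SUSPICIOUS_KEYWORDS)

# Hypothetical examples: a security-themed external subdomain is flagged,
# a generic tenant name is not.
print(is_suspicious_sender("teamsprotection.onmicrosoft.com", is_external=True))  # True
print(is_suspicious_sender("contoso.onmicrosoft.com", is_external=True))          # False
```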

Once the attacker has valid account credentials, or is targeting users who have passwordless authentication configured, they send the targeted user a Microsoft Teams message asking them to enter a code into their Microsoft Authenticator app. If the user complies and enters the code, the threat actor gains access to the user’s Microsoft 365 account. The actor may then steal information from the compromised Microsoft 365 tenant, or attempt to add a device to the organization as a managed device via Microsoft Entra ID (formerly Azure Active Directory).

Microsoft has issued multiple recommendations to reduce the risk of this threat, including deploying phishing-resistant authentication methods, educating users about social engineering, and implementing Conditional Access App Control in Microsoft Defender for Cloud Apps for users connecting from unmanaged devices.
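
For teams looking to move in that direction, the sketch below shows one simplified, adjacent control expressed against the Microsoft Graph conditional access API: a report-only policy requiring a compliant (managed) device for all users and apps. It is an illustration in the spirit of the unmanaged-device guidance, not the exact Conditional Access App Control configuration Microsoft recommends, and the token is a placeholder.

```python
import requests

# Hedged sketch: create a report-only Conditional Access policy via Microsoft
# Graph that requires a compliant (managed) device. A real deployment needs an
# access token with Policy.ReadWrite.ConditionalAccess; the token below is a placeholder.
GRAPH_TOKEN = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"

policy = {
    "displayName": "Require compliant device (report-only)",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["compliantDevice"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```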

3. Microsoft Patches Critical Power Platform Vulnerability Amid Criticism

After facing criticism for its slow response to a critical security flaw, Microsoft said on Friday that it had addressed an issue affecting its Power Platform. The flaw allowed unauthorized access to the custom code functions used in Power Platform custom connectors, potentially leading to unintended information disclosure.

The tech giant emphasized that no customer action is needed and said it found no evidence of active exploitation. However, cybersecurity firm Tenable, which discovered and reported the issue to Microsoft on March 30, 2023, argued that the flaw could have allowed unauthorized access to cross-tenant applications and sensitive data because of insufficient access control to Azure Function hosts.
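
Neither Microsoft nor Tenable has published the affected code, but the general class of issue described, an HTTP-triggered Azure Function host reachable without sufficient access control, can be sketched with the Azure Functions Python programming model. The routes, logic, and auth levels below are illustrative assumptions, not the actual Power Platform connector implementation.

```python
import azure.functions as func

app = func.FunctionApp()

# Risky pattern (illustrative): anonymous auth level means anyone who can
# reach the function host can invoke the endpoint.
@app.route(route="connector-anon", auth_level=func.AuthLevel.ANONYMOUS)
def connector_anonymous(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("custom connector logic (reachable by anyone)")

# Safer default: require a function key, and ideally also place the host
# behind Microsoft Entra ID authentication and network restrictions.
@app.route(route="connector-keyed", auth_level=func.AuthLevel.FUNCTION)
def connector_keyed(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("custom connector logic (requires a function key)")
```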

Microsoft reportedly issued an initial fix on June 7, 2023, but the vulnerability was not fully addressed until August 2, 2023. The delay drew criticism from Tenable CEO Amit Yoran, who called Microsoft "grossly irresponsible, if not blatantly negligent." In a LinkedIn post, Yoran expressed frustration over a broken "shared responsibility model," accusing Microsoft of a lack of transparency and a culture of "toxic obfuscation."

Defending its approach, Microsoft explained that it follows an extensive process for investigating and deploying fixes, stating that "developing a security update is a delicate balance between the speed and safety of applying the fix and the quality of the fix." The company added that it monitors reported security vulnerabilities for signs of active exploitation and moves swiftly to remediate when exploitation is observed.

Thanks for reading!

About us: Echelon is a full-service cybersecurity consultancy that offers holistic cybersecurity program building through vCISO services or more specific solutions like penetration testing, red teaming, security engineering, cybersecurity compliance, and much more! Learn more about Echelon here: https://echeloncyber.com/about

