Smart & Secure: Your Guide to Using AI in the Workplace
Originally published November 2025 · Updated March 2026 by Drew Foley, Cybersecurity Associate at Echelon.
Key Takeaways
Not enough time to read the full article? Here is what you need to know:
- AI governance has evolved from theory to practice. Organizations are now expected to align with NIST AI RMF, ISO/IEC 42001, and the EU AI Act as AI becomes embedded in daily work.
- Banning AI rarely works. Structured governance with approved tools, clear policies, and oversight enables safe innovation and employee productivity.
- Unauthorized AI use creates concrete risks, from data leakage to bias and compliance failures, that can be mitigated with technical controls, policy, and training.
- Formal AI Acceptable Use Policies and AI-aware security training are now standard expectations for mature security and compliance programs.
- The EU AI Act has binding deadlines in 2026. If you operate or sell into the EU, compliance work must already be underway.
- Partnering with experts in AI governance, risk, and compliance helps organizations operationalize trustworthy AI and prepare for a rapidly evolving regulatory landscape.
What Changed Since 2025?
Since this article was first published, AI has moved rapidly from experimentation into everyday production workflows across collaboration, security, and core business systems. The question is no longer whether to adopt AI, but how to govern it responsibly.
Three major developments define the 2026 landscape:
The EU AI Act Is Now Enforcing Real Deadlines
The EU AI Act entered into force on August 1, 2024, and becomes fully applicable on August 2, 2026. This is no longer a future compliance concern; it is an active project deadline.
By August 2026, organizations deploying high-risk AI in the EU must have completed conformity assessments, registered applicable systems in the EU database, and established operational processes for human oversight, logging, and post-market monitoring. Member States are also required to stand up at least one AI regulatory sandbox by that date.
If you operate or sell into the EU: map your AI use cases to risk tiers now, build or update your AI inventory, and budget for conformity, logging, and monitoring work.
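Mapping use cases to risk tiers starts with a structured inventory. The sketch below shows one minimal way to model that in Python; the tier names loosely follow the EU AI Act's structure, but the specific tier assignments and obligation lists are illustrative assumptions, not legal advice.

```python
from dataclasses import dataclass, field

# Illustrative tiers loosely following the EU AI Act's structure
# (prohibited, high-risk, limited/transparency, minimal).
TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AIUseCase:
    name: str
    vendor: str
    tier: str                    # one of TIERS
    eu_deployment: bool = False  # deployed or sold into the EU?
    obligations: list = field(default_factory=list)

    def __post_init__(self):
        if self.tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.tier}")
        # High-risk EU deployments carry the heaviest obligations
        # (example list only; the Act's requirements are broader).
        if self.eu_deployment and self.tier == "high":
            self.obligations = [
                "conformity assessment",
                "EU database registration",
                "human oversight process",
                "logging and post-market monitoring",
            ]

inventory = [
    AIUseCase("resume screening", "ExampleVendorA", "high", eu_deployment=True),
    AIUseCase("internal chatbot", "ExampleVendorB", "minimal"),
]

# Surface the items that need compliance work before August 2026.
needs_work = [u.name for u in inventory if u.obligations]
print(needs_work)  # ['resume screening']
```

Even a simple table like this makes the August 2026 workload visible: every high-risk EU entry is a budget line, not a footnote.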
U.S. Federal AI Policy Is Shifting
On December 11, 2025, President Trump issued an Executive Order directing agencies to pursue a "minimally burdensome" national AI framework and to challenge state AI laws viewed as obstructing federal priorities. The order creates an AI Litigation Task Force and asks the FCC to consider a federal AI reporting standard that could preempt conflicting state rules.
What this means in practice: even as federal policy moves toward harmonization, expect continued state-level activity in AI transparency, employment, and consumer protection. Treat federal preemption as an evolving constraint, not a solved one.
NIST AI RMF Has Been Updated for Generative AI
The 2025–2026 updates to the NIST AI Risk Management Framework expand coverage of generative AI, AI supply-chain vulnerabilities, and emerging attack models, while aligning more closely with the NIST Cybersecurity and Privacy Frameworks.
NIST also introduced maturity-model guidance that encourages organizations to assess AI risk-management maturity and establish metrics for continuous improvement.
Although NIST frameworks remain voluntary, they are increasingly referenced by regulators and auditors. Aligning with the updated NIST AI RMF strengthens audit readiness and regulatory defensibility.
The Risks of Unauthorized AI Use
Using unapproved AI tools can create significant organizational risks. In 2026, several of these risks now carry direct regulatory consequences under the EU AI Act and increased scrutiny from auditors referencing NIST AI RMF.
- Data leakage: AI tools may access, store, or transmit sensitive company information, including PII, financial data, or intellectual property, resulting in breaches or regulatory violations.
- Security vulnerabilities: Unvetted AI platforms can introduce system weaknesses or network exposure, increasing the risk of cyberattacks. The updated NIST AI RMF now specifically addresses AI supply-chain vulnerabilities as a distinct risk category.
- Lack of auditability: Many AI tools lack sufficient logging or tracking, making it difficult to review actions or demonstrate compliance, a direct problem under EU AI Act monitoring requirements.
- Misaligned objectives: AI outputs may conflict with business goals, producing irrelevant or harmful results such as inappropriate customer recommendations or hallucinations in regulated outputs.
- Bias and discrimination: AI can unintentionally perpetuate biases in hiring, marketing, or other processes, violating anti-discrimination laws and undermining diversity initiatives.
- Inefficiencies and increased costs: Poorly integrated AI may create duplicated efforts, miscommunication, data silos, or reduced productivity.
Examples of AI tools that typically require organizational approval before use include:
- Natural language processing (NLP) tools
- Machine learning platforms
- AI chatbots and virtual assistants
- Speech recognition and voice-to-text services
- Image and facial recognition tools
- AI marketing platforms
- AI-powered cybersecurity solutions
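The auditability gap above can be narrowed with a thin logging wrapper around AI calls, so every use leaves a reviewable trail. The sketch below is a minimal Python illustration; the `model_fn` interface, field names, and log format are assumptions for this example, not a prescribed standard.

```python
import json
import time
import uuid

def logged_ai_call(model_fn, prompt, *, user, tool, log_sink):
    """Call an AI tool through a wrapper that records an audit trail.

    model_fn: callable taking a prompt string and returning text (assumed).
    log_sink: a list (or any object with .append) collecting JSON records.
    """
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "user": user,
        "tool": tool,
        # Log sizes rather than content to limit secondary data leakage.
        "prompt_chars": len(prompt),
    }
    try:
        output = model_fn(prompt)
        record["status"] = "ok"
        record["output_chars"] = len(output)
        return output
    except Exception as exc:
        record["status"] = f"error: {exc}"
        raise
    finally:
        # The record is written whether the call succeeds or fails.
        log_sink.append(json.dumps(record))

# Usage with a stand-in model function:
log = []
result = logged_ai_call(lambda p: "summary of " + p, "q3 report",
                        user="drew", tool="approved-demo", log_sink=log)
```

In production the sink would be a tamper-evident log store rather than a list, but the principle is the same: demonstrating compliance requires records that exist before anyone asks for them.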
Best Practices for Safe and Effective AI Use
Organizations can safely leverage AI by adopting structured practices, including formal governance and oversight. The following reflects both the original 2025 guidance and the operational updates required heading into 2026.
- Use approved AI tools: Provide pre-approved AI tools employees can safely use for business purposes, reducing risk and ensuring compliance. This remains the single most effective control an organization can implement.
- Follow an approval process: Require employees to submit a request to IT before using any unapproved AI tool, documenting its purpose, risks, mitigation measures, and costs. The tool should be used only after written approval.
- Enforce a formal AI Acceptable Use Policy: A clear policy guides all AI usage, defining approved use cases, restrictions, and reporting requirements. As of 2026, this is becoming a standard expectation in mature security and compliance programs.
- Align with AI governance frameworks: Follow NIST AI RMF and ISO/IEC 42001 to deploy AI responsibly. With the updated NIST AI RMF now covering generative AI and supply-chain risks, alignment also strengthens your position with auditors and regulators.
- Apply technical and data controls: Enforce network restrictions, account management, and privacy safeguards to protect sensitive data and ensure compliance.
- Review for ethical and goal alignment: Review outputs to avoid bias or discrimination and ensure AI supports business objectives.
- Train employees: Include AI-specific training during onboarding and in annual Security Awareness Training, covering risks, proper use, and company-approved tools.
- Adopt a partnership mindset: Treat AI as a tool to empower employees, not restrict them unnecessarily, balancing innovation with risk management.
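The first two practices above, approved tools plus an approval process, can be combined into a single gate. Here is a minimal Python sketch; the tool names and the in-memory request store are hypothetical stand-ins for a real catalog and ticketing system.

```python
# Hypothetical approved-tool catalog and pending-request store.
APPROVED_TOOLS = {"acme-copilot", "secure-summarizer"}
PENDING_REQUESTS = {}

def check_tool(tool_name, requester=None):
    """Return True if the tool may be used; otherwise queue a review request."""
    if tool_name in APPROVED_TOOLS:
        return True
    if requester is not None:
        # Record a request for IT review instead of silently blocking,
        # which supports the "partnership mindset" rather than a hard ban.
        PENDING_REQUESTS[tool_name] = {"requested_by": requester}
    return False

assert check_tool("acme-copilot")
assert not check_tool("random-chatbot", requester="drew")
print(PENDING_REQUESTS)  # {'random-chatbot': {'requested_by': 'drew'}}
```

The design choice matters: an unapproved tool triggers an intake record, not just a denial, so governance creates a path to yes instead of pushing usage into the shadows.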
✓ 2026 Safe AI Use Checklist
Employees and teams can use the checklist below to quickly validate whether their AI usage aligns with current expectations:
- Confirm you are using an approved AI platform or environment for work-related tasks.
- Register new AI use cases or tools through the defined AI review or governance process.
- Avoid inputting confidential, regulated, or highly sensitive data into AI tools unless explicitly permitted and technically safeguarded.
- Label AI-generated content where required, especially in customer-facing or regulatory deliverables.
- Conduct basic reasonableness checks on AI outputs, verifying facts and numbers before relying on them.
- Escalate any suspected AI-related incident (for example, unexpected data exposure or harmful output) through incident response channels.
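The "avoid inputting sensitive data" item in the checklist can be partially automated with a pre-submission scan. The Python sketch below uses a few regular expressions as a stand-in; real data-loss-prevention rules would be far broader and tuned to your data types, so treat these patterns as illustrative only.

```python
import re

# Illustrative patterns only; real DLP rules would be broader and tuned.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text):
    """Return the names of sensitive patterns detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def safe_to_submit(text):
    """Gate a prompt before it reaches an external AI tool."""
    hits = scan_prompt(text)
    if hits:
        print(f"Blocked: prompt appears to contain {', '.join(hits)}")
        return False
    return True
```

A check like this cannot catch everything, which is why it complements, rather than replaces, the training and escalation items above.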
Frequently Asked Questions
Can I use external public AI tools for work?
In general, employees should only use external AI tools for work with prior approval and under defined safeguards. Many organizations restrict or block unapproved tools and instead offer managed AI platforms where data handling and logging meet internal policies and regulatory expectations.
What if my project needs an AI capability not currently approved?
If you need a new AI capability, submit a request through the AI governance or IT intake process that outlines the use case, data involved, and any third-party vendors. Security, privacy, and compliance teams can then evaluate the risk, perform due diligence on the provider, and determine whether to approve, pilot, or deny the request.
How do I verify that AI outputs are reliable?
You should review AI-generated content with human judgment, cross-check critical facts against trusted sources, and apply standard quality checks used for non-AI work. For high-impact uses, organizations often require documented review, sign-off, and, in some cases, periodic validation and testing of AI systems as part of their risk management frameworks.
Are there new frameworks or regulations I should know about?
Yes. NIST AI RMF 1.0 provides a voluntary risk management framework, ISO/IEC 42001 defines requirements for an AI management system, and the EU AI Act creates binding obligations for AI systems deployed in the EU based on their risk level. Many organizations now map their internal controls to these frameworks to demonstrate trustworthy and compliant AI practices.
The Echelon Approach
At Echelon, we believe AI should empower, not endanger, your organization. Our AI Governance approach integrates risk management, compliance, and security into every phase of AI adoption.
We help you establish clear policies, evaluate risks, and align your AI initiatives with frameworks like NIST AI RMF and ISO/IEC 42001. From assessing model transparency to ensuring ethical use and regulatory readiness, Echelon builds the guardrails that let innovation move safely and confidently.
Learn more about Echelon’s AI governance readiness services.
Sources
- DeCesare, J. (November 11, 2025). NIST AI RMF 2025 Updates: What You Need to Know About the Latest Framework Changes. I.S. Partners.
- Future of Life Institute. (2024). Implementation Timeline | EU Artificial Intelligence Act. ArtificialIntelligenceAct.eu.
- The White House. (December 11, 2025). Ensuring a National Policy Framework for Artificial Intelligence. WhiteHouse.gov.