Beyond Bureaucracy: Mastering the AI Risk Framework for Government Agencies and Contractors

In the public sector, the conversation around Artificial Intelligence is no longer about whether it will be adopted, but about how fast it must be adopted. That’s the reality facing every government agency today: the pressure to process massive amounts of data and deliver faster, more equitable services is reaching a breaking point. Forget the old excuses; the wave of Artificial Intelligence is here, and it’s forcing an immediate, non-negotiable shift in how government operates.

The good news is that when the government adopts AI, it’s governed by a single, non-negotiable rule: don’t screw up public trust, civil rights, or ethical governance. When we talk about AI in the public sector, we must talk about the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) [11]. Think of the NIST RMF as the “Cardinal Rules” of AI. It’s the playbook. It’s similar to NFPA standards in the Fire Service: they provide guidance and best practices, but they are not laws. Likewise, the NIST RMF is a set of best practices for all users of AI, in both business and government.

The 4 Cardinal Rules of AI Safety

The NIST RMF provides a proactive, structured approach to managing AI risk across the entire AI lifecycle, organized around four core functions: Govern, Map, Measure, and Manage.
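
To make those four functions concrete, here is a minimal sketch in Python (purely illustrative; the RMFFunction names mirror the RMF’s Govern, Map, Measure, and Manage functions, but the checklist structure and example activities are our assumptions, not official NIST language):

```python
from enum import Enum

class RMFFunction(Enum):
    """The four core functions of the NIST AI RMF."""
    GOVERN = "cultivate risk-aware policies, culture, and accountability"
    MAP = "establish context and identify risks for the system"
    MEASURE = "assess, analyze, and track the identified risks"
    MANAGE = "prioritize and act on risks based on projected impact"

def new_lifecycle_checklist(system_name: str) -> dict:
    """Create an empty per-function activity log for one AI system.

    Hypothetical structure: each function starts empty, and the team
    appends evidence as the system moves through its lifecycle.
    """
    return {"system": system_name,
            "activities": {f.name: [] for f in RMFFunction}}

# Example: begin tracking a (hypothetical) citizen-services chatbot.
checklist = new_lifecycle_checklist("citizen-services-chatbot")
checklist["activities"]["MAP"].append("documented intended use and affected groups")
checklist["activities"]["MEASURE"].append("ran pre-deployment bias evaluation")
print(checklist)
```

The design point is that risk work recurs across all four functions for the life of the system, rather than being a one-time authorization step.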


The Federal Mandate

The federal government doesn’t just suggest responsible AI use; it mandates it. The White House Office of Management and Budget (OMB) issued its own set of directives in OMB Memo M-25-21 [10] that accelerate AI use for federal agencies while demanding strict governance. This memorandum introduces the ultimate tripwire: High-Impact AI.

According to the memo, High-Impact AI is defined as any system whose output serves as a “principal basis for decisions or actions with legal, material, binding, or significant effect” on areas such as civil rights, human health and safety, or access to critical government services.

If your AI system touches any of these areas, you need to understand the regulatory framework being mandated at the federal level.
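
As a thought experiment, the memo’s definition can be read as a two-part test: does the output serve as a principal basis for a decision or action, and does that decision significantly affect a protected area? The sketch below is a hedged illustration of how an agency inventory script might flag candidates for human review; the field names and the flat list of impact areas are our assumptions, not anything prescribed by M-25-21.

```python
from dataclasses import dataclass, field

# Impact areas drawn from the M-25-21 definition quoted above.
HIGH_IMPACT_AREAS = {
    "civil_rights",
    "human_health_and_safety",
    "access_to_critical_government_services",
}

@dataclass
class AISystemRecord:
    name: str
    # Does the output serve as a principal basis for a decision or action?
    principal_basis_for_decision: bool
    # Areas the resulting decision or action significantly affects.
    affected_areas: set[str] = field(default_factory=set)

def flag_high_impact(system: AISystemRecord) -> bool:
    """Flag systems matching the memo's two-part test for human review."""
    return system.principal_basis_for_decision and bool(
        system.affected_areas & HIGH_IMPACT_AREAS
    )

screener = AISystemRecord(
    name="benefits-eligibility-screener",  # hypothetical system
    principal_basis_for_decision=True,
    affected_areas={"access_to_critical_government_services"},
)
print(flag_high_impact(screener))  # True -> minimum risk practices apply
```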

The Blueprint and the Mandate: A Quick Recap

  • NIST RMF is the Framework (Voluntary Guide): The National Institute of Standards and Technology (NIST) developed the AI RMF as a guide intended for voluntary use by both the public and private sectors to improve trustworthiness and manage AI risk. NIST’s mission is nonregulatory; it provides standards and guidance. The RMF acts as the “Cardinal Rules” and the playbook for ethical AI.
  • OMB Directive is the Mandate (Legal Requirement): The White House Office of Management and Budget (OMB) issued its directive (M-25-21) to set hard deadlines and mandatory operational requirements for federal agencies. This mandate serves as the regulatory hammer.
  • The Convergence: The OMB directive specifically requires federal agencies to implement minimum risk management practices for all High-Impact AI systems. Crucially, principles of the NIST RMF form part of this required baseline for those minimum risk management practices.

FedRAMP: The Cloud Security Gatekeeper

If the NIST RMF is the framework for AI oversight, FedRAMP (Federal Risk and Authorization Management Program) [12] is the mandatory security framework for cloud computing. Because modern AI systems run almost exclusively on cloud platforms, any vendor serving federal agencies must achieve FedRAMP authorization, proving their cloud solutions meet rigorous security standards set by the federal government. This is essential for protecting the vast amounts of public sector data that High-Impact AI systems rely on.

Compliance Check: Where AI Meets the Security Bar

When discussing FedRAMP, it’s important to know the status of the leading AI platforms used in the public sector:

  • Google Cloud: Achieved FedRAMP High Authorization for its cloud infrastructure and its generative AI services, including Gemini in Workspace, the Gemini app, and Generative AI on Vertex AI.
  • Microsoft/OpenAI: The Azure OpenAI Service, which hosts models like GPT-4o, has been approved as a service within the FedRAMP High Authorization for Azure Government.
  • OpenAI (Direct): OpenAI itself (for its ChatGPT Enterprise SaaS product) is currently listed as “in process” with the FedRAMP security review program, a step toward gaining government-wide authorization.

The General Services Administration (GSA) is actively prioritizing the authorization of generative AI tools to accelerate rapid adoption across federal agencies. This focused effort concentrates on specific capabilities that offer immediate operational gains, such as:

  • Chat interfaces for improved internal and citizen-facing communication.
  • Code-generation and debugging tools critical for modernizing federal software development.
  • Prompt-based image generators and general-purpose API offerings that provide access to these core capabilities.

Beyond the leading AI platforms listed, the FedRAMP Marketplace [13] currently includes 484 total Authorized Services and 629 total designations, including those “In Process”. This ecosystem is rapidly expanding to include critical platforms needed for AI operations.

Making It All Make Sense

The interaction between the vendors (Google, Microsoft, OpenAI) and the Office of Management and Budget (OMB) mandate is a critical dynamic in public sector AI adoption. To clarify responsibility: the OMB directive is legally addressed to the heads of all Executive Branch departments and agencies, so the primary and ultimate responsibility for implementing the minimum risk management practices, and for facing the consequence of discontinuing a non-compliant AI system, rests squarely with the federal agency. The agency is the risk owner. [1] However, the agency cannot meet this mandate without its vendors. This creates a critical cycle of inherited compliance through procurement:

The Vendor as the Compliance Gatekeeper

Vendors interact with the OMB mandate indirectly, by proving they meet the foundational security and risk requirements necessary for the agency to comply.

  • FedRAMP as the First Hurdle: For a vendor’s cloud platform (where modern AI systems run) to even be considered by a federal agency, it must achieve FedRAMP authorization (Low, Moderate, or High). FedRAMP is the mandatory security baseline for cloud services, proving compliance with crucial security controls. The presence of Azure Government, Google Cloud, and OpenAI on the FedRAMP Marketplace demonstrates their commitment to being “contract-ready.”
  • Contractual Inheritance of Risk: The OMB mandate requires the agency to manage risks based on NIST RMF principles. When an agency acquires an AI system, it must ensure the supplier is operating it securely. For high-security environments, policy explicitly dictates that the government client must hold the external supplier—in this case, Google, Microsoft, or OpenAI—to the same security standards as the organization maintaining and using the AI system. [2]

Acquisition Guidance

The OMB’s authority extends to how agencies acquire AI systems. If an agency purchases a High-Impact AI tool, that agency must conduct pre-deployment testing and ongoing AI Impact Assessments to prove the system meets the minimum required risk practices. The agency achieves this by:

  • Specifying NIST RMF Alignment: Writing contracts that require the vendor to provide documentation and audit trails proving their AI model and underlying infrastructure (FedRAMP-authorized) aligns with the NIST RMF functions (Govern, Map, Measure, Manage).
  • Continuous Monitoring: Requiring the vendor to submit to the agency’s continuous monitoring protocols. This is crucial because AI models evolve (due to continuous updates to training data or technical changes), meaning the security and risk profile is constantly shifting. [2]
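
As a hedged sketch of what those two requirements can look like in practice, the snippet below imagines an acquisition office that tracks vendor-supplied evidence against each RMF function and blocks deployment until every function has current documentation. The function names, evidence mapping, and the 90-day staleness window are illustrative assumptions, not requirements drawn from M-25-21:

```python
from datetime import date, timedelta

RMF_FUNCTIONS = ("govern", "map", "measure", "manage")
MAX_EVIDENCE_AGE = timedelta(days=90)  # hypothetical contract term

def deployment_ready(evidence: dict[str, date], today: date) -> bool:
    """Return True only if every RMF function has fresh vendor evidence.

    `evidence` maps an RMF function name to the date of the most recent
    audit artifact (e.g., model card, test report) the vendor submitted.
    """
    for function in RMF_FUNCTIONS:
        submitted = evidence.get(function)
        if submitted is None or today - submitted > MAX_EVIDENCE_AGE:
            return False  # missing or stale evidence blocks deployment
    return True

vendor_evidence = {
    "govern": date(2025, 9, 1),
    "map": date(2025, 9, 1),
    "measure": date(2025, 6, 15),  # stale under a 90-day window
    "manage": date(2025, 9, 1),
}
print(deployment_ready(vendor_evidence, today=date(2025, 10, 1)))  # False
```

The point of the staleness check: because AI models evolve continuously, evidence ages out, and a one-time compliance binder can never satisfy continuous monitoring.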

In summary: the OMB mandate is the legal burden on the agency, but the agency uses FedRAMP and contractual requirements to force vendors to meet NIST RMF-derived security and ethical standards before they can do business with the federal government. The agency owns the risk, but the vendor enables compliance.

State Level AI Governance

Thus far, we have examined how AI regulation is being shaped at the federal level. But what about the states? State governments have identified more than 100 use cases for generative AI, many focused on administrative efficiency, such as drafting and summarizing documents and processing contracts. State efforts are also moving into high-impact, direct citizen-safety applications. For instance, the California Department of Transportation (Caltrans) is leveraging generative AI to proactively identify intersections likely to be dangerous for pedestrians, cyclists, and scooter riders, allowing it to take preventive action on vulnerable roadway user safety.

The regulatory evolution at the state level has emphasized three key approaches to private sector AI regulation:

  • Use and context-specific regulations focusing on sensitive applications
  • Technology-specific regulations
  • Liability and accountability approaches that clarify or modify existing legal regimes’ application to AI [3]

This has resulted in targeted interventions in specific high-impact sectors:

  • Employment: Regulations targeting the use of automated decision systems in hiring have been a significant area of focus. New York City, for instance, passed a law in 2021 requiring employers to conduct bias audits, publish the results, and notify candidates when automated tools are used in screening, subjecting violators to civil penalties. States including California, Illinois, Maryland, Massachusetts, and New Jersey have also addressed AI in hiring practices. [4]
  • Healthcare: Several states have enacted frameworks addressing AI’s application in healthcare, most notably systems such as mental health chatbots. This signals particular legislative concern over potential direct harm to vulnerable populations in critical service delivery areas. [5]

The Colorado Model (Liability and Accountability)

The Colorado Artificial Intelligence Act (CAIA), enacted in May 2024 (scheduled effective date of February 1, 2026), is recognized as the country’s first comprehensive, risk-based AI law focusing on preventing algorithmic discrimination. [21]

The Utah Model (Pro-Innovation and Targeted Disclosure)

In sharp contrast to Colorado’s broad, liability-focused approach, states like Utah have pursued a lighter regulatory touch, explicitly seeking to avoid chilling innovation. The Utah AI Policy Act (UAIP) focuses on targeted disclosure and creating mechanisms for innovation support, such as regulatory sandboxes. [22]

The U.S. AI regulatory landscape is defined by the strategic dovetailing of federal guidance and state liability regimes. While federal frameworks like M-25-21 establish a rigorous, mandatory baseline for government agencies concerning “High-Impact” use cases, states are utilizing the NIST AI RMF as the designated pathway for private sector compliance and liability protection.

This structure dictates that compliance strategy cannot be approached on a state-by-state basis. Instead, multi-jurisdictional enterprises must treat the voluntary NIST RMF as a mandatory standard of due diligence to mitigate legal risks across the states.

Local Government Success Stories: The Blueprint for Responsible AI

  • Virginia Beach, VA: Prioritizing 911 Calls. The problem wasn’t 911. It was the crushing volume of non-emergency 10-digit calls clogging up the lines. The solution? AI augmented the system, using Natural Language Processing (NLP) to manage non-emergency questions (see the sketch after this list). This tool now successfully handles over 40% of those calls, freeing public safety dispatchers from more than 900 hours of talk time every year. That’s a massive win because it lets the humans focus on life-or-death emergencies. [18]
  • Dallas, TX: Streamlining Procurement. A boring topic? Yep. A high-value use case? You bet. Dallas used an AI procurement platform to automate drafting solicitation documents in minutes. This speeds up the entire governmental purchasing process, identifies local and small businesses more easily, and ensures compliance with legal clauses. It’s the ultimate “do more with less” scenario. [19]
  • Helsinki, Finland: Customer Service Automation. Helsinki is using virtual assistants and public-facing chatbots to serve two audiences: city employees and constituents. The AI helps employees find relevant internal information quicker and answers public questions more accurately. It’s a low-risk starting point that builds institutional trust needed for later, higher-impact expansion. [20]
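
To make the Virginia Beach triage idea concrete, here is a minimal sketch. It is purely illustrative: the city’s actual system is a commercial NLP product, and the keyword heuristic below is a stand-in for real intent classification.

```python
# Hypothetical keyword-based intent triage; a production system would use
# a trained NLP intent classifier rather than keyword matching.
NON_EMERGENCY_INTENTS = {
    "noise complaint": "self-service",
    "parking": "self-service",
    "report a pothole": "self-service",
}

def route_call(transcript: str) -> str:
    """Send recognized non-emergency intents to automated self-service;
    everything else goes to a human dispatcher."""
    text = transcript.lower()
    for phrase, destination in NON_EMERGENCY_INTENTS.items():
        if phrase in text:
            return destination
    return "human-dispatcher"  # default: never automate an unknown request

print(route_call("I want to report a pothole on First Street"))   # self-service
print(route_call("There is smoke coming from my neighbor's house"))  # human-dispatcher
```

Note the fail-safe default: anything the system does not confidently recognize falls through to a human, which is exactly why this pattern is a low-risk starting point.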

The Public Safety Crucible: CJIS and the DOJ

The only place where the AI rules get even stricter than the NIST RMF is in public safety. Most people think AI in law enforcement is too risky to touch, but the reality is that the U.S. Department of Justice (DOJ) is actively defining its responsible use. Why? Because identifying criminal suspects, forecasting crime, and running risk assessments are all High-Impact AI use cases, according to the federal tone-setters.

For every vendor and department looking to adopt AI in this space—from police to courts to emergency dispatch—there is one ultimate gatekeeper: The Criminal Justice Information Services (CJIS) Security Policy [14].

CJIS compliance is the mandatory, non-negotiable standard for protecting Criminal Justice Information (CJI), including biometric records and case files. It goes far beyond standard security. For an AI product to even be considered in a public safety contract, a vendor must demonstrate adherence to the CJIS Security Policy, which includes rigorous technical controls, like encryption, and, critically, personnel screening. That means every person who touches that system, from the vendor’s cloud engineer to the data support staff, must pass stringent background checks.
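
To illustrate how that gate works in principle, here is a minimal sketch; the real CJIS Security Policy defines hundreds of controls, and the two checks below (personnel screening and transport encryption) are illustrative stand-ins, not the actual control set:

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    background_check_passed: bool  # stand-in for CJIS personnel screening

@dataclass
class Channel:
    encrypted: bool  # stand-in for CJIS encryption requirements

def grant_cji_access(person: Person, channel: Channel) -> bool:
    """Deny access to Criminal Justice Information unless BOTH the
    personnel screening and the technical control checks pass."""
    if not person.background_check_passed:
        return False  # unscreened staff never touch CJI
    if not channel.encrypted:
        return False  # unencrypted transport fails the technical bar
    return True

engineer = Person(name="cloud-engineer", background_check_passed=True)
print(grant_cji_access(engineer, Channel(encrypted=False)))  # False
```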

This is why simple, low-risk administrative adoption is the blueprint for integrating AI. Local agencies are demonstrating maturity in low-stakes areas (like procurement) to build the institutional trust and governance muscle required before they touch high-risk, mission-critical areas like public safety, where the CJIS bar is towering.

The Contractor Conundrum: Inheriting the Government’s Risk

For contractors and consultants, the compliance bar is non-transferable and continuous. The Department of Defense (DoD), which acquires and operates some of the highest-impact AI systems, mandates that external suppliers in the supply chain must be held to the same security standards as the organization maintaining and using the AI system. This principle of inherited risk eliminates the possibility of the government mitigating liability by contracting out the system; accountability for security is immediately imposed on the supplier. [2]

Roles in the AI Supply Chain

The distinction between the AI vendor and the government contractor is critical to defining responsibility: the vendor supplies the foundational model or cloud platform, while the contractor (the integrator) embeds that technology in the agency’s mission systems and bears the primary contractual and compliance risk for the combined solution.

Contractors, therefore, must operationalize the convergence of several standards:

  • NIST AI RMF: The ethical and trustworthiness framework.
  • OMB: The mandatory minimum risk practices for high-impact systems.
  • CJIS: The non-negotiable data handling and personnel screening requirements for criminal justice data.

The market has responded by developing automated compliance tools that allow contractors to manage NIST AI RMF implementation: tracking risk registers, assigning ownership, and collecting necessary evidence while simultaneously monitoring the 400+ controls required for CJIS compliance [14]. Leading platforms in this space include Vanta [9], Drata [15], Hyperproof [16], and Secureframe [17]. This convergence is the only way to meet the mandatory security requirements for high-stakes government work.
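
A rough sketch of the core record such platforms manage appears below. It is illustrative only; commercial products like Vanta or Drata expose far richer data models, and every field name here, including the control identifier, is our assumption:

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    rmf_function: str          # which NIST AI RMF function it maps to
    owner: str                 # assigned accountable person
    controls: list[str] = field(default_factory=list)  # mapped control IDs
    evidence: list[str] = field(default_factory=list)  # collected artifacts

    def is_audit_ready(self) -> bool:
        """An entry is audit-ready once it has an owner, at least one
        mapped control, and at least one piece of collected evidence."""
        return bool(self.owner and self.controls and self.evidence)

entry = RiskRegisterEntry(
    risk_id="R-042",
    description="Model drift could skew eligibility recommendations",
    rmf_function="MEASURE",
    owner="ai-governance-lead",
    controls=["CJIS-5.10.1"],  # hypothetical control identifier
)
entry.evidence.append("2025-Q3 drift evaluation report")
print(entry.is_audit_ready())  # True
```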

The Mandate Cascade: Federal to Local Alignment

The bureaucratic bottleneck isn’t just in D.C. Local jurisdictions have limited resources, which means they can’t afford to spend taxpayer money on inventing their own novel AI ethical frameworks. Instead, they play follow the leader.

Local government AI policies are overwhelmingly modeled after, and draw guidance from, existing federal and state governance frameworks.

The Dallas and Virginia Beach examples highlighted aren’t just one-off success stories; they’re proof of this Mandate Cascade in action. When local leaders decide to buy AI, they look for vendors who can prove their solutions are pre-aligned with NIST, FedRAMP, and CJIS [14]. This compliance assurance acts as a massive de-risking factor for the client, minimizing their policy burden and political exposure.

Core Conclusions

  • NIST RMF is the De Facto National Standard for Due Diligence: By integrating the NIST RMF into the CAIA’s affirmative defense provisions, Colorado has transformed the voluntary RMF into a critical legal mechanism for demonstrating “reasonable care” in the private sector. This establishes the RMF as the most robust, uniform standard available to address algorithmic discrimination risks across states. [6]
  • Risk Definitions Converge, Mitigation Methods Diverge: Federal (M-25-21) and leading state (CAIA) regulations agree that AI systems with material legal or significant effects on rights and access to services constitute high risk. However, M-25-21 mandates administrative procedural steps, while CAIA imposes a flexible tort-based liability standard, forcing companies to address both procedural form and substantive legal exposure simultaneously. [7]
  • Local Government Procurement Reinforces Federal Standards: Local and city governments, guided by policy bodies and internal needs, are implementing governance principles focused on transparency, accountability, and security. These requirements, often embedded in procurement mandates, create a downstream market pressure on vendors to standardize their products and risk documentation to align with RMF principles. [8]
  • Liability is Inherited by the Integrator: The principle of non-transferable accountability means the Government Contractor/Integrator, not the foundational AI Vendor, bears the primary contractual and compliance risk for the integrated solution, enforcing rigor across the entire supply chain.

In short: The federal government sets the tone and the mandatory security bar (OMB, CJIS), and local governments simplify their procurement process by demanding adherence to that bar (NIST RMF) to ensure they never compromise public trust or civil rights.

References

[1] Ropes & Gray LLP. White House Issues Guidance on Use and Procurement of Artificial Intelligence Technology. https://www.ropesgray.com/en/insights/alerts/2025/04/white-house-issues-guidance-on-use-and-procurement-of-artificial-intelligence-technology

[2] Department of Defense Chief Information Officer. AI Cybersecurity Risk Management Tailoring Guide. https://dodcio.defense.gov/Portals/0/Documents/Library/AI-CybersecurityRMTailoringGuide.pdf

[3] Future of Privacy Forum. The State of State AI: Legislative Approaches to AI in 2025. https://fpf.org/blog/the-state-of-state-ai-legislative-approaches-to-ai-in-2025/

[4] Brennan Center for Justice. States Take the Lead Regulating Artificial Intelligence. https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence

[5] Future of Privacy Forum. Chatbots in Check: Utah’s Latest AI Legislation. https://fpf.org/blog/chatbots-in-check-utahs-latest-ai-legislation/

[6] National Association of Attorneys General. A Deep Dive into Colorado’s Artificial Intelligence Act. https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/

[7] PwC. Tech Regulatory Policy Developments – Responsible AI. https://www.pwc.com/us/en/services/consulting/cybersecurity-risk-regulatory/library/tech-regulatory-policy-developments/responsible-ai.html

[8] City of Seattle. Artificial Intelligence Policy – POL-211. https://seattle.gov/documents/departments/tech/privacy/ai/artificial_intelligence_policy-pol211%20-%20signed.pdf

[9] Vanta. Products: NIST AI Risk Management Framework. https://www.vanta.com/products/nist-ai-risk-management-framework

[10] Office of Management and Budget. Memorandum M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust. M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf

[11] National Institute of Standards and Technology. AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework

[12] FedRAMP. FedRAMP | FedRAMP.gov. https://www.fedramp.gov/

[13] FedRAMP. FedRAMP Marketplace. https://marketplace.fedramp.gov/

[14] Federal Bureau of Investigation (FBI). Criminal Justice Information Services (CJIS) Security Policy v6.0. CJIS_Security_Policy_v6-0_20241227.pdf

[15] Drata. Homepage. https://drata.com/

[16] Hyperproof. Homepage. https://hyperproof.io/

[17] Secureframe. Homepage. https://secureframe.com/

[18] Government Technology. When Trouble Calls, Virginia Beach, Va., Lets AI Answer. https://www.govtech.com/artificial-intelligence/when-trouble-calls-virginia-beach-va-lets-ai-answer

[19] Smart Cities Dive. https://www.smartcitiesdive.com/news/dallas-procurement-ai-cities/757555/

[20] IBM. City of Helsinki Case Study. https://www.ibm.com/case-studies/city-of-helsinki

[21] Colorado General Assembly. SB24-205: Consumer Protections for Artificial Intelligence. https://leg.colorado.gov/bills/sb24-205

[22] Utah State Legislature. SB 149: Artificial Intelligence Amendments (2024). https://le.utah.gov/~2024/bills/static/SB0149.html
