Global cybersecurity agencies publish secure AI integration principles for OT

News |
By Asma Adhimi



U.S. and Australian cybersecurity agencies, together with international partners, have released new secure AI integration principles for OT aimed at helping operators adopt artificial intelligence safely in critical-infrastructure environments. The guidance focuses on balancing innovation with risk management as AI technologies move deeper into operational technology systems.


eeNews Europe readers working with industrial, embedded, or automation systems will find the recommendations timely as AI-driven control and monitoring rapidly expand across OT networks.

A coordinated push for safer AI in OT

The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) published Principles for the Secure Integration of Artificial Intelligence (AI) in Operational Technology (OT), with support from cybersecurity bodies across Europe, North America, and the Asia-Pacific region. The document sets out four core principles that help OT owners and operators understand risks and adopt AI in a controlled, resilient manner.

“AI holds tremendous promise for enhancing the performance and resilience of operational technology environments – but that promise must be matched with vigilance,” said CISA Acting Director Madhu Gottumukkala. “OT systems are the backbone of our nation’s critical infrastructure, and integrating AI into these environments demands a thoughtful, risk-informed approach. This guidance equips organizations with actionable principles to ensure AI adoption strengthens – not compromises – the safety, security, and reliability of essential services.”

The guidance highlights machine-learning and large-language-model deployments, including AI agents, and notes that its principles also apply to systems that use traditional statistical models or rule-based automation.

Four principles for AI deployment in critical infrastructure

CISA and ASD’s ACSC urge organizations to start by understanding AI, training technical teams and operators on AI risks, impacts, and secure development practices. They also encourage operators to assess AI use cases in OT environments, balancing technical feasibility with data-security requirements and planning for short- and long-term integration challenges.

The third principle calls for stronger AI governance, which includes continuous model testing and strict compliance monitoring. The final principle emphasizes embedding safety and security into every AI project, maintaining transparency, operator oversight, and tight alignment with existing incident-response plans.

Broad international backing

Cybersecurity agencies across North America, Europe, and the Asia-Pacific region – including the NSA’s AI Security Center, the FBI, Canada’s Cyber Centre, Germany’s BSI, the Netherlands’ NCSC-NL, New Zealand’s NCSC-NZ, and the UK’s NCSC – collaborated on the guide to strengthen AI security in critical-infrastructure systems. Readers can access the full guidance and related resources on CISA’s Artificial Intelligence and Industrial Control Systems webpages.
