
Over Half of UK Businesses Have No Idea How Fast They Could Stop AI in a Crisis

Accepted submission by Arthur T Knackerbracket at 2026-03-23 14:19:52
/dev/random

EDITORS: THIS HAS BEEN PRODUCED BY SOFTWARE UNDER DEVELOPMENT - THE CONTENT MAY REQUIRE EXTENSIVE EDITING

https://www.techradar.com/pro/over-half-of-uk-businesses-have-no-idea-how-fast-they-could-stop-ai-in-a-crisis [techradar.com]

AI accountability is worryingly low.

Despite rapid AI adoption, new research from ISACA suggests many businesses might be going in blindly – more than half (59%) of UK businesses wouldn't even know how quickly they could stop AI during a crisis.

Only around one in five (21%) say they'd feel confident stopping an AI system within 30 minutes, highlighting major safety gaps.

And it's not just shutting systems down that's a problem – fewer than half (42%) say they could explain an AI failure to leadership or regulators.

ISACA explained that the gaps aren't just a concern for business operations and reputation, but also from a regulatory standpoint: the EU AI Act requires explainability and accountability.

Part of the failure comes down to unclear accountability, with 20% of workers unsure who is responsible for AI failures. Poor visibility is also a contributing factor, with one in three organizations not requiring the use of AI at work to be disclosed, which ISACA says creates a nightmare of blind spots.

The report explains that businesses are currently treating this as a technical problem, but they should instead approach it as an organization-wide governance challenge. "Truly closing the gap can’t be done by process changes alone," Chief Global Strategy Officer Chris Dimitriadis wrote. "Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle."

Looking ahead, businesses are being urged to define accountability at the senior level and to start rolling out better visibility and auditing. Besides this, they must also build AI incident response into their strategies and factor it into their broader cybersecurity postures.

With only 38% of respondents identifying the board or an executive as accountable in the event of an AI incident, it's clear more needs to be done to disseminate information and processes through the workforce.
