An AI system that plans and executes multi-step actions by calling external tools, rather than producing a single response to a single prompt.
Agents pair a language model with function-calling capability and run in a loop: read the goal, choose a tool, read the result, decide what to do next. The trade-off is capability against cascading failure risk, which is why blast-radius design matters so much at the agent layer.
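The loop described above can be sketched minimally. Everything here is a hypothetical placeholder, not a real agent framework: `choose_action` stands in for a model call with function-calling, and the tool registry holds a single fake tool.

```python
# Minimal agent loop sketch: the model picks a tool, we execute it,
# and feed the result back until it signals completion or the step
# budget runs out.

def choose_action(goal, history):
    # Placeholder policy: a real agent would ask a language model
    # with function-calling here. This stub searches once, then stops.
    if not history:
        return ("search", goal)
    return ("finish", history[-1][1])

TOOLS = {
    "search": lambda q: f"results for {q!r}",
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):          # hard step budget
        tool, arg = choose_action(goal, history)
        if tool == "finish":
            return arg                  # goal reached
        result = TOOLS[tool](arg)       # execute the chosen tool
        history.append((tool, result))  # feed the result back into the loop
    return None                         # budget exhausted
```

The `max_steps` cap is the simplest example of the budget limits discussed under blast radius: without it, a confused policy loops forever.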
A design pattern in which the AI decides what to do next, not the human.
The model selects tools, reads intermediate results, and chooses further actions in a loop until a goal is reached or a budget is exhausted. Agentic AI shifts the governance conversation from prompt review to scope review: what is the agent allowed to touch, and how reversible are its actions?
The scope of systems, data, and external actions an agentic AI can affect when something goes wrong.
Minimising blast radius is the core of safe agent design: narrow tool scopes, reversible actions, human-in-the-loop on high-impact steps, hard budgets on cost and time, and denylists for out-of-scope domains. It is our preferred framing for discussing agent risk with a security or compliance audience.
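Those controls can be sketched as a guard wrapped around every tool call. The tool names, limits, and `approve` hook below are illustrative assumptions, not a real framework API:

```python
# Illustrative blast-radius guardrails for a tool-calling agent:
# an allowlist of tools, a hard cost ceiling, and a human
# confirmation hook for high-impact actions.

ALLOWED_TOOLS = {"read_ticket", "draft_reply"}   # narrow scope
HIGH_IMPACT = {"send_email"}                     # needs human approval
MAX_COST = 1.00                                  # hard spend limit

class BlastRadiusExceeded(Exception):
    pass

def guarded_call(tool, args, spent, approve=lambda t, a: False):
    if tool not in ALLOWED_TOOLS | HIGH_IMPACT:
        raise BlastRadiusExceeded(f"tool {tool!r} out of scope")
    if spent >= MAX_COST:
        raise BlastRadiusExceeded("cost budget exhausted")
    if tool in HIGH_IMPACT and not approve(tool, args):
        raise BlastRadiusExceeded(f"{tool!r} requires human approval")
    return f"executed {tool}"          # stand-in for the real tool call
```

The design choice worth noting: the guard fails closed. An unknown tool or an unapproved high-impact action raises rather than proceeding, which keeps the failure mode noisy and auditable.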
The European Union regulation that classifies AI systems by risk tier and imposes obligations on providers and deployers.
Most enterprise AI in regulated sectors falls into the high-risk tier, triggering requirements around documentation, monitoring, human oversight, and post-market surveillance. Non-compliance carries turnover-based fines comparable in scale to GDPR.
A structured review of an organisation’s AI usage, risks, and controls.
Typically covers inventory (what is in use), data flows, vendor posture, and compliance gaps against frameworks such as the EU AI Act, ISO/IEC 42001, or the NIST AI Risk Management Framework. Output is usually a scored report with prioritised remediation steps.
The set of policies, roles, and review processes that decide what AI can be used, by whom, on which data, and under which controls.
Governance is the bridge between ad-hoc AI adoption and a managed programme with auditability. Good governance is boring on purpose: clear ownership, versioned policies, repeatable reviews, evidence trails.
A measure of whether an organisation has the data quality, infrastructure, skills, and governance needed to deploy AI safely and repeatably.
Low readiness is the root cause of most failed AI pilots: the model is not the problem, the operating environment is. Readiness assessments score across dimensions (data, infra, process, people, risk) rather than giving a single number.
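The per-dimension scoring could look something like the sketch below. The five dimensions come from the list above; the 1-5 scale and the threshold are assumptions for illustration, not a standard rubric:

```python
# Sketch of a multi-dimensional readiness report: per-dimension
# gaps instead of a single collapsed number.

DIMENSIONS = ("data", "infra", "process", "people", "risk")

def readiness_report(scores, threshold=3):
    """scores: dict mapping each dimension to a 1-5 maturity score."""
    gaps = [d for d in DIMENSIONS if scores.get(d, 0) < threshold]
    return {"scores": scores, "gaps": gaps, "ready": not gaps}
```

A single weak dimension (say, `process` at 2) flags the organisation as not ready even if the average score looks healthy, which matches the point that the operating environment, not the model, is usually the problem.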
A structured evaluation of likelihood and impact for harms linked to an AI system (bias, data leakage, over-reliance, hallucination, third-party exposure).
Outputs feed into mitigation plans, vendor decisions, and compliance documentation. In regulated sectors the assessment is itself an audit artefact, not a private internal document.
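A common shape for such an assessment is a likelihood-by-impact register over the harms listed above. The specific scores and the 1-5 scales below are illustrative assumptions:

```python
# Illustrative likelihood x impact register for AI-specific harms.
# Scores are example values, not benchmarks.

HARMS = {
    # harm: (likelihood 1-5, impact 1-5)
    "prompt_injection": (4, 4),
    "data_leakage":     (2, 5),
    "hallucination":    (4, 3),
}

def risk_register(harms):
    # Rank harms by likelihood x impact; the ranked output is what
    # feeds mitigation plans and compliance documentation.
    scored = {h: l * i for h, (l, i) in harms.items()}
    return sorted(scored.items(), key=lambda kv: -kv[1])
```

Keeping the register as structured data rather than prose makes it easy to version and re-score, which matters when the assessment itself is an audit artefact.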
The degree to which an AI system’s behaviour matches the intent of its operators and the expectations of its users.
Alignment breaks down through prompt injection, distribution shift, or goal mis-specification. Production alignment is an operational problem as much as a research one: monitoring, evaluation, and rollback pathways matter more than clever prompts.