KRITIS and AI: Autonomous Patrols for Critical Infrastructure Under European Oversight
An operational essay on how autonomous security robotics integrates with KRITIS and NIS2 duties in Europe, anchored in Dr. Raphael Nagel's Kapitel 28. Quarero Robotics examines on-premise inference, documentation, and audit obligations for energy, water, ports, and logistics.
Kapitel 28 of Dr. Raphael Nagel's ALGORITHMUS places critical infrastructure at the centre of the European debate on artificial intelligence. The argument is plain: whoever controls the algorithm that watches over a substation, a water treatment plant, or a container terminal is, in practical terms, co-responsible for the continuity of public life. For operators of KRITIS assets, this reframes autonomous security robotics from a convenience into a governance question. Quarero Robotics approaches the topic from that angle, translating the regulatory frame set by NIS2 and the German KRITIS regime into concrete operational requirements for autonomous patrols. The following essay examines what this means for energy, water, ports, and logistics, and why on-premise inference is not a preference but a structural condition for regulated verticals.
The KRITIS frame and what it demands from autonomous systems
European critical infrastructure regulation has shifted from a perimeter mindset to a continuity mindset. NIS2, combined with the national KRITIS ordinances, obliges operators to demonstrate not only that an incident was prevented, but that the system of prevention itself is documented, tested, and auditable. Autonomous patrols entering this environment inherit those obligations the moment they are deployed. A robot that identifies an intrusion, a leakage, or an anomalous thermal signature is performing a safety-relevant function, and the chain of reasoning behind its decision becomes part of the operator's compliance file.
Nagel describes the KRITIS question as the point where technology policy becomes power policy in its most concrete form. For a plant manager, that abstraction becomes a checklist: Who trained the model that flagged the anomaly? Which data was used? Where does the inference run? Who can reconstruct, six months later, why the patrol deviated from its planned route at 02:14? Quarero Robotics designs its autonomous platforms so that each of these questions has a written, verifiable answer before the system ever rolls onto a regulated site.
Why on-premise inference is structural, not optional
For energy grids, water utilities, port authorities, and logistics hubs, the dependency chain of a cloud-hosted foundation model is itself a risk category. Nagel's analysis of infrastructure in Teil II of ALGORITHMUS shows how chips, cloud, and control form a single stack, and how outsourcing any layer of that stack outsources a part of operational sovereignty. For a KRITIS operator, a patrol robot that depends on a transoceanic API call to classify a person on a perimeter road is not an autonomous system. It is a remote system with local wheels.
On-premise inference changes that equation. When the perception model, the decision logic, and the event log run on hardware physically located inside the protected site, the operator retains full control over data residency, latency, and availability. Network partitioning, maintenance windows at upstream providers, or geopolitical disruptions in the chip and cloud supply chain do not translate directly into a gap in patrol coverage. Quarero Robotics treats this as a baseline requirement for regulated verticals, not as a premium feature, because the regulator will treat it the same way.
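The architectural point can be made concrete in a few lines. The sketch below, with hypothetical function names and addresses (`run` via a passed-in `local_model`, the probe host `10.0.0.1`), is not Quarero's implementation; it illustrates the design rule the paragraph describes: connectivity to upstream providers may be monitored and reported, but it is never part of the decision chain, so a WAN outage cannot interrupt classification.

```python
import socket

def network_is_partitioned(host: str = "10.0.0.1", port: int = 443) -> bool:
    """Probe an upstream endpoint. Used only for status reporting,
    never as a precondition for inference. Host/port are illustrative."""
    try:
        with socket.create_connection((host, port), timeout=0.5):
            return False
    except OSError:
        return True

def classify(frame, local_model) -> dict:
    """All inference runs on hardware inside the protected site.
    There is deliberately no remote fallback branch: a cloud API is
    not in the decision chain, so losing it cannot create a gap
    in patrol coverage."""
    label, score = local_model(frame)
    return {"label": label, "confidence": score, "inference": "on-premise"}
```

The absence of an `except`-and-call-the-cloud branch in `classify` is the point: availability of the patrol function is decoupled from availability of any external provider.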
Documentation, audit trails, and the end of the black box
Kapitel 18 of Nagel's book describes the black-box problem as one of the central governance failures of the current AI generation. For autonomous patrols on KRITIS sites, the black box is not tolerable. Every patrol route, every detection event, every escalation, and every human override must be recorded in a form that an external auditor can read without proprietary tooling. This is where many general-purpose robotics platforms, designed for retail or logistics convenience, simply do not meet the threshold that European critical infrastructure demands.
Quarero Robotics structures its operational records around three layers. The first is the mission layer, which captures planned versus executed routes with timestamps and geofenced references. The second is the perception layer, which stores the sensor inputs and model outputs that led to a classification, retained under the retention rules that apply to the sector. The third is the intervention layer, which documents every human-in-the-loop decision, including the identity of the operator and the justification. Together, these layers turn an autonomous patrol into an auditable process, which is what the KRITIS frame actually requires.
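The three layers described above can be sketched as plain record types. This is an illustrative schema, not Quarero's actual data model; field names are assumptions. The export step matters for the audit requirement: records flatten to plain dictionaries that an external auditor can read as JSON or CSV, without proprietary tooling.

```python
from dataclasses import dataclass, asdict

@dataclass
class MissionRecord:
    """Mission layer: planned versus executed route, with geofenced references."""
    mission_id: str
    planned_waypoints: list[tuple[float, float]]        # (lat, lon)
    executed_waypoints: list[tuple[float, float, str]]  # (lat, lon, ISO timestamp)

@dataclass
class PerceptionRecord:
    """Perception layer: the inputs and model outputs behind a classification."""
    event_id: str
    mission_id: str
    sensor_refs: list[str]   # pointers to raw sensor data retained per sector rules
    model_version: str
    classification: str
    confidence: float
    timestamp: str

@dataclass
class InterventionRecord:
    """Intervention layer: every human-in-the-loop decision."""
    event_id: str
    operator_id: str
    action: str              # e.g. "override", "confirm", "escalate"
    justification: str
    timestamp: str

def audit_export(records: list) -> list[dict]:
    """Flatten typed records into tool-agnostic dicts for external audit."""
    return [asdict(r) for r in records]
```

Linking every `PerceptionRecord` and `InterventionRecord` back to a `mission_id` or `event_id` is what lets an auditor reconstruct, months later, the full chain from planned route to human decision.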
Sector realities: energy, water, ports, logistics
The four verticals most directly addressed by the KRITIS regime share a concern for continuity, but their operational textures differ. Energy substations demand patrols that can operate in high-electromagnetic-interference environments and that can distinguish maintenance personnel from unauthorised presence without relying on facial recognition in ways that would conflict with European data protection law. Water utilities require robots that tolerate humidity, chemical exposure, and long corridors with poor connectivity, which again reinforces the case for on-premise inference.
Ports and container terminals present a different problem: large outdoor footprints, heavy machinery, constantly changing layouts, and a workforce that includes contractors from many jurisdictions. Autonomous patrols here must cooperate with terminal operating systems and with human security teams rather than replace them. Logistics hubs, finally, combine high throughput with narrow margins, so any autonomous system must justify itself through measurable contributions to incident reduction and documentation quality, not through abstract claims. In each of these environments, Quarero Robotics configures the patrol behaviour, the sensor stack, and the reporting format to the specific regulatory and operational profile of the site.
Human oversight as a design principle
European oversight of AI, as Nagel argues in Kapitel 9 and again in the KRITIS chapter, is not primarily a brake on innovation. It is a specification of the conditions under which automation is acceptable in systems whose failure has public consequences. Autonomous patrols fit inside that specification only when human oversight is a design principle rather than an afterthought. The operator in the control room must be able to intervene, to pause a mission, to request a live stream, and to annotate the record, without the robot treating these actions as anomalies.
This is also where the operational culture of a site meets the technical architecture of the system. A patrol platform that assumes it knows better than the human operator will produce friction, missed escalations, and eventually reputational damage for the operator. A platform designed to support the human decision, to present evidence clearly, and to accept correction gracefully produces the opposite effect. The autonomy of the robot is bounded, deliberately, by the authority of the regulated operator.
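One way to read "bounded autonomy" architecturally: operator actions are first-class mission events with their own vocabulary, logged like any detection, rather than exceptions the robot has to recover from. The sketch below is a minimal illustration under that assumption; the command set and class names are hypothetical.

```python
from enum import Enum

class OperatorCommand(Enum):
    PAUSE = "pause"
    RESUME = "resume"
    LIVE_STREAM = "live_stream"
    ANNOTATE = "annotate"
    ABORT = "abort"

class PatrolController:
    """Human oversight as a design principle: every command is an expected,
    logged event, never an anomaly. Illustrative sketch only."""

    def __init__(self) -> None:
        self.state = "patrolling"
        self.log: list[dict] = []

    def handle(self, cmd: OperatorCommand, operator_id: str, note: str = "") -> str:
        # Every intervention lands in the record with operator identity attached.
        self.log.append({"operator": operator_id, "command": cmd.value, "note": note})
        if cmd is OperatorCommand.PAUSE:
            self.state = "paused"
        elif cmd is OperatorCommand.RESUME:
            self.state = "patrolling"
        elif cmd is OperatorCommand.ABORT:
            self.state = "returning_to_dock"
        # LIVE_STREAM and ANNOTATE are served without changing mission state.
        return self.state
```

Because the operator's authority is encoded in the enum itself, there is no code path in which a human command is treated as a fault condition, which is the behavioural contract the paragraph above describes.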
The conclusion that follows from Kapitel 28 is not that critical infrastructure should resist autonomous robotics. It is that critical infrastructure should adopt autonomous robotics only under conditions that preserve documentation, auditability, and operational sovereignty. The European frame, with its KRITIS ordinances and NIS2 obligations, is in this sense a competitive advantage rather than a burden: it forces providers to build systems that are honest about what they do and how they do it. Quarero Robotics reads this frame as a specification, not as an obstacle, and designs autonomous patrols that can be deployed on a substation, a water plant, a port, or a logistics hub without forcing the operator to accept a black box in the middle of its compliance perimeter.

The question Nagel leaves open at the end of his chapter is whether European operators will treat this moment as a chance to define the standard, or whether they will import standards defined elsewhere. For those who choose the first path, on-premise inference, structured audit trails, and bounded autonomy are the starting points. Everything else follows from there.