Algorithm · AI · Control layer

Algorithmic Sovereignty: Why European Security Robotics Is a Strategic Infrastructure Question

An editorial essay from Quarero Robotics on why autonomous security robotics in Europe must be read as a sovereignty question, drawing on Dr. Raphael Nagel's book ALGORITHMUS to examine chip, cloud and talent dependencies, AI Act obligations for high-risk systems and the structural risk of delegating perimeter surveillance AI to non-European foundation models.

Dr. Raphael Nagel (LL.M.)
Investor & Author · Founding Partner

In ALGORITHMUS, Dr. Raphael Nagel formulates a thesis that reads, on first encounter, like a provocation and, on reflection, like an operational brief: whoever controls the algorithm controls the conditions for everyone else. The book maps that claim across capital markets, chip supply chains, foundation models and regulatory regimes. For operators of autonomous security robotics in Europe, the thesis is not an abstraction. It is a description of the terrain on which perimeter surveillance, access control and critical infrastructure protection are now being built. This essay, written from the operational perspective of Quarero Robotics, translates Nagel's argument into the specific register of European security robotics and the obligations that follow from it.

The Canon: Control of the Algorithm as Control of the Conditions

Nagel's central line in ALGORITHMUS is that the decisive power question of the twenty-first century is not who writes the laws or signs the treaties, but who owns the algorithm that decides. He documents this through the capital flows into OpenAI, the Code Red reaction inside Google, the concentration of advanced chip manufacturing at TSMC, ASML and NVIDIA, and the October 2022 export control regime that redefined semiconductors as a strategic asset rather than a commodity. The pattern is consistent across every chapter: the physical substrate of computation, the model layer and the data layer are not neutral inputs. They are instruments through which conditions are set for every downstream actor.

For a security robotics operator, that reframing is uncomfortable but clarifying. A patrol robot, a surveillance drone or an autonomous access-control unit is, at the level of its decision logic, an algorithm executing on hardware that was designed, fabricated and often hosted outside Europe. If Nagel's thesis is correct, and the evidence he assembles is difficult to dismiss, then the question of who controls that stack is not an IT procurement detail. It is a sovereignty question with direct consequences for the resilience of critical infrastructure.

The Dependency Stack Behind an Autonomous Security Robot

A contemporary security robot is a composite object. Its perception layer typically relies on vision and sensor models trained on large datasets and accelerated by GPUs from a single dominant supplier. Its reasoning layer increasingly calls on foundation models hosted by a small number of hyperscalers. Its connectivity depends on cloud regions whose legal jurisdiction is not always European. Nagel's analysis of the chip ecosystem, in which roughly ninety percent of advanced logic fabrication sits in Taiwan and the single source of EUV lithography sits in the Netherlands under American export control pressure, describes the upstream reality that every European robotics operator inherits whether they model it explicitly or not.

The consequence is structural. A European operator of autonomous perimeter surveillance can hold impeccable contracts with its customers and still be exposed, at the level of the stack, to decisions taken in Washington, Hsinchu or Redmond. Quarero Robotics treats this not as a reason for pessimism but as a design constraint. Sovereignty at the application layer, which is where a security robot actually meets the protected site, is only meaningful if the operator has mapped, and where possible reduced, the concentration risk in the layers underneath.

The AI Act and the Operational Weight of High-Risk Classification

The European regulatory frame makes the sovereignty question concrete. Autonomous systems deployed for surveillance of critical infrastructure, for access control and for biometric identification fall within the high-risk categories of the AI Act. The obligations attached to that classification include documented risk management, data governance proportionate to the intended purpose, technical documentation, logging, human oversight arrangements, accuracy and robustness testing, and post-market monitoring. Penalties for non-compliance reach into percentages of global annual turnover that are material for any operator.

Nagel is blunt about what this means in practice. Regulation without the technical capacity to inspect, modify and retrain the systems being regulated produces a paradox: the obligation sits with the European operator, while the ability to satisfy it sits with a non-European provider. A security robotics company that has outsourced its perception and decision models to an opaque foundation model API cannot credibly attest to robustness, cannot fully reconstruct decision paths after an incident and cannot guarantee that model behaviour will not shift under a vendor update it did not authorise. Compliance, in that configuration, is a performance rather than a property of the system.
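The reconstruction problem described above can be made concrete. The following is a minimal sketch, not Quarero's actual schema: the names `DecisionRecord` and `log_decision` and all field names are illustrative. The idea is that each autonomous inference is logged with a pinned model version and a digest of its input, so that a decision path can be replayed after an incident — precisely the property that an opaque vendor API, updated without the operator's authorisation, cannot provide.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DecisionRecord:
    """One append-only audit entry per autonomous decision (illustrative schema)."""
    timestamp: float    # when the decision was taken
    model_id: str       # which model decided
    model_version: str  # exact pinned weights version; a silent vendor update changes this
    input_digest: str   # SHA-256 of the sensor frame, so the input can be matched later
    decision: str       # e.g. "escalate_to_operator", "benign"
    confidence: float   # model confidence at decision time

def log_decision(frame: bytes, model_id: str, model_version: str,
                 decision: str, confidence: float) -> DecisionRecord:
    """Build an audit record; in production this would go to write-once storage."""
    return DecisionRecord(
        timestamp=time.time(),
        model_id=model_id,
        model_version=model_version,
        input_digest=hashlib.sha256(frame).hexdigest(),
        decision=decision,
        confidence=confidence,
    )

record = log_decision(b"<sensor frame bytes>", "perimeter-vision", "2.4.1",
                      "escalate_to_operator", 0.91)
print(json.dumps(asdict(record), indent=2))
```

The load-bearing field is `model_version`: an operator who does not control the model cannot pin it, and without a pinned version the log entry cannot be tied to the behaviour that produced it.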

Why Perimeter Surveillance Cannot Be Delegated to a General Foundation Model

The temptation to route perimeter surveillance through a general-purpose vision and language model is understandable. The models are capable, the integration is fast and the marginal cost per query looks attractive. Nagel's chapter on foundation models as new platform monopolies explains why the calculation is incomplete. A general model is trained on a distribution of data and optimised for a distribution of tasks that were not chosen by the security operator. Its failure modes, its biases and its update cycle are determined elsewhere. For a system that must decide, at three in the morning, whether a figure at a fence line is a maintenance worker, an intruder or a shadow, the provenance of that judgement is not a philosophical matter.

The NIST findings on facial recognition error rates that Nagel cites, with error ratios of up to one hundred to one across demographic groups, are a warning rather than a historical footnote. A security robotics deployment that inherits such error structures inherits, with them, the legal and reputational exposure of every false identification. Quarero Robotics therefore treats the perception and decision layers of its autonomous security systems as domain-specific assets, trained and evaluated against the conditions under which they will actually operate, rather than as thin wrappers over external general models.

Domain Data as the European Lever

Nagel is precise about where European and mid-sized industrial actors retain leverage. It is not in raw compute, where the gap to American and Chinese hyperscalers is structural, and not in the frontier model race, where training runs now exceed the scale of most corporate research budgets. It is in proprietary domain data of strategic quality, combined with the algorithmic competence to refine it. A European security robotics operator accumulates exactly that kind of asset: years of site-specific perimeter behaviour, intrusion patterns, environmental conditions, false-alarm taxonomies and human-operator feedback that no general provider can reconstruct from public sources.

This is the basis on which European security robotics can be built as something other than a reseller channel for foreign AI. Quarero Robotics treats operational data from deployments, under strict contractual and regulatory discipline, as the raw material for models that are specific to the European security context, its legal environment and its physical geography. The aim is not autarky, which Nagel correctly identifies as unrealistic, but a defensible layer of sovereignty at precisely the point where the system meets the customer and the regulator.

What Operators of Critical Infrastructure Should Require

For KRITIS operators, the practical implication of Nagel's argument is a shorter and harder procurement conversation. The relevant questions are no longer limited to detection rates, battery life and integration timelines. They extend to the location of model training, the jurisdiction of the cloud region that hosts inference, the contractual treatment of model updates, the availability of logs sufficient for AI Act conformity assessments, and the continuity plan in the event that an upstream chip or cloud supplier becomes unavailable. These are sovereignty questions dressed in technical language.
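Those procurement questions can be phrased as a structured due-diligence checklist. The sketch below is illustrative only — the question keys and the `open_sovereignty_questions` helper are not a standard instrument and not Quarero's actual process. It simply records one answer per question and flags whatever remains open before a contract is awarded.

```python
# Sovereignty-oriented procurement questions from the paragraph above,
# expressed as a checklist keyed by topic (keys are illustrative).
CHECKLIST = {
    "model_training_location":  "Where were the perception and decision models trained?",
    "inference_jurisdiction":   "Which legal jurisdiction hosts the cloud region running inference?",
    "model_update_terms":       "How are model updates treated contractually?",
    "ai_act_logging":           "Are logs sufficient for AI Act conformity assessments available?",
    "upstream_continuity_plan": "What happens if an upstream chip or cloud supplier becomes unavailable?",
}

def open_sovereignty_questions(answers: dict) -> list:
    """Return every checklist question the vendor has not yet answered."""
    return [q for key, q in CHECKLIST.items() if not answers.get(key, "").strip()]

# Example: a vendor that has answered only two of the five questions.
vendor_answers = {
    "model_training_location": "EU, domain-specific dataset",
    "ai_act_logging": "Full inference logs retained for 24 months",
}
for question in open_sovereignty_questions(vendor_answers):
    print("OPEN:", question)
```

A vendor able to answer every key without deflection has, in the essay's terms, located its stack; each `OPEN` line is an unpriced dependency.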

Quarero Robotics takes the position that a European security robotics provider should be able to answer each of those questions without deflection. That includes naming the dependencies that cannot be eliminated, such as the current concentration in advanced GPUs, and describing the measures that reduce their weight, from on-premise inference where feasible to European hosting, domain-specific models and human-in-the-loop arrangements that satisfy the oversight obligations of the high-risk regime. The alternative, in Nagel's framing, is to operate critical infrastructure on conditions set by actors who are neither accountable to the European operator nor subject to European law.

Nagel closes the early chapters of ALGORITHMUS with a sentence that security robotics operators should take seriously: delegated power questions are not resolved, they are missed. For European autonomous security robotics, the power question is whether the algorithms that decide who enters a site, who is flagged at a perimeter and who is escalated to human review are built, trained and governed within a frame that the European operator and the European regulator can actually reach. Quarero Robotics reads that as an operational mandate rather than a slogan. It means designing systems whose perception, decision and logging layers are specific enough to the European security context to withstand technical audit, legal scrutiny and geopolitical stress. It means treating chip, cloud and talent dependencies as variables to be managed rather than conditions to be accepted. And it means accepting, with Nagel, that the question of who controls the algorithm is not a question for the next decade. It is the question that is being answered, in procurement decisions and deployment architectures, right now.

