A Trust Framework for Sovereign AI: The Hallucination-Aware Layered Optimization (HALO) Project
Organisations involved
Main Participant: Seedbox Ventures is a Germany-based AI studio. The company, which historically developed bespoke AI solutions for clients, recognised a critical gap in the European market for trustworthy and sovereign AI. This innovation study was central to its strategic transformation from a service-based company into a scalable, AI-first technology provider.
The challenge
European adoption of Generative AI is slow in regulated sectors such as finance, legal services, and healthcare because costs are difficult to predict, hallucinations undermine trust, and GDPR and data-sovereignty requirements restrict the use of non-EU “black-box” models. Cloud AI solutions and manual verification workflows do not meet the reliability or auditability needed for high-risk, production decision-making.
Seedbox therefore needed to move beyond its service-based model and develop a scalable, product-led alternative: a reusable framework for verifiable, cost-efficient, and fully EU-sovereign AI. Early efforts to create a single efficient model capable of supporting multiple European languages were unsuccessful, as improving performance in one language consistently reduced accuracy in others. In parallel, approaches to cost forecasting and real-time hallucination detection were not sufficiently mature for deployment, forcing a fundamental architectural rethink.
That pivot depended on access to high-performance computing. HPC enabled rapid evaluation of alternative designs, distributed training for continuous pre-training, specialised reinforcement learning with Group Relative Policy Optimization (GRPO), and large-scale synthetic data generation. FFplus provided both the compute capacity and the structured framework required to de-risk development, validate the Hallucination-Aware Layered Optimization (HALO) architecture, and converge on capital-efficient, production-ready models.
The Solution
Seedbox then developed the HALO framework, an integrated architecture built entirely on open-source models to ensure GDPR and EU AI Act compliance. Using distributed HPC infrastructure, the team invested more than 100,000 GPU hours in continuous pre-training, GRPO-based reinforcement learning, and large-scale synthetic data generation.
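The distinguishing idea of GRPO is that it replaces a separate learned value (critic) network with group-relative reward normalisation: several completions are sampled per prompt, and each completion's advantage is its reward standardised against the group's mean and spread. The sketch below illustrates only that normalisation step; all names are hypothetical, as the study does not publish its training code.

```python
# Illustrative sketch of the group-relative advantage computation used in
# GRPO (Group Relative Policy Optimization). Hypothetical names; not the
# project's actual training code.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Standardise each sampled completion's reward against its group,
    removing the need for a separate value (critic) network."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    if sigma == 0.0:
        # All completions scored identically: no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# Example: four completions sampled for one prompt, scored by a reward model.
advantages = group_relative_advantages([0.9, 0.2, 0.5, 0.4])
```

Because the advantages are centred per group, they sum to zero: the policy is pushed toward completions that beat their siblings and away from those that lag them.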
HALO combines three tightly integrated components: a specialised RAG-enabled LLM trained to answer strictly from source documents; a lightweight classifier router that filters simple queries to avoid unnecessary high-cost inference; and a multilingual factual consistency scorer that evaluates outputs against source material. Together, these layers deliver transparent, auditable, and cost-controlled Generative AI suitable for regulated production environments.
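The three layers above can be read as one pipeline: route, answer from sources, then score the answer's faithfulness. The stub below is a minimal illustration of that layering only; the routing heuristic, scorer logic, and all component names are assumptions, not the project's actual interfaces.

```python
# Hypothetical sketch of the three-layer HALO flow: classifier router,
# RAG-enabled LLM, and factual consistency scorer. Internals are stubbed.
from dataclasses import dataclass

@dataclass
class HaloAnswer:
    text: str
    consistency_score: float  # faithfulness of the answer to the sources
    routed_to: str            # which model tier handled the query

def route(query: str) -> str:
    """Lightweight router stub: send short, simple queries to a cheap
    model tier, everything else to the RAG-enabled LLM."""
    return "small_model" if len(query.split()) < 8 else "rag_llm"

def rag_answer(query: str, sources: list[str]) -> str:
    """Stub for the RAG-enabled LLM that answers strictly from sources."""
    return sources[0] if sources else "No supporting source found."

def consistency(answer: str, sources: list[str]) -> float:
    """Stub for the consistency scorer: here, just checks whether the
    answer is grounded verbatim in a source document."""
    return 1.0 if any(answer in s or s in answer for s in sources) else 0.0

def halo_pipeline(query: str, sources: list[str]) -> HaloAnswer:
    tier = route(query)
    answer = rag_answer(query, sources)  # a real system would branch on tier
    return HaloAnswer(answer, consistency(answer, sources), tier)
```

The routing layer is what drives the cost savings reported below: cheap queries never reach the expensive model, while the scorer attaches an auditable faithfulness signal to every answer that does get generated.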
Impact
Seedbox Ventures is completing its transition from a straightforward service provider to a scalable, product-led AI company, targeted for February 2026. The HALO framework now represents its core strategic asset and the foundation for developing its SaaS revenue base. To support this, operational LLM costs have been reduced by approximately 85%, enabling economically viable deployment for SMEs and regulated enterprises.
From a societal perspective, HALO directly addresses trust deficits in Generative AI by providing sentence-level auditability, supporting responsible adoption in domains where incorrect outputs carry legal or financial risk. Environmentally, the project’s shift from large-scale pre-training to targeted optimisation significantly reduced R&D energy consumption. In production, the classifier router further lowers energy use by avoiding compute-intensive inference for the majority of simple queries.
Benefits
- Seedbox achieved approximately 85% reduction in operational LLM costs through intelligent query routing.
- Delivered auditable, sentence-level hallucination transparency for regulated AI applications.
- Enabled market entry for a sovereign European alternative to non-EU proprietary models.
- Supported transformation to a scalable, product-led business model with projected SaaS revenues.
- Reduced energy consumption through targeted training and efficient inference pathways.