Securing Europe’s Digital Future: Four Leading Research Initiatives Join Forces for Trustworthy AI

In a major move to secure Europe’s digital future, four leading-edge research initiatives funded by the European Health and Digital Executive Agency (HaDEA) – AIXPERT, ROBUSTIFAI, TRUMAN, and TURING – have officially joined forces to form a strategic innovation cluster. Backed by a combined EU investment of around €28.8 million under the Horizon Europe programme, this powerhouse collaboration is set to redefine the landscape of human-centric technology. By pooling their expertise, the cluster aims to move beyond theoretical safety, engineering a new generation of AI systems that are not only high-performing but inherently resilient, transparent, and ethically aligned with European standards. This unified front represents a critical step in ensuring that as AI scales into our daily lives, it remains a reliable and accountable partner for society. 

The AIXPERT project is developing a pioneering situation-aware platform for trustworthy AI, bringing together an agentic framework, explainable multimodal foundation models, and a technical governance framework to make AI systems transparent, accountable, and robust. By combining multi-agent systems with real-time human feedback, the project enables reliable and user-friendly AI across diverse use cases and domains. Its solutions will be validated in high-impact areas including healthcare, recruitment, manufacturing, education, and the creative industries.

The ROBUSTIFAI project is developing a rigorous design and deployment methodology for building reliable, robust, and trustworthy generative AI systems. The project rests on three key axes: embedding human-centric needs within neural models, integrating neural and symbolic techniques, and enabling adaptivity to environmental changes and user variations. ROBUSTIFAI focuses on foundation models deployed within human cyber-physical systems: complex systems that combine computation, networking, humans, and physical processes to monitor and control real-world environments, with applications across many sectors. The methodology will be demonstrated and validated through three representative use cases in automotive systems, service robotics, and cybersecurity.

The TRUMAN project is designing and developing technologies and methodologies for improving AI systems' resilience against security, privacy, and fairness attacks, and for increasing human users' trust across the full AI lifecycle. TRUMAN investigates innovative technologies around knowledge graphs, continual learning, and large language models (LLMs) in scenarios involving dynamic data collection, distributed model training, and human-in-the-loop (HITL) interaction. The project will develop customised robustness solutions for both existing and newly emerging privacy, adversarial, and fairness attacks, guided by human interaction design principles. These technologies will be evaluated against four use cases illustrating scenarios from different sectors, including marketing, IT, finance, and healthcare.

The TURING project is advancing the development of robust and trustworthy generative AI models designed to simulate complex physical systems with unprecedented efficiency. By leveraging multimodal foundation models trained on diverse datasets of dynamic physical processes, TURING enables AI systems to learn the underlying laws governing these systems and to produce accurate predictions while dramatically reducing the computational burden typically associated with high-fidelity simulations. Its solutions will be demonstrated in critical scientific and industrial fields including nuclear energy and safety, particle physics, and meteorology and climate modelling. TURING integrates ethical, legal, and social considerations throughout the AI lifecycle.

For project contacts, please refer to the contact section on their websites.