
EXCALIBUR: Overview, pillars and methodology

11 February 2026


The EXCALIBUR project aims to advance trustworthy and responsible artificial intelligence by combining regulatory mapping, Large Language Model (LLM)-driven explainability, and a practical fairness evaluation toolset.

EXCALIBUR is organized around three core components:

  • Framework: A research-driven, model-agnostic framework that uses LLMs as explainers and fairness interpreters, producing clear, human-understandable outputs grounded in established XAI and fairness techniques (a minimal sketch of this pattern follows the list).

  • Toolset: An open-source toolset and visualization platform that integrates the framework into real AI pipelines, offering transparent explanations and fairness reports with interactive visualizations.

  • Evaluation & Case Studies: A two-track validation strategy combining in-lab experiments for method development with in-the-wild case studies and stakeholder involvement to assess usability, trust, and societal impact.
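
To make the "LLMs as explainers" idea concrete, here is a minimal sketch of one common pattern: compute per-feature attributions with an established XAI method, then render them into a prompt that an LLM can turn into a plain-language explanation. The feature names, attribution values, and the `complete` callable are illustrative assumptions, not EXCALIBUR's actual API.

```python
# Minimal sketch: turning XAI feature attributions into an LLM prompt.
# The attributions below are hand-written stand-ins; in practice they would
# come from an established method such as SHAP or LIME. The `complete`
# callable is a placeholder for whatever LLM client the pipeline uses.

from typing import Callable, Dict

def build_explanation_prompt(prediction: str, attributions: Dict[str, float]) -> str:
    """Render a model output and per-feature attributions as a plain-language prompt."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.2f}" for name, value in ranked]
    return (
        f"The model predicted: {prediction}.\n"
        "Feature attributions (positive values push toward the prediction):\n"
        + "\n".join(lines)
        + "\nExplain this decision in two sentences for a non-expert."
    )

def explain(prediction: str, attributions: Dict[str, float],
            complete: Callable[[str], str]) -> str:
    """Pass the rendered prompt to an LLM client supplied by the caller."""
    return complete(build_explanation_prompt(prediction, attributions))

if __name__ == "__main__":
    demo = {"income": 0.42, "age": -0.10, "credit_history_length": 0.31}
    # Echo the prompt instead of calling a real model, to keep the sketch runnable.
    print(explain("loan approved", demo, complete=lambda p: p))
```

Because the LLM client is passed in as a callable, the formatting logic stays model-agnostic, in the spirit of the framework described above.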

Key pillars of the project include translating Trustworthy AI principles into technical requirements, developing LLM-based explainability and fairness modules (one standard fairness metric is sketched below), and maintaining an open infrastructure supported by continuous, stakeholder-driven evaluation.
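
As an example of the kind of quantity a fairness report might surface, the snippet below computes demographic parity difference, a textbook group-fairness metric: the gap in positive-prediction rates across demographic groups. The data is invented for illustration, and the metric is standard; neither describes the toolset's internals.

```python
# Minimal sketch of a standard group-fairness metric: demographic parity
# difference, i.e. the gap in positive-prediction rates across groups.
# Inputs are illustrative; a real report would draw them from a live pipeline.

from collections import defaultdict
from typing import Sequence

def demographic_parity_difference(preds: Sequence[int], groups: Sequence[str]) -> float:
    """Return the max minus min positive-prediction rate across the given groups."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(preds, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    # Group "a" rate is 0.75, group "b" rate is 0.25, so the difference is 0.50.
    print(f"Demographic parity difference: {demographic_parity_difference(preds, groups):.2f}")
```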

Over the coming months, the team will publish method details, code releases, and case-study findings. If you’d like to get involved or follow progress, check back on our News page for updates.
