EXCALIBUR

Trustworthy and Responsible AI through LLM-powered Explainability and Fairness Evaluation

Funded Research Initiative

The EXCALIBUR project is implemented under the Hellenic Foundation for Research and Innovation (HFRI / ELIDEK) through the call "Basic Research Financing (Horizontal support for all Sciences)", which is part of Component 4.5 "Promoting Research and Innovation" of the National Recovery and Resilience Plan "Greece 2.0", funded by the European Union – NextGenerationEU.

Core Components

In response to the growing need for trustworthy and responsible artificial intelligence, EXCALIBUR is structured around three core components.

Framework

A research-driven framework that leverages Large Language Models (LLMs) as advanced, model-agnostic explainers and fairness evaluators, integrating insights from established explainability and fairness approaches into clear, human-understandable outputs.
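As an illustration of the kind of fairness evaluation the framework could translate into human-understandable output, the sketch below computes a standard demographic-parity gap and renders it as a plain-language report. The function names, the 10% threshold, and the report wording are illustrative assumptions, not part of the EXCALIBUR framework itself.

```python
def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate across groups.

    Model-agnostic: only binary predictions and group labels are needed.
    """
    rates = {}
    for g in set(group):
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0], rates

def fairness_report(rates, gap, threshold=0.1):
    """Turn raw metric values into a short, readable fairness summary."""
    lines = [f"Positive-outcome rate for group {g}: {r:.0%}"
             for g, r in sorted(rates.items())]
    verdict = "exceeds" if gap > threshold else "is within"
    lines.append(f"The gap of {gap:.0%} {verdict} "
                 f"the {threshold:.0%} demographic-parity threshold.")
    return "\n".join(lines)

# Toy predictions for two groups of equal size.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(y_pred, group)
report = fairness_report(rates, gap)
```

In the envisioned pipeline, a report like this (or the underlying metric values) would be passed to an LLM for richer, context-aware interpretation rather than shown verbatim.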

Toolset

An open-source toolset and visualization platform that embeds the framework into real AI pipelines and presents explanations and fairness reports in a transparent, user-friendly way, enabling human insight, interaction, and feedback.

Evaluation & Case Studies

A two-track validation approach that combines in-lab experimentation, for method development and fine-tuning, with in-the-wild case studies involving stakeholders, assessing usability, trust, clarity, and overall impact through iterative evaluation.

Key Pillars

The foundational principles that guide EXCALIBUR's approach to trustworthy AI.

Regulatory-to-Technical Mapping

The systematic translation of Trustworthy AI principles and regulatory requirements into concrete technical specifications and ethical guidelines that shape the framework and platform design.

Model-Agnostic Explainability

LLM-based components that produce human-like explanations and contextual fairness interpretations across tasks and metrics, supporting the identification and understanding of potential sources of bias in data and models.
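To make the model-agnostic idea concrete, here is a minimal sketch: permutation importance needs only a prediction function, so it works for any model, and its scores can be packaged into a prompt for an LLM to verbalize. The helper names and prompt text are hypothetical examples, not EXCALIBUR APIs.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Model-agnostic attribution: drop in accuracy when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    base = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature-target link for column j
            drops.append(base - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

def explanation_prompt(feature_names, importances):
    """Format the scores as a prompt an LLM could turn into plain language."""
    ranked = sorted(zip(feature_names, importances), key=lambda t: -t[1])
    lines = "\n".join(f"- {name}: {imp:+.3f}" for name, imp in ranked)
    return ("Explain in plain language which features drive this model's "
            "predictions, given these permutation-importance scores:\n" + lines)

# Toy model: predicts 1 exactly when the first feature is positive.
X = np.column_stack([np.linspace(-1, 1, 100), np.zeros(100)])
y = (X[:, 0] > 0).astype(int)
model = lambda Z: (Z[:, 0] > 0).astype(int)

imps = permutation_importance(model, X, y)
prompt = explanation_prompt(["income", "noise"], imps)
```

Shuffling the informative feature degrades accuracy while shuffling the constant one does not, so the prompt ranks "income" first; an LLM receiving it could then explain the ranking, and analogous prompts could carry fairness metrics for contextual interpretation.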

Open Tools & Stakeholder Validation

An open infrastructure for reuse (covering code, tools, and platform components) combined with continuous evaluation involving researchers and non-expert users, ensuring practical usefulness and societal relevance.

Get Involved

EXCALIBUR follows a citizen-science-inspired approach. Stay connected for upcoming opportunities to participate in our in-the-wild case studies and help shape the future of trustworthy AI.