EXCALIBUR

EXplainable and demoCrAtized pipeLInes for unBiased, trUstworthy, and Responsible human-centric AI

About the Project

  • EXCALIBUR develops a human-centric framework for Trustworthy and Responsible AI, using LLMs to generate clear, model-agnostic explanations and fairness interpretations for AI systems.
  • It bridges regulatory principles and technical requirements, translating transparency, fairness, and accountability into actionable system specifications and ethical guidelines.
  • The project delivers an open-science platform and toolset with human-friendly visualizations, enabling understanding, interaction, and feedback, even for non-experts.
  • Validation combines in-lab experimentation (method development and fine-tuning) with in-the-wild case studies with non-expert users, following a Citizen Science approach.
  • Initial use cases focus on the wearables domain, where trust, clarity, and fairness are especially critical.

At a Glance

Project Title: EXCALIBUR: EXplainable and demoCrAtized pipeLInes for unBiased, trUstworthy, and Responsible human-centric AI
Duration: 36 months
Total Budget: €300,000
Host Institution: Aristotle University of Thessaloniki (AUTh)
Principal Investigator: Athena Vakali
Scientific Area: SA5 "Mathematics & Information Sciences"
Scientific Field/Subfield: 5.2 "Computer and information sciences" - 5.2.7 "Artificial intelligence, intelligent systems, multi-agent systems"

Core Components

EXCALIBUR is structured around three core components in response to the growing need for trustworthy and responsible artificial intelligence.

Framework

A research-driven framework that leverages Large Language Models (LLMs) as advanced, model-agnostic explainers and fairness evaluators, integrating insights from established explainability and fairness approaches into clear, human-understandable outputs.
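As a purely illustrative sketch (not the project's actual pipeline), the idea can be pictured as computing model-agnostic attributions and handing them to an LLM to rephrase for non-experts. The sketch assumes a scikit-learn classifier; `call_llm` is a hypothetical stand-in for whichever LLM backend a pipeline uses:

```python
# A minimal sketch, assuming a scikit-learn classifier; `call_llm` is a
# hypothetical placeholder for an LLM backend, not part of EXCALIBUR.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic attribution: permutation importance only needs predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]

# Hand the raw attributions to an LLM to rephrase for non-expert readers.
prompt = (
    "Explain in plain language, for a non-expert, why a classifier relies on "
    "these features (name: importance): "
    + ", ".join(f"{name}: {score:.3f}" for name, score in top)
)
# explanation = call_llm(prompt)  # hypothetical LLM client
print(prompt)
```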

Toolset

An open-source toolset and visualization platform that embeds the framework into real AI pipelines and presents explanations and fairness reports in a transparent, user-friendly way, enabling human insight, interaction, and feedback.

Evaluation & Case Studies

A two-track validation approach that combines in-lab experimentation (for method development and fine-tuning) with in-the-wild case studies involving stakeholders, iteratively assessing usability, trust, clarity, and overall impact.

Key Pillars

The foundational principles that guide EXCALIBUR's approach to trustworthy AI.

Regulatory-to-Technical Mapping

The systematic translation of Trustworthy AI principles and regulatory requirements into concrete technical specifications and ethical guidelines that shape the framework and platform design.
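As a loose illustration of what such a mapping might look like in practice, the sketch below pairs Trustworthy AI principles with checkable system requirements. The principle names follow the EU HLEG Trustworthy AI guidelines; the requirement entries are assumed examples, not EXCALIBUR's actual specification:

```python
# Illustrative only: the requirement entries are assumptions for the sake
# of example, not the project's published specification.
REGULATORY_TO_TECHNICAL = {
    "transparency": [
        "log model version and training-data provenance",
        "attach a plain-language explanation to every automated decision",
    ],
    "fairness": [
        "report group-fairness gaps (e.g. demographic parity) per protected attribute",
    ],
    "accountability": [
        "retain audit trails of predictions, explanations, and human feedback",
    ],
}

for principle, requirements in REGULATORY_TO_TECHNICAL.items():
    print(principle, "->", "; ".join(requirements))
```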

Model-Agnostic Explainability

LLM-based components that produce human-like explanations and contextual fairness interpretations across tasks and metrics, supporting the identification and understanding of potential sources of bias in data and models.
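For intuition, a minimal sketch of one such fairness interpretation: compute a standard group-fairness metric (demographic parity difference) and phrase it as a prompt for an LLM to contextualize. The data, group labels, and `call_llm` helper below are illustrative assumptions, not project outputs:

```python
# Illustrative data: model decisions and a protected attribute for two groups.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])

# Positive-decision rate per group, and their gap (demographic parity difference).
rates = {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}
dpd = max(rates.values()) - min(rates.values())

prompt = (
    f"Positive-decision rates by group: {rates}. "
    f"The demographic parity difference is {dpd:.2f}. "
    "Explain for a non-expert whether this indicates bias and what the metric's limits are."
)
# interpretation = call_llm(prompt)  # hypothetical LLM client
print(prompt)
```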

Open Tools & Stakeholder Validation

An open infrastructure for reuse (covering code, tools, and platform components) combined with continuous evaluation involving researchers and non-expert users, ensuring practical usefulness and societal relevance.

Get Involved

EXCALIBUR follows a citizen-science-inspired approach. Stay connected for upcoming opportunities to participate in our in-the-wild case studies and help shape the future of trustworthy AI.