About

An end-to-end strategy for democratizing trustworthy and responsible AI.

Objectives

Objective 1: Mapping Trustworthy AI Principles

Title: Understand & Prioritize AI Ethics

Description: EXCALIBUR begins by systematically analyzing EU and global AI regulations and ethical frameworks, including the AI Act, the AI Bill of Rights, and the ACM principles. The project identifies and prioritizes human-centric requirements for transparent, fair, and accountable AI. The results are structured into a taxonomy that connects legal mandates, theoretical guidelines, and practical indicators (sketched below), forming the foundation for responsible AI system design.

Impact: Provides a clear roadmap for aligning AI development with ethical and legal standards.
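
As a concrete illustration, such a taxonomy can be encoded as structured entries tying each principle to its legal basis and to measurable indicators. Below is a minimal Python sketch under that assumption; the class, its fields, and the example entry are illustrative, not EXCALIBUR's actual schema.

    # Minimal sketch of one taxonomy entry; names and the example are
    # illustrative, not EXCALIBUR's actual schema.
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        """One trustworthy-AI requirement linking law, theory, and practice."""
        principle: str                 # e.g., "transparency"
        legal_mandate: str             # the legal provision it traces back to
        guideline: str                 # the theoretical guideline it reflects
        indicators: list[str] = field(default_factory=list)  # practical checks
        priority: int = 1              # 1 = highest human-centric priority

    transparency = Requirement(
        principle="transparency",
        legal_mandate="EU AI Act transparency obligations",
        guideline="Users must be able to interpret and contest system outputs",
        indicators=["per-prediction explanation available",
                    "model documentation published"],
    )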

Objective 2: LLM-based Explainability & Fairness

Title: Design Transparent & Fair AI

Description: Building on Objective 1, EXCALIBUR integrates Large Language Models (LLMs) with state-of-the-art explainable AI (XAI) and fairness methods. The framework acts as a model-agnostic tool that delivers human-understandable explanations and fairness assessments for any AI system (sketched below). This turns black-box AI pipelines into transparent, trustworthy systems whose outputs can be interpreted and validated by users, even non-experts.

Impact: Enables a paradigm shift toward human-centric, trustworthy AI technologies.
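
To make the model-agnostic idea concrete, here is a minimal sketch of how such a wrapper could look: it accepts any predictor, obtains feature attributions from an off-the-shelf XAI method, and asks an LLM to phrase the result for non-experts. Every name here (`explain_prediction`, the `llm` callable) is an assumption for illustration, not EXCALIBUR's actual interface.

    # Sketch of a model-agnostic explanation wrapper; every name is a
    # placeholder, not part of the EXCALIBUR codebase.
    from typing import Callable, Mapping

    def explain_prediction(
        predict: Callable,              # any model's prediction function
        attribute: Callable,            # a LIME/SHAP-style attribution method
        llm: Callable[[str], str],      # text-in, text-out LLM client
        instance: Mapping[str, float],  # one input to explain
    ) -> str:
        prediction = predict(instance)
        attributions = attribute(predict, instance)  # {feature: importance}
        top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:5]
        prompt = (
            f"The model predicted {prediction!r}. "
            f"The most influential features were: {top}. "
            "Explain this outcome in two plain sentences for a non-expert."
        )
        return llm(prompt)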

Objective 3: Validation & Human-Centric Deployment

Title: Test, Validate & Empower Users

Description: The EXCALIBUR framework is deployed in a human-centric domain (wearables) through an open-source platform that supports AI trustworthiness across the full ML pipeline. Through in-lab and in-the-wild experiments, citizen-science approaches, and iterative feedback, the platform ensures human oversight and societal benefit and produces actionable recommendations for responsible AI.

Impact: Bridges technical innovation with real-world usability and societal impact.

Methodological Approach

The methodological approach of EXCALIBUR is structured as an end-to-end strategy for democratizing trustworthy and responsible AI, combining regulatory mapping, method development with Large Language Models (LLMs), tool building, and stakeholder-driven evaluation.

Phase 1: Regulatory & Requirements Mapping

The project begins by translating Trustworthy AI principles into actionable technical requirements, with emphasis on transparency, fairness, and accountability. It also specifies human-centric needs (human oversight, societal well-being, safety) and establishes ethical and data-governance guidelines to ensure GDPR-aligned experimentation.

Phase 2: Framework Development

Building on this foundation, EXCALIBUR designs a model-agnostic LLM-based framework with two core components:

  • Explainability Module: Learns from diverse XAI methods (from LIME-like feature attribution to more advanced techniques) to generate human-readable explanations.
  • Fairness Evaluation Module: Computes and interprets a broad set of fairness metrics, using the LLM to explain their meaning in context and to help uncover potential sources of bias in data and models (see the sketch after this list).
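
The explainability side is sketched under Objective 2, so the example below focuses on the fairness module: it computes one illustrative metric (demographic parity difference) and drafts the prompt an LLM could use to interpret it. The function names are hypothetical, and the real module covers a much broader set of metrics.

    # Sketch of the fairness-module idea with one illustrative metric;
    # function names are hypothetical.
    import numpy as np

    def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Difference in positive-prediction rates between two groups."""
        return float(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def fairness_prompt(metric: str, value: float) -> str:
        """Ask the LLM to explain a raw fairness score in context."""
        return (
            f"The metric '{metric}' evaluated to {value:.3f} (0 means parity). "
            "Explain what this suggests about possible bias in the data or "
            "model, and what a practitioner should check next."
        )

    gap = demographic_parity_difference(np.array([1, 0, 1, 1]), np.array([0, 0, 1, 1]))
    print(fairness_prompt("demographic parity difference", gap))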

Phase 3: Validation & Experimentation

The framework is then validated through two complementary experimentation tracks in the wearable domain:

  • In-lab Evaluation: Using baseline AI models built on public wearable datasets for method development and fine-tuning (see the sketch below).
  • In-the-wild Case Studies: Involving non-expert stakeholders in iterative rounds to assess usability, trust, clarity of explanations, and usefulness of fairness reports.

This process follows a citizen-science-inspired approach to ensure continuous feedback and real-world relevance.
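
As a minimal sketch of the in-lab track, the snippet below trains a baseline classifier that would then be handed to the framework's modules. Synthetic data stands in for the public wearable datasets used in the actual experiments, and the model choice is illustrative.

    # Sketch of an in-lab baseline; synthetic data stands in for public
    # wearable datasets (e.g., windowed accelerometer features).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    baseline = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(f"Baseline accuracy: {baseline.score(X_test, y_test):.2f}")
    # The fitted baseline would then be passed to the explainability and
    # fairness modules for method development and fine-tuning.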

Phase 4: Open-Source Delivery

Finally, EXCALIBUR delivers an open-source toolset and visualization platform (e.g., a Python library plus a web-based interface; a usage sketch follows the list) that:

  • Integrates the framework into real AI pipelines
  • Presents results in a human-friendly way
  • Supports user intervention and feedback
  • Applies standard security and privacy practices for safe deployment
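
The sketch below conveys the intended workflow only: the package name `excalibur`, the `TrustReport` class, and every call shown are hypothetical placeholders, not a released API.

    # Purely illustrative usage; all names below are hypothetical.
    from excalibur import TrustReport

    report = TrustReport(
        model=baseline,            # e.g., the baseline from the Phase 3 sketch
        data=X_test,               # held-out evaluation data
        sensitive_attr="age_group" # attribute for fairness auditing
    )
    report.explain()         # human-readable explanations for sampled predictions
    report.audit_fairness()  # fairness metrics plus LLM-generated interpretation
    report.serve()           # web dashboard for user feedback and oversight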