DCLXVI 2024: International Workshop on Designing the Conceptual Landscape for a XAIR Validation Infrastructure
Rheinland-Pfälzische Technische Universität (RPTU), Kaiserslautern, Germany, December 11, 2024
Conference website: http://batcat.info/dclxvi/
Submission link: https://easychair.org/conferences/?conf=dclxvi2024
Poster: download
Abstract registration deadline: October 22, 2024
Submission deadline: October 22, 2024
Logo: https://batcat.info/dclxvi/dclxvi-2024-logo.png
Designing the Conceptual Landscape for a XAIR Validation Infrastructure
"XAIR" stands for "explainable-AI-ready." DCLXVI 2024 will compare different points of view on what it means to make models and data explainable-AI-ready, in addition to being findable, accessible, interoperable, and reusable (FAIR), so that they become "FAIR and XAIR." These discussions are related to the Knowledge Graph Alliance's working group on XAIR principles.
Manuscript categories
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Discussion of a core concept for explainable-AI-readiness, including a critical analysis of multiple definitions from the literature (a comprehensive review of the entire literature is not required).
- Surveying the landscape of multiple core concepts (two or three, not more), including an analysis of how or whether different definitions of these concepts can be combined with each other.
- Applied ontology techniques, methodology, software, or digital artefacts that can be used for conceptual landscape discovery and visualization or for conceptual landscape design, including a demonstration of how this can be applied to core concepts for explainable-AI-readiness.
- Papers on going beyond FAIR, including a discussion of requirements that are insufficiently addressed by the FAIR principles, so that they need to be supplemented, updated, or revised.
Use the LaTeX template and style files provided by Springer. Manuscripts must be at least eight pages long (excluding references and appendices) and at most twenty pages long (including references and appendices). Also consult Springer's guidelines for proceedings authors.
What are these "core concepts"?
The core concepts for explainable-AI-readiness (i.e., the concepts to be discussed at the workshop) include, but are not limited to, the following:
- Explainability and explanation
- Reproducibility, reliability, and reliance
- Opacity and transparency, interpretability and interpretation
- Data, knowledge, information, and wisdom
- Responsibility, trust, trustworthiness, and reasons/motivations for trusting
- Model design, parameterization, and optimization
- Holistic validation and unit testing (of models)
- Theoretical virtues (of models)
- Epistemic agents, vices, and virtues
- The four elements of "FAIR," possibly requiring a revision or update
- Simulation; applying and evaluating models
- Context awareness, subject matter, and logical subtraction
Committees
See the website for an updated list of committee members.
Publication
The proceedings will be published in the Springer series Lecture Notes in Networks and Systems (LNNS). By the camera-ready stage at the latest, manuscripts must conform to the specifications for publication in that series.
Venue
Rheinland-Pfälzische Technische Universität (RPTU), Campus Kaiserslautern, Germany
Sponsors
The workshop is organized by the Horizon Europe projects AI4Work (HEur GA 101135990) and BatCAT (HEur GA 101137725).