The workshop will be held at the GECCO 2026 conference in San Jose, Costa Rica, which runs 13-17 July. The workshop will take place both on-site and streamed online.

Submission deadline: March 27, 2026
Explainable artificial intelligence (XAI) has gained significant traction in the machine learning community in recent years, driven by the need to generate “explanations” of how these typically black-box tools operate that are accessible to a wide range of users. From an application perspective, important questions arise for which XAI may be crucial: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? The goal of XAI and related research is to develop methods of interrogating AI processes that can answer these questions, supporting decision makers while also building trust in AI decision-support through more readily understandable explanations.
Nature-inspired optimisation techniques are also often black boxes, and the explainability community has begun to consider explaining their operation too. Many of the processes that drive nature-inspired optimisers are stochastic and complex, presenting a barrier to understanding how solutions to a given optimisation problem have been generated. Explainable optimisation can address some of the application-focused questions above, around bias, problem formulation, and trust, which also arise when an optimiser is used.
Providing mechanisms that enable a decision maker to interrogate an optimiser and answer these questions builds trust in the system. In the other direction, many approaches to XAI in machine learning are based on search algorithms that interrogate or refine the model to be explained, and so can draw on the expertise of the EC community. Furthermore, many of the broader questions (such as what kinds of explanation are most appealing or useful to end users) are faced by XAI researchers in general.
Following the success of the first four workshops hosted at GECCO 2022-2025, we seek contributions on a range of topics related to this theme, including but not limited to:
- Interpretability vs explainability in EC and their quantification
- Landscape analysis and XAI
- Contributions of EC to XAI in general
- Use of EC to generate explainable/interpretable models
- XAI in real-world applications of EC
- Possible interplay between XAI and EC theory
- Applications of existing XAI methods to EC
- Novel XAI methods for EC
- Legal and ethical considerations
- Case studies / applications of EC & XAI technologies
Papers will be double-blind reviewed by members of our technical programme committee. Authors can submit short contributions, including position papers, of up to 4 pages, and regular contributions of up to 8 pages; submissions in each category must follow the GECCO paper formatting guidelines. Software demonstrations are also welcome.
======================================
IMPORTANT DATES
Submission opening: February 2, 2026
Submission deadline: March 27, 2026
Notification: April 24, 2026
Camera-ready: May 5, 2026
Authors' mandatory registration: May 11, 2026
Workshop: TBC
For more detailed information, see the ECXAI at GECCO 2026 workshop website (https://ecxai.github.io/ecxai/workshop-2026).
