In the drive to develop the next generation of collaborative, interoperable, and secure air combat systems, one hurdle remains constant: trust.
While Deep Learning excels at managing complex, multi-agent environments, its "black-box" nature often creates a barrier to operational deployment. How can a pilot or commander trust a system that cannot explain its own decisions?

A Breakthrough at AAAI 2026
We are proud to highlight a significant technical milestone from our project partners, recently presented at AAAI 2026. Their paper, "Interpretable Multi-Agent Path Finding via Decision Tree Extraction from Neural Policies," tackles this challenge head-on.
The research introduces a framework that "distills" complex neural networks into decision trees. This process extracts the high-performance logic of the neural policy and translates it into a series of transparent, human-understandable rules.
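To give a flavour of the distillation idea, here is a minimal sketch (not the paper's actual method): a decision tree is trained to imitate the action choices of a policy on sampled states. The `neural_policy` function below is a hypothetical rule-based stand-in for a trained network, and all names and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def neural_policy(states):
    # Toy stand-in for a trained neural policy (assumption, not the paper's
    # model): given an agent's (dx, dy) offset to its goal, pick a move
    # (0=right, 1=left, 2=up, 3=down).
    dx, dy = states[:, 0], states[:, 1]
    return np.where(dx > 0.1, 0,
           np.where(dx < -0.1, 1,
           np.where(dy > 0.0, 2, 3)))

# Sample environment states and query the policy for action labels.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(2000, 2))
actions = neural_policy(states)

# Distillation step: fit a small, readable tree to imitate those decisions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(states, actions)

# Fidelity: how often the extracted tree agrees with the original policy.
fidelity = (tree.predict(states) == actions).mean()
print(f"fidelity: {fidelity:.2f}")
print(export_text(tree, feature_names=["dx", "dy"]))
```

The printed tree is the "auditable rules" artefact: each root-to-leaf path is an if-then rule over state features that a human can inspect, and the fidelity score quantifies how faithfully it reproduces the original policy.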
Why This Matters for EICACS
Operational Transparency: By converting neural policies into decision trees, we move from "hidden math" to "auditable rules," ensuring AI behavior aligns with mission protocols.
Seamless Collaboration: Multi-agent pathfinding is the "brain" behind drone swarming and wingman coordination. Making this logic interpretable ensures better human-machine teaming.
System Security: Transparent models are easier to verify and harden against adversarial interference, a core requirement for European defense sovereignty.
Looking Forward
The EICACS project consortium is set to gather at the Dassault Aviation premises in Saint-Cloud, near Paris, France, this March, to review last year's technical progress and lay the groundwork for the project's final stage.
Follow the project on LinkedIn and YouTube to stay connected with upcoming developments.

Co-funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.