This panel discussion will offer a few different perspectives on Explainable AI. There will be time for Q&A after these short presentations. Titles for the short presentations and corresponding references are provided below.
Panelists
Steve Grossberg: “Adaptive Resonance Theory is Explainable: Deep Learning, and AI based on it, is not.”
Grossberg, S. (2020). A path towards Explainable AI and autonomous adaptive intelligence: Deep Learning, Adaptive Resonance, and models of perception, emotion, and action. Frontiers in Neurorobotics, June 25, 2020. https://www.frontiersin.org/articles/10.3389/fnbot.2020.00036/full
Paulo Lisboa: “Given tabular data, ANOVA can express any black box classifier as a sum of non-linear and non-overlapping functions of fewer variables. The derived models make plausible predictions for real-world data and buck the performance-transparency trade-off even against deep learning.”
Walters, B., Ortega-Martorell, S., Olier, I., & Lisboa, P. J. G. (2023). How to open a black box classifier for tabular data. Algorithms, 16(4), 181. https://doi.org/10.3390/a16040181
Janet Wiles: “Who is XAI explaining itself to? Insights from Developer Priorities and User Experiences.”
Bingley, W. J., Curtis, C., Lockey, S., Bialkowski, A., Gillespie, N., Haslam, S. A., Ko, R. K. L., Steffens, N., Wiles, J., & Worthy, P. (2023). Where is the Human in Human-centered AI? Insights from Developer Priorities and User Experiences. Computers in Human Behavior, 141, 107617. https://doi.org/10.1016/j.chb.2022.107617
Marley Vellasco: “Explainable AI: Challenges and Opportunities.”
Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2022). Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities. Information Systems Management, 39(1), 53-63. https://doi.org/10.1080/10580530.2020.1849465
Stepin, I., Alonso, J. M., Catala, A., & Pereira-Fariña, M. (2021). A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence. IEEE Access, 9, 11974-12001. https://doi.org/10.1109/ACCESS.2021.3051315
Asim Roy: “DARPA’s form of Explainability provides natural protection from adversarial attacks plus a symbolic model.”