• Tutorials

    • Ethical Risks and Challenges of Computational Intelligence

      Artificial intelligence (AI) has entered an increasing number of domains. A growing audience – in the general public as well as in research – has begun to consider the potential ethical challenges and legal issues related to the development and use of AI technologies. Related initiatives have appeared across the globe, such as the High-Level Expert Group on Artificial Intelligence (AI HLEG) appointed by the European Commission, whose general goal is to support the implementation of the European Strategy on Artificial Intelligence. This has been followed up by the proposed European Artificial Intelligence Act (AIA) and the New Machinery Directive (MD), which focus on developing a framework for trustworthy artificial intelligence within Europe, laying down harmonized rules for AI systems both with and without a physical layer (e.g., robots vs. chatbots). This tutorial will give an overview of the most commonly expressed ethical challenges and the measures being taken to reduce their negative impact, drawing on the findings of an earlier review (https://www.frontiersin.org/articles/10.3389/frobt.2017.00075/full) and an overview paper on artificial intelligence ethics (https://www.computer.org/csdl/journal/ai/5555/01/09844014/1Fnr097UNd6), supplemented with recent work and initiatives. This includes the challenges identified in the “Statement on research ethics in artificial intelligence” (https://www.forskningsetikk.no/globalassets/dokumenter/4-publikasjoner-som-pdf/statement-on-research-ethics-in-artificial-intelligence.pdf).

      Among the most important challenges are those related to privacy, fairness, transparency, safety, and security. Countermeasures can be taken first at design time; second, when a user decides where and when to apply a system; and third, when a system is in use in its environment. In the latter case, the system itself will need to perform some ethical reasoning if operating in an autonomous mode. This tutorial will introduce examples from our own and others’ work and show how the challenges can be addressed from both a technical and a human side, with special attention to problems relevant to AI research and development. AI ethical issues should not be seen only as challenges but also as new research opportunities contributing to more sustainable, socially beneficial services and systems.

      An overview of the topic explaining its relevance and significance to the computational intelligence society
      Computational intelligence ethics is a very broad and multi-disciplinary research area. In this tutorial, we aim to present a structured overview focusing on the most important ethical issues and their countermeasures. We will also cover how these issues can open up new research directions related to robots and systems.

      As development moves from lab settings to practical applications involving users, there is increasing attention on the ethical implications and legal issues related to robots and systems. Experience from earlier talks on the same topic has shown a wide and growing interest in the theme of the tutorial. The tutorial will therefore target all attendees of the IJCNN-2023 conference.

      A tutorial schedule with topic and time allocation 
      The main content of the tutorial will be a presentation of the most commonly expressed ethical challenges, illustrated by examples from our own and others’ work. The tutorial will also contain parts where participants discuss ethical challenges (in plenary or in small groups). Further, opinions within the audience will be collected using tools like the Mentimeter voting tool (participants answer multiple-choice questions on their smartphones).

      • Southern University of Science and Technology (SUSTech), Shenzhen, China

    • Explainable AI (XAI) for Computer Vision – A Review of Existing Methods and a New Method to Extract a Symbolic Model from a CNN model

      Along with the advent and rapid adoption of deep learning, there is concern about using models that we do not really understand, and because of this concern, many critical applications of deep learning are hindered. The concern about the transparency and trustworthiness of these models is so high that it is now a major research focus of Artificial Intelligence (AI) programs at funding agencies such as DARPA and NSF in the US. If we can make deep learning explainable and transparent, the economic impact of such a technology could be in the trillions of dollars.

      One of the specific forms of Explainable AI (XAI) envisioned by DARPA is the recognition of objects based on the identification of their parts. For example, to predict that an object is a cat, the system must also recognize some of a cat’s specific features, such as fur, whiskers, and claws. Making object prediction contingent on the recognition of parts provides additional verification for the object and makes the prediction robust and trustworthy.
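      As a minimal illustrative sketch (not from the tutorial itself; the part names, scores, and thresholds below are hypothetical), prediction contingent on part recognition can be expressed as a simple rule over part-detector outputs:

```python
# Hypothetical sketch: accept an object label only if enough of its
# constituent parts are detected. Part names and thresholds are illustrative.

PART_REQUIREMENTS = {
    # object -> (required parts, minimum number that must be detected)
    "cat": ({"fur", "whiskers", "claws"}, 2),
}

def verified_prediction(label, part_scores, threshold=0.5):
    """Return `label` only if enough of its parts score above threshold."""
    required, min_parts = PART_REQUIREMENTS[label]
    detected = {p for p in required if part_scores.get(p, 0.0) >= threshold}
    return label if len(detected) >= min_parts else "rejected"

print(verified_prediction("cat", {"fur": 0.9, "whiskers": 0.7, "claws": 0.1}))  # cat
print(verified_prediction("cat", {"fur": 0.9, "whiskers": 0.2, "claws": 0.1}))  # rejected
```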

      The first part of this tutorial will review some of the existing methods of XAI in general and then those that are specific to computer vision. The second part of this tutorial will cover a new method that decodes a convolutional neural network (CNN) to recognize parts of objects. The method teaches a second model the composition of objects from parts and the connectivity between the parts. 

      This second model is a symbolic and transparent model. Experimental results will be discussed including those related to object detection in satellite images. Contrary to conventional wisdom, experimental results show that part-based models can substantially improve the accuracy of many CNN models. Experimental results also show part-based models can provide protection from adversarial attacks. Thus, a school bus will not become an ostrich with the tweak of a few pixels.

    • Randomization in Neural Networks: Feed-forward and Reservoir Computing models

      Randomization plays a fundamental role in the design of machine learning models, and of neural networks in particular. This tutorial will delve into randomization-based neural network systems, with special attention to neural architectures based on random, untrainable weights.

      This tutorial will first introduce the main randomization-based feedforward learning paradigms with closed-form solutions. A popular instantiation of such feedforward neural networks is the random vector functional link (RVFL) network, which originated in the early 1990s. Other feedforward methods covered in the tutorial include random weight neural networks (RWNN), extreme learning machines (ELM), stochastic configuration networks (SCN), and broad learning systems (BLS). We will also present recently developed deep RVFL implementations, consider computational complexity as classification/forecasting problems grow in scale, and present extensive benchmarking studies on classification and forecasting datasets.
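      As a hedged illustration of the closed-form paradigm (a minimal RVFL-style sketch in NumPy with illustrative sizes and regularization, not the tutorial’s own code): the hidden weights stay random and untrained, and only the linear readout over the direct input links and hidden features is obtained in closed form via ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=50, lam=1e-2):
    """RVFL sketch: random untrained hidden layer, direct input-output
    links, and a closed-form ridge-regression readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H])                         # direct links + hidden features
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Toy regression problem: y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)
W, b, beta = rvfl_fit(X, Y)
print(np.mean((rvfl_predict(X, W, b, beta) - Y) ** 2))  # small training MSE
```

      The only linear-algebra cost is one regularized least-squares solve, which is where the training-speed advantage of these paradigms comes from.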

      The tutorial will then focus on Reservoir Computing (RC), a class of recurrent neural networks in which the dynamical hidden layer is left untrained after an initialization based on dynamical systems stability theory. The RC paradigm enables the design and implementation of fast-to-train, resource-efficient recurrent neural models that are naturally suitable for processing dynamical and structured (e.g., temporal) forms of data. This part of the tutorial will span from RC basics (introducing the theory and the fundamental neural models) to deep RC architectures for time series and graphs (showing the intrinsic advantages of depth in the design of recurrent neural networks). Links to computational neuroscience and neuromorphic computing will be discussed, as well as relevant applications in the field of edge and pervasive AI.
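      A minimal Echo State Network sketch illustrates the RC idea (reservoir size, spectral radius, and the one-step sine-prediction task are illustrative choices of ours): the recurrent weights are fixed after a stability-motivated rescaling, and only a linear readout is trained:

```python
import numpy as np

rng = np.random.default_rng(1)

def esn_states(u, n_res=100, rho=0.9, in_scale=0.5):
    """Run an untrained reservoir over input sequence u and return its states."""
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius < 1
    w_in = rng.uniform(-in_scale, in_scale, size=n_res)
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)              # reservoir stays untrained
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave; only the linear readout is fit.
u = np.sin(0.3 * np.arange(300))
S = esn_states(u[:-1])
washout = 50                                          # discard initial transient
A, y = S[washout:], u[1:][washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ y)
print(np.mean((A @ w_out - y) ** 2))  # small one-step prediction error
```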

      This tutorial is targeted at both researchers and practitioners interested in setting up fast-to-train neural network methodologies. Basic concepts in machine learning and deep neural networks are suggested prerequisites.

    • Reinforcement Learning Control: Theoretical Framework, State-of-the-Art Designs, and Validation in Robotics Applications

      Significant progress in reinforcement learning (RL) and its demonstrated successes in addressing unprecedented challenges in complex decision problems have inspired new results beyond computer board games, in real engineering and control applications. State-of-the-art RL control developments have propelled the field from viewing the approach as a heuristic technique to a new height, with the potential of broadening both reinforcement learning and classical control theory. It is increasingly evident that some RL control methods are becoming exciting new tool sets in the suite of feedback control designs. In this tutorial, we discuss recent advances in reinforcement learning for control by providing a comprehensive treatment of the subject, centering on design methods that are theoretically sound and have demonstrated applicability to non-trivial problems such as complex control suite benchmarks and realistic robotics applications. Participants can expect to gain insights into which theoretical results have the potential to provide qualitative assurances on control performance, which theoretical assumptions have direct implications for applications, and which RL design “tricks” are effective and efficient when seeking innovative solutions to complex robotics problems. The presenter will use her extensive experience to shed critical light not only on what has been achieved in RL control, but also on promising directions to further advance it.
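      As a toy illustration of value-based RL for control (our own minimal sketch, not the presenter’s material; the environment and hyperparameters are illustrative): tabular Q-learning steering a deterministic five-state chain to its goal state:

```python
import numpy as np

# Toy deterministic chain: states 0..4, actions 0 (left) / 1 (right),
# reward 1 only on reaching the goal state 4.
n_states, n_actions, goal = 5, 2, 4
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for _ in range(500):                  # episodes
    s = 0
    for _ in range(20):               # steps per episode
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == goal else 0.0
        # Q-learning update; no bootstrap from the terminal goal state.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (s2 != goal) - Q[s, a])
        s = s2
        if s == goal:
            break

print(np.argmax(Q, axis=1)[:-1])  # greedy policy: move right in every non-goal state
```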

    • Foundation Models: A Sweeping Opportunity for Computer Vision

      Big data contains a tremendous amount of dark knowledge. The community has realized that effectively exploring and using such knowledge is essential to achieving superior intelligence. How can we effectively distill the dark knowledge from ultra-large-scale data? One possible answer is: “through Transformers.” Transformers have proven their prowess at extracting and harnessing dark knowledge from data, because more is truly different when it comes to Transformers.

      This tutorial will introduce the structural design, training methods, and applications of Vision Transformers. We will start with the development of neural networks and their theoretical foundations, moving from CNNs to Vision Transformers. Then, we will discuss the structural design of Vision Transformers, including the plain Vision Transformer and hierarchical Vision Transformers, followed by a discussion of how to train these models in supervised, self-supervised, and multi-modal ways. Next, we will present the applications of Vision Transformers to both low-level and high-level tasks, which have redefined the state of the art in computer vision. Finally, we discuss the open challenges of current Vision Transformers and give expectations for future developments.

      1. Introduction
      With the help of a highly developed computing base, the community has witnessed the benefits of advanced structures with complicated designs, deeper networks, and more parameters in better harnessing the dark knowledge from large-scale data. Recently, a novel vision network structure, i.e., the Vision Transformer, has received much attention from both the academic and industrial communities thanks to its incredible ability to extract and harness dark knowledge from massive data, especially when scaled up to a large model. With this ability, Vision Transformers top the performance rankings of various vision tasks and show the potential to unify different vision tasks with a single model. Such phenomena demonstrate the excellent potential of Vision Transformers to serve as vision foundation models and to bring the development of computer vision algorithms into a new era.

      2. Vision Transformer architecture
      The Transformer was designed to process one-dimensional sequences in natural language processing. Recently, ViT adapted the structure to two-dimensional vision tasks by employing a patch embedding layer that divides images into patches and treats the embedded image patches as one-dimensional tokens. However, due to the modality difference between natural language and images, directly adapting the structure to vision tasks makes it hard to harness the dark knowledge efficiently. To this end, many works have been proposed to improve Vision Transformers by incorporating prior knowledge from computer vision, resulting in a series of well-designed Vision Transformer models.
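      The patch-embedding step described above can be sketched in a few lines (an illustrative NumPy sketch with assumed patch size and embedding dimension; in a real ViT the projection matrix is learned):

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_embed(img, patch=4, dim=8):
    """Split an (H, W, C) image into non-overlapping patches and linearly
    project each flattened patch into a `dim`-dimensional token."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    # Rearrange so that each row is one flattened patch.
    patches = (img.reshape(H // patch, patch, W // patch, patch, C)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(-1, patch * patch * C))
    proj = rng.normal(size=(patch * patch * C, dim))  # learnable in a real ViT
    return patches @ proj                             # (num_tokens, dim)

tokens = patch_embed(rng.random((32, 32, 3)))
print(tokens.shape)  # (64, 8): an 8x8 grid of patches, each an 8-d token
```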

      In this part, we aim to introduce the development of Vision Transformer architectures, and several representative models will be introduced to illustrate each category. Specifically, this part has three sections: the plain Vision Transformer, Vision Transformers with inductive bias, and lightweight and efficient Transformers.

      3. Pretraining
      Since the Vision Transformer is new to the vision community and demonstrates different properties from CNNs, how to efficiently utilize data to train a better Vision Transformer is also an important topic. Thus, we will introduce the Vision Transformer from the pretraining perspective in this section.

      4. Applications
      The previous two sections, i.e., model architecture and pretraining, focus on how to better prepare Vision Transformers to serve as backbones. In this section, we will introduce the development of Vision Transformer applications for various vision tasks from three aspects: low-level tasks, high-level tasks, and remote sensing.

    • Machine Learning Pipeline for EEG-based Brain-computer Interfaces

      A brain-computer interface (BCI) allows the user to control an external device directly using the brain's neural activity and has been used in a wide range of applications. The electroencephalogram (EEG), recorded from electrodes on the scalp, is the dominant input to non-invasive BCIs and has the advantages of being low-risk and low-cost. Decoding EEG into the user's intention requires accurate signal analysis; therefore, machine learning approaches must be chosen carefully, especially with end-to-end deep neural networks. In this tutorial, we discuss and clarify a proper pipeline for EEG analysis in BCI systems using state-of-the-art approaches. The analysis includes signal processing, feature extraction, and classification/regression. A deep neural network can fuse the latter two parts into a single model but must be used carefully, with additional considerations regarding data quantity and data alignment. State-of-the-art approaches will be discussed and evaluated on benchmark datasets for a fair and clear comparison.
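      A minimal end-to-end sketch of the three pipeline stages on synthetic one-channel data (all signal parameters and the nearest-class-mean classifier are illustrative choices of ours, not the tutorial’s benchmark setup):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 128                                 # sampling rate (Hz)
t = np.arange(int(fs * 2.0)) / fs        # 2-second trials

def make_trial(mu_amp):
    """Synthetic one-channel trial: a 10 Hz rhythm plus broadband noise."""
    return mu_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

def bandpass(x, lo=8.0, hi=13.0):
    """Signal processing: crude FFT-mask band-pass filter."""
    F = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    F[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(F, n=x.size)

def feature(x):
    """Feature extraction: log band-power of the filtered signal."""
    return np.log(np.var(bandpass(x)))

# Two classes differing in rhythm strength (e.g., rest vs. motor imagery).
train = [(feature(make_trial(2.0 if c else 0.3)), c) for c in (0, 1) for _ in range(20)]
means = [np.mean([f for f, c in train if c == k]) for k in (0, 1)]

def classify(x):
    """Classification: nearest class mean in feature space."""
    f = feature(x)
    return int(abs(f - means[1]) < abs(f - means[0]))

acc = np.mean([classify(make_trial(2.0 if c else 0.3)) == c for c in (0, 1) * 50])
print(acc)  # near-perfect on this easy synthetic task
```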

    • Language and the Brain: Deep Learning for Brain Encoding and Decoding

      How does the brain represent different modes of information? Can we design a system that can automatically understand what the user is thinking? We can make progress towards answering such questions by studying brain recordings from devices such as functional magnetic resonance imaging (fMRI). The brain encoding problem aims to automatically generate fMRI brain representations given a stimulus. The brain decoding problem is the inverse problem of reconstructing the stimuli given the fMRI brain representation. Both the brain encoding and decoding problems have been studied in detail over the past two decades; the foremost attraction of these solutions is that they serve as additional tools for basic research in cognitive science and cognitive neuroscience.
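      A common baseline for the encoding problem is ridge regression from stimulus features to voxel responses; the following is a hedged sketch on simulated data (all dimensions, the noise level, and the correlation-based evaluation are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 stimuli with 10-d feature embeddings, 50 "voxels".
n_stim, n_feat, n_vox = 100, 10, 50
S = rng.normal(size=(n_stim, n_feat))                     # stimulus representations
W_true = rng.normal(size=(n_feat, n_vox))
Y = S @ W_true + 0.1 * rng.normal(size=(n_stim, n_vox))   # simulated fMRI responses

# Encoding model: ridge regression from stimulus features to voxels.
lam = 1.0
W_hat = np.linalg.solve(S.T @ S + lam * np.eye(n_feat), S.T @ Y)

# Evaluate on held-out stimuli by voxel-wise correlation (a common practice).
S_test = rng.normal(size=(20, n_feat))
Y_test = S_test @ W_true
pred = S_test @ W_hat
r = np.mean([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)])
print(round(r, 2))  # high mean correlation on this noiseless test set
```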

      There has been a recent spurt in the availability of large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, and pictures. Encoding and decoding models using recent advances in deep learning have opened new opportunities for modelling brain activity and exploring the convergent representations underlying language comprehension in the human brain and in neural language models (LM). Theories of language processing in the brain have been traditionally explored by carefully designed psycholinguistic experiments. What insights can we draw from the recent, largely task-free neuroimaging data for the theories of language and the brain? Can we use this growing database of human neuroimaging data to test extant theories of language processing in the brain? Apart from these questions, we will also look at the implications of these efforts for models of natural language processing (NLP).

      In this tutorial, we plan to discuss different kinds of stimulus representations and popular brain encoding and decoding architectures in detail. The tutorial will provide a working knowledge of state-of-the-art methods for encoding and decoding, a thorough understanding of the literature, and a better understanding of the benefits and limitations of encoding/decoding with deep learning. We also highlight the concordance and gaps between the putative representations utilized by the brain and those learned by LMs.

    • Collaborative Learning and Optimisation

      Machine learning (ML) and optimisation are two essential missions that Computational Intelligence (CI) aims to address. Accordingly, many CI-based ML and optimisation techniques have been proposed, with deep neural networks (for ML) and evolutionary algorithms (for optimisation) as the best-known representatives. Intrinsically, CI-based ML and optimisation are closely related. On the one hand, CI-based ML consists of various model-centric or data-centric optimisation tasks. On the other hand, CI-based optimisation is often formulated as ML-assisted search problems. In recent years, a new research frontline has emerged in CI, namely Collaborative Learning and Optimisation (COLO), which studies how to synergise CI-based ML and optimisation techniques, while unleashing unprecedented computing power (e.g., via supercomputers), to generate more powerful techniques for solving challenging problems.

      This tutorial aims to introduce this newly emerging research direction. Specifically, we will first introduce CI, CI-based ML and optimisation techniques, and their relationships. We will then describe COLO from three aspects: how to make use of ML techniques to assist optimisation (Learn4Opt), how to leverage optimisation techniques to facilitate ML (Opt4Learn), and how to synergise ML and optimisation techniques to deal with real-world problems in which ML and optimisation are two indispensable and interwoven tasks (LearnOpt). The most representative research hotspot in each of these three aspects, i.e., automated construction of deep neural networks, data-driven evolutionary optimisation, and predictive optimisation, will be discussed in detail.
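      As a minimal Learn4Opt-style sketch (our own illustration with an assumed k-nearest-neighbour surrogate and a toy sphere objective, not a method from the tutorial): a cheap learned surrogate pre-screens many candidate solutions so that only one true, expensive evaluation is spent per iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive(x):
    """Stand-in for a costly objective function (sphere, to be minimised)."""
    return np.sum(x ** 2)

def surrogate_predict(x, X, y, k=3):
    """k-nearest-neighbour surrogate fitted on the evaluated points."""
    d = np.linalg.norm(X - x, axis=1)
    return y[np.argsort(d)[:k]].mean()

dim, pop = 5, 10
X = rng.uniform(-5, 5, size=(pop, dim))          # initial evaluated population
y = np.array([expensive(x) for x in X])

for _ in range(30):
    parent = X[np.argmin(y)]
    # Generate many candidates, but spend only ONE true evaluation:
    cands = parent + rng.normal(scale=0.5, size=(20, dim))
    scores = [surrogate_predict(c, X, y) for c in cands]
    best = cands[int(np.argmin(scores))]         # surrogate pre-screening
    X = np.vstack([X, best])
    y = np.append(y, expensive(best))

print(y.min() < y[:pop].min())  # improved over the initial population
```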

      The organiser is a co-founder of the COLO research direction and has given talks on similar topics, as tutorials, invited talks, and keynotes, at various international forums.

    • Graph Self-Supervised Learning: Taxonomy, Frontiers, and Applications

      In recent years, deep learning on graph-structured data has drawn much attention in both academic and industrial communities. Following the prevailing (semi-) supervised learning paradigms, most deep graph learning methods suffer from several shortcomings, including heavy label reliance, poor generalization, and weak robustness. To circumvent these issues, graph self-supervised learning (GSSL), which extracts supervision signals for model training with well-designed pretext tasks instead of manual labels, has become a promising and trending learning paradigm for graph data. As the field rapidly grows, a global perspective of the development of GSSL is urgently needed in the research community. To fill the gap, we provide a comprehensive tutorial on this fast-growing yet challenging topic.

      This tutorial starts with the foundational background of deep graph learning. Then, we conduct a systematic taxonomy to categorize the existing GSSL methods and introduce the most representative ones. Following the latest research trends, we discuss three frontier subtopics under the umbrella of GSSL, including trustworthy GSSL, scalable GSSL, and automatic GSSL. Afterward, we present the real-world applications of GSSL in various directions, including recommender systems, anomaly/out-of-distribution detection, chemistry, and graph structure learning. Lastly, we finalize the tutorial with conclusions and discuss potential future directions.  
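      As a toy sketch of one pretext task (an illustrative edge-prediction setup of our own, not a specific method from the tutorial): the supervision signal comes from the graph structure itself, with no manual labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: two 5-node cliques joined by one edge; node features are a
# noisy one-hot block indicator.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1; A[5:, 5:] = 1; A[4, 5] = A[5, 4] = 1
np.fill_diagonal(A, 0)
X = np.hstack([np.repeat(np.eye(2), 5, axis=0), 0.1 * rng.normal(size=(n, 2))])

# One step of symmetric-normalised neighbourhood aggregation (a GCN layer
# without learned weights, for illustration).
deg = A.sum(1) + 1
A_hat = (A + np.eye(n)) / np.sqrt(np.outer(deg, deg))
H = A_hat @ X

# Pretext task: score node pairs by embedding similarity; real edges should
# score higher than non-edges -- a self-supervised signal needing no labels.
def score(i, j):
    return H[i] @ H[j]

print(score(0, 1) > score(0, 9))  # True: intra-clique pair beats cross-clique pair
```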

      We believe this tutorial is beneficial to a broad audience from academia and industry, including general machine learning researchers who would like to know about self-supervised learning on graph-structured data, graph analytics researchers who want to keep track of the most recent advances in deep graph learning, and domain experts who would like to generalize GSSL to new applications or other fields. 

    • Recent Advancement on Federated Learning Combating Non-IID Data

      Federated learning (FL) is a promising machine learning paradigm that enables collaborative learning across a variety of clients without sharing private data. In vanilla FL, all participating clients aim to train a global model by periodically synchronizing their weight parameters. However, the performance of the learned model is usually hindered by the statistical heterogeneity that widely exists across clients (also known as the non-IID problem). To overcome this problem, recent studies propose techniques that either optimize the learned generic model or learn a specific personalized model for each client.
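      The vanilla FL synchronization step can be sketched as weighted parameter averaging (FedAvg); the following toy example, with an illustrative non-IID split of a linear-regression task, is our own minimal sketch rather than any method from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=10):
    """A few local gradient-descent steps on one client's least-squares loss."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Non-IID toy split: each client observes a different input region of y = 2x.
clients = []
for lo in (0.0, 1.0, 2.0):
    X = rng.uniform(lo, lo + 1, size=(20, 1))
    clients.append((X, 2 * X[:, 0]))

w_global = np.zeros(1)
for _ in range(20):                                   # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    w_global = np.average(local_ws, axis=0, weights=sizes)  # FedAvg aggregation

print(round(float(w_global[0]), 2))  # converges near the true slope 2.0
```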

      This tutorial reviews recent advances in federated learning with an emphasis on addressing the non-IID data problem. The content can be summarized in five parts. First, an introduction and background provide a comprehensive review of federated learning. Second, we discuss emerging trends in FL combating non-IID data, further categorized into three aspects: client-clustering-based FL, prototype-based FL, and graph-structured FL. Third, we present three novel federated learning frameworks for more practical scenarios, with theoretical analyses given to guarantee their effectiveness. Fourth, applications of FL with non-IID data are introduced. Lastly, we conclude the tutorial with discussions of promising future directions.

      This tutorial can benefit researchers from both academia and industry. Researchers working on or interested in FL can be inspired by these recent advances and produce high-impact work on related topics.

    • Trustworthy Federated Learning: Concepts, Methods, Applications, and Beyond

      Due to data isolation and privacy challenges in the real world, federated learning (FL) stands out among AI technologies for diverse real-world scenarios, ranging from business applications such as risk evaluation systems in finance to cutting-edge technologies such as drug discovery in the life sciences. A growing number of people believe that FL systems are promising and can be trusted. However, FL is threatened by adversarial attacks against the privacy of data, the stability of the learning algorithm, and the confidentiality of the system. Such vulnerabilities are exacerbated by the distributed training in federated learning, which makes protecting against threats harder and evidences the need for further research on defense methods to make federated learning a real solution for trustworthy systems.
      This tutorial first introduces major concepts and background in federated learning and trustworthy AI. Then we introduce a comprehensive roadmap for developing a trustworthy FL system and summarize existing efforts from three key aspects: security, robustness, and privacy. Specifically, we present an overview of threats that depicts a general picture of the vulnerabilities of trustworthy federated learning at different development stages (i.e., data processing, model training, and deployment) and further discuss them from the selected aspects (i.e., security, robustness, and privacy). To provide guidelines for selecting the most adequate defense method to keep an FL system trustworthy, we discuss the technical solutions for realizing each aspect of trustworthy FL. Finally, we present the applications of federated learning in various domains, followed by the challenges and future prospects of trustworthy federated learning.
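      One building block on the privacy aspect can be sketched as update clipping plus Gaussian noise (an illustrative sketch of the mechanism behind differentially private federated averaging; the clip and noise levels here are arbitrary, not a calibrated privacy budget):

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(update, clip=1.0, sigma=0.1):
    """Clip a client update to a bounded L2 norm, then add Gaussian noise,
    so the server never sees any exact individual update."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=update.shape)

updates = [rng.normal(size=8) for _ in range(5)]          # raw client updates
agg = np.mean([privatize(u) for u in updates], axis=0)    # server-side average
print(agg.shape)  # (8,)
```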

    • Explaining Machine Learning Decisions

      Deep neural networks (DNNs) are being deployed in various real-world applications. Despite tremendous progress, their acceptance is hampered by an inherent inability to explain decisions. For example, in some countries, it is the law to explain to a client why their loan application was turned down. Similarly, it would not be enough for a recommender system in the medical domain to simply recommend a prescription for tuberculosis. Instead, the recommender system must mark the specific portions of a chest X-ray and point to the histopathological findings that lead to the conclusion that a patient is suffering from tuberculosis.

      In recent years, efforts have been made to explain the decisions made by machine learning methods. For instance, the authors of [1] proposed a machine learning model capable of detecting Parkinson’s disease (PD) from Dopamine Transporter Scan (DaTSCAN) images. By employing Local Interpretable Model-agnostic Explanations (LIME) [2], the authors generated visual super-pixels that aided in segregating PD from non-PD. In another work [3], the authors developed a graph-convolutional neural network to predict distant metastasis in breast cancer patients by utilizing gene expression omics data. They employed Graph Layer-wise Relevance Propagation (G-LRP), a variant of LRP [4], to explain the decisions made by the model on each individual data point, thereby generating patient-specific molecular subnetworks that identified potentially druggable biomarkers.

      The applications of XAI methods are not limited to the medical domain. For instance, [5] utilized Long Short-Term Memory (LSTM) networks to develop a drought-index forecasting model and employed SHapley Additive exPlanations (SHAP) [6] to interpret the spatial and temporal relationships between the attributes and the forecasted results.

      A brief description of some of the state-of-the-art XAI methods is as follows:

      SHapley Additive exPlanations (SHAP) is a model-agnostic XAI method grounded in cooperative game theory and the Shapley value, introduced by Lloyd Shapley in 1953 [8]. It considers each feature of an instance a “player” in a cooperative game, and the prediction the “payout,” i.e., the reward generated by the coalition of players. The average marginal contribution of a player over all possible coalitions determines the relevance of the corresponding feature.
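      For small feature counts, Shapley values can be computed exactly by enumerating coalitions. The sketch below (assuming the common convention of replacing out-of-coalition features by baseline values, which is one of several SHAP conventions) verifies the expected result on a linear model:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at instance x; features outside a
    coalition are replaced by baseline values."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                z = baseline.copy()
                z[list(S)] = x[list(S)]
                without = f(z)           # coalition S without player i
                z[i] = x[i]
                with_i = f(z)            # coalition S plus player i
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (with_i - without)
    return phi

# For a linear model, the Shapley value of feature j is w_j * (x_j - baseline_j).
w = np.array([1.0, -2.0, 3.0])
f = lambda z: float(w @ z)
x, base = np.array([1.0, 1.0, 1.0]), np.zeros(3)
print(shapley_values(f, x, base))  # approx [1, -2, 3]
```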
      Local Interpretable Model-agnostic Explanations (LIME) can explain the predictions of any classifier by constructing a simplified interpretable model locally around a prediction, providing a good local approximation of the predictions made by the machine learning model.
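      A minimal sketch of this idea (our own illustration, not the LIME library’s API; the sampling scale and proximity kernel are assumptions): perturb around the instance, weight samples by proximity, and fit a weighted linear surrogate whose coefficients approximate the local behaviour:

```python
import numpy as np

rng = np.random.default_rng(0)

def lime_explain(f, x, n_samples=500, scale=0.3, width=0.5):
    """LIME-style sketch: sample around x, weight by proximity, and fit a
    weighted linear surrogate; its coefficients are the explanation."""
    Z = x + rng.normal(scale=scale, size=(n_samples, len(x)))
    y = np.array([f(z) for z in Z])
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)   # proximity kernel
    Zb = np.hstack([Z, np.ones((n_samples, 1))])             # add intercept
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Zb * sw, y * np.sqrt(w), rcond=None)
    return coef[:-1]                                         # drop intercept

# Nonlinear model: the local gradient at x = (1, 2) is (2*x0, 3) = (2, 3).
f = lambda z: z[0] ** 2 + 3 * z[1]
coef = lime_explain(f, np.array([1.0, 2.0]))
print(coef)  # approx [2, 3]
```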
      Layer-wise relevance propagation (LRP) and its variants provide a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. Through LRP, the end user can see which segments of the input image affect the predictions of kernel-based classifiers and multilayered neural networks. The contributing pixels can be visualized as heatmaps and thus intuitively interpreted by an expert, who may verify the validity of the classification decision and analyze regions of potential interest.
      Realizing the need to explain the outcomes of ML algorithms, the Defense Advanced Research Projects Agency (DARPA) launched the XAI program, aiming to create a suite of new AI techniques that enable end users to understand, appropriately trust, and effectively manage AI systems [7]. Most international conferences and workshops include one or more sessions on explainable methods, and premier universities, including Harvard University and Stanford University, have launched courses and modules on the explainability and interpretability of ML methods, as have online platforms such as Coursera and Kaggle. In this tutorial, we will introduce representative XAI methods, and the participants will gain hands-on experience in the use of these methods.