Starting from the basic units of a simple graph, a vast number of variants capture the complex interactions of real-world entities on the spatial side, whether homogeneous or heterogeneous. As graph structures evolve, their evolutionary patterns carry rich information on the temporal side. This special session calls for papers on Social Network Computation, including intelligent computational frameworks, models, algorithms, and applications for quantifying online social network activities.
This special session on Deep Learning on Anomaly Detection solicits recent advances in anomaly detection that exploit data structures, semantics, dynamics, and heterogeneity to provide more reliable and efficient anomaly detection systems.
The research and development of intelligent vehicles and transportation systems are growing rapidly worldwide. Intelligent transportation systems are making transformative changes in all aspects of surface transportation, based on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity and automated driving. With the support of advanced equipment, a host of intelligent in-vehicle devices have put various functions into practice, such as airbag control, intrusion detection, collision warning and avoidance, power management and navigation, and driver alertness monitoring. Among these functions, neural networks play a critical role in building all types and levels of intelligence in vehicle and transportation systems. The objective of this special session is to provide a forum for researchers and practitioners to present advanced research in neural network models, with a focus on innovative applications for intelligent vehicle and transportation systems.
Transfer learning and transfer optimization aim to increase the quality or efficiency of learners and optimizers by transferring useful knowledge. They are essential in real-world applications because, in many scenarios, the input space, data distributions, learning tasks, and decision space may change over time. Adapting learning or optimization approaches from historical environments can thus speed up learning or problem solving in new environments. In some other scenarios,
collecting or labelling training data for a learning problem may be expensive or infeasible. Exploiting knowledge from different but related domains can greatly enhance the quality of a model. Additionally, training models such as deep neural networks, or performing fitness evaluations, may be computationally expensive. In this case, reusing models or solutions from related learning or optimization tasks helps reduce computational time or resources. However, due to differing task characteristics, such as the size and quality of the collected data, the similarity between tasks, and task complexities, exploring and transferring useful knowledge are often very
different. The theme of this special session is transfer learning and optimization, covering all learning and optimization paradigms and all machine learning and optimization approaches. The aim is to investigate new theories and methods for leveraging and reusing knowledge in transfer learning and optimization, and their applications.
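As a toy illustration of reusing solutions across related optimization tasks, the following NumPy sketch compares a cold start against a warm start transferred from a previously solved task; the quadratic tasks, learning rate, and tolerance are all illustrative assumptions, not methods prescribed by the session.

```python
import numpy as np

# Two related optimization tasks: minimize ||x - c||^2 with nearby optima
# (illustrative; "related" here simply means the optima are close).
c_old = np.array([3.0, 3.0])   # previously solved task
c_new = np.array([3.2, 2.9])   # new, related task

def gradient_descent(x, c, lr=0.1, tol=1e-6, max_iter=10000):
    """Return (solution, iterations) for minimizing ||x - c||^2."""
    for k in range(max_iter):
        grad = 2 * (x - c)
        if np.linalg.norm(grad) < tol:
            return x, k
        x = x - lr * grad
    return x, max_iter

# Cold start: begin from the origin, knowing nothing.
_, iters_cold = gradient_descent(np.zeros(2), c_new)

# Warm start: transfer the solution of the related old task.
x_old, _ = gradient_descent(np.zeros(2), c_old)
_, iters_warm = gradient_descent(x_old, c_new)

print(f"cold start: {iters_cold} iters, warm start: {iters_warm} iters")
```

Because the old optimum lies much closer to the new optimum than the origin does, the transferred starting point converges in fewer iterations, which is the basic intuition behind reusing solutions in transfer optimization.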
Nuclear fusion is an appealing potential solution to the grand challenge of obtaining affordable, abundant power without damaging the environment. Despite the major challenges in controlling nuclear fusion, scientists have made substantial progress, reaching a tipping point marked by a growing number of fusion approaches and the private startup support to pursue them. Given the enormous progress of machine learning (ML) in modeling and controlling nonlinear dynamic systems, there is a huge opportunity for data scientists to help make fusion reactors a reality.
This special session represents some of the early attempts to bring machine learning and fusion energy together. Our aim is to increase the visibility of these opportunities in the ML community, and the number of active collaborative efforts on developing data-driven algorithms for nuclear fusion. We invite researchers to submit papers on all aspects of data science for fusion and plasma physics, including core algorithm development, new models, fusion applications, and implementations.
Lifelong Learning has been growing continuously in recent years, and the field has become one of the most significant in modern Machine Learning (ML). Contemporary data sources generate high-dimensional information that is heterogeneous in structure, noisy, redundant, and often incomplete. Furthermore, most data streams present additional difficulties, such as distribution shift, imbalanced data, lack of access to class labels (or access to only a part of them), and often a long delay in receiving feedback. A plethora of real-world applications have data with these characteristics, for instance in areas such as networking, cybersecurity, finance, environmental engineering, and healthcare. Lifelong learning research has focused on developing accurate and robust decision models able to consolidate knowledge, forget concepts incrementally, and automatically adapt to external changes and distribution shifts. Much of the current research pays attention mainly to classification problems; however, we also expect to discuss the unsupervised context and the regression case. Furthermore, we aim to discuss the most recent approaches in related learning paradigms such as online learning, transfer learning, and active learning.
The aim of this special session is to gather recent advancements on tackling fundamental questions in data stream mining and lifelong learning, such as adaptation to non-stationary characteristics, stability-plasticity dilemma, robust deployment, and learning under limited access to ground truth. In summary, we aim to provide a forum to share the latest innovative algorithms, open questions and applications on lifelong learning systems.
Neuromorphic computing is gaining importance as AI models and unstructured data grow ever larger. The goal of this special session is to accelerate the development of neuromorphic computing toward industrial applications, through open discussion among algorithm, architecture, and application researchers from diverse backgrounds.
The main topics of this special session include, but are not limited to:
- Algorithms for neuromorphic computing: spiking neural networks, deep learning, reservoir computing, etc.
- Architectures for neuromorphic computing systems: digital/analog electronics, photonics, materials, mechanics, physical reservoir computing, etc.
- Applications for cloud, edge, and IoT systems: sensor data analytics, surveillance, anomaly detection, autonomous vehicles and robots, intelligent networking systems, machine-to-machine (M2M) communications, etc.
With the increasing penetration of distributed energy resources, there is growing interest in investigating new market mechanisms and transactive energy management to support local energy trading and balancing, as well as the integration of high volumes of renewable energy and demand response. This emerging research calls for advanced machine learning and computational intelligence techniques, from advanced energy forecasting to distributed and large-scale energy management. This timely special session on Computational Intelligence in Transactive Energy Management and Smart Energy Network (CITESEN 2023) aims to showcase the latest developments in advanced applications of computational intelligence to smart energy management and smart energy markets. More details can be found on the special session website: https://sites.google.com/view/ijcnn-citesen-2023/
Evolutionary Computation (EC) and Neural Computation (NC) are two representative and complementary nature-inspired computational paradigms: from the bionic point of view, EC mimics the evolutionary processes on the macro level, while NC models the working mechanisms of neural systems on the micro level. From the technical point of view, EC is a family of algorithms for complex optimization, while NC is a family of representation learning methods for complex modeling. Hence, fusions of the two computational paradigms are not only biologically plausible but also technically beneficial.
Intrinsically, the two computational paradigms can be fused in two general ways. On one hand, NC methods can be incorporated into EC frameworks to improve the search efficiency and effectiveness of EC algorithms. On the other hand, EC algorithms can be adopted to improve the performance of NC methods. During the past decades, both EC and NC have witnessed a big boom in algorithm design and applications. However, research on exploring the synergies between EC and NC, in particular leveraging EC to enhance the power of NC, is still in its infancy. The theme of this special session, evolutionary neural computation, aims to bring together researchers investigating methods and applications in the interdisciplinary field spanning EC and NC. https://www.emigroup.tech/index.php/news/ijcnn-2023-special-session-on-evolutionary-neural-computation/
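As a toy illustration of the second fusion direction (EC improving NC), the following NumPy sketch evolves the weights of a tiny neural network with a simple (1+1) evolution strategy instead of gradient descent; the XOR task, network architecture, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic toy task for tiny neural networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    """Tiny 2-4-1 MLP; w is a flat 17-parameter vector (illustrative layout)."""
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1); b2 = w[16]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2).ravel() + b2

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# (1+1) evolution strategy: mutate the parent, keep the child only if it improves.
w = rng.normal(0, 1, size=17)
best = loss(w)
for _ in range(5000):
    child = w + rng.normal(0, 0.1, size=17)
    l = loss(child)
    if l < best:
        w, best = child, l

print(f"final MSE: {best:.4f}")
```

No back-propagation is used anywhere: the evolution strategy treats the network as a black box, which is exactly what makes EC attractive for non-differentiable or otherwise gradient-hostile NC settings.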
Over the last decades there has been increasing interest in using machine learning, and in the last few years deep learning methods, combined with other vision techniques to create autonomous systems that solve vision problems in different fields. This special session is designed to serve researchers and developers publishing original, innovative, and state-of-the-art algorithms and architectures for real-time applications in the areas of computer vision, image processing, biometrics, virtual and augmented reality, neural networks, intelligent interfaces, and biomimetic object-vision recognition.
This special session provides a platform for academics, developers, and industry-related researchers belonging to the vast communities of *Neural Networks*, *Computational Intelligence*, *Machine Learning*, *Deep Learning*, *Biometrics*, *Vision systems*, and *Robotics*, to discuss, share experience, and explore traditional and new areas of computer vision and machine and deep learning combined to solve a range of problems. The objective of the session is to bring the growing international community of researchers working on the application of Machine Learning and Deep Learning methods in Vision and Robotics into a fruitful discussion on the evolution and the benefits of this technology to society.
The methods and tools applied to vision and robotics include, but are not limited to, the following: Computational Intelligence methods; Machine Learning methods; self-adaptation, self-organisation, and self-supervised learning; robust computer vision algorithms (operation under variable conditions, object tracking, behaviour analysis and learning, scene segmentation); extraction of biometric features (fingerprint, iris, face, voice, palm, gait); registration methods; Convolutional Neural Networks (CNN); Recurrent Neural Networks (RNN); Deep Reinforcement Learning (DRL); and hardware implementation and algorithm acceleration (GPUs, FPGAs, etc.).
The fields of application include, but are not limited to, the following: video and image processing; video tracking; 3D scene reconstruction; 3D tracking in virtual reality environments; 3D volume visualization; intelligent interfaces (user-friendly man-machine interfaces); multi-camera and RGB-D camera systems; multi-modal human pose recovery and behavior analysis; human body reconstruction; gesture and posture analysis and recognition; biometric identification and recognition; extraction of biometric features (fingerprint, iris, face, voice, palm, gait); surveillance systems; robotic vision; autonomous and social robots; Industry 4.0; and IoT and cyber-physical systems.
Swarm and multi-robot systems are becoming increasingly popular in complex tasks that exceed the capabilities of single-robot systems. Seamless cooperation among robots enables a collective capacity that is greater than the sum of individual robots' abilities. Previous studies have demonstrated the unique advantages of swarm and multi-robot systems in various applications, including search and rescue, reconnaissance, and intelligence gathering.
Despite their great potential and unique qualities, swarm and multi-robot systems require significant human efforts for designing agent behaviours and controlling agent-to-agent interactions to ensure the desired team performance. Machine learning techniques greatly facilitate the design of collective robot behaviours for different environmental conditions and task requirements to enable robots to deal with uncertainty in obstacle distributions, task dynamics, and robot availabilities.
This special session is dedicated to scientific contributions on the uses and benefits of machine learning and AI in powering the development of swarm and multi-robot systems. We invite contributions on new algorithms and applications of machine learning in swarm and multi-robot development.
Due to data isolation and privacy challenges in the real world, Federated Learning (FL) stands out among various AI technologies for real-world scenarios, ranging from business applications like risk evaluation systems in finance to drug discovery in life sciences. Although still in its infancy, FL has already shown significant theoretical and practical results, making it one of the hottest topics in the machine learning community. Nonetheless, many questions and challenges remain open and attract increasing interest from international research communities: e.g., the statistical unbalancing of data, distributed optimization problems, communication latency, security, and resilience to attacks. In particular, the trustworthiness of FL systems is threatened by adversarial attacks against data privacy, the learning algorithm’s stability, and the system’s confidentiality. Such vulnerabilities are exacerbated by the distributed training in federated learning, which makes protecting against threats harder and makes evident the need for further research on defense methods that can make federated learning a real solution for trustworthy systems.
The main objective of the 2023 Special Session on Federated Learning - Methods, Applications, Challenges, and beyond, is to focus the international research community’s attention on the emerging perspectives and practical algorithms in Federated Learning, with a particular emphasis on its privacy and security aspects. This session aims to collect novel contributions and research experiences from the variegated research communities participating in the IJCNN conference. We believe that such diversity will help in finding novel approaches for mitigating current issues and optimise Federated Learning algorithms.
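To make the distributed training scheme concrete, here is a minimal FedAvg-style sketch in NumPy on synthetic data; the client shards, linear model, and hyperparameters are illustrative assumptions, and the privacy and security mechanisms that are the session's focus are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic federated setting: 3 clients, each holding a private shard of
# data drawn from the same linear model y = X @ w_true (illustrative).
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):  # unbalanced shard sizes, as in real FL
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(0, 0.01, size=n)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on the client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: the server broadcasts the global model, clients
# train locally, and the server takes a sample-size-weighted average.
w_global = np.zeros(2)
total = sum(len(y) for _, y in clients)
for _round in range(50):
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = sum((len(y) / total) * w
                   for w, (X, y) in zip(updates, clients))

print("recovered weights:", np.round(w_global, 2))
```

Note that raw data never leaves a client; only model parameters are exchanged, which is precisely the property that both motivates federated learning and, as the session description notes, opens new attack surfaces.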
The advent of the big data era in healthcare comes with the widespread capture of health data in various forms, such as electronic patient records, administrative claim records, biometric data, sensor data, and medical images. Artificial Intelligence and machine learning techniques have been widely used to unlock the
hidden value of such sophisticated data and transform it into sensible decision-making and actions to support better healthcare. There are many successful healthcare applications leveraging the power of AI. For example, Google has partnered with healthcare organizations to develop machine learning-enabled imaging and diagnostic tools for skin and eye diseases, lung and breast cancers, and more, to support medical specialists’ decision-making.
Randomization-based learning algorithms have received considerable attention from academics, researchers, and practitioners because randomization-based neural networks can be trained by non-iterative approaches possessing closed-form solutions. These methods are generally computationally faster than iterative solutions and less sensitive to parameter settings. Even though randomization-based non-iterative methods have attracted much attention in recent years, their deep structures have not been sufficiently developed or benchmarked. This special session aims to bridge this gap. The first target of this special session is to present recent advances in randomization-based learning methods. Secondly, the focus is on promoting the concepts of non-iterative optimization relative to counterparts such as gradient-based methods and derivative-free iterative optimization techniques. Besides the dissemination of the latest research results on randomization-based and/or non-iterative algorithms, it is also expected that this special session will cover practical applications, present new ideas, and identify directions for future studies.
Original contributions as well as comparative studies between randomization-based and non-randomization-based methods are welcome, with unbiased literature reviews and comparisons. Original contributions with biomedical applications, with or without randomization algorithms, are also welcome. Typical deep/shallow paradigms include (but are not limited to) random vector functional link (RVFL) networks, randomized recurrent networks (RRN), kernel ridge regression (KRR) with randomization, extreme learning machines (ELM), random forests (RF), stochastic configuration networks (SCN), broad learning systems (BLS), convolutional neural networks (CNN) with randomization, and so on.
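As one illustration of such a non-iterative closed-form solution, the following NumPy sketch builds an ELM-style model: a random, untrained hidden layer followed by a ridge-regression readout. The toy task, layer size, and ridge coefficient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.05, size=200)

# Random hidden layer: weights are drawn once and never trained.
n_hidden, ridge = 100, 1e-3
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden activations

# Non-iterative closed-form readout (ridge regression):
# beta = (H^T H + lambda I)^{-1} H^T y
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)

# Evaluate on a held-out grid.
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
H_test = np.tanh(X_test @ W + b)
pred = H_test @ beta
mse = np.mean((pred - np.sin(X_test).ravel()) ** 2)
print(f"test MSE: {mse:.4f}")
```

The only "training" is one linear solve, which is why such methods are typically much faster and less sensitive to hyperparameters than iterative gradient-based alternatives.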
The topic of this session, Bayesian neural networks, combines the strengths of two fields: neural networks, which are powerful in complex function approximation and hidden representation learning, and Bayesian methods, which have a solid theoretical foundation in uncertainty modeling. This special session will study new theories, models, inference algorithms, and applications in this area, and will be a platform to host the recent flourishing of ideas on using Bayesian approaches in neural networks and using neural networks in Bayesian modeling. All related works are welcome.
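As a small illustration of the Bayesian side, the following NumPy sketch performs Bayesian linear regression, the simplest model where prior, posterior, and predictive uncertainty are all available in closed form; the synthetic data and precision hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known linear model with Gaussian noise.
w_true = np.array([1.5, -0.5])
X = rng.normal(size=(50, 2))
y = X @ w_true + rng.normal(0, 0.1, size=50)

alpha, beta = 1.0, 100.0  # prior precision, noise precision (1 / 0.1^2)

# Posterior over weights: N(m, S) with
#   S = (alpha I + beta X^T X)^{-1},  m = beta S X^T y
S = np.linalg.inv(alpha * np.eye(2) + beta * X.T @ X)
m = beta * S @ X.T @ y

# Predictive mean and variance at a new point x*: unlike a plain neural
# network, the model reports how uncertain it is about its prediction.
x_star = np.array([0.5, 1.0])
pred_mean = x_star @ m
pred_var = 1 / beta + x_star @ S @ x_star  # noise + parameter uncertainty

print(f"prediction: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```

Bayesian neural networks generalize exactly this idea to deep, nonlinear models, where the posterior is no longer closed-form and approximate inference becomes the central research question.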
Innovations in Artificial Neural Networks (ANNs) have led to a rich and elegant theory of Complex-Valued Neural Networks (CVNNs) and Quaternionic Neural Networks (QNNs), along with interesting applications. In the past decade, research efforts in these areas have accelerated, leading to new research directions related to Hypercomplex-Valued Neural Networks (HVNNs), particularly based on the geometric and algebraic properties of hypercomplex numbers, including quaternions. CVNNs naturally arise in applications dealing with electromagnetic waves, quantum waves, and other wave phenomena. Quaternionic neural networks have also found many applications in modeling three- and four-dimensional data and in processing colour and polarimetric SAR images. In spite of the development of a large body of knowledge (theory and applications), new research problems naturally arise, such as generalizing real-valued ANN architectures and training algorithms to CVNNs and QNNs. Furthermore, applications of CVNNs and QNNs are emerging in research areas such as pattern recognition, classification, nonlinear filtering, brain-computer interfaces, time-series prediction, intelligent image processing, bio-informatics, and robotics. This special session aims to provide a forum for an organized and comprehensive exchange of ideas, presentation of research results, and discussion of novel trends in CVNNs and QNNs. We hope that this special session will attract renowned speakers and experienced and young research scholars who aspire to contribute to the CVNN and QNN community. We expect the session to inspire and benefit computational intelligence researchers as well as specialists who require the latest tools related to ANNs.
Reservoir Computing (RC) is a popular approach for efficiently training Recurrent Neural Networks (RNNs), based on (i) constraining the recurrent hidden layers to develop stable dynamics, and (ii) restricting the training algorithms to operate solely on an output (readout) layer.
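The two ingredients above can be sketched in a few lines of NumPy as an echo-state-network-style model; the sine-prediction task, reservoir size, spectral radius, and ridge coefficient are illustrative assumptions rather than prescriptions from the session.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-step-ahead prediction of a sine wave (illustrative task).
t = np.arange(0, 600)
u = np.sin(0.1 * t)

# (i) Fixed random reservoir, rescaled to spectral radius < 1 so that
# the recurrent dynamics are stable (the echo state property).
n_res = 100
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Run the reservoir and collect its states (no training happens here).
x = np.zeros(n_res)
states = []
for step in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[step])
    states.append(x.copy())
states = np.array(states)

# (ii) Train only the linear readout, by ridge regression, to map the
# state at time k to the input at time k+1.
washout = 100  # discard the initial transient
H, target = states[washout:], u[washout + 1:]
ridge = 1e-6
w_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ target)

pred = H @ w_out
mse = np.mean((pred - target) ** 2)
print(f"training MSE: {mse:.2e}")
```

All recurrent weights stay fixed after initialization; only the readout is fit, with a single linear solve, which is the source of RC's training efficiency.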
Over the years, the field of RC has attracted a lot of research attention, for several reasons. Besides the striking efficiency of its training algorithms, RC neural networks are distinctively amenable to hardware implementations (including neuromorphic unconventional substrates, like those studied in photonics and material sciences), enable clean mathematical analysis (rooted, e.g., in the field of random matrix theory), and find natural engineering applications in resource-constrained contexts, such as edge AI systems. Moreover, in the broader picture of Deep Learning development, RC is a breeding ground for testing innovative ideas, e.g., biologically plausible training algorithms beyond gradient back-propagation. Although established in the Machine Learning field, RC lends itself naturally to interdisciplinarity, where ideas and inspiration coming from diverse areas such as computational neuroscience, complex systems, and non-linear physics can lead to further developments and new applications.
This special session is intended to be a hub for discussion and collaboration within the Neural Networks community, and therefore invites contributions on all aspects of RC, from theory, to new models, to emerging applications.
Domain adaptation aims to learn a model from training data such that the model generalizes well on test data, even if the training and test data come from different distributions. Over the past few years, we have witnessed compelling evidence of successful investigations of theoretical developments and the use of domain adaptation to support many real-world applications, e.g., computer vision, privacy protection, and medical analysis. This special session aims to provide a forum for researchers in domain adaptation to share the latest advances in domain adaptation theories, algorithms, models, and applications. The main topics include, but are not limited to, the following: theories of domain adaptation; homogeneous and heterogeneous domain adaptation; open-set, partial, and universal domain adaptation; multiple-source domain adaptation; few-shot domain adaptation; source-free domain adaptation; and domain generalization and out-of-distribution generalization.
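As one concrete example of a simple homogeneous domain adaptation technique, the following NumPy sketch implements CORAL-style correlation alignment, which whitens the source features and re-colors them with the target covariance; the synthetic covariance-shifted data are an illustrative assumption, and this is just one of many methods in the session's scope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source and target share a concept but differ in feature distribution
# (a pure covariance shift, for illustration).
n, d = 500, 2
Xs = rng.normal(size=(n, d))                      # source features
A = np.array([[2.0, 0.8], [0.0, 0.5]])
Xt = rng.normal(size=(n, d)) @ A.T                # target features

def coral(Xs, Xt, eps=1e-5):
    """CORAL-style alignment: whiten the source, re-color with the target covariance."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C, inv=False):
        # Matrix square root via eigendecomposition (C is SPD).
        vals, vecs = np.linalg.eigh(C)
        vals = 1 / np.sqrt(vals) if inv else np.sqrt(vals)
        return vecs @ np.diag(vals) @ vecs.T

    return Xs @ sqrtm(Cs, inv=True) @ sqrtm(Ct)

Xs_aligned = coral(Xs, Xt)

# After alignment the source covariance matches the target covariance,
# so a classifier trained on the aligned source transfers better.
gap_before = np.linalg.norm(np.cov(Xs, rowvar=False) - np.cov(Xt, rowvar=False))
gap_after = np.linalg.norm(np.cov(Xs_aligned, rowvar=False) - np.cov(Xt, rowvar=False))
print(f"covariance gap: {gap_before:.3f} -> {gap_after:.3f}")
```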
With the breakthroughs in Deep Learning (DL), recent years have witnessed the booming of Artificial Intelligence (AI) applications and services. Driven by rapid advances in mobile computing and the Artificial Intelligence of Things (AIoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge.
The ability to imbue interconnected devices with intelligence is at the forefront of this technological revolution. In this regard, conventional machine learning techniques have rapidly been adapted to various applications in multiple domains. However, DL techniques, though having demonstrated unparalleled performance primarily in Computer Vision and Natural Language Processing, often incur significant computation and memory costs as well as massive data requirements. This poses a great challenge to empowering devices at the network edge with DL capability. Nowadays, accelerated by the remarkable success of DL and IoT technologies, there is an urgent need to push the DL frontier to the network edge to fully unleash the potential value of big data. The emerging Edge Computing (EC) paradigm provides a promising way to enable this: it leverages distributed computing concepts to push computational loads from the network core to the network edge, with the aim of providing faster responses to end users.
Deep Edge Intelligence (DEI) is a combination of DL, AI, EC and IoT. It enables the development and deployment of DL and AI techniques, based on EC, on edge devices, e.g., IoT devices, where the data are generated, aiming to provide AI for every person and every organization at any place. This special session seeks to bring together research that sheds light on the ways in which AI, Deep Learning, IoT, edge and fog computing will mutually shape the future of the next generation of information technology.
The term human-in-the-loop (HIL) is used in different ways in different research fields. Human-in-the-loop learning algorithms refer to algorithms that include human feedback in the training loop of machine learning models, to improve the quality of training and to augment the functions of the model. In hardware applications, human-in-the-loop learning devices or robots refer to the transfer of skills from users to robots, where a human’s skills can be actively learned by the robot’s model to enhance its intelligence or skill level in an active and autonomous way. In system design and implementation processes, “human-in-the-loop” refers to the active integration of human inputs with different physiological inputs, such as brain-computer interfaces (BCI). Although “human-in-the-loop” methodologies differ across research scopes, their central approach remains unified, and interdisciplinary research and development can be foreseen. This special session aims to reduce the gap between robotic systems’ techniques and settings and users’ practical needs, through user input and optimization.
Regulatory bodies around the world, notably in the framework of the European Commission, are proposing laws to regulate the use of artificial intelligence (AI), especially in critical applications such as finance. These so-called high-risk AI systems are being recommended to adhere to explainability principles, in the spirit of creating trustworthy AI that can overcome hurdles that exist in real applications due to a lack of model interpretability.
Interpretability has been a focus of research since the beginning of Deep Learning, because high accuracy and high abstraction bring the black-box problem, i.e., the accuracy-versus-interpretability trade-off. This aspect is also important because of trustworthiness issues: a model that is not trusted is a model that will not be used. These issues often arise in real application scenarios, where end users are not easily convinced of the reliability of black-box models.
The financial sector is one of the largest users of digital technologies and a major driver of the digital transformation of the economy. Financial technology (FinTech) aims both to compete with and to support the established financial industry in the delivery of financial services. As emerging financial crises bring to everyone’s attention, the financial sector is at the forefront of public concern; credit scoring models, for example, are explicitly given as a high-risk use case where standard intelligent models may fail in this new era.
The aim of this special session is, firstly, to collect papers describing intelligent, innovative approaches to addressing FinTech challenges. Secondly, focusing on trustworthiness, explainability, and interpretability, we aim to accept a set of research papers that contribute to improving the transparency of AI-supported processes in the FinTech space. Finally, to address the rapid proliferation of AI models within the financial industry, papers on approaches for risk assessment and responsible AI decision-making are encouraged.
Modern AI and advanced sensing technologies have been transforming our ability to monitor the Earth and explore the Universe. By analysing and interpreting data (primarily imagery) captured by remote sensing devices on satellites, aircraft, or UAVs, and by astronomical telescopes that operate either on the ground or in orbit, valuable insights can be gained into events on the Earth and the history of the Universe.
Recent years have witnessed rapid advances in remote sensing technologies, resulting in an explosive growth of Earth observation data for probing the entire Earth at daily or even finer granularity. At the same time, many new astronomical telescopes with enhanced sensing capabilities, like the recently launched James Webb Space Telescope, have been put into operation, generating massive data about never-before-explored aspects of the Universe. Nowadays, thanks to the boom of modern AI techniques, particularly deep learning, armed with an unprecedented growth in super-computing power, such space data can be transformed into valuable scientific discoveries and actionable insights that may benefit various fields, such as astronomy, transportation, agriculture, and the environment. However, the rapidly increasing complexity and requirements of newly emerging applications in different fields are posing greater challenges to existing AI techniques, leading to surging needs for technological advancement.
This special session aims to bring together researchers from academia, governments, and industry to review past achievements, disseminate the latest studies, and explore future directions for innovating and applying modern AI techniques, particularly deep learning, to analyse space data, primarily remote sensing and astronomical imagery, with the aim of fully unleashing the potential value of space data to benefit wide-ranging fields.
Much of current research on Machine Learning (ML) is dominated by methods of the Deep Learning family. The more complex their architectures, the more difficult the interpretation or explanation of how and why a particular network prediction is obtained, or the elucidation of which components of the complex system contributed essentially to the obtained decision. This brings about concerns regarding the interpretability and non-transparency of complex models, especially in high-stakes application areas such as healthcare, national security, industry, or public governance, to name a few, in which decision-making processes may affect citizens. This is, for instance, made especially relevant by rapid developments in the field of autonomous systems, from cars that drive themselves to partner robots and robotic drones. DARPA (Defense Advanced Research Projects Agency), a research agency of the US Department of Defense, was the first to start a research program on Explainable AI (https://www.darpa.mil/program/explainable-artificial-intelligence) with the goal “to create a suite of machine learning techniques that (1) Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and (2) Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.” Research on Explainable AI (XAI) is now supported worldwide by a variety of public institutions and legal regulations, such as the European Union’s General Data Protection Regulation (GDPR) and the forthcoming Artificial Intelligence Act. Similar concerns about transparency and interpretability are being raised by governments and organizations worldwide.
The lack of transparency (interpretability and explainability) of many ML approaches, in the light of these regulations, may end up limiting ML to niche applications, and it poses a significant risk of costly mistakes in the absence of a sound understanding of the flow of information in the model.
The research field of deep learning for graphs studies the application of well-known deep learning concepts, such as convolution operators on images, to the processing of graph-structured data. Graphs are abstract objects that naturally represent interacting systems of entities, where interactions denote functional and/or structural dependencies between them. Molecular compounds and social networks are the most common examples of such graphs: on the one hand, a molecule is seen as a system of interacting atoms whose bonds depend, e.g., on their inter-atomic distance; on the other hand, a social network represents a vastly heterogeneous set of user-user interactions, as well as interactions between users and items, like pictures, movies, and songs. Besides, graph representations are extremely useful in far more domains, for instance to encode symmetries and constraints of combinatorial optimization problems as a proxy for our a-priori knowledge. For these reasons, learning how to properly map graphs and their nodes to values of interest poses extremely important, yet challenging, research questions. This special session on graph learning solicits recent advances that exploit graph structure to benefit the solving of real-world problems.
The special session we propose is an excellent opportunity for the machine learning community at IJCNN 2023 to come together to host novel ideas, showcase potential applications, and discuss new directions of this remarkably successful research field. In particular, the special session will attract papers proposing deep learning models and methods for graphs, e.g., graph coarsening, structure learning, graph kernels and distances, and graph stream processing. Theoretical results, benchmarks, and practical applications are also welcome and encouraged.
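To give a rough sense of how a convolution operator carries over to graphs, here is a minimal NumPy sketch of a single GCN-style layer (Kipf-and-Welling normalization) on a toy graph; the graph, node features, and untrained weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny undirected graph of 5 nodes, given as an adjacency matrix.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
X = rng.normal(size=(5, 3))  # node feature vectors

def gcn_layer(A, X, W):
    """One graph-convolution step:
    H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W),
    i.e., each node averages its neighbours' (and its own) features,
    then applies a learned linear map and a nonlinearity."""
    A_hat = A + np.eye(len(A))          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

W = rng.normal(size=(3, 4))  # untrained weights, for shape illustration only
H = gcn_layer(A, X, W)
print("node embeddings shape:", H.shape)
```

Stacking such layers lets information propagate over longer paths in the graph, which is the graph analogue of growing receptive fields in image convolutions.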