The 27th International Conference on Neural Information Processing
(ICONIP2020)

November 18 - November 22, 2020
Bangkok, Thailand


Call for Papers

The 27th International Conference on Neural Information Processing (ICONIP2020) aims to provide a leading international forum for researchers, scientists, and industry professionals working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

ICONIP2020 will be held in Bangkok, Thailand, from November 18 to November 22, 2020, and will be co-located with several events, including ACML, iSAI-NLP, AIoT, CSBio, and DLAI4.

Topics

ICONIP2020 will deliver keynote speeches, invited talks, full paper presentations, posters, tutorials, workshops, social events, etc. Topics covered include but are not limited to:
Theory and Algorithms

  • Causality and explainable AI
  • Computational intelligence
  • Control and decision theory
  • Constraint and uncertainty theory
  • Machine learning
  • Neurodynamics
  • Neural network models
  • Optimization
  • Pattern recognition
  • Time series analysis

Computational and Cognitive Neurosciences

  • Affective and cognitive learning
  • Biometric systems/interfaces
  • Brain-machine interface
  • Computational psychiatry
  • Decision making and control
  • Neuroeconomics
  • Neural data analysis
  • Reasoning and consciousness
  • Sensory perception
  • Social cognition

Human Centred Computing

  • Bioinformatics
  • Biomedical information
  • Healthcare
  • Human activity recognition
  • Human-centred design
  • Human–computer interaction
  • Neuromorphic hardware
  • Recommender systems
  • Social networks
  • Sports and rehabilitation

Applications

  • Big data analysis
  • Computational finance
  • Image processing and computer vision
  • Data mining
  • Information security
  • Information retrieval
  • Multimedia information processing
  • Natural language processing
  • Robotics and control
  • Web search and mining

Important Dates



Workshop/Special Session Proposal Deadline: May 1, 2020
Tutorial Proposal Deadline: May 1, 2020
Notification of Workshop/Special Session/Tutorial Proposal: May 8, 2020
Paper Submission Deadline: June 28, 2020 (extended from June 1, 2020)
Paper Notification Date: August 31, 2020
Paper Camera Ready Deadline: September 15, 2020

Paper Information


Papers should be written in English and follow the Springer LNCS format. Review is single-blind, so author names may appear in the submission. Submission of a paper implies that it is original, is not under review or copyright-protected elsewhere, and will be presented by one of the authors if accepted. All submitted papers will be refereed by experts in the field on the criteria of originality, significance, quality, and clarity.

The proceedings will be published in Springer's Lecture Notes in Computer Science series. Selected papers will be published in a special issue of an SCI-indexed journal.

Final papers will normally be 10 pages, with a maximum of 12 pages. The page count includes everything, including references and appendices. Please follow Springer's proceedings LaTeX template provided on Overleaf (https://www.overleaf.com/latex/templates/springer-lecture-notes-in-computer-science/kzwwpvhwnvfj#.WuA4JS5uZpi).


Submission site


https://cmt3.research.microsoft.com/User/Login?ReturnUrl=%2Ficonip2020

Workshop/Special Sessions of ICONIP 2020

Special Sessions


(1) Human-in-the-Loop Interactions in Machine Learning
Dr. Zehong (Jimmy) Cao, University of Tasmania, TAS, Australia (Zehong.Cao@utas.edu.au)
Prof. Chin-Teng Lin, University of Technology Sydney, NSW, Australia
Prof. Dongrui Wu, Huazhong University of Science and Technology, Wuhan, China

Abstract: Extracting information from the massive amount of human natural-behaviour and cognition patterns now available supports machine learning and decision making in many fields, from computer science to engineering. Human-in-the-loop approaches, in which humans interact with machine learning, are gaining popularity as a way to train more accurate models, because human feedback in the machine's learning loop helps it improve faster. Recent advances in machine learning are giving momentum to human-in-the-loop approaches that enable complex paradigms operating in connection with human beings. Given the remarkable achievements in processing human physiological signals obtained from neuroimaging modalities and cognitive systems, this framework has been proposed as a useful and effective way to model and understand human behaviour and cognition patterns, as well as to enable a direct communication pathway between human and machine. This paves the way for developing new human-in-the-loop interaction and interfacing techniques in reasoning and machine learning that foster the capability to understand and model the training process.

(2) 13th International Workshop on Artificial Intelligence and Cybersecurity (AICS 2020)
Dr. Kitsuchart Pasupa (kitsuchart@it.kmitl.ac.th)
Prof. Kaizhu Huang, Xi'an Jiaotong-Liverpool University

Abstract: The 13th International Workshop on Artificial Intelligence and Cybersecurity (AICS 2020) was previously the International Data Mining and Cybersecurity Workshop (DMC), which has been held for ten consecutive years. The purpose of AICS is to raise awareness of cybersecurity, promote the potential of industrial applications, and give young researchers exposure to the key issues and ongoing work in this area. AICS 2020 will provide a forum for researchers, security experts, engineers, and students to present the latest research, share ideas, and discuss future directions in the fields of data mining, artificial intelligence, and cybersecurity. Website: http://www.csmining.org/

(3) Healthcare Analytics: Improving Healthcare Outcomes Using Big Data Analytics
Dr. Imran Razzak, School of Information Technology, Deakin University Geelong, Australia, imran.razzak@deakin.edu.au
Dr. Peter Eklund, School of Information Technology, Deakin University, Australia
Dr. Ibrahim A Hameed, Norwegian University of Science and Technology, Norway

Abstract: The field of health informatics has revolutionized the face of health care in the past decade. The increasingly aging population, the prevalence of chronic diseases, and rising costs have brought unique healthcare challenges to our global society. Informatics-based solutions have not only changed the way information is collected and stored but also played a crucial role in the management and delivery of healthcare. Intelligent and automated data processing has never been more important than it is today. In recent years, intelligent systems have emerged as a promising tool for solving problems in various healthcare-related domains. With the advent of swift data acquisition systems and recent developments in healthcare information technology, huge amounts of data have been amassed in different forms. One of the key challenges in this domain is to build intelligent systems that effectively model, organize, and interpret the available healthcare data. Healthcare service providers increasingly acknowledge the strategic importance of data analytics; the challenge is how to translate Big Data into information that healthcare professionals can use in decision making to improve outcomes and the quality of care.

(4) The Synergy of Software Engineering Automation and Machine Learning (SSEA-ML)
Dr. Sajid Anwar, Center for Excellence in Information Technology, Institute of Management Sciences, Peshawar, Pakistan, sajid.anwar@imsciences.edu.pk
Dr. Abdul Rauf, RISE-Research Institute of Sweden in Vasteras, Sweden
Dr. Imran Razzak, School of Information Technology, Deakin University, Australia

Abstract: Contemporary economies and their GDPs rely heavily on up-to-date computer-based systems, both for the operations that manufacture a product and for forming it into a viable commodity. The rapid growth of technological development paradigms has posed numerous challenges to software engineers, ranging from micro-technological challenges to human-machine communication in Industry 4.0. The emergence of Machine Learning (ML) as the epicenter of computational research in the last decade has helped researchers find optimal solutions. With the emergence of Industry 4.0, the application of ML has widened to all phases of the system development life cycle, from requirements to maintenance and from planning to continuous improvement. This widening of scope has led to extended and improved intelligent tools for automatic extraction of information from documents, identification of functional and non-functional requirements, generation of test suites, and more. With swift progress in ML and artificial intelligence, ML-based techniques and methodologies for software engineering are being introduced and optimized for greater efficiency of software engineers, processes, and products. From requirements to test cases, and from architecture to documentation, ML artifacts and tools are now being employed.

(5) Advanced Machine Learning Approaches in Cognitive Computing
Dr. Jonathan H. Chan, jonathan@sit.kmutt.ac.th, Computer Science, School of Information Technology, King Mongkut's University of Technology Thonburi
Dr. Phayung Meesad, phayung.m@it.kmutnb.ac.th
Dr. Kuntpong Woraratpanya, kuntpong@it.kmitl.ac.th
Prof. Yoshimitsu Kuroki, kuroki@kurume-nct.ac.jp

Abstract: Cognitive technology has far-reaching applications in multiple sectors and is transforming global business today. Cognitive computing applications can be used by finance and investment firms to analyze the market in specific ways for their clients and make valuable suggestions. In healthcare and veterinary medicine, physicians can use cognitive computing tools to interact with past patient records and a database of medical information to aid and guide treatment. Cognitive computing applications in the travel industry could aggregate available travel information, including flight and resort prices and availability, and combine it with user preferences, budget, etc., to deliver a streamlined, customized travel experience that saves consumers time, money, or both. In the health and wellness domain, data collected from wearable devices like a Fitbit or Apple Watch can help personal trainers and individuals get suggestions on how to change their diet or exercise program, or even how to manage their sleep and stress-reducing routines. One of the key successes of modern applications undeniably comes from cognitive computing; this special session therefore focuses on recent, high-quality work that promotes key advances in cognitive computing technology, covering theoretical and practical aspects from basic research to application development, and providing overviews of the state of the art in emerging domains.

(6) Randomization-Based Deep and Shallow Learning Algorithms
Prof. Ponnuthurai Nagaratnam Suganthan, EPNSugan@ntu.edu.sg, Nanyang Technological University
Dr. M. Tanveer, mtanveer@iiti.ac.in, Indian Institute of Technology

Abstract: Randomization-based learning algorithms have received considerable attention from academics, researchers, and domain workers because randomization-based neural networks can be trained by non-iterative approaches with closed-form solutions. Such methods are in general computationally faster than iterative solutions and less sensitive to parameter settings. Even though randomization-based non-iterative methods have attracted much attention in recent years, their deep structures have not been sufficiently developed or benchmarked; this special session aims to bridge that gap. Its first target is to present recent advances in randomization-based learning methods, which usually offer non-iterative closed-form solutions. Secondly, it focuses on promoting the concepts of non-iterative optimization with respect to counterparts such as gradient-based methods and derivative-free iterative optimization techniques. Besides disseminating the latest research results on randomization-based and/or non-iterative algorithms, the session is also expected to cover practical applications, present new ideas, and identify directions for future studies. Selected papers will be invited to an Applied Soft Computing Journal Special Issue.

(7) Graph Neural Networks for Cognition and Development
Dr. Xu Yang, xu.yang@ia.ac.cn, Institute of Automation, Chinese Academy of Sciences
Dr. Shen-Lan Liu, liusl@dlut.edu.cn
Prof. Zhi-Yong Liu, zhiyong.liu@ia.ac.cn

Abstract: Cognitive ability and developmental function have widely been considered highly related to the essence of intelligence. Structural data, containing both attributes and relations, are of particular importance for cognition and development research, such as structured features or structured knowledge. Graphs provide a natural way to represent and analyze structure in these data, and graph neural networks (GNNs), as deep learning models on graphs, have demonstrated superior performance on many types of structural data processing tasks, thanks to their powerful structural representation and inference abilities. Among these tasks, this special session focuses on GNN-based methods and their applications to cognition and development tasks of autonomous systems. Many new types of GNNs and typical applications are currently emerging to cater to the needs of processing and understanding structural data in cognition and development research. The objective of the special session is thus to provide an opportunity for researchers and engineers from both academia and industry to publish their latest and original results on the underlying theory, models, optimization algorithms, and applications of GNNs for cognition and development.

(8) NIPBIS2020: First International Workshop on Neural Information Processing for Big Data and IoT in Smart Cities
Prof. Loo Chu Kiong, University of Malaya, Malaysia, ckloo.um@um.edu.my
Prof. Gwanggil Jeon, Incheon National University, South Korea, gjeon@inu.ac.kr
Prof. Marco Anisetti, University of Milan, Italy, marco.anisetti@unimi.it

Abstract: A smart city is an urban area that uses different types of electronic Internet of Things (IoT) sensors to collect data and then uses the insights gained from those data to manage assets, resources, and services efficiently. Smart cities can enhance quality of life and knowledge in contemporary society and represent the next wave of civilization. The main techniques contributing to the realization of smart connected cities include big data, IoT, mobility, smart computing, cyber-physical social systems, artificial intelligence, data science, machine learning, and cognitive computing. Neural information processing, such as artificial intelligence and machine learning integrated with IoT, can address key challenges presented by an excessive urban population, including renewable energy, energy crises, transportation, healthcare and finance issues, and disaster management, and can improve the lives of the citizens and businesses that inhabit a smart city.

(9) Uncertainty Estimation: Theories and Applications
Prof. Saeid Nahavandi, Deakin University, saeid.nahavandi@deakin.edu.au
A/Prof. Abbas Khosravi, Deakin University, abbas.khosravi@deakin.edu.au
Prof. Amir F Atiya, Cairo University, Egypt, amir@alumni.caltech.edu

Abstract: How confident is a neural network model about its prediction? How much can one trust predictions of neural networks for new samples? How can one develop neural networks that know when they do not know? Answering these questions is a prerequisite for widespread deployment of neural networks in safety-critical applications. The field of uncertainty quantification of neural networks has received huge attention in recent years from both academia and industry. Several methods and frameworks have been proposed in the literature to generate predictive uncertainty estimates using neural networks. There are currently theoretical gaps and practical issues with proposed frameworks for uncertainty estimation using neural networks. Also, the research on the application of predictive uncertainty estimates for developing uncertainty-aware systems is still rare.
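One simple way to obtain the predictive uncertainty this abstract asks about is to train an ensemble of models and use their disagreement as the uncertainty estimate. The sketch below is illustrative only, not a method from the session: the toy data, the random-feature ridge regressors, and all sizes are our own assumptions. It shows the ensemble's predictive standard deviation growing far from the training data, i.e., the model "knows when it does not know":

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data only covers [-2, 2]; we probe uncertainty outside it
X_train = rng.uniform(-2, 2, size=(100, 1))
y_train = np.sin(X_train).ravel() + 0.05 * rng.standard_normal(100)

def fit_member(X, y, seed):
    """One ensemble member: a ridge regression on its own random features."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(1, 50))
    b = r.uniform(-np.pi, np.pi, 50)
    feats = np.cos(X @ W + b)
    beta = np.linalg.solve(feats.T @ feats + 1e-3 * np.eye(50), feats.T @ y)
    return W, b, beta

def predict_member(params, X):
    W, b, beta = params
    return np.cos(X @ W + b) @ beta

members = [fit_member(X_train, y_train, s) for s in range(10)]

X_test = np.array([[0.0], [5.0]])   # in-distribution vs. far from the data
preds = np.stack([predict_member(m, X_test) for m in members])
mean, std = preds.mean(axis=0), preds.std(axis=0)
print(f"std at x=0: {std[0]:.3f}, std at x=5: {std[1]:.3f}")
```

The members agree where data constrained them and diverge elsewhere, which is the intuition behind ensemble-based uncertainty quantification.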

Workshops


(1) Workshop on Cross-Modal Learning for Visual Question Answering
Dr. Zhou Zhao, Zhejiang University, China, zhaozhou@zju.edu.cn
Dr. Zhou Yu, Hangzhou Dianzi University, China

Abstract: Visual Question Answering (VQA) is a recent hot topic in multimedia analysis, computer vision, natural language processing, and, in a broad perspective, artificial intelligence, and it has attracted a large amount of interest from the deep learning, computer vision, and natural language processing communities. Given an image (or a video clip) and a question in natural language, VQA requires grounding textual concepts in visual elements so as to infer the correct answer. The challenge is that, in most cases, it requires reasoning over the connections between visual content and language as well as over external knowledge. Towards general applications, besides understanding visual content, much of VQA's potential comes from leveraging different kinds of data (e.g., visual, audio, and text) across multiple sources (e.g., social-media sites, surveillance videos, and Wikipedia) for knowledge discovery and QA reasoning, which is recognized as cross-modal analysis in the multimedia scope. Against this background, this workshop focuses on new theory and algorithms for visual question answering via cross-media analysis, as well as their applications in human-computer interaction, multimedia search, visual description for the blind, incident reporting for surveillance, seeing chat bots, and even robotic intelligence.

Tutorials

Tutorial 1

Title: Advances in Randomized Learning Techniques for Neural Networks
Author: Dianhui Wang (La Trobe University, Australia)
Email: dh.wang@latrobe.edu.au

Abstract: Randomized learning techniques for training neural networks have received considerable attention in the past decades, mainly because this class of learning algorithms can provide feasible solutions with comparable modelling performance. In 1992, Pao and Takefuji proposed random vector functional-link (RVFL) nets, in which the hidden-layer parameters are randomly generated and then fixed during the learning process. A similar randomized learning algorithm for the single-layer perceptron model was also proposed by Schmidt et al. in 1992, who suggested assigning the random input weights and biases uniformly in [-1, 1]. However, these existing randomized algorithms cannot guarantee a capable learner model, although some theoretical results on the universal approximation property of randomized neural networks were established by Igelnik and Pao in 1995. Recently, we developed a new randomized learning algorithm that ensures the resulting models, termed Stochastic Configuration Networks (SCNs), share the universal approximation property. This tutorial aims to clarify historical developments with milestone results and provide a deep insight into randomized learning techniques for constructing deep neural networks.
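The non-iterative training described in this abstract can be sketched in a few lines of NumPy: hidden weights and biases are drawn uniformly from [-1, 1] and fixed, and only the output weights are found by a regularized least-squares solve. This is an illustrative sketch of the RVFL idea, not code from the tutorial; the toy data and all parameter choices are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# Random hidden layer: weights and biases drawn uniformly from [-1, 1]
# and then FIXED during learning, as in RVFL nets / Schmidt et al. (1992)
n_hidden = 100
W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
b = rng.uniform(-1, 1, size=n_hidden)
H = np.tanh(X @ W + b)          # hidden activations

# RVFL's direct link: the raw input is also fed to the output layer
D = np.hstack([H, X])

# Non-iterative training: a single regularized least-squares solve
lam = 1e-6
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

y_hat = D @ beta
mse = np.mean((y - y_hat) ** 2)
print(f"training MSE: {mse:.2e}")
```

Because the only trainable parameters are `beta`, training reduces to one linear solve, which is why such models are fast and insensitive to learning-rate settings.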

Bio: Dr Wang was awarded a Ph.D. from Northeastern University, Shenyang, China, in 1995. From 1995 to 2001, he worked as a Postdoctoral Fellow at Nanyang Technological University, Singapore, and a Researcher at The Hong Kong Polytechnic University, Hong Kong, China. He joined La Trobe University in July 2001 and is currently a Reader and Associate Professor with the Department of Computer Science and Information Technology, La Trobe University, Australia. He is also a Professor at the State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, China. His current research focuses on industrial big-data-oriented machine learning theory and applications, specifically on Deep Stochastic Configuration Networks (http://www.deepscn.com/) for data analytics in process industries, intelligent sensing systems, and power engineering.

Dr Wang is a Senior Member of IEEE, and serves as an Associate Editor for IEEE Transactions on Cybernetics, Information Sciences, and WIREs Data Mining and Knowledge Discovery.


Tutorial 2

Title: Transfer Learning for Brain-Computer Interfaces
Author: Dongrui Wu (Huazhong University of Science and Technology, China)
Email: drwu@hust.edu.cn

Abstract: A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. Electroencephalography (EEG) is the most frequently used input signal in BCIs. However, EEG signals are weak, easily contaminated by interference and noise, non-stationary even for the same subject, and vary across subjects and sessions. It is therefore difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, sessions, devices, and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user-unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce this calibration effort. This tutorial reviews the basics of EEG-based BCIs and the progress on TL approaches in the last few years, i.e., since 2016.
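One widely used transfer learning step in EEG-based BCIs is Euclidean alignment: each subject's trials are whitened by the inverse square root of that subject's mean spatial covariance, so that trials from different subjects end up referenced to a shared state. The sketch below uses synthetic "EEG" with made-up shapes and a made-up mixing matrix; it illustrates only this one alignment step, not the tutorial's full scope:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "EEG": 20 trials x 8 channels x 250 samples for one subject,
# coloured by a subject-specific mixing matrix (all shapes illustrative)
n_trials, n_ch, n_t = 20, 8, 250
A = rng.normal(size=(n_ch, n_ch))
trials = np.stack([A @ rng.normal(size=(n_ch, n_t)) for _ in range(n_trials)])

def euclidean_align(trials):
    """Whiten every trial by the inverse square root of the subject's
    mean spatial covariance, so aligned covariances average to identity."""
    covs = np.stack([X @ X.T / X.shape[1] for X in trials])
    R = covs.mean(axis=0)
    vals, vecs = np.linalg.eigh(R)        # R is symmetric positive definite
    R_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.stack([R_inv_sqrt @ X for X in trials])

aligned = euclidean_align(trials)
mean_cov = np.mean([X @ X.T / n_t for X in aligned], axis=0)
print(np.allclose(mean_cov, np.eye(n_ch)))  # True: shared reference state
```

After applying the same transformation per subject, a classifier trained on one subject's aligned trials transfers more readily to another's, which is the calibration-reduction effect the abstract describes.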

Bio: Dongrui Wu received a B.E in Automatic Control from the University of Science and Technology of China, Hefei, China, in 2003, an M.Eng in Electrical and Computer Engineering from the National University of Singapore in 2005, and a PhD in Electrical Engineering from the University of Southern California, Los Angeles, CA, in 2009. He was a Lead Researcher at GE Global Research, NY, and a Chief Scientist of several startups. He is now a Professor and Deputy Director of the Key Laboratory of the Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.

Prof. Wu's research interests include affective computing, brain-computer interface, computational intelligence, and machine learning. He has more than 140 publications (6,300+ Google Scholar citations; h=38), including a book "Perceptual Computing" (with Jerry Mendel, Wiley-IEEE, 2010), and five US patents. He received the IEEE International Conference on Fuzzy Systems Best Student Paper Award in 2005, the IEEE Computational Intelligence Society (CIS) Outstanding PhD Dissertation Award in 2012, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award in 2014, the North American Fuzzy Information Processing Society (NAFIPS) Early Career Award in 2014, the IEEE Systems, Man and Cybernetics (SMC) Society Early Career Award in 2017, and the IEEE SMC Society Best Associate Editor Award in 2018. He was a finalist of the IEEE Transactions on Affective Computing Most Influential Paper Award in 2015, the IEEE Brain Initiative Best Paper Award in 2016, the 24th International Conference on Neural Information Processing Best Student Paper Award in 2017, the Hanxiang Early Career Award in 2018, and the USERN Prize in Formal Sciences in 2019. He was a selected participant of the Heidelberg Laureate Forum in 2013, the US National Academies Keck Futures Initiative (NAKFI) in 2015, and the US National Academy of Engineering German-American Frontiers of Engineering (GAFOE) in 2015. His team won the First Prize of the China Brain-Computer Interface Competition in 2019.

Prof. Wu is/was an Associate Editor of the IEEE Transactions on Fuzzy Systems (2011-2018), the IEEE Transactions on Human-Machine Systems (2014-), the IEEE Computational Intelligence Magazine (2017-), and the IEEE Transactions on Neural Systems and Rehabilitation Engineering (2019-). He was the lead Guest Editor of the IEEE Computational Intelligence Magazine Special Issue on Computational Intelligence and Affective Computing, and the IEEE Transactions on Fuzzy Systems Special Issue on Brain Computer Interface. He is a Senior Member of the IEEE, a Board member and Distinguished Speaker of the NAFIPS, and a member of IEEE Systems, Man and Cybernetics Society Brain-Machine Interface Systems Technical Committee, IEEE CIS Fuzzy Systems Technical Committee, Emergent Technologies Technical Committee, and Intelligent Systems Applications Technical Committee. He has been Chair/Vice Chair of the IEEE CIS Affective Computing Task Force since 2012.


Tutorial 3

Title: Fundamentals of Deep Learning for Computer Vision
Author: Jonathan Chan (King Mongkut’s University of Technology Thonburi)
Email: jonathan@sit.kmutt.ac.th

Abstract: The NVIDIA Deep Learning Institute (DLI) and IC2-DLAB, School of Information Technology, King Mongkut’s University of Technology Thonburi (KMUTT) invite you to attend a hands-on deep learning workshop at ICONIP 2020, exclusively for verifiable academic students, staff, and researchers. This workshop teaches deep learning techniques for a range of computer vision tasks through a series of hands-on exercises. You will work with widely-used deep learning tools, frameworks, and workflows to train and deploy neural network models on a fully-configured, GPU-accelerated workstation in the cloud. After a quick introduction to deep learning, you will advance to: building and deploying deep learning applications for image classification and object detection, modifying your neural networks to improve their accuracy and performance, and implementing the workflow you have learned on a final project. At the end of the workshop, you will have access to additional resources to create new deep learning applications on your own. Upon successful completion of the workshop, participants will receive NVIDIA DLI Certification to recognize subject matter competency.

Bio: Dr. Jonathan H. Chan is an Associate Professor of Computer Science and a co-founder of D-Lab at the School of Information Technology (SIT), King Mongkut's University of Technology Thonburi (KMUTT), Thailand. Currently, he is the Acting Director of IC2-DLab at SIT, KMUTT. Jonathan holds a B.A.Sc., M.A.Sc., and Ph.D. degree from the University of Toronto and was a visiting professor back there on several occasions. He also holds an honorary Visiting Scientist status at The Centre for Applied Genomics at The Hospital for Sick Children (SickKids) in Toronto, Canada. Besides being the Section Editor of Heliyon Computer Science (Cell Press), Dr. Chan is an Action Editor of Neural Networks (Elsevier), and a member of the editorial boards of International Journal of Machine Intelligence and Sensory Signal Processing (Inderscience), International Journal of Swarm Intelligence (Inderscience), and Proceedings in Adaptation, Learning and Optimization (Springer). Also, he is a reviewer for a number of refereed international journals including Information Sciences, Applied Soft Computing, Expert Systems with Applications, and Computers in Biology and Medicine. He has served on the program, technical, organizing and/or advisory committees for numerous major international conferences. Moreover, Dr. Chan is a Past-President of the former Asia Pacific Neural Network Assembly (APNNA) and the VP of Education and a Governing Board member of the current Asia Pacific Neural Network Society (APNNS). In addition, he is a founding member and the current Chair of the IEEE-CIS Thailand Chapter. Dr. Chan is a senior member of IEEE, ACM, and INNS, and a member of the Professional Engineers of Ontario (PEO). Furthermore, he holds an NVIDIA Deep Learning Institute (DLI) University Ambassadorship and is a certified DLI instructor. His research interests include intelligent systems, biomedical informatics, and data science and machine learning in general.


Tutorial 4

Title: Compressed Communication for Large-scale Distributed Deep Learning
Author: El Houcine Bergou, Aritra Dutta, and Panos Kalnis (King Abdullah University of Science and Technology, Saudi Arabia)
Email: panos.kalnis@kaust.edu.sa

Abstract: Recent advances in machine learning and the availability of huge corpora of digital data have resulted in explosive growth of DNN model sizes; consequently, the required computational resources have dramatically increased, and distributed learning is becoming the de facto norm. However, scaling the various systems to support fast DNN training on large clusters of compute nodes is challenging. Recent works have identified that most distributed training workloads are communication-bound. To remedy the network bottleneck, various compression techniques have emerged, including sparsification and quantization of the communicated gradients, as well as low-rank methods. Despite the potential gains, researchers and practitioners face a daunting task when choosing an appropriate compression technique, because training speed and model accuracy depend on multiple factors such as the framework used for the implementation, the communication library, the network bandwidth, and the characteristics of the model, to name a few.
In this tutorial, we will provide an overview of state-of-the-art gradient compression methods for distributed deep learning. We will present the theoretical background and convergence guarantees of the most representative sparsification, quantization, and low-rank compression methods. We will also discuss their practical implementation in TensorFlow and PyTorch with different communication libraries, such as Horovod, OpenMPI, and NCCL. Additionally, we will present a quantitative comparison of the most popular gradient compression techniques in terms of training speed and model accuracy for a variety of deep neural network models and datasets. We aim to provide a comprehensive theoretical and practical background that will allow researchers and practitioners to utilize the appropriate compression methods in their projects.
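As a concrete illustration of the sparsification methods mentioned above, here is a minimal top-k gradient compressor with error feedback: the part of the gradient that compression discards is carried over and added back at the next step. The gradient is a random stand-in, and k and the vector size are arbitrary choices of ours, not values from the tutorial:

```python
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries of the gradient; zero the rest.
    Returns the sparse gradient and the residual (what compression lost)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, grad - sparse

# Error feedback: the residual is added back before the next compression,
# so no gradient mass is permanently dropped
rng = np.random.default_rng(3)
residual = np.zeros(1000)
for step in range(5):
    grad = rng.normal(size=1000)        # stand-in for a real gradient
    corrected = grad + residual
    sparse, residual = topk_compress(corrected, k=10)
    # only `sparse` (10 of 1000 entries) would be communicated
    print(step, np.count_nonzero(sparse))
```

In a real distributed setting only the k values and their indices cross the network, which is what turns a communication-bound workload into a compute-bound one.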

Bio:
  1. Panos Kalnis is Professor at the King Abdullah University of Science and Technology (KAUST) and served as Chair of the Computer Science program from 2014 to 2018. In 2009 he was a visiting assistant professor at Stanford University. Before that, he was an assistant professor at the National University of Singapore (NUS). In the past he was involved in the design and testing of VLSI chips and worked in several companies on database design, e-commerce projects, and web applications. He served as associate editor for the IEEE Transactions on Knowledge and Data Engineering (TKDE) from 2013 to 2015, and on the editorial board of the VLDB Journal from 2013 to 2017. He received his Diploma from the Computer Engineering and Informatics Department, University of Patras, Greece, in 1998 and his PhD from the Computer Science Department, Hong Kong University of Science and Technology (HKUST) in 2002. His research interests include Big Data, parallel and distributed systems, large graphs, and systems for machine learning. He has published extensively in venues such as SIGMOD, PVLDB, SIGKDD, EUROSYS, WWW, and AAAI, and his work has received more than 8,800 citations (Google Scholar). https://scholar.google.com/citations?user=-NdSrrYAAAAJ

  2. Aritra Dutta is a Postdoctoral Fellow at the ECRC in the CEMSE Division at KAUST. His research interests include weighted and structured low-rank approximation of matrices; convex, nonlinear, and stochastic optimization; numerical analysis; linear algebra; distributed computing; and machine learning. In addition, he works on applications of image and video analysis in computer vision and machine learning. Dr. Dutta received his B.S. in mathematics from Presidency College, Calcutta, India, in 2006, his M.S. in mathematics and computing from the Indian Institute of Technology (IIT), Dhanbad, in 2008, a second M.S. in mathematics from the University of Central Florida (UCF), Orlando, in 2011, and his PhD in mathematics from UCF in 2016. https://scholar.google.com/citations?user=vquoiHsAAAAJ

  3. El Houcine Bergou is a Research Scientist at KAUST, working mainly on numerical optimization and its applications. He received his PhD in Applied Mathematics from the National Polytechnic Institute of Toulouse, France. His research interests span all areas that intersect with optimization, including algorithms, machine learning, statistics, and operations research. He is particularly interested in algorithms for large-scale optimization, including randomized and distributed optimization methods. https://scholar.google.com/citations?user=qzxprWoAAAAJ


Tutorial 5

Title: Adversarial Attacks on Deep Learning Models in Natural Language Processing
Author: Wei Emma Zhang (The University of Adelaide, Australia)
Email: wei.e.zhang@adelaide.edu.au

Abstract: With the development of high-performance computational devices, deep neural networks (DNNs) have in recent years gained significant popularity in many Artificial Intelligence (AI) applications. However, previous work has shown that DNNs are vulnerable to strategically modified samples, named adversarial examples. These samples are generated with imperceptible perturbations, yet can fool DNNs into giving false predictions. Inspired by the popularity of generating adversarial examples against DNNs in Computer Vision (CV), research on attacking DNNs for Natural Language Processing (NLP) applications has emerged in recent years. However, the intrinsic difference between images (CV) and text (NLP) makes it challenging to apply attack methods from CV directly to NLP. Various methods have been proposed that address this difference and attack a wide range of NLP applications. In this tutorial, we present a systematic introduction to all related academic works since their first appearance in 2017. We categorize the research efforts by five categorization criteria and discuss representative and state-of-the-art works, including the very recent BERT-based attack and defence. To make this talk self-contained, we briefly cover preliminary knowledge of NLP and discuss related seminal works in computer vision. We finally discuss open issues to bridge the gap between existing progress and more robust adversarial attacks on NLP DNNs.
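To make the notion of an imperceptible perturbation concrete, the following minimal sketch (our own illustration on a toy logistic model, not any model from the tutorial) applies the gradient-sign idea from the computer-vision literature (FGSM) to flip a prediction with a small L∞-bounded change:

```python
import numpy as np

# A toy logistic classifier: p(y=1|x) = sigmoid(w.x + b).
w = np.array([1.0, -1.5])
b = 0.0
x = np.array([0.6, 0.3])     # clean input: w @ x = 0.15 > 0, positive class

# FGSM-style attack for true label y = 1: step in the direction of the
# sign of the loss gradient w.r.t. the input. For logistic loss,
# d(loss)/dx = (sigmoid(w.x + b) - 1) * w.
eps = 0.2                    # L_inf perturbation budget
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
x_adv = x + eps * np.sign((p - 1.0) * w)

# The perturbed input crosses the decision boundary: w @ x_adv = -0.35 < 0.
```

For text, no such continuous gradient step exists on raw inputs, which is exactly the CV-to-NLP gap the tutorial addresses.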

Bio: Dr. Wei Zhang (publishing name Wei Emma Zhang) is a lecturer in the School of Computer Science, The University of Adelaide, and an early-career researcher in information retrieval, natural language processing, and text mining. She received her PhD in 2017 and spent half a year at IBM Research Australia as a full-time intern. She spent two and a half years at Macquarie University as a postdoctoral researcher before joining The University of Adelaide. Dr. Wei Zhang has produced 50+ publications as books, refereed book chapters, journal articles, and conference papers. She published an authored monograph with Springer in August 2018 on managing data and knowledge bases. Her papers have been published in prestigious computer science journals including ACM Transactions on Internet Technology (CORE A), World Wide Web Journal (CORE A), Communications of the ACM (the flagship publication of the ACM), ACM Transactions on Intelligent Systems and Technology, IEEE Transactions on Services Computing, and IEEE Transactions on Big Data. Her research has also appeared in top-tier international conferences including the Web Conference (WWW), the Intl. Conference on Extending Database Technology (EDBT), the Intl. Conference on Information and Knowledge Management (CIKM), the Intl. Conf. on Service Oriented Computing (ICSOC), and the Intl. Conf. on Web Services (ICWS), all CORE A*/A conferences, usually with acceptance rates of 10-19%.

Dr. Wei Zhang has been working on the topic of this tutorial, namely adversarial attacks on textual data, since 2018 and published a survey paper in 2020. She has also published a conference paper on black-box attacks, and has presented small-scale tutorials within her research group.


Tutorial 6

Title: Deep Learning on Graphs: Methods and Applications
Author: Irwin King, Jiani Zhang, Ziqiao Meng, Xinyu Fu, Yankai Chen, Tianyu Liu, Menglin Yang (The Chinese University of Hong Kong, HKSAR)
Email: king@cse.cuhk.edu.hk

Abstract: Over the past decade, deep learning has achieved tremendous success in various domains. The representation power of deep learning to extract complex patterns layer by layer from underlying data is well recognized. However, applying deep learning to ubiquitous graph data is non-trivial because of the non-Euclidean structure, heterogeneity, and diversity of graphs. The main difficulty in analyzing graph data is finding the right way to express and exploit the graph's underlying structural information. The objective of this tutorial is twofold. First, we provide a comprehensive overview of graph neural network (GNN) methods, mainly by following their development history and the ways these methods solve the challenges posed by graphs. GNNs aim to learn a low-dimensional vector representation for every node in a graph, which can be used for downstream machine learning tasks. To better capture the similarity and hierarchy of entities, we introduce the generalization of GNNs to hyperbolic embeddings, which enforces hyperbolicity in hidden layers and conducts efficient Riemannian optimization. Second, we present how to utilize GNNs to solve real-world graph applications. To demonstrate the properties and challenges of learning on heterogeneous graphs, we investigate the applications of recommender systems, program understanding, and logical queries over knowledge graphs. We show how to apply GNNs to embed complex relations and multiple types of nodes into low-dimensional vectors to solve these problems. To demonstrate the effectiveness of GNNs on spatiotemporal and temporal graphs, we choose the applications of traffic forecasting and anomaly detection. The challenge in these two tasks is how to capture structural relations as well as temporal dependencies.
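As a minimal illustration of the message-passing idea behind GNNs (our own NumPy sketch, not code from the tutorial), one GCN-style layer aggregates each node's neighbourhood, normalizes by degree, and applies a shared linear transform:

```python
import numpy as np

# One GCN-style message-passing step on a 3-node path graph (0 - 1 - 2).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = A + np.eye(3)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # mean-aggregation normalisation
X = np.array([[1.0], [2.0], [3.0]])        # one input feature per node
W = np.array([[0.5]])                      # shared (learnable) weight, fixed here
H = np.maximum(0.0, D_inv @ A_hat @ X @ W) # ReLU(aggregate neighbours, transform)
# H.ravel() -> [0.75, 1.0, 1.25]: each node now mixes its neighbours' features.
```

Stacking such layers lets information propagate over multi-hop neighbourhoods, which is what yields the low-dimensional node representations described above.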

Bio:
• Prof. Irwin King's research interests include machine learning, social computing, AI, web intelligence, data mining, and multimedia information processing. In these research areas, he has over 300 technical publications in journals and conferences. He is an Associate Editor of the journal Neural Networks and of ACM Transactions on Knowledge Discovery from Data (ACM TKDD). He is President of the International Neural Network Society (INNS), an IEEE Fellow, a Distinguished Member of the ACM, and an HKIE Fellow. Moreover, he is or has been General Co-chair of The Web Conference 2020, ICONIP 2020, WSDM 2011, RecSys 2013, and ACML 2015, and has served in various capacities at a number of top conferences such as WWW, NIPS, ICML, IJCAI, and AAAI. While on leave with AT&T Labs Research, San Francisco, he also taught classes as a Visiting Professor at UC Berkeley. He received his B.Sc. degree in Engineering and Applied Science from the California Institute of Technology, Pasadena, and his M.Sc. and Ph.D. degrees in Computer Science from the University of Southern California, Los Angeles.
• Jiani Zhang is a PhD student in computer science and engineering. Her research topics include deep learning, graph neural networks, recommendation, and learning analytics.
• Ziqiao Meng is a PhD student in computer science and engineering. His research topics include the theoretical understanding of graph neural networks and their applications in computer program understanding.
• Xinyu Fu is a PhD student in computer science and engineering. Her research topics include deep learning, graph neural networks, and heterogeneous graph embedding.
• Yankai Chen is a PhD student in computer science and engineering. His research interests include knowledge graphs and problems related to graph neural networks.
• Tianyu Liu is an MPhil student in computer science and engineering. Her research topics include hyperbolic graph embedding and hyperbolic graph neural networks.
• Menglin Yang is a PhD student in computer science and engineering. His research interests include dynamic graph embedding, graph optimization, and curvature graph representation.


Tutorial 7

Title: Robust Adversarial Learning: Fundamentals, Theory, and Applications
Author: Kaizhu Huang (Xi’an Jiaotong-Liverpool University, China)
Email: kaizhu.huang@xjtlu.edu.cn

Abstract: Adversarial learning is a hot topic in machine learning, pattern recognition, and security. In particular, adversarial examples, i.e., augmented data points generated by imperceptibly perturbing input samples, have recently drawn much attention in the community. Being difficult to distinguish from real examples, such adversarial examples can change the predictions of many of the best learning models, including state-of-the-art deep learning models. Various attempts have recently been made to build robust models that take adversarial examples into account. However, these methods either lead to performance drops or are ad hoc in nature and lack mathematical motivation. In this tutorial, we will first present the background of adversarial examples, including their history, motivation, properties, and concepts. We will then discuss in theory how a unified framework and various robust learning models can be built against adversarial examples. Visualizations and illustrative examples will be provided to aid understanding of the theory. Finally, we will extend the theory of adversarial examples to various applications including computer vision, pattern recognition, and cybersecurity. A series of experimental investigations will also be presented to illustrate the usefulness of adversarial examples for robust learning.
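The min-max idea behind robust training can be sketched as follows (a toy illustration of our own, not the unified framework of the tutorial): for a linear model the worst-case L∞ perturbation has a closed form, so each training step first perturbs the inputs adversarially and then takes a gradient step on the perturbed batch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters, labels in {0, 1}.
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b, eps, lr = np.zeros(2), 0.0, 0.1, 0.5

for _ in range(200):
    # Inner maximisation: for a linear model the worst L_inf perturbation
    # within the eps-ball is a closed-form sign step against the true label.
    signs = np.where(y[:, None] == 1, -1.0, 1.0) * np.sign(w)
    X_adv = X + eps * signs
    # Outer minimisation: logistic-regression gradient step on perturbed data.
    p = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
```

Training against the worst case inside the eps-ball pushes the decision boundary away from every sample by at least the perturbation budget, which is the robustness-versus-accuracy trade-off the tutorial examines in depth.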

Bio: Kaizhu Huang is currently a Professor in the Department of Electrical and Electronic Engineering, Xi'an Jiaotong-Liverpool University, China. He is also the founding director of the Suzhou Municipal Key Laboratory of Cognitive Computation and Applied Technology. Prof. Huang has been working in machine learning, adversarial learning and security, neural information processing, and pattern recognition. He was the recipient of the 2011 Asia Pacific Neural Network Society (APNNS) Younger Researcher Award. He also received the Best Book Award in the National Book Competition and best paper (or finalist) awards at international conferences four times. He has published 8 books with Springer and over 180 international research papers, including about 60 in SCI-indexed international journals (e.g., JMLR, Neural Computation, IEEE T-PAMI, IEEE T-NNLS, IEEE T-IP, IEEE T-BME, IEEE T-Cybernetics) and in conferences (NeurIPS, IJCAI, SIGIR, UAI, CIKM, ICDM, ICML, ECML, CVPR). He serves as an associate editor of four international journals (including three JCR-1 journals) and as a board member of three international book series. He has been invited as a keynote speaker at over 20 international conferences and forums, and has sat on grant evaluation panels for the Hong Kong RGC, Singapore AI programmes, and NSFC, China. He has served as a chair or PC member of many international conferences and workshops such as ICONIP, AAAI, ICML, IJCAI, NeurIPS, ICLR, ICDAR, and ACPR. His homepage is http://www.premilab.com/KaizhuHUANG.ashx.

Organizing Committee


Honorary Co-Chairs
  • Jonathan Chan, King Mongkut’s University of Technology Thonburi, Thailand
  • Irwin King, The Chinese University of Hong Kong, Hong Kong
General Co-Chairs
  • Andrew Leung, City University of Hong Kong, Hong Kong
  • James Kwok, The Hong Kong University of Science and Technology, Hong Kong
Program Co-Chairs
  • Haiqin Yang, Ping An Life, China
  • Kitsuchart Pasupa, King Mongkut's Institute of Technology Ladkrabang, Thailand
Local Arrangements Co-Chairs
  • Vithida Chongsuphajaisiddhi, King Mongkut's University of Technology Thonburi, Thailand
Finance Co-Chairs
  • Vajirasak Vanijja, King Mongkut's University of Technology Thonburi, Thailand
  • Seiichi Ozawa, Kobe University, Japan
Special Sessions Co-Chairs
  • Kaizhu Huang, Xi'an Jiaotong-Liverpool University, China
  • Raymond Chi-Wing Wong, The Hong Kong University of Science and Technology, Hong Kong
Tutorial Co-Chairs
  • Zenglin Xu, Harbin Institute of Technology, Shenzhen, China
  • Jing Li, The Hong Kong Polytechnic University, Hong Kong
Proceedings Co-Chairs
  • Xinyi Le, Shanghai Jiao Tong University, China
  • Jinchang Ren, University of Strathclyde, United Kingdom
Publicity Co-Chairs
  • Zeng-Guang Hou, Institute of Automation, Chinese Academy of Sciences, China
  • Ricky Ka-Chun Wong, City University of Hong Kong, Hong Kong
Regional Liaison
  • Yiu-ming Cheung, Hong Kong Baptist University, Hong Kong
  • Junqiu Wei, Noah's Ark Lab, Huawei Technologies, Hong Kong
  • Jianke Zhu, Zhejiang University, China
  • Jiefeng Cheng, Ping An Life, China
  • Kun Zhang, Carnegie Mellon University, USA
  • Zhirong Yang, Norwegian University of Science and Technology, Norway
  • Bo Zhang, Philips Research France, France
  • Paul S Pang, Federation University Australia, New Zealand
  • Somnuk Phon-Amnuaisuk, Universiti Teknologi Brunei, Brunei