Deep learning models are at the core of artificial intelligence research today. It is well known that deep learning techniques are disruptive for Euclidean data, such as images or sequence data, but not immediately applicable to graph-structured data. This gap has driven a wave of research on deep learning for graphs, including graph representation learning, graph generation, and graph classification. The new neural network architectures for graph-structured data (graph neural networks, or GNNs for short) have performed remarkably well on these tasks, as demonstrated by applications in social networks, bioinformatics, and medical informatics. Despite these successes, GNNs still face many challenges, ranging from foundational methodologies to the theoretical understanding of the power of graph representation learning.

This book provides a comprehensive introduction to GNNs. It first discusses the goals of graph representation learning and then reviews the history, current developments, and future directions of GNNs. The second part presents and reviews fundamental methods and theories concerning GNNs, while the third part describes various frontiers built on GNNs. The book concludes with an overview of recent developments in a number of applications of GNNs. It is suitable for a wide audience, including undergraduate and graduate students, postdoctoral researchers, professors and lecturers, and industrial and government practitioners who are new to this area or who already have some basic background but want to learn more about advanced and promising techniques and applications.
Chapter 1. Representation Learning.
Chapter 2. Graph Representation Learning.
Chapter 3. Graph Neural Networks.
Chapter 4. Graph Neural Networks for Node Classification.
Chapter 5. The Expressive Power of Graph Neural Networks.
Chapter 6. Graph Neural Networks: Scalability.
Chapter 7. Interpretability in Graph Neural Networks.
Chapter 8. Graph Neural Networks: Adversarial Robustness.
Chapter 9. Graph Neural Networks: Graph Classification.
Chapter 10. Graph Neural Networks: Link Prediction.
Chapter 11. Graph Neural Networks: Graph Generation.
Chapter 12. Graph Neural Networks: Graph Transformation.
Chapter 13. Graph Neural Networks: Graph Matching.
Chapter 14. Graph Neural Networks: Graph Structure Learning.
Chapter 15. Dynamic Graph Neural Networks.
Chapter 16. Heterogeneous Graph Neural Networks.
Chapter 17. Graph Neural Networks: AutoML.
Chapter 18. Graph Neural Networks: Self-supervised Learning.
Chapter 19. Graph Neural Networks in Modern Recommender Systems.
Chapter 20. Graph Neural Networks in Computer Vision.
Chapter 21. Graph Neural Networks in Natural Language Processing.
Chapter 22. Graph Neural Networks in Program Analysis.
Chapter 23. Graph Neural Networks in Software Mining.
Chapter 24. GNN-based Biomedical Knowledge Graph Mining in Drug Development.
Chapter 25. Graph Neural Networks in Predicting Protein Function and Interactions.
Chapter 26. Graph Neural Networks in Anomaly Detection.
Chapter 27. Graph Neural Networks in Urban Intelligence.
Provides a comprehensive introduction to graph neural networks.
Written by leading experts in the field.
Can be used in various courses, including but not limited to deep learning, data mining, computer vision (CV), and natural language processing (NLP).

Product details

ISBN: 9789811660535
Published: 2022-01-04
Publisher: Springer Verlag, Singapore
Height: 235 mm
Width: 155 mm
Level: Graduate
Language: English
Format: Hardcover

Biographical note

Dr. Lingfei Wu is a Principal Scientist at JD.COM Silicon Valley Research Center, where he leads a team of 30+ machine learning and natural language processing scientists and software engineers building an intelligent e-commerce personalization system. He earned his Ph.D. degree in computer science from the College of William and Mary in 2016. Previously, he was a research staff member at IBM Thomas J. Watson Research Center, where he led a team of 10+ research scientists developing novel Graph Neural Network methods and systems, work that led to the #1 AI Challenge Project in IBM Research and multiple IBM awards, including three Outstanding Technical Achievement Awards. He has published more than 90 top-ranked conference and journal papers and is a co-inventor of more than 40 filed US patents. Because of the high commercial value of his patents, he has received eight invention achievement awards and was appointed an IBM Master Inventor, class of 2020. He received Best Paper and Best Student Paper Awards at several venues, including IEEE ICC'19, the AAAI workshop on DLGMA'20, and the KDD workshop on DLG'19. His research has been featured in numerous media outlets, including Nature News, Yahoo News, VentureBeat, TechTalks, Synced Review, Leiphone, QbitAI, MIT News, IBM Research News, and SIAM News. He has co-organized 10+ conferences (KDD, AAAI, IEEE BigData) and is the founding co-chair of the Workshops on Deep Learning on Graphs (with AAAI'21, AAAI'20, KDD'21, KDD'20, KDD'19, and IEEE BigData'19). He currently serves as Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, ACM Transactions on Knowledge Discovery from Data, and the International Journal of Intelligent Systems, and regularly serves as an SPC/PC member of major AI/ML/NLP conferences, including KDD, IJCAI, AAAI, NIPS, ICML, ICLR, and ACL.

Dr. Peng Cui is a tenured Associate Professor in the Department of Computer Science at Tsinghua University. He obtained his PhD degree from Tsinghua University in 2010. His research interests include data mining, machine learning, and multimedia analysis, with expertise in network representation learning, causal inference and stable learning, social dynamics modeling, and user behavior modeling. He is keen to promote the convergence and integration of causal inference and machine learning, addressing fundamental issues of today's AI technology, including explainability, stability, and fairness. He is recognized as a Distinguished Scientist of ACM, a Distinguished Member of CCF, and a Senior Member of IEEE. He has published more than 100 papers in prestigious conferences and journals in machine learning and data mining, and he is one of the most cited authors in network embedding. A number of his proposed algorithms on network embedding have generated substantial impact in academia and industry. His recent research won the IEEE Multimedia Best Department Paper Award, the IEEE ICDM 2015 Best Student Paper Award, the IEEE ICME 2014 Best Paper Award, the ACM MM12 Grand Challenge Multimodal Award, and the MMM13 Best Paper Award, and was selected for the Best of KDD special issues in 2014 and 2016. He was PC co-chair of CIKM 2019 and MMM 2020, SPC or area chair of ICML, KDD, WWW, IJCAI, AAAI, etc., and Associate Editor of IEEE TKDE (2017-), IEEE TBD (2019-), ACM TIST (2018-), and ACM TOMM (2016-). He received the ACM China Rising Star Award in 2015 and the CCF-IEEE CS Young Scientist Award in 2018.

Dr. Jian Pei is a Professor in the School of Computing Science at Simon Fraser University. He is a well-known leading researcher in the general areas of data science, big data, data mining, and database systems. His expertise is in developing effective and efficient data analysis techniques for novel data-intensive applications and transferring his research results to products and business practice. He is recognized as a Fellow of the Royal Society of Canada (Canada's national academy), the Canadian Academy of Engineering, the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE). He is one of the most cited authors in data mining, database systems, and information retrieval. Since 2000, he has published one textbook, two monographs, and over 300 research papers in refereed journals and conferences, which have been cited extensively. His research has generated remarkable impact substantially beyond academia; for example, his algorithms have been adopted by industry in production and in popular open-source software suites. He has also demonstrated outstanding professional leadership in many academic organizations and activities. He was the editor-in-chief of the IEEE Transactions on Knowledge and Data Engineering (TKDE) in 2013-16, the chair of the Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) of the Association for Computing Machinery (ACM) in 2017-2021, and a general co-chair or program committee co-chair of many premier conferences. He maintains a wide spectrum of industry relations with both global and local industry partners, and he is an active consultant and coach for industry on enterprise data strategies, healthcare informatics, network security intelligence, computational finance, and smart retail. He has received many prestigious awards, including the 2017 ACM SIGKDD Innovation Award, the 2015 ACM SIGKDD Service Award, the 2014 IEEE ICDM Research Contributions Award, the British Columbia Innovation Council 2005 Young Innovator Award, an NSERC 2008 Discovery Accelerator Supplements Award (100 awards across the whole country), an IBM Faculty Award (2006), a KDD Best Application Paper Award (2008), an ICDE Influential Paper Award (2018), a PAKDD Best Paper Award (2014), a PAKDD Most Influential Paper Award (2009), and an IEEE Outstanding Paper Award (2007).

Dr. Liang Zhao is an assistant professor in the Department of Computer Science at Emory University. Before that, he was an assistant professor in the Department of Information Science and Technology and the Department of Computer Science at George Mason University. He obtained his PhD degree in 2016 from the Department of Computer Science at Virginia Tech in the United States. His research interests include data mining, artificial intelligence, and machine learning, with special interests in spatiotemporal and network data mining, deep learning on graphs, nonconvex optimization, model parallelism, event prediction, and interpretable machine learning. He received the AWS Machine Learning Research Award from Amazon in 2020 for his research on distributed graph neural networks, the NSF CAREER Award from the National Science Foundation in 2020 for his research on deep learning for spatial networks, and the Jeffress Trust Award in 2019, awarded by the Jeffress Memorial Trust Foundation and Bank of America, for his research on deep generative models for biomolecules. He won the Best Paper Award at the 19th IEEE International Conference on Data Mining (ICDM 2019) for his lab's paper on deep graph transformation and was shortlisted for the Best Paper Award at the Web Conference (WWW 2021) for work on deep generative models. He was selected as a "Top 20 Rising Star in Data Mining" by Microsoft Search in 2016 for his research on spatiotemporal data mining, and was named Outstanding Doctoral Student in the Department of Computer Science at Virginia Tech in 2017. He was awarded as a CI-Fellow Mentor in 2021 by the Computing Community Consortium for his research on deep learning for spatial data. He has published numerous research papers in top-tier conferences and journals such as KDD, TKDE, ICDM, ICLR, Proceedings of the IEEE, ACM Computing Surveys, TKDD, IJCAI, AAAI, and WWW. He has served in organizing roles such as publication chair, poster chair, and session chair for many top-tier conferences, including SIGSPATIAL, KDD, ICDM, and CIKM.