This book introduces readers to the fundamentals of and recent advances in federated learning, focusing on reducing communication costs, improving computational efficiency, and enhancing security. Federated learning is a distributed machine learning paradigm that enables model training on a large body of decentralized data. Its goal is to make full use of data across organizations or devices while meeting regulatory, privacy, and security requirements. The book starts with a self-contained introduction to artificial neural networks, deep learning models, supervised learning algorithms, evolutionary algorithms, and evolutionary learning. Concise information is then presented on secure multi-party computation, differential privacy, and homomorphic encryption, followed by a detailed description of federated learning. In turn, the book addresses the latest advances in federated learning research, especially from the perspectives of communication efficiency, evolutionary learning, and privacy preservation. The book is particularly well suited for graduate students, academic researchers, and industrial practitioners in the fields of machine learning and artificial intelligence. It can also be used as a self-learning resource for readers with a science or engineering background, or as a reference text for graduate courses.
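To make the paradigm described above concrete, the following is a minimal, illustrative sketch of the federated averaging idea: each client trains on its own private data, only model parameters are shared, and a server aggregates them weighted by data size. All names here (local_update, federated_averaging, the linear-regression clients) are hypothetical and are not taken from the book, which covers far more elaborate methods.

# Minimal federated averaging sketch (illustrative only).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    linear model, using only that client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """Server loop: broadcast the global model, collect local updates,
    and average them weighted by each client's data size."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                 # raw data never leaves the client
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        sizes = np.array(sizes, dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    # Three clients with differently sized private datasets.
    clients = []
    for n in (40, 80, 120):
        X = rng.normal(size=(n, 3))
        y = X @ true_w + 0.05 * rng.normal(size=n)
        clients.append((X, y))
    print("recovered weights:", federated_averaging(clients))

Running the sketch recovers the shared model from the three simulated clients without ever pooling their data, which is the core property the book's description refers to; communication cost, heterogeneity, and security are the complications its later chapters address.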
Introduction.- Communication-Efficient Federated Learning.- Evolutionary Federated Learning.- Secure Federated Learning.- Summary and Outlook.
Presents the fundamentals of and latest advances in federated learning
Addresses communication efficiency and privacy-preservation problems in federated learning
Proposes applying evolutionary neural architecture search for federated learning

Product details

ISBN: 9789811970856
Published: 2023-12-01
Publisher: Springer Verlag, Singapore
Height: 235 mm
Width: 155 mm
Audience level: Research, P, UU, 06, 05
Language: English
Format: Paperback

Biographical note

Yaochu Jin is an “Alexander von Humboldt Professor for Artificial Intelligence” in the Faculty of Technology, Bielefeld University, Germany. He is also a part-time Distinguished Chair Professor in Computational Intelligence at the Department of Computer Science, University of Surrey, Guildford, UK. He was a “Finland Distinguished Professor” at the University of Jyväskylä, Finland, a “Changjiang Distinguished Visiting Professor” at Northeastern University, China, and a “Distinguished Visiting Scholar” at the University of Technology Sydney, Australia. His main research interests include data-driven optimization, multi-objective optimization, multi-objective learning, trustworthy machine learning, and evolutionary developmental systems. Prof. Jin is a Member of Academia Europaea and an IEEE Fellow.

Hangyu Zhu received the B.Sc. degree from Yangzhou University, Yangzhou, China, in 2015, the M.Sc. degree from RMIT University, Melbourne, VIC, Australia, in 2017, and the Ph.D. degree from the University of Surrey, Guildford, UK, in 2021. He is currently a Lecturer with the Department of Artificial Intelligence and Computer Science, Jiangnan University, China. His main research interests are federated learning and evolutionary neural architecture search.

Jinjin Xu received the B.S. and Ph.D. degrees from East China University of Science and Technology, Shanghai, China, in 2017 and 2022, respectively. He is currently a researcher with the Intelligent Perception and Interaction Research Department, OPPO Research Institute, Shanghai, China. His research interests include federated learning, data-driven optimization and its applications.

Yang Chen received the Ph.D. degree from the School of Information and Control Engineering, China University of Mining and Technology, China, in 2019. He was a Research Fellow with the School of Computer Science and Engineering, Nanyang Technological University, Singapore, from 2019 to 2022. He is currently with the School of Electrical Engineering, China University of Mining and Technology, China. His research interests include deep learning, secure machine learning, edge computing, anomaly detection, evolutionary computation, and intelligent optimization.