Reconfigurable intelligent surfaces (RIS) have emerged as a cutting-edge technology for beyond-5G and 6G networks thanks to their low-cost hardware, nearly passive operation, easy deployment, ability to relay signals without generating new radio waves, and energy savings. Unmanned aerial vehicle (UAV)-assisted wireless networks, in turn, significantly extend network coverage.
Resource allocation and real-time decision-making optimisation play a pivotal role in approaching optimal performance in UAV- and RIS-aided wireless communications. However, existing contributions typically assume a static environment and often ignore the stringent flight-time constraints of real-life applications. Improving decision-making time is therefore crucial for meeting the demanding requirements of UAV-assisted wireless networks. Deep reinforcement learning (DRL), which combines reinforcement learning with neural networks, is used to maximise network performance, reduce power consumption, and shorten processing time for real-time applications. DRL algorithms can help UAVs and RIS operate fully autonomously, reduce energy consumption, and perform optimally in unpredictable environments.
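To make the "reinforcement learning plus neural networks" combination concrete, the following is a minimal, illustrative sketch (not taken from the book): a tiny Q-network with one hidden layer is trained by temporal-difference updates on a toy two-state, two-action decision problem, loosely analogous to an agent repeatedly choosing between a low-power and a high-reward transmission action. All names, the environment, and the hyperparameters are assumptions made for this example.

```python
import numpy as np

# Illustrative toy example: a small neural Q-network trained with
# TD (deep-Q-style) updates on a hypothetical 2-state, 2-action MDP.
rng = np.random.default_rng(0)

n_states, n_actions, n_hidden = 2, 2, 8
W1 = rng.normal(0.0, 0.1, (n_hidden, n_states))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_actions, n_hidden))  # hidden -> Q-value weights

def q_values(s):
    """Return Q-values for state s from the neural network."""
    x = np.zeros(n_states)
    x[s] = 1.0                      # one-hot state encoding
    h = np.tanh(W1 @ x)             # hidden layer
    return W2 @ h, h, x

gamma, lr, eps = 0.5, 0.05, 0.2    # discount, learning rate, exploration
state = 0
for _ in range(5000):
    q, h, x = q_values(state)
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q))
    reward = 1.0 if a == 1 else 0.0     # in this toy MDP, action 1 is better
    next_state = (state + a) % n_states
    q_next, _, _ = q_values(next_state)
    td_error = reward + gamma * np.max(q_next) - q[a]
    # backpropagate the TD error through the chosen action's Q-value
    W2[a] += lr * td_error * h
    W1 += lr * td_error * np.outer(W2[a] * (1.0 - h**2), x)
    state = next_state

q0, _, _ = q_values(0)
q1, _, _ = q_values(1)
```

After training, the network's Q-values prefer the rewarding action in both states (q0[1] > q0[0] and q1[1] > q1[0]); the book's later chapters apply the same principle at scale, with deep networks approximating value functions and policies for power allocation, scheduling, and trajectory design.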
This co-authored book explores the many challenges arising from real-time and autonomous decision-making for 6G. The goal is to provide readers with comprehensive insights into the models and techniques of deep reinforcement learning and its applications in 6G networks and the Internet of Things with the support of UAVs and RIS.
Deep Reinforcement Learning for Reconfigurable Intelligent Surfaces and UAV Empowered Smart 6G Communications is aimed at a wide audience of researchers, practitioners, scientists, professors, and advanced students in engineering, computer science, information technology, and communication engineering, as well as professionals in networking and ubiquitous computing.
The book covers crucial advanced signal-control and real-time decision-making methods for UAV- and RIS-assisted 6G wireless communications, including the stringent constraints of real-time optimisation problems.
Part I: Introduction to machine learning and neural networks
Chapter 1: Artificial intelligence, machine learning, and deep learning
Chapter 2: Deep neural networks

Part II: Deep reinforcement learning
Chapter 3: Markov decision process
Chapter 4: Value function approximation for continuous state-action space
Chapter 5: Policy search methods for reinforcement learning
Chapter 6: Actor-critic learning

Part III: Deep reinforcement learning in UAV-assisted 6G communication
Chapter 7: UAV-assisted 6G communications
Chapter 8: Distributed deep deterministic policy gradient for power allocation control in UAV-to-UAV-based communications
Chapter 9: Non-cooperative energy-efficient power allocation game in UAV-to-UAV communication: a multi-agent deep reinforcement learning approach
Chapter 10: Real-time energy harvesting-aided scheduling in UAV-assisted D2D networks
Chapter 11: 3D trajectory design and data collection in UAV-assisted networks

Part IV: Deep reinforcement learning in reconfigurable intelligent surface-empowered 6G communications
Chapter 12: RIS-assisted 6G communications
Chapter 13: Real-time optimisation in RIS-assisted D2D communications
Chapter 14: RIS-assisted UAV communications for IoT with wireless power transfer using deep reinforcement learning
Chapter 15: Multi-agent learning in networks supported by RIS and multi-UAVs
Product details
ISBN: 9781839536410
Published: 2025-01-07
Publisher: Institution of Engineering and Technology
Height: 234 mm
Width: 156 mm
Audience level: U, P, 05, 06
Language: English
Format: Hardback
Pages: 293