Deep reinforcement learning for task offloading in unmanned aerial vehicle assisted intelligent farm network
Author | Zhang Bucai
Editor | Zhang Bucai
1. Introduction
As wireless communication networks grow more powerful, drones and artificial intelligence are playing an increasingly important role in agriculture. Drones can automatically monitor farmland and perform numerous image classification tasks, for example to detect fires or floods before they damage the farm. However, with current technology, drones have limited energy and computing power, and may not be able to execute all of these compute-intensive image classification tasks on board. How to extend the capabilities of drones has therefore become a pressing problem.
2. Related work
The use of reinforcement learning (RL) to manage wireless network resources and optimize performance has been widely studied across many applications. Prior work has investigated the challenges and opportunities of AI in 5G and 6G networks, such as energy management and radio resource allocation; using AI to achieve energy efficiency will be essential in 6G. In addition, a deep RL method has been proposed for 5G and beyond networks that jointly maximizes computation and minimizes energy consumption through task offloading.
That network also utilizes MEC (multi-access edge computing) servers as processing units to assist with computationally intensive tasks. Similarly, a deep RL algorithm has been introduced in the industrial Internet of Things setting to find an optimal virtual network function placement and scheduling strategy that minimizes end-to-end latency and cost.
In work on drones in intelligent farms, researchers have described in detail how to capture aerial images with drones and use image classification to identify crops and weeds in the field. The idea of using drones to spray insecticides has also been discussed, along with the trade-off between latency and battery usage.
In 5G and beyond networks, combining drones with MEC devices benefits many applications. Extensive surveys cover the use of drones and MEC for scenarios such as space-air-ground networks and emergency search-and-rescue missions. Drones also open the possibility of connectivity for 6G vehicular networking applications.
Existing methods for optimizing drone energy consumption and latency are not limited to smart-farm scenarios. For example, one line of work optimizes user association, power control, computing-capacity allocation, and location planning in a network composed of satellites, drones, ground base stations, and IoT devices. Deep RL has been used as a task scheduling solution that minimizes processing delay while respecting the drones' energy limits. Alternatively, clustering and trajectory planning can be used to optimize energy efficiency and task latency.
In addition, game-theoretic solutions have been applied to the task offloading problem in UAV swarm scenarios. Although we explore similar issues, we focus on using deep Q-Learning (DQL) to jointly solve the energy and task-delay optimization problems.
3. System model
The network consists of drones and MEC servers. Each task generated by a drone can be processed locally or offloaded, so every task has J + 1 candidate processing units, indexed j0: the drone itself and the J MEC servers.

Time is divided into intervals t ∈ T. Each task belongs to one of K types, characterized by parameters B_jt, D_jt, and P_jt. The scheduling algorithm must assign each task to a processing unit so that tasks are completed before their deadlines while the hover time of the drone network is maximized. The first objective is thus to maximize the minimum remaining energy R_j0 over the drones, extending the hover time of the drone network; the second accounts for the energy consumed by processing and offloading.

The optimization uses the following binary decision variables:

P(j, t, j0, t0) is a binary decision variable equal to 1 if processing unit j0 processes task (j, t), and 0 otherwise.

P+(j, t, j0, t0) is a binary decision variable equal to 1 if processing unit j0 starts processing task (j, t) in time interval t0, and 0 otherwise.

x(j, t, j0) is set to 1 when task (j, t) will be executed on processing unit j0, and 0 otherwise.
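The scheduling goal above can be sketched as a simple greedy rule: among the units that can finish a task before its deadline, pick the one that leaves the drone with the most remaining energy. This is only an illustration of the objective, not the paper's optimization; the `Unit` fields and `pick_unit` helper are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """A candidate processing unit (the drone itself or one MEC server)."""
    queue_time: float        # time until the unit becomes free
    proc_time: float         # processing time of this task type on the unit
    remaining_energy: float  # battery level; MEC servers can use float('inf')
    energy_cost: float       # energy the drone spends if the task runs here

def pick_unit(units, deadline):
    """Greedy rule: among units that can finish before the deadline,
    keep the one that leaves the drone with the most remaining energy."""
    feasible = [u for u in units if u.queue_time + u.proc_time <= deadline]
    if not feasible:  # violation unavoidable; minimize lateness instead
        return min(units, key=lambda u: u.queue_time + u.proc_time)
    return max(feasible, key=lambda u: u.remaining_energy - u.energy_cost)
```

The fallback branch mirrors the model's treatment of unavoidable deadline violations: when no unit can meet the deadline, the least-late unit is chosen.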
Q-Learning maintains a Q table that stores a Q value for every state-action pair. The Q value measures the quality of an action in a given state: the expected future cumulative discounted reward obtained after taking that action. The agent updates the table as it interacts with the environment and acts by choosing the action with the highest Q value.
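The tabular update can be sketched as follows; this is a minimal illustration, and the state/action encoding here is hypothetical rather than the paper's:

```python
from collections import defaultdict

q_table = defaultdict(float)   # maps (state, action) -> Q value
alpha, gamma = 0.1, 0.85       # learning rate and discount factor

def update(state, action, reward, next_state, actions):
    """One Q-Learning step: move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a')."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    q_table[(state, action)] += alpha * (target - q_table[(state, action)])
```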
In deep Q-Learning, the Q table is replaced by a deep neural network (DNN) that predicts the Q value of each action in a given state, rather than looking the values up in a table. This lets DQL handle state spaces too large for a table.

Training relies on experience replay: an experience is a tuple containing the agent's current state, the action taken, the reward received, and the next state. Stored experiences are sampled in batches to train the DNN that approximates the Q function.
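A replay buffer of such tuples can be sketched as below (a minimal version, with a hypothetical `ReplayBuffer` name and default capacity); sampling uniformly at random breaks the correlation between consecutive transitions before they are fed to the DNN:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state) experiences."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are evicted

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        """Uniform random minibatch for one DNN training step."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```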
Each drone in the network runs its own MDP (Markov decision process) framework. To receive the highest reward, a drone must choose the processing unit that minimizes deadline violations and energy consumption.

State: the state includes the type k of the task to be offloaded and the queue status L_j0 of each candidate processing unit, i.e., the drone itself and the MEC servers j0 ∈ {1, ..., J}, over the time intervals t ∈ T.

Action: the action selects the processing unit j0 on which the task will execute.

Reward: the reward, of the form (L_ja − 1)(1 − E(v_ja) + V_L_ja · E(v_ja)), where E(v_ja) indicates a deadline violation, favours actions that do not cause a significant increase in energy consumption. If a deadline violation is unavoidable regardless of the chosen action, the penalty is lighter.
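The reward structure described above can be sketched qualitatively as follows; the function name, weights, and the 0.5 softening factor are illustrative assumptions, not the paper's constants:

```python
def reward(violated, energy_increase, violation_unavoidable,
           violation_penalty=-10.0, energy_weight=-1.0):
    """Hypothetical reward: penalize deadline violations and energy use.
    The penalty is softened when the violation was unavoidable, so the
    agent is not punished for states where no action could meet the
    deadline."""
    r = energy_weight * energy_increase
    if violated:
        r += violation_penalty * (0.5 if violation_unavoidable else 1.0)
    return r
```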
4. Benchmark methods
1. Round-robin (RR): tasks are assigned to the J + 1 processing units in turn, cycling through j0 ∈ {0, 1, ..., J} regardless of queue length or energy level.
2. Highest Energy First (HEF): each task is assigned to the processing unit with the highest remaining energy level. The MEC servers are not battery constrained, so they share a fixed energy level, and ties are broken uniformly at random with probability 1/(J + 1).
3. Lowest queue time, highest energy first (QHEF): tasks are assigned to the processing unit with the lowest queuing time; when several units have the same queuing time, the one with the highest energy level is chosen.
4. Q-Learning: the proposed scheduling problem is also solved with tabular Q-Learning under an epsilon-greedy policy, using the same processing units j0 and time intervals t ∈ T as the DQL agent.
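The epsilon-greedy policy used by the Q-Learning baseline can be sketched as below (a standard formulation, with hypothetical function and parameter names):

```python
import random

def epsilon_greedy(q_values, epsilon=0.05, rng=random):
    """With probability epsilon explore a random action; otherwise exploit
    the action with the highest estimated Q value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```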
5. Performance evaluation
The simulations are implemented with Simu5G, a 5G network simulation library for OMNeT++, with J = 4 MEC servers and L = 1.
Task arrival intervals are modeled as exponentially distributed, and each task type has its own average arrival rate and processing time.
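Such an arrival process can be generated as follows; the helper name and parameter values are illustrative, not the simulation's actual rates:

```python
import random

def arrival_times(mean_interval, horizon, rng=random):
    """Generate task arrival times with exponentially distributed
    inter-arrival intervals (i.e., a Poisson arrival process)."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interval)
        if t > horizon:
            return times
        times.append(t)
```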
For both Q-Learning and deep Q-Learning, the exploration rate is set to 0.05 and the discount factor to 0.85, following [6].
Drone performance is evaluated over the entire simulation: each drone starts with an energy level (B_j0) of 570, hovering (H_j0) costs 211, and results are reported over horizons of 4320 and 12960 time intervals.
6. Conclusion
We applied the proposed Q-Learning algorithm to the task scheduling problem and extended it to deep Q-Learning by incorporating the observations into the state. In the simulations, both Q-Learning and DQL outperform the RR, HEF, and QHEF baselines, and DQL in turn outperforms tabular Q-Learning, since the DNN generalizes across states where a Q table becomes impractical.
References
[1] A. D. A. Aldabbagh, C. Hairu, and M. Hanafi, in Proc. 2020 IEEE ICSET, pp. 213-217, IEEE, Nov. 2020.
[2] Y. Lina and Y. Xiuming, in Proc. 2020 ICCR, pp. 21-24, IEEE, Dec. 2020.
[3] J. Zhao, Y. Wang, Z. Fei, and X. Wang, in Proc. 2020 IEEE/CIC ICCC, pp. 424-429, IEEE, Aug. 2020.
[4] S. Zhang, H. Zhang, and L. Song, D2D 6G, IEEE, vol. 69, pp. 6592-6602, 2020.