
Feng Xie

and 4 more

Multi-arm harvesting robots offer a promising solution to the labor shortage in fruit harvesting because of their ability to improve harvesting efficiency. However, multi-arm harvesters require additional visual sensors to acquire fruit distribution information within larger working spaces. This imposes greater demands on graphics computation and increases the cost of the robot system's computing hardware. To balance graphics computing cost and reduce energy consumption, this study proposes distributed graphics computation frameworks for the vision system of a multi-arm robot. First, a host-edge framework is proposed that assigns the tasks of image inference and depth alignment to the host computer and edge computing modules through a decentralized mode of local connection. Moreover, to increase the robot's endurance time in the field, the number of edge computing modules is reduced and fifth-generation (5G) mobile communication is integrated into the robot's graphics computing system, transferring on-board image processing to a remote computing server via the MQTT protocol. Comprehensive experiments were performed to verify the effectiveness of the proposed frameworks: compared with the traditional computing framework, the local distributed framework reduced average time consumption by 35.6% and achieved an average processing speed of over 20 FPS. The remote distributed framework reduced the computational power consumption of the on-board system by approximately 23.1% while maintaining performance no lower than that of the local distributed framework. Finally, by discussing the two frameworks in terms of stability and cost, we assess the commercial viability of the multi-arm harvesting robot.
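
The abstract describes offloading on-board image processing to a remote server over 5G using MQTT. The sketch below illustrates that transfer step only: an on-board client JPEG-encodes a camera frame, publishes it to an assumed broker topic, and listens for detections on a reply topic. The broker address, topic names, and JSON result format are assumptions for illustration, not the authors' actual protocol.

```python
# Minimal sketch of an MQTT-based image offload step (paho-mqtt 1.x client API assumed).
import json

import cv2
import paho.mqtt.client as mqtt

BROKER_HOST = "remote-inference-server.example"   # hypothetical 5G-reachable broker
FRAME_TOPIC = "harvester/cam0/frame"              # hypothetical topic layout
RESULT_TOPIC = "harvester/cam0/detections"


def on_result(client, userdata, msg):
    # Assumed reply format: a JSON list of fruit bounding boxes from the remote server.
    detections = json.loads(msg.payload)
    print(f"received {len(detections)} fruit detections")


client = mqtt.Client()
client.on_message = on_result
client.connect(BROKER_HOST, 1883)
client.subscribe(RESULT_TOPIC, qos=1)
client.loop_start()

frame = cv2.imread("sample_frame.jpg")            # stand-in for an on-board camera capture
ok, jpeg = cv2.imencode(".jpg", frame)            # compress before sending over the mobile link
if ok:
    client.publish(FRAME_TOPIC, jpeg.tobytes(), qos=1)
```

QoS 1 is chosen here only to show that delivery guarantees are configurable per topic; the paper does not state which QoS level or payload encoding was used.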

Zhihua Diao

and 9 more

The continuous, close integration of artificial intelligence technology and agriculture is driving the rapid development of smart agriculture, and deep-learning-based navigation line recognition algorithms for agricultural robots have achieved great success in detection accuracy and speed. However, many problems remain: large models are difficult to deploy on hardware, and the accuracy and speed of crop row detection in real farmland environments are low. To address these problems, this paper proposes a navigation line extraction algorithm for a corn spraying robot based on a Light-YOLOv8s network. First, the Convolution (Conv) and C2f modules of the YOLOv8s network are replaced with Depthwise Convolution (DWConv) and PP-LCNet modules, respectively, to reduce the network's parameters (Params) and giga floating-point operations (GFLOPs) and thereby make it lightweight. Second, to reduce the precision loss caused by this lightweighting, the spatial pyramid pooling fast (SPPF) module in the backbone is replaced with an atrous spatial pyramid pooling faster (ASPPF) module to improve feature extraction accuracy, and a normalization-based attention module (NAM) is introduced to increase the network's attention to corn plants. Corn plants are then located using the midpoints of their detection boxes. Finally, the least squares method is used to fit the maize crop row lines, and the centerline between the crop row lines is taken as the navigation line of the maize spraying robot. Experimental results show that the Params of the Light-YOLOv8s network decreased by 29.24%, 86.64%, and 55.38% compared with the YOLOv5s, YOLOv7, and YOLOv8s networks, respectively; GFLOPs dropped by 26.79%, 88.77%, and 58.74%, respectively, while accuracy decreased by only 1%, 0.6%, and 2.2%. These results indicate that the proposed Light-YOLOv8s network greatly reduces model size, addressing the deployment difficulties caused by the large size of existing algorithms, while keeping accuracy loss small, mitigating the accuracy degradation typically caused by network lightweighting. When the corn spraying robot operates in a real farmland environment, the proposed navigation line extraction algorithm ensures both real-time performance and navigation accuracy, contributing to the development of visual navigation technology for agricultural robots.
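
The final step of the abstract, fitting crop row lines to detection-box midpoints with least squares and taking their centerline as the navigation line, can be sketched as below. The point coordinates and the choice to fit x as a function of image row y are illustrative assumptions, not data or implementation details from the paper.

```python
# Minimal sketch: least-squares crop row fitting and centerline (navigation line) extraction.
import numpy as np

# (x, y) midpoints of corn detection boxes, grouped per crop row (made-up example values)
left_row = np.array([[102, 480], [110, 400], [118, 320], [125, 240]], dtype=float)
right_row = np.array([[520, 480], [512, 400], [505, 320], [498, 240]], dtype=float)


def fit_row_line(points):
    # Least-squares fit of x = a*y + b; parameterizing by image row y stays
    # well-conditioned for near-vertical crop rows.
    a, b = np.polyfit(points[:, 1], points[:, 0], deg=1)
    return a, b


a_l, b_l = fit_row_line(left_row)
a_r, b_r = fit_row_line(right_row)

# Navigation line: midline between the two fitted crop row lines.
a_nav, b_nav = (a_l + a_r) / 2, (b_l + b_r) / 2
for y in (480, 240):
    print(f"navigation line passes through x = {a_nav * y + b_nav:.1f} at y = {y}")
```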