dc.contributor.author: Yuanyue, Ge
dc.date.accessioned: 2022-12-19T13:25:07Z
dc.date.available: 2022-12-19T13:25:07Z
dc.date.issued: 2022
dc.identifier.issn: 1894-6402
dc.identifier.uri: https://hdl.handle.net/11250/3038579
dc.description.abstract: Developing agricultural harvesting robots is one way to address decreasing labor availability and to improve farm productivity and sustainability. A vision system that provides reliable detection and accurate localization of the targets is essential for a harvesting robot. In this work, we present a vision system for strawberry-harvesting robots that combines low-cost RGB-D cameras with state-of-the-art machine learning methods to identify pickable fruits and provide their accurate 3D locations in unstructured environments. The challenges of developing such a vision system fall into three categories: 1) detection and comprehensive ripeness estimation are difficult in unstructured environments; 2) some targets are unpickable for the robot, owing to unreachable locations in the table-top growing environment and highly deformed points introduced by the depth camera; 3) various localization errors arise when locating objects under farm conditions. The objectives of this thesis are therefore to develop and apply machine learning methods to detect fruit targets and estimate fruit ripeness, to identify pickable strawberries by considering the safe manipulation region and point cloud quality, and to improve 3D localization accuracy. These topics were explored and discussed through six academic papers.

Firstly, in paper I we utilized an instance segmentation network to detect strawberries in three growth stages as well as deformed strawberries. In addition, in paper VI we developed a full-view gripper-internal sensing setup that uses deep learning networks for comprehensive strawberry ripeness estimation. Secondly, in paper II we presented an algorithm that defines a safe manipulation region within which pickable strawberries are identified from the detection results of a deep learning network. Because the data from the depth camera were found to be inaccurate in many cases, paper IV developed a method to classify unpickable strawberries with highly deformed points, so that the associated picking failures can be avoided. Lastly, paper III proposed a shape completion method that recovers the complete 3D shape of each detected strawberry to improve 3D localization accuracy. To further improve detection and 3D localization accuracy and efficiency, paper V proposed and compared different locating algorithms based on data from two types of cameras.

In conclusion, this thesis presents machine learning-based methods for the vision system of strawberry-harvesting robots that detect the targets, comprehensively estimate fruit ripeness, identify unpickable cases for the robot, and address 3D localization problems. This work may lead to more robust and accurate vision systems that can be employed in harvesting robots. The proposed approaches may be applicable to similar crop-harvesting systems or serve as a reference for other fields. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Norwegian University of Life Sciences, Ås [en_US]
dc.relation.ispartofseries: PhD Thesis;2022:16
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.title: Machine learning-based perception to identify and localize pickable strawberries for harvester robots [en_US]
dc.title.alternative: Maskinlæringsbasert maskinsyn for identifisering og lokalisering av plukkebare jordbær for høsteroboter [en_US]
dc.type: Doctoral thesis [en_US]
dc.subject.nsi: VDP::Technology: 500 [en_US]
dc.relation.project: The Research Council of Norway: 303607 [en_US]

