dc.contributor.advisor | From, Pål Johan | |
dc.contributor.advisor | Kvam, Johannes | |
dc.contributor.advisor | Moore, Richard J.D. | |
dc.contributor.author | Bakken, Marianne | |
dc.date.accessioned | 2021-10-07T07:16:56Z | |
dc.date.available | 2021-10-07T07:16:56Z | |
dc.date.issued | 2021 | |
dc.identifier.isbn | 978-82-575-1849-3 | |
dc.identifier.issn | 1894-6402 | |
dc.identifier.uri | https://hdl.handle.net/11250/2788278 | |
dc.description.abstract | To feed a growing world population and achieve the goal of zero hunger, we must develop new technologies to improve farm productivity and sustainability. Agri-robots can be part of this solution, but new research is needed to provide reliable and low-cost autonomous operation across the broad spectrum of agricultural environments. Combining low-cost RGB cameras for vision with recent advances in deep learning is a promising direction that can enable easier adaptation and lower hardware costs than existing solutions.
We explicitly tackle two of the main challenges faced when applying deep learning in robotics: learning from data of limited quantity and/or quality, and making neural networks easier for humans to understand. Thus, the main objectives of this work are to develop and apply methods that are more data-efficient and explainable than the state of the art in learning-based visual robot guidance, and to apply this insight to guide agri-robots in the field.
These topics are explored through five papers. First, we investigate the properties of an established end-to-end learning strategy for guidance and apply it to crop row following. Although promising at first, the black-box nature of this approach and the inherent difficulty of debugging it led to two different strategies: 1) a more explainable network architecture with a new supervision strategy for this task, and 2) a novel visualisation method to better understand visual features in convolutional neural networks. Finally, we unite these strategies in a new hybrid learning approach for row following that is robust, data-efficient, and more transparent.
The main contributions of this thesis are: 1) increased explainability through the development of a novel feature visualisation method, which provides explanations that are complementary to existing methods; 2) increased data-efficiency and adaptability of learning-based crop row following through a new supervision approach that eliminates the need for hand-drawn labels; and 3) new insight into applications of learning-based methods in the field, gained by testing several supervision strategies on a real robot in the field and considering the whole pipeline from data collection to predicted steering angle. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Norwegian University of Life Sciences, Ås | en_US |
dc.relation.ispartofseries | PhD Thesis;2021:73 | |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no | * |
dc.subject | Computer vision | en_US |
dc.subject | machine learning | en_US |
dc.subject | robotics | en_US |
dc.subject | agricultural robotics | en_US |
dc.subject | visual navigation | en_US |
dc.subject | deep learning | en_US |
dc.subject | autonomous navigation | en_US |
dc.title | Explainable and data-efficient learning for visual guidance of autonomous agri-robots | en_US |
dc.title.alternative | Forklarbar og dataeffektiv maskinlæring for visuell styring av autonome landbruksroboter | en_US |
dc.type | Doctoral thesis | en_US |
dc.subject.nsi | VDP::Mathematics and natural science: 400::Information and communication science: 420::Simulation, visualization, signal processing, image processing: 429 | en_US |
dc.relation.project | Norges forskningsråd: 259869 | en_US |