Show simple item record

dc.contributor.advisor: Burud, Ingunn
dc.contributor.advisor: Lillemo, Morten
dc.contributor.author: Lied, Lars Martin Bøe
dc.date.accessioned: 2019-10-30T12:02:24Z
dc.date.available: 2019-10-30T12:02:24Z
dc.date.issued: 2019
dc.identifier.uri: http://hdl.handle.net/11250/2625385
dc.description: Image analysis of wheat fields with a drone and a multispectral camera. Aim: to predict grain yield and estimate plant height.
dc.description.abstract: A need to increase the efficiency of plant phenotyping has arisen due to global warming, food shortages and population growth. One way to do this is by using image analysis, which can cover large areas and deliver quick results compared with manual labor. In this thesis, a UAV (Unmanned Aerial Vehicle) equipped with an RGB camera, a multispectral camera, a GPS and a light sensor was flown over three fields of spring wheat. The multispectral camera used the bands blue, red, green, near-infrared (NIR) and red edge (REG). In addition, the MTCI (MERIS Terrestrial Chlorophyll Index), EVI (Enhanced Vegetation Index) and NDVI (Normalized Difference Vegetation Index) indices were used. The UAV was flown over the three fields in a grid pattern while collecting images. Afterwards, these images were stitched together into maps of the fields using Pix4D. Maps of two fields from the previous season were also included. The image values within each plot were then extracted using QGIS. The extracted data were fed into machine learning and deep learning algorithms to predict grain yield for each plot; the reference grain yield had been measured manually after harvest. Using the machine learning algorithm Support Vector Regressor (SVR) to predict grain yield, the following R^2 values were achieved for the fields: 0.595, 0.582, 0.735, 0.63 and 0.917. The mean absolute errors (MAE) of the SVR for the different fields lie between 5.6 % and 11.1 % of the average grain yield of the respective fields. The deep learning models, using the ReLU and SELU activation functions, produced slightly worse results. The images taken by the RGB camera were used to estimate plant height. This was done by creating a Digital Surface Model (DSM) to model the surface of the fields and a Digital Terrain Model (DTM) to model the underlying terrain; the DTM was subtracted from the DSM to estimate height. These height estimates were then compared with manually measured values. The best estimate achieved an R^2 value of 0.33.
dc.language.iso: eng
dc.publisher: Norwegian University of Life Sciences, Ås
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.subject: Phenotyping
dc.title: Multispectral image analysis of spring wheat using UAV and machine learning
dc.type: Master thesis
dc.subject.nsi: VDP::Mathematics and natural sciences: 400::Information and communication science: 420::Simulation, visualization, signal processing, image analysis: 429
dc.subject.nsi: VDP::Mathematics and natural sciences: 400::Information and communication science: 420::Theoretical computer science, programming languages and theory: 421
dc.description.localcode: M-MF
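The abstract names the vegetation indices (NDVI, EVI, MTCI), the Support Vector Regressor for yield prediction and the DSM minus DTM height estimate, but gives no formulas. The Python sketch below is an illustration of those steps only, not the thesis code: the per-plot band arrays (blue, red, nir, red_edge), the raster arrays dsm and dtm, the RBF kernel and the train/test split are all assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_absolute_error

def vegetation_indices(blue, red, nir, red_edge):
    """Standard formulas for the three indices named in the abstract,
    computed from per-plot mean reflectance values (hypothetical inputs)."""
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    mtci = (nir - red_edge) / (red_edge - red)
    return np.column_stack([ndvi, evi, mtci])

def predict_yield(band_means, indices, grain_yield):
    """Fit a Support Vector Regressor on band means plus indices and
    report R^2 and MAE on a hold-out split (split and kernel are assumptions)."""
    X = np.hstack([band_means, indices])
    X_train, X_test, y_train, y_test = train_test_split(
        X, grain_yield, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    return r2_score(y_test, y_pred), mean_absolute_error(y_test, y_pred)

def plant_height(dsm, dtm):
    """Per-pixel plant height as Digital Surface Model minus Digital Terrain Model."""
    return dsm - dtm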


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International