Plot extraction from aerial imagery: A precision agriculture approach

The plant phenotyping community is adopting technological innovations in order to record phenotypic attributes more quickly and objectively. Low altitude aerial imaging is an appealing option for increasing throughput but there are still challenges in the image processing pipeline. One such challenge involves the assignment of a spatial reference to each plot entry in an experimental layout. Image‐based approaches are increasingly popular since plot boundaries are often, but not always, clearly visible in low altitude imagery. In addition, workflows that make geometric assumptions about plot layout also show promise. We outline an alternative approach to generate plot boundaries to overlay with aerial imagery. The proposed method involves high‐accuracy georeferencing (i.e., within a few cm) of imagery and planter activity, after which georeferencing of all plot entries is complete and only requires a few simple steps to convert logged spatial positions to polygons using open source geographic information systems (GIS) software. Compared with other approaches, the proposed method provides imagery that is precisely aligned over time and always aligns with plot boundaries, which are fixed and do not vary from image to image.


INTRODUCTION
The application of unmanned aerial systems (UAS) for high-throughput crop phenotyping has the potential to streamline data collection in crop research programs. Although many steps in the processing and analysis of UAS image data have been automated, a major limitation to the widespread adoption of this technology in agricultural research is the delineation of plot boundaries within images of field experiments, which is necessary to calculate plot-level statistics (Wallhead, Zhu, Sulik, & Stump, 2017).
For the extraction of plot boundaries from images, three methods are commonly found in the literature: manual plot boundary drawing, pattern-based methods using field plans, and image-based methods using computer vision/image classification. Manual plot delineation is the simplest form of plot extraction used in UAS image analysis, where plots are manually tagged and identified within an image (Shi et al., 2016). This method can be broadly applied to any image, but it is time consuming and tedious for large-scale analysis of research plots. Furthermore, it may need to be repeated for each date of image collection, since the spatial positions recorded in image metadata are typically only accurate to a few meters (i.e., low accuracy) due to the commercial-grade global navigation satellite system (GNSS) receivers typically used with UAS. This low accuracy results in maps that do not adequately align with each other across time without the use of georeferenced ground control points (GCPs).
Grid or pattern-based plot extraction is widely used in the literature for plot delineation (Anderson et al., 2019; Chapman et al., 2014; Drover et al., 2018; Duan, Chapman, Guo, & Zheng, 2017; Haghighattalab et al., 2016; Hearst & Cherkauer, 2015; Holman et al., 2016). The strength of this method is the ease of outlining large fields with regular dimensions, though manual plot adjustment is required because fields are not as geometrically consistent as plot maps.
Image-based or computer vision methods have also been applied to plot extraction (Haghighattalab et al., 2016; Khan & Miklavcic, 2019). These methods are also available in image analysis tools such as Plot Vision (Stavness, 2019; van der Kamp et al., 2019) and Solvi (Solvi, 2019), although their efficacy varies with the method chosen. Weaknesses of current computer vision methods include the requirement of an image from a specific developmental stage in which plots are visibly differentiated, and the potential overlap of multiple plot boundaries, which requires manual plot adjustment. A comparison of pattern-based and computer vision-based methods indicated that both had weaknesses that need to be resolved with future research (Haghighattalab et al., 2016).
Although methods are available for the extraction of plot-level data from UAS data, these methods introduce error through assumptions of field uniformity and plot spacing, and they require manual adjustment of plot boundaries, preventing full automation of the plot extraction process. The method presented in this article makes no assumptions about the underlying spacing of the plots, instead using high-accuracy GNSS plot positions from precision planting equipment to generate plot boundaries for use in downstream analysis.

Field location and equipment
Plots were planted on 8 June 2019 using four rows (35-cm row spacing) with a total plot width of 1.4 m, a total plot spacing of 1.65 m (center to center), and a 7-m trip length further trimmed to a 5-m final plot length. A Wintersteiger Plotseed TC (Wintersteiger, Ried/I, Austria) planter applied seed into each plot and recorded the spatial position of the center of each plot with a Trimble TMX-2050 display with built-in GNSS receiver using FmX applications (Trimble, Sunnyvale, CA).
Core Ideas
• We describe a workflow for using precision agriculture techniques to generate plot boundaries to use with georeferenced data from aerial imagery or active optical sensors.
• Image features are not required for extracting plot boundaries.
• The proposed methodology is effective when all geospatial data layers are georeferenced with a high degree of accuracy.
• The approach is spatially accurate, scalable, and does not require artificial intelligence algorithms.

The display receiver is a rover that receives baseline correction signals from either a real-time network (RTN) or a fixed base station to acquire spatial positions with real-time kinematic (RTK) accuracy, which is typically 1-3 cm. Real-time (RT) positioning uses differences in carrier phase cycles, in each available satellite frequency, between a base station and rover at common times (Henning, 2011). It provides three-dimensional positions relative to a stationary base station with cm-level accuracy. Real-time methods can be contrasted with static (i.e., not kinematic) methods, which provide positions that are easier to verify; however, the reliability of RT positions can be ensured by following best practices (Henning, 2011) and by assessing the Global Positioning System (GPS) quality attributes written into GPS log files. Knowledge of how best practices overcome common GNSS errors helps ensure repeatability of position estimates (Henning, 2013). Specifically, we used an RTN that generates baseline corrections that are transmitted across a cellular modem, omitting the need to set up an RTK base station (Wallace & Wallace, 2019). Regardless of whether corrections come from an RTN such as a virtual reference station (VRS) or from a fixed base station, the purpose is to reduce and remove errors shared between a rover-base pair in order to acquire survey-grade (i.e., cm-level) position repeatability (Landau, Chen, Kipka, & Vollath, 2007; Powell, 2019). Fundamental GNSS concepts are explained in many introductory GIS textbooks (Bolstad, 2012) and white papers (Henning, 2011, 2013).

The Trimble TMX-2050 display logged spatial positions of distance-based remote output event signals that triggered the planter at the beginning of each plot. In the Trimble displays, each event typically captures spatial position, coverage area, elevation, and speed, with additional attributes that depend on the type of operation (e.g., seeding, harvesting). The display then exports a set of feature files, which are geographic data files referred to as shapefiles.
Specifically, the point feature file records remote output points, which are based on GNSS antenna offsets from an implement (e.g., where the seed is dropped from a planter) programmed as part of the remote output setup parameters (Trimble, 2016).
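The offset correction that the remote output setup encodes can be illustrated with a little trigonometry: the implement position is the antenna position shifted along (or against) the direction of travel. A minimal sketch in local easting/northing coordinates, with hypothetical values (our actual offsets were configured in the display, not computed by hand):

```python
import math

def implement_position(antenna_e, antenna_n, heading_deg, back_m):
    """Shift an antenna position to the implement (e.g., the seed drop
    point) located back_m meters behind the antenna along the direction
    of travel. Heading is in degrees clockwise from north."""
    h = math.radians(heading_deg)
    # Unit vector of travel is (sin h, cos h) in (easting, northing).
    return (antenna_e - back_m * math.sin(h),
            antenna_n - back_m * math.cos(h))

# Traveling due east (heading 90 degrees), implement 2 m behind antenna:
e, n = implement_position(500000.0, 4800000.0, 90.0, 2.0)
# -> (499998.0, 4800000.0)
```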
Six ground control points were placed around the boundaries of the field, with an additional two GCPs placed within the plot alleys after emergence, in order to provide an accurate basis for georeferencing imagery (Shi et al., 2016). Accuracy may be improved by increasing the number of GCPs and ensuring that at least one GCP is visible in each image capture. Spatial positions for the GCPs were gathered using a Trimble R2 GNSS receiver using the same RTN as the planter GNSS equipment.

Image acquisition
A DJI M100 with an integrated MicaSense RedEdge (MicaSense, Seattle, WA) five-waveband multispectral sensing system, equipped with a commercial-grade low-accuracy GNSS receiver, was flown in a rectangular single-grid flight pattern with 80% front and side overlap. The first image set was collected on 4 July 2019 (early vegetative growth, 30 d after planting) at an altitude of 30 m above ground level (AGL) and the second image set was collected on 14 Aug. 2019 (early reproductive growth, 67 d after planting) at a maximum altitude of 45 m AGL. All imagery had a ground sample distance smaller than 5 cm.
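Ground sample distance (GSD) scales linearly with flying height for a nadir frame camera: GSD = altitude × pixel pitch / focal length. A sketch with nominal small-format sensor parameters (the pixel pitch and focal length below are assumed values for illustration, not manufacturer specifications):

```python
def gsd_cm(altitude_m, pixel_pitch_um=3.75, focal_length_mm=5.4):
    """Ground sample distance in cm for a nadir frame camera:
    altitude * pixel pitch / focal length, converted to cm."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100

# With these assumed parameters, both flight heights stay below a 5-cm GSD:
gsd_30 = gsd_cm(30)  # about 2.1 cm at 30 m AGL
gsd_45 = gsd_cm(45)  # about 3.1 cm at 45 m AGL
```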

Image processing
Individual camera captures were imported into Agisoft's Metashape photogrammetry software (Agisoft, 2019a) to generate orthoimages of the field. Our photogrammetry workflow was based on vendor-recommended settings (Agisoft, 2019b) with additional steps taken for the optimization of camera alignment. Camera alignment involves estimation of the position and orientation of each image capture. The alignment optimization was exclusively based on image markers that were labeled with the survey-grade GNSS positions of each GCP. That said, industry is starting to integrate improved GNSS technology so that survey-grade coordinates may be written into image metadata in real time or post-processing, thereby simplifying photogrammetry workflows.

Plot extraction
Unlike previous research, we did not extract the plot boundaries from the imagery. Instead, our workflow involves ensuring that planter data and image data are both georeferenced with cm-level accuracy, after which they should spatially align.
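One way to sanity-check that alignment programmatically is to confirm that each planter-logged point falls inside its corresponding plot polygon. A minimal sketch using a standard ray-casting point-in-polygon test (the helper and coordinates are illustrative, not part of our workflow):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if (x, y) falls inside polygon,
    given as a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a ray extending in the +x direction.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

plot = [(0.0, 0.0), (5.0, 0.0), (5.0, 1.0), (0.0, 1.0)]  # 5 m x 1 m plot
assert point_in_polygon(2.5, 0.5, plot)       # logged plot center: inside
assert not point_in_polygon(6.0, 0.5, plot)   # stray point: outside
```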
We did not load a "prescription" file with plot identifiers into the Trimble display. Therefore, we had to use an inner (i.e., attribute) join to associate each remote output point with its plot identifier based on a spreadsheet that contained planting order from the field map and timestamps from the planter data. Next, we deleted remote output points that were outside the boundary of the experiment, including points that flanked the ends of the rows, and converted the geometry of the point feature file to a polygon feature layer with coordinates projected into Universal Transverse Mercator (UTM) Zone 17N. All plots were the same dimension, and therefore all polygon features were assigned the same dimension and orientation. Figure 1 provides a summary of the specific steps we chose to implement the alignment of the plot data with the imagery.

Table 1  Parameters chosen for the Rectangles, Ovals, Diamonds tool
Buffer shape    Rectangle
Width           5
Height          1
Rotation^a      −45
Segments        4
^a The rotation angle is approximately equal to the angle between the shortest plot edge and geographic north (i.e., how many degrees the map must be rotated for the shortest plot edge to align with north). Since we chose a rectangle, the number of segments was set to 4.
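The inner (attribute) join itself requires no GIS-specific tooling: because both tables share the planter timestamp, matching rows can be paired with a dictionary lookup. A minimal sketch with fabricated timestamps, positions, and plot identifiers (our actual spreadsheet columns differed):

```python
# Remote output points exported by the display (timestamp -> position),
# and the planting-order spreadsheet (timestamp -> plot identifier).
# All values below are made up for illustration.
points = [
    {"time": "2019-06-08T10:00:01", "e": 500000.0, "n": 4800000.0},
    {"time": "2019-06-08T10:00:09", "e": 500000.0, "n": 4800007.0},
    {"time": "2019-06-08T10:00:17", "e": 500000.0, "n": 4800014.0},
]
plot_ids = {
    "2019-06-08T10:00:01": "PLOT_101",
    "2019-06-08T10:00:09": "PLOT_102",
    "2019-06-08T10:00:17": "PLOT_103",
}

# Inner join: keep only points whose timestamp appears in the field plan.
labelled = [{**p, "plot": plot_ids[p["time"]]}
            for p in points if p["time"] in plot_ids]
```

Points without a matching timestamp (e.g., those flanking the ends of the rows) simply drop out of the join, which mirrors the deletion step described above.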
Quantum GIS version 3.4 (QGIS Development, 2019) includes a vector geometry algorithm in its Processing Toolbox called Rectangles, Ovals, Diamonds that converts points into shapes. The tool also accommodates variable length/width features, but we used the simpler fixed-rectangle parameterization since all our plots are the same dimension. The Rectangles, Ovals, Diamonds parameters chosen are listed in Table 1. Rectangles, Ovals, Diamonds is written in Python, and so it can be used programmatically. The Python script was found in the following path of our installation: "C:\Program Files\QGIS 3.4\apps\qgis-ltr\python\plugins\processing\algs\qgis\RectanglesOvalsDiamondsFixed.py".
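Outside of QGIS, the same fixed-rectangle conversion is a few lines of trigonometry: each corner is an offset of (±width/2, ±height/2) rotated about the plot center. A sketch using the Table 1 parameters (the center coordinates are arbitrary; this mirrors, but is not, the QGIS implementation):

```python
import math

def rectangle_from_center(cx, cy, width, height, rotation_deg):
    """Return the four corner coordinates of a rotated rectangle
    centered on (cx, cy), as in a fixed-parameter point-to-rectangle
    conversion."""
    r = math.radians(rotation_deg)
    cos_r, sin_r = math.cos(r), math.sin(r)
    corners = []
    for dx, dy in [(-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2, height / 2), (-width / 2, height / 2)]:
        # Rotate the corner offset about the center, then translate.
        corners.append((cx + dx * cos_r - dy * sin_r,
                        cy + dx * sin_r + dy * cos_r))
    return corners

# Table 1 parameters: width 5, height 1, rotation -45 degrees.
plot = rectangle_from_center(500000.0, 4800000.0, 5.0, 1.0, -45.0)
```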
A similar tool may not be available in other GIS software programs. In that case, points can be converted to ellipses with major and minor axes that correspond with plot dimensions. Ellipses can then be converted to rectangles by generating minimum rectangles that enclose the ellipses.

RESULTS
Plot maps in the first row of Figure 2 illustrate the transition from point features that represent Remote Output Events to the rectangles created from the Rectangles, Ovals, Diamonds tool, using the 4 July orthoimage as a base map. The map on the bottom row shows the plot boundaries overlain on the 14 August orthoimage.
A visual comparison between developmental stages of the soybean [Glycine max (L.) Merr.] crop illustrates the stability of the alignment, which only depends on the combined accuracies of the image georeferencing and the planter georeferencing rather than image-based features (Figure 3).
The photogrammetry software estimated the total RMSE between all X,Y coordinates in the image and GCP locations as 1.16 and 4.89 cm for the July and August orthoimages, respectively. Please note that these are average errors and are not based on checkpoints, which would realistically result in RMSE values near 10 cm. Based on visual assessment, the orthoimages from each image acquisition date co-registered extremely well with each other and the planter-logged spatial positions of remote output points in the point feature file.
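The RMSE reported by the photogrammetry software can be reproduced from the residuals between surveyed and estimated GCP coordinates. A minimal sketch with fabricated residuals (not our actual GCP data):

```python
import math

def rmse_xy(surveyed, estimated):
    """Root-mean-square error over paired X,Y coordinates,
    in the units of the input (here meters)."""
    sq = [(sx - ex) ** 2 + (sy - ey) ** 2
          for (sx, sy), (ex, ey) in zip(surveyed, estimated)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical survey-grade GCP coordinates vs. photogrammetric estimates:
surveyed = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0)]
estimated = [(0.01, 0.00), (100.00, -0.01), (100.01, 50.01)]
error_cm = rmse_xy(surveyed, estimated) * 100  # about 1.15 cm
```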

DISCUSSION
Georeferencing errors can be decreased by using at least 10 GCPs, so it appears that our GCP layout could have been more robust. That said, error in the neighborhood of 5 cm is tolerable because our plot boundaries are buffered by ∼40 cm to reduce edge effects. Since plot boundaries are set by the user, it is possible to compensate for lower georeferencing accuracy by setting larger plot buffers.
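For an axis-aligned plot, the inward buffer amounts to shrinking the rectangle on all sides before polygon generation. A sketch (our plots were rotated, but the inset logic is identical; the dimensions below are illustrative):

```python
def inset_rectangle(xmin, ymin, xmax, ymax, buffer_m):
    """Shrink an axis-aligned plot rectangle inward on all sides,
    e.g., to exclude edge rows from plot-level statistics."""
    if 2 * buffer_m >= min(xmax - xmin, ymax - ymin):
        raise ValueError("buffer larger than the plot allows")
    return (xmin + buffer_m, ymin + buffer_m,
            xmax - buffer_m, ymax - buffer_m)

# A hypothetical 7-m by 1.65-m planted footprint inset by 0.40 m per side:
inner = inset_rectangle(0.0, 0.0, 7.0, 1.65, 0.40)
```

A larger buffer trades sampled plot area for tolerance of georeferencing error, which is why lower-accuracy layers can be compensated for by widening the buffer.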
In addition, accuracy was partly affected by errors in the photogrammetric estimates of camera orientations. These errors may be reduced by using direct georeferencing solutions that integrate a GNSS receiver with an inertial measurement unit (IMU) to accurately measure camera orientation at the time of each set of image captures (Applanix, 2019). This and other improvements in image georeferencing will increase the overall accuracy of our proposed methodology and allow GCPs to be used for validation of georeferencing instead of calibration.
To be clear, the exact procedures we chose are not required to implement the proposed strategy. Figure 4 describes the generalized process, which can account for variations in tasks such as image georeferencing.
This precision agriculture approach provides a direct plot extraction method for aerial images when precision agriculture field equipment is used. The strengths of this method include no assumptions about the underlying spacing of the plots, plot spatial positioning from the RTK-GNSS planter, and extraction that can be applied at any growth stage without issues of overlapping plots or insufficient ground cover (Haghighattalab et al., 2016). Additionally, the planter RTK-GNSS unit can provide elevation data that may be used in other precision agriculture applications.
Figure 4  Flowchart of the general algorithm proposed. There are variations on the methodology that will produce consistent results.

Although this method provides a highly accurate and reproducible plot extraction, it does have limitations for practical use. The major limitation is the underlying equipment cost, as precision planters can incur significant additional costs over traditional planting equipment. This equipment cost has to be weighed against the advantages of automated plot extraction and efficient field work in uniform plots. Another limitation of this approach is that it cannot be applied post hoc, as the RTK-GNSS data points have to be obtained by the planting equipment. That said, a roving GNSS receiver with error corrections could log plot centers (i.e., on foot) as a surrogate for the planter data.
The precision agriculture plot extraction method enables automated and accurate plot data extraction for researchers with the appropriate equipment. This method could also be used alongside computer vision approaches for centering plots in research and development work. It is possible that some researchers pursuing computer vision approaches already have RTK-capable planters but are not aware of the geographic data files they produce.

CONCLUSION
In conclusion, the plot extraction workflow presented here provides an accurate and efficient plot extraction methodology using precision agriculture technology, without the assumptions required by other plot extraction methods. Plots were accurately identified across multiple imaging dates, and plot labels corresponded to the field plan. The major limitation of this approach is the requirement for precision agriculture-equipped research equipment, though uptake of this technology is increasing.

ACKNOWLEDGMENTS
We would like to thank Cory Schilling (University of Guelph) for his help with providing the planter GNSS data used for testing and developing this method and Larry Prong (Premier Equipment Ltd.-John Deere) for assistance with the planter and explanation of planter data.

FUNDING INFORMATION
This research was funded in part by the Canada First Research Excellence Fund through the Food from Thought: Agricultural Systems for a Healthy Planet, grant no. CFREF-2015-00004, program at the University of Guelph. We would also like to acknowledge generous funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Research and Development (CRD) grant no. CRDPJ 513541-17, and private funding partners, Cangro-Genetics Inc. and Huron Commodities Inc.

AUTHOR CONTRIBUTIONS
R.B., I.R., and J.S. conceived and planned the experiment. R.B. and J.S. performed plot extraction and developed the methodology. R.B., I.R., and J.S. prepared and wrote the manuscript. All authors have read and approved the manuscript.