
Volume 6, Issue 1 e20076
Open Access

Development of a digital phenotyping system using 3D model reconstruction for zoysiagrass

Sorawich Pongpiyapaiboon

Graduate School of Agriculture, University of Miyazaki, Miyazaki, Japan

Contribution: Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review & editing
Hidenori Tanaka

Corresponding Author

Faculty of Agriculture, University of Miyazaki, Miyazaki, Japan

Correspondence: Hidenori Tanaka, Faculty of Agriculture, University of Miyazaki, 1-1 Gakuenkibanadai-Nishi, Miyazaki 889-2192, Japan. Email: [email protected]

Contribution: Conceptualization, Project administration, Supervision, Validation, Writing - review & editing
Masatsugu Hashiguchi

Faculty of Regional Innovation, University of Miyazaki, Miyazaki, Japan

Contribution: Resources, Writing - review & editing
Takuyu Hashiguchi

Faculty of Regional Innovation, University of Miyazaki, Miyazaki, Japan

Contribution: Methodology, Resources, Writing - review & editing
Atsushi Hayashi

School of Computer Science, Tokyo University of Technology, Tokyo, Japan

Contribution: Methodology, Software, Writing - review & editing
Takanari Tanabata

Department of Frontier Research and Development, Kazusa DNA Research Institute, Chiba, Japan

Contribution: Methodology, Software, Writing - review & editing
Sachiko Isobe

Department of Frontier Research and Development, Kazusa DNA Research Institute, Chiba, Japan

Contribution: Methodology, Software, Writing - review & editing
Ryo Akashi

Faculty of Agriculture, University of Miyazaki, Miyazaki, Japan

Contribution: Funding acquisition, Project administration, Supervision, Writing - review & editing
First published: 14 June 2023

Assigned to Associate Editor Michael Gore.


Digital phenotyping, particularly the use of plant 3D models, is a promising method for high-throughput plant evaluation. Although many recent studies on the topic have been published, further research is needed to apply it to breeding research and other related fields. In this study, using a 3D model phenotyping system we developed, we reconstructed and analyzed 20 accessions of zoysiagrass (Zoysia spp.), including three species and their hybrids, over a period of 1 year. An artificial neural network with three hidden layers was able to effectively remove nonplant parts while retaining plant parts that were incorrectly removed by the cropping method, offering a robust and flexible approach for post-processing of 3D models. The system also demonstrated its ability to accurately evaluate a range of traits, including height, area, and color using red green blue (RGB)-based vegetation indices. The results showed a high correlation between the estimated volume obtained from the voxelized 3D model and dry weight, enabling its use as a nondestructive method for measuring plant volume. In addition, we found that the green-red normalized difference index among the RGB-based indices performed similarly to the commonly used normalized difference vegetation index under controlled illumination conditions. These results demonstrate the potential of 3D model phenotyping to facilitate plant breeding, particularly in the fields of turfgrass and feed crops.


  • ANN, artificial neural network
  • GRNDI, green-red normalized difference index
  • GWAS, genome-wide association study
  • LED, light-emitting diode
  • NDVI, normalized difference vegetation index
  • NIR, near-infrared
  • RFID, radio frequency identification
  • RGB, red green blue
  • RMSE, root mean squared error
  • SfM, structure from motion
  • UAV, unoccupied aerial vehicle

    Zoysiagrass, a warm-season turfgrass commonly used in landscaping, is widely distributed from New Zealand to Japan (Engelke & Anderson, 2003). Presently, three species of zoysiagrass, Zoysia japonica Steudel, Zoysia matrella (L.) Merrill, and Zoysia pacifica (Goudswaard) M. Hotta and Kuroki, are mainly used as commercial turfgrass. Zoysiagrass is a highly diverse group of grasses that can be distinguished by morphology. Zoysia japonica, native to the Japanese mainland, is characterized by wide leaves and rapid growth and is primarily used as feed grass and on sports turfs. Zoysia pacifica, found in warm tropical areas, has high-density, thin, needle-like leaves and a fine texture, with high tolerance for salinity. Zoysia matrella has characteristics intermediate between Z. japonica and Z. pacifica. Through selection or hybridization with local ecotypes, a number of zoysiagrass cultivars have been developed and released (Patton et al., 2017). For instance, the famous cultivar "Emerald" was derived from Z. japonica × Z. pacifica (Forbes, 1962), and "Diamond" was selected from ecotypes of Z. matrella (Engelke et al., 2002). Another ecotype of Z. matrella with fast ground cover, late winter dormancy, and early spring green-up has also been selected and registered as a new cultivar, Z. matrella "Wakaba" (Kunwanlee et al., 2018). The breeding of zoysiagrass has been accelerated by the advent of powerful molecular biology tools, genome sequencing, and genetic transformation (Chandra et al., 2017). The whole-genome sequence of zoysiagrass has been completed (Tanaka et al., 2016), driving more research toward using these tools for zoysiagrass improvement, for example, in the identification of quantitative trait loci associated with cold acclimation and freezing tolerance in Z. japonica (Brown et al., 2021). A genome-wide association study revealed genes that influence tolerance to low temperature and salinity in Z. japonica (Zuo et al., 2021).
Transcriptome analysis has also been conducted in zoysiagrass (Guan et al., 2022; Zhang et al., 2022). These research findings provide crucial genomic information for application in zoysiagrass molecular breeding such as in marker-assisted selection.

    While such genomic and transcriptomic analyses in plants have been actively conducted, plant phenomics, that is, the study of characteristics such as composition, growth, and yield, has lagged behind due to technical difficulties. However, a comprehensive analysis of plant morphology, which is part of phenomics, is as important as obtaining genomic information when considering plants' utilization both as breeding materials and for crop production (Mir et al., 2019). Therefore, various digital phenotyping technologies have been reported in recent years. Digital phenotyping is described as “accelerated and automated phenotyping using informative digital data” (Debauche et al., 2017; Ruckelshausen & Busemeyer, 2015). With remote sensing tools such as unoccupied aerial vehicles (UAVs), two-dimensional (2D) digital photography has been extensively utilized to monitor crop growth and stress in the field (Su et al., 2019; Wang et al., 2019; Yang et al., 2017). However, this method has limitations because information is lost when a plant, a three-dimensional (3D) object, is projected onto a 2D plane. Therefore, small-scale data such as a single digital photograph are generally not considered digital phenotyping (Teramoto et al., 2022). To overcome this problem, technology for the reconstruction of 3D models has been adopted. Using multiple images acquired from a UAV, 3D models of corn (Zea mays L.) plants in the field were reconstructed for evaluation (Zermas et al., 2020). In a study by Numajiri et al. (2021), photogrammetry and X-ray computed tomography scans were used for 3D phenotyping of both shoots and roots of rice (Oryza sativa L.) under drought stress.

    In our previous study, we developed an automatic 3D modeling system for plants based on structure from motion (SfM) and multi-view stereo methods (Tanabata et al., 2018). This 3D modeling method automated the data collection process and served as a basis for the development of a digital phenotyping system for plant evaluation. In this study, we aimed to develop a high-throughput pot-based digital phenotyping system for zoysiagrass using reconstructed 3D models with a deep learning-based postprocessing pipeline. We compared various phenotypic traits extracted from zoysiagrass 3D models to conventional methods to assess system performance. Further, we conducted a time-course analysis of zoysiagrass growth using the data obtained from the 3D models. Here, we discuss the results, as well as the practicality and advantages of digital phenotyping in plant breeding research, especially in grass species.

    Core Ideas

    • A high-throughput digital phenotyping system based on 3D modeling was developed for zoysiagrass evaluation.
    • A machine learning approach to plant 3D model post-processing can effectively segment plant and nonplant parts.
    • The 3D model phenotyping system can accurately evaluate plant height, area, volume, and color.


    2.1 Plant materials

    Twenty accessions of zoysiagrass, including Z. japonica, Z. matrella, Z. pacifica, and their hybrids, were randomly selected from the collection at the University of Miyazaki, Japan. These accessions included wild types, cultivars, and variants chosen to represent a wide range of diversity (Table S1). Five replicates of each accession were planted in 60-mm plastic pots and transplanted into Wagner pots (1/5000a) after 1 month. Each pot was randomly positioned inside the greenhouse and hung on plant pot stands with pairs of metal arms designed to hold pots during cultivation (Tanabata et al., 2022). Each pot stand held two pots and was equipped with an automatic watering system. After 3D imaging, each pot was returned to its assigned position. Zoysiagrass was grown from July 2021 to July 2022. N, P, and K (0.07 g per pot) were supplied manually through 10:10:10 fertilizer on a bimonthly basis.

    2.2 Data acquisition

    Noninvasive 3D imaging of zoysiagrass was conducted weekly using an image capture system inside the greenhouse. The image capture system consisted of a custom-built robot arm with a conveyer belt, a radio frequency identification (RFID) reader, and a capture box (Figure 1A–D). RFID tags were attached to each pot for automatic identification, and an Arduino Uno with a PN532 RFID reader was used to read them. Zoysiagrass pots were placed on a conveyer belt with a capacity of 10 pots. The robot arm automatically sent pots to and received them from the capture box.

    Digital phenotyping image capture system: (A) capture box; (B) in-out conveyer belt; (C) Arduino with PN532 RFID reader for pot identification; (D) automated robot arm; (E) inside view of capture box with cameras, rotating table, and marker poles.

    The capture box used in this study was fitted with a blue screen and light-emitting diode (LED) lights for an illumination-controlled environment. To eliminate shadows and ensure uniform lighting, a series of bar-type LED lights was positioned in front of, below, above, and behind the plant in the capture box. This setup effectively isolates the plant from outside light and provides controlled illumination for image acquisition. The capture box housed an OSMS-160YAW motorized stage with a GIP-101A intelligent positioner controller (SIGMA KOKI Co., Ltd.) serving as a turntable, and four TRI089S-CC cameras (LUCID Vision Labs, Inc.) (Figure 1E). The system was controlled and synchronized by Python 3.9 with the pywinauto, cdio, and pySerial packages and vendor-provided APIs. Zoysiagrass pots were placed on the rotating table within the capture box, which was equipped with three poles featuring random dot patterns and target markers to detect the capture position. A total of 144 multi-view stereo images (four cameras at 36 positions) were captured for 360° views, at a resolution of 3500 × 2560 pixels. 3D model imaging was conducted once a week for 52 weeks.

    After the images for 3D modeling were captured, manual measurements and top-view 2D images were taken on the same day. A digital caliper (Absolute Digimatic Caliper, Mitutoyo) was used to manually determine plant height, measured from the soil to the highest point of the plant.

    The top-view 2D images were taken in a lighting-controlled environment using a digital single-lens reflex (DSLR) camera setup (Nikon D5600 with 24 mm lens, Nikon), with a resolution of 4000 × 6000 pixels. The acquired top-view 2D images were preprocessed by applying a mask using the color threshold method to exclude nonplant parts. The total plant area was then calculated by outlining the plant with the findContours function and computing the area of the resulting contour with the contourArea function from OpenCV version 4.5.4. The obtained values were converted from pixels to millimeters using a scale measured with the CASMATCH color-chart scale (Kenis, Ltd.).

    The normalized difference vegetation index (NDVI) was measured biweekly from September to July using a FieldScout TCM 500NDVI Turf Color Meter (Spectrum Technologies, Inc.). In week 52, after 3D model imaging, the zoysiagrass was harvested and dry weights were recorded.

    2.3 3D model reconstruction and postprocessing

    The acquired images were transformed into a 3D model using the SfM method implemented in the Metashape software (Agisoft LLC), following the protocol described by Tanabata et al. (2018). The plant shape and color were represented as point clouds comprising XYZ positions and red green blue (RGB) colors. In post-processing, an artificial neural network (ANN) implemented with the Keras API (Chollet, 2015) in TensorFlow version 2.9.2 was used to segment the plant and nonplant parts (Figure 2). The model has five layers: an input layer with X, Y, Z, R, G, and B as inputs; three hidden dense layers with 32, 16, and 8 nodes, each with rectified linear unit activation; and an output layer with a sigmoid function producing a prediction between 0 and 1. Points with predictions lower than 0.5 are classified as plant parts, while those with predictions of 0.5 or higher are classified as nonplant parts. This labeling is applied to each point in the point cloud. The model has a total of 897 trainable parameters. For training, the Adam optimizer was used with a learning rate of 0.001, binary cross-entropy was selected as the loss function, and training ran for 10 epochs. The model was trained on a manually labeled sample set of 16,384 points, with 8,192 points from each class. The training set included five replications of randomly selected representative accessions from each group, a total of 20 individual pots across 52 weeks (20% of the dataset), with a total of 17,039,360 training points. The accuracy of the machine learning model was subsequently assessed using validation sets comprising one pot from each of the 16 accessions not selected for training, across 52 weeks (16% of the dataset), with a total of 13,305,653 validation points. Nonplant components, including pots and poles, were automatically removed after segmentation for further analysis.
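    The network described above can be reproduced in Keras along these lines; this is a sketch under the stated hyperparameters (three hidden layers of 32, 16, and 8 ReLU units, a sigmoid output, Adam at a learning rate of 0.001, binary cross-entropy), not the authors' code:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_segmentation_model():
    """Point-wise plant/nonplant classifier: 6 inputs (X, Y, Z, R, G, B),
    three hidden dense layers (32, 16, and 8 units, ReLU), sigmoid output.
    Predictions below 0.5 are treated as plant points."""
    model = keras.Sequential([
        layers.Dense(32, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(8, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.build(input_shape=(None, 6))  # 897 trainable parameters in total
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then be, e.g.:
# model = build_segmentation_model()
# model.fit(points_xyzrgb, labels, epochs=10)
```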

    Comparison of post-processing methods for the 3D model. (A) Reconstructed 3D model; (B–D) cropping method; (E–G) segmentation method. (B, E) and (D, G) show retained plant parts; (C, F) show removed parts. Red dashed lines in (B) indicate the cropping area.

    2.4 Digital phenotyping by 3D model

    Plant height was determined by measuring the position of the highest point of the point cloud model. The height of the pot was subtracted and adjusted for soil depth in each pot to obtain the true plant height. Area calculation was conducted on images created from the 3D models using OpenCV version 4.5.4. The procedure for converting a 3D model to a top-view 2D image is as follows: First, the point clouds are projected onto a grid in the XY-plane with a grid spacing of 1 mm, and the color of the point with the largest z-coordinate in each grid cell is used as the color of that cell. A 2D image is then generated from the grid, with each pixel taking the color of its cell. Where no point is projected onto a particular cell, the cell is set to black, represented by (R, G, B) values of (0, 0, 0). To measure plant area, the created images were processed and analyzed in the same way as the manually obtained top-view 2D images.
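    The grid-projection step can be sketched in NumPy as follows; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def topview_image(points, colors, grid_mm=1.0):
    """Project a point cloud (N x 3 XYZ in mm, N x 3 RGB) onto an XY grid.

    Each grid cell takes the color of its highest point (largest z);
    cells with no projected point stay black, (0, 0, 0).
    """
    ix = np.floor((points[:, 0] - points[:, 0].min()) / grid_mm).astype(int)
    iy = np.floor((points[:, 1] - points[:, 1].min()) / grid_mm).astype(int)
    img = np.zeros((iy.max() + 1, ix.max() + 1, 3), dtype=np.uint8)
    order = np.argsort(points[:, 2])  # ascending z: the highest point is written last
    img[iy[order], ix[order]] = colors[order]
    return img
```

    The resulting image can then be fed to the same contour-based area analysis used for the manually captured top-view images.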

    The volume of the zoysiagrass was estimated using a voxelization technique. This process was conducted using Open3D version 0.12.0, with the voxel size parameter set to 0.003 m and a custom filling function used to fill any holes within the 3D model to obtain a more accurate volume measurement. This method involves generating voxel grid coordinates from the zoysiagrass voxels, iterating over each Z level to convert the X and Y coordinates into a 2D grid, applying the binary_fill_holes function from the scipy.ndimage package to fill enclosed spaces, and then counting the voxels in each level. Plant volume was obtained by multiplying the voxel count by the voxel volume and is expressed in cubic centimeters (cm3).
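    The level-by-level hole filling and voxel counting can be sketched as follows. For illustration, the input is a plain boolean occupancy array (z, y, x) rather than an Open3D voxel grid, and the function name is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def filled_voxel_volume_cm3(occupancy, voxel_size_m=0.003):
    """Fill enclosed holes in each Z level of a boolean occupancy grid,
    count the occupied voxels, and convert the count to cubic centimeters."""
    filled = np.stack([binary_fill_holes(level) for level in occupancy])
    voxel_cm3 = (voxel_size_m * 100.0) ** 3  # 0.003 m -> 0.3 cm per side
    return filled.sum() * voxel_cm3
```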

    For color measurement, 12 RGB-based indices were selected for comparison with NDVI (Table S2), because the 3D models were reconstructed from RGB camera images, which lack the spectral bands outside the visible spectrum, such as near-infrared (NIR), needed to calculate vegetation indices such as NDVI. Index values were calculated from the RGB values of each point individually, and the average was taken to represent the overall value for each model. These candidate RGB indices were assessed for their correlation with manually obtained NDVI values. The accuracy and reliability of the digital phenotyping system were evaluated by comparing the digital measurements of traits to the corresponding measurements obtained through manual observation. Correlation analysis was performed using the Pearson correlation coefficient (r) and the coefficient of determination (r2). 3D-measured traits were also aggregated to demonstrate the system's potential for evaluating the growth of zoysiagrass.
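    As an illustration of the color analysis, GRNDI is commonly defined as (G − R)/(G + R); averaging it per model and correlating an index with NDVI readings can be sketched as follows (function names are assumptions):

```python
import numpy as np

def grndi(rgb):
    """Green-red normalized difference index, (G - R) / (G + R),
    averaged over all points of a model (rgb is an N x 3 array)."""
    r = rgb[:, 0].astype(float)
    g = rgb[:, 1].astype(float)
    idx = (g - r) / np.maximum(g + r, 1e-9)  # guard against black (0, 0, 0) points
    return idx.mean()

def pearson_r(x, y):
    """Pearson correlation coefficient between per-model index values
    and the corresponding manual NDVI readings."""
    return float(np.corrcoef(x, y)[0, 1])
```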


    3.1 3D model construction and postprocessing

    A multi-view imaging system for zoysiagrass was developed (Figure 1) and used to capture 144 images per model, with a total acquisition time of approximately 7 min per model. The images were stored as bitmap files (.bmp), with an average storage size of approximately 3.87 GB per set. The study included a total of 5175 models, covering 100 pots of zoysiagrass (20 accessions with five replications) and 52 time points; 25 of the 5200 models were excluded due to technical errors in constructing the 3D model. The models were reconstructed using the Metashape application and exported as Stanford triangle format files (.ply), with an average size of 36.21 MB, a maximum of 84.46 MB, and a minimum of 8.13 MB.

    In postprocessing, a TensorFlow machine learning model was trained to segment plant and nonplant parts. Training time was 50 min per epoch, and segmentation time varied from 10 s to 1 min depending on the number of points in each model, on an Apple Mac mini (M1 processor, 16 GB memory). The model was evaluated using a validation test, which showed an accuracy of 99.56%, a precision of 99.96%, a recall of 99.16%, and an F1 score (calculated from precision and recall) of 99.56%. Further analysis across time points showed that validation samples from the first 2 months had an accuracy of 99.15%, with precision at 99.97% and recall at 98.30%. The analysis showed that segmented plant models retain 7.69% more points than their cropped counterparts (Figure 2). Only the segmented models containing the plant parts were used for further analysis.
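    For reference, the reported accuracy, precision, recall, and F1 relate to binary confusion counts as sketched below; this is a generic illustration, not the authors' evaluation code:

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 (the harmonic mean of
    precision and recall) from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```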

    3.2 Evaluation of digital phenotyping

    3.2.1 Plant height

    Plant height estimated from the 3D models was compared to the ground truth obtained from manual measurements with a digital caliper (Figure 3A). The regression model for overall plant height had an r2 score of 0.953, indicating a strong correlation between the model-derived and manually measured values. The root mean square error (RMSE) between the measured and model-derived values was 16.357. For each group, the r2 scores obtained were 0.892 for Z. japonica, 0.936 for Z. matrella, 0.872 for Z. pacifica, and 0.952 for the hybrid group (Figure S1), with RMSE values of 15.427 for Z. japonica, 14.929 for Z. matrella, 20.436 for Z. pacifica, and 15.089 for the hybrid group.

    Digital estimated traits versus manual measurement: (A) Height, (B) area, (C) color, and (D) volume to dry weight (week 52 only). Green squares represent Z. japonica, blue triangles represent Z. matrella, red stars represent Z. pacifica, and yellow circles represent the hybrid group. Black lines represent the best fitted linear line.

    3.2.2 Plant area

    The plant area estimated from 3D models was compared to the ground truth obtained from top-view 2D image analysis. The regression line for overall plant contour area had an r2 score of 0.929 and an RMSE of 69.387 cm2. For each group, the r2 scores obtained were 0.848 for Z. japonica, 0.825 for Z. matrella, 0.886 for Z. pacifica, and 0.920 for the hybrid group (Figure S2), with RMSE values of 78.883 for Z. japonica, 68.636 for Z. matrella, 48.166 for Z. pacifica, and 72.277 for the hybrid group.

    3.2.3 Plant leaf color

    For plant color measurement, 12 RGB-based candidate indices were calculated for each 3D model and compared to the manually measured NDVI. Correlation analysis was performed and the Pearson correlation coefficient (r) was calculated for comparison. The green-red normalized difference index (GRNDI) was the best index, with a correlation of 0.919, followed by the green index at 0.892 and the red ratio at −0.885 (Figure 3C, Figure S3). Therefore, GRNDI was selected as the best RGB-based index for describing plant color in this system. Further analysis showed that, for each group, r scores were 0.943 for Z. japonica, 0.924 for Z. matrella, 0.872 for Z. pacifica, and 0.929 for the hybrid group (Figure S4).

    3.2.4 Plant volume estimation

    Plant volume measured from 3D models in the final week (week 52) was compared to zoysiagrass dry weight. The Pearson correlation coefficient was 0.812, showing a high correlation between estimated volume and plant dry weight (Figure 3D).

    3.3 Plant growth analysis

    After all phenotypic traits were extracted from the 3D models, a full 1-year growth analysis was performed (Figure 4, Figure S5). The results revealed distinct growth patterns among the different species and hybrids of zoysiagrass. Zoysia japonica, Z. matrella, and "Emerald" from the hybrid group exhibited rapid growth, spreading quickly and growing tall from July to October. However, from November to February, these species displayed a change in color and entered a state of dormancy. In March, they resumed growth at a similar rate. In contrast, Z. pacifica grew more slowly but continued to spread through winter and, lacking winter dormancy, reached the same growth level as the other groups after 1 year. These results were consistent with manual measurements. Moreover, our system provided additional information on volume estimation that cannot be measured periodically through conventional methods (Figure 4G).

    Growth analysis using three-dimensional (3D) estimated traits compared with conventional method. (A) Height estimated from 3D model. (B) Height from manual measurement. (C) Area estimated from 3D model. (D) Area obtained from top-view two-dimensional (2D) image analysis. (E) Green-red normalized difference index (GRNDI) calculated from 3D model. (F) Normalized difference vegetation index (NDVI) from manual measurement. (G) Plant volume estimated from 3D model. Green dash-dotted lines represent Z. japonica "Mayer," blue lines represent Z. matrella "TM9," red dotted lines represent Z. pacifica "MZC46024," and yellow dashed lines represent "Emerald" from the hybrid group.


    Plant breeding has a long history, dating back to the domestication of crops. The practice of selecting plants based on their observable physical characteristics, known as phenotypic selection, has been a commonly used method for improving crops (Hallauer, 2011). However, traditional methods of evaluating morphological traits are often time consuming and labor intensive. As a result, digital phenotyping has gained popularity as a means of reducing the workload for researchers. In recent years, a variety of methods for 3D model reconstruction of plants in digital phenotyping have been reported (Okura, 2022). However, there is still a paucity of studies evaluating the phenotypic traits of plants using 3D models, as the technology is still under development, with a focus on methodology and software. In this study, we demonstrated a time-course evaluation of the phenotypic traits of zoysiagrass with an originally developed digital phenotyping system, reconstructing and evaluating 20 accessions of zoysiagrass, including three species and their hybrids, over a period of 1 year.

    Removing the nonplant parts (pots and poles) and extracting the plant part for further analysis is an important step after reconstructing the 3D model of zoysiagrass. In the development of this system, cropping was initially used in the post-processing of the 3D model. Cropping involves removing a predefined area of the 3D model, which is a relatively uncomplicated process. Cropping by XYZ range, color, or clustering algorithm has been previously reported for 3D model reconstruction and is applicable to plants showing vertical growth (Gao et al., 2021; Thapa et al., 2018; Zermas et al., 2020). However, due to the lateral growth of zoysiagrass and the presence of marker poles in our phenotyping system, there were many regions of overlap between the plant, pot, and poles. When these were addressed, portions of the grass below the pot height were cropped out, and occasionally portions of the poles were not correctly removed (Figure 2B–D). This inaccuracy from cropping could have downstream effects. Incorrectly removed plant points may have a minimal impact on area and volume estimates, owing to the large number of points in the point cloud model, but can significantly affect plant height, width, length, and area when points at extreme positions are removed, as demonstrated in the cropping results. Additionally, we found that point cloud noise, which is generally not a concern in 3D model reconstruction, affected the color measurement results of this experiment.

    Previous research by Turgut et al. (2022) reported the use of deep learning for plant-part segmentation, specifically classifying the leaves, stems, and flowers of plants. This method has shown promise in effectively labeling various parts of 3D models. Therefore, in the present study, we introduced a robust and flexible machine learning approach to address the cropping problem. The artificial neural network model was able to remove the nonplant parts while retaining the plant parts of zoysiagrass that were incorrectly removed by the cropping method (Figure 2E–G). The results from the validation set showed that the accuracy of the ANN model decreases when the zoysiagrass is still in the early growth stages but improves after 2 months of growth. This could potentially be attributed to the presence of soil in the model, which may affect the accuracy of downstream plant phenotypic trait estimation. Further improvements may be needed to address this issue and enhance the accuracy of the model, particularly during the early growth stages of zoysiagrass.

    Additionally, generating training sets for 3D model segmentation can be challenging and time consuming due to the complexity of labeling points in three dimensions, which is more difficult than labeling 2D images. One potential approach to mitigate this challenge is by using artificially generated 3D plant models; however, utilizing real-world data for training sets would be preferred as it helps optimize the model for real-world implementation, taking into account the complexities and variations present in real plant environments.

    The performance evaluation of our phenotyping system showed high accuracy in height and area measurement. This accuracy was demonstrated through comparison with manual measurements, with consistent results across all measurements and r2 scores higher than 0.9 for both parameters. Statistical analysis showed that while the average RMSE was 16.357, Z. pacifica, the shortest group, had the highest RMSE, suggesting that accessions with narrow leaves may be more susceptible to error and indicating a need for further optimization of the imaging and 3D reconstruction steps. The RMSE values for area measurement from 3D models were also consistently higher than those for the manually taken 2D images. This difference could be due to the effects of different image resolutions and scales.

    NDVI is commonly used as a vegetation index in plant color and health assessment. NDVI is computed with the NIR band and can compensate for changing ambient illumination and atmospheric effects (Jones & Vaughan, 2010). Therefore, NDVI has typically been preferred over RGB-based indices due to its ability to function in varying lighting conditions. However, recent research has shown that, with careful consideration of shadow and imaging position, RGB-based indices can also be effective in the breeding evaluation of forage crops outdoors using UAVs (Kikawada et al., 2022). This suggests that RGB-based indices may be suitable in certain situations, particularly when imaging conditions are controlled. In previous literature on 3D model reconstruction, additional multispectral cameras and sensor fusion steps were required to incorporate NDVI into 3D models of plants (Rincón et al., 2022). In contrast, in our system, where the illumination conditions were controlled, RGB-based indices, particularly the GRNDI, performed nearly as well as NDVI. Compared to using a turf color meter, this method of evaluation is noninvasive, as it does not require physical contact with the plant to obtain color data, thereby reducing stress on the plant and the risk of altering its morphological traits.

    Plant volume is an important trait for many crops as it is a key indicator of biomass. However, plant volume is difficult to estimate even with height and area information. Conventional methods of measuring plant traits associated with volume, weight, or density often require harvesting and manually counting tillers, use of xylometry, or measurement of dry weight. These methods result in the destruction of the plant and are unable to evaluate plant growth sequentially. In this study, voxelization and filling algorithms were utilized to estimate the volume of zoysiagrass. A previous study has demonstrated the accuracy of voxelization in estimating the wood volume of laser-scanned 3D models of trees (Bienert et al., 2014). The present study showed a strong correlation between the estimated volume of zoysiagrass and its dry weight, indicating the accuracy of this method.

    Representative cultivars in the time-course analysis showed distinct growth patterns over a 1-year period (Figure 4). Zoysia japonica exhibited rapid growth in the summer and entered winter dormancy. Zoysia matrella also grew quickly at first, spreading as fast as Z. japonica, but likewise entered winter dormancy, as indicated by a drop in GRNDI. The representative hybrid cultivar "Emerald" showed characteristics similar to Z. japonica, but with darker green leaf color and milder winter dormancy, traits potentially derived from Z. pacifica. Zoysia pacifica was the most distinctive species, with very slow but consistent growth compared with the other groups. It did not exhibit winter dormancy and, through steady growth, reached the growth levels of the other species by the end of the experiment. These results were consistent with manual measurements and replicated the distinctive characteristics reported for these species (Engelke & Anderson, 2003; Forbes, 1955; Patton et al., 2017). In addition to accurately capturing the diverse growth patterns of zoysiagrass, the digital phenotyping method we have developed offers a nondestructive way to phenotype plants through periodic volume estimation and provides another means of advancing future plant breeding research.

    Turfgrass phenotypic evaluation relies heavily on the turfgrass rating system of the National Turfgrass Evaluation Program. This system is based on visual ratings from a properly trained evaluator assessing turfgrass quality, color, texture, density, and more. However, there are concerns over the subjectivity of the measurement process, which makes comparisons between individual evaluators' scores, and overall quantification, difficult. Our phenotyping system can eliminate this subjectivity by providing objective results such as GRNDI for evaluating genetic color, winter color, seasonal color/color retention, and spring green-up. It can also support evaluation of living ground cover and density using estimated area and volume, respectively.

    In conclusion, we have demonstrated the ability of our high-throughput phenotyping system using 3D models to accurately evaluate the phenotypic traits of zoysiagrass, including species, height, area, volume, and vegetation indices. The use of an artificial neural network with three hidden layers in the post-processing of the 3D models allowed for the removal of nonplant parts while retaining the plant parts of zoysiagrass. This digital phenotyping system has the potential to be a noninvasive tool for plant evaluation and could facilitate plant breeding research by enabling the evaluation of numerous accessions of turfgrass. Future research will focus on expanding the use of this digital phenotyping system to a wider range of grasses and other plant species.
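To illustrate the filtering step summarized above, the sketch below trains a three-hidden-layer multilayer perceptron to separate plant from nonplant points. The per-point RGB features, layer widths, synthetic training data, and the use of scikit-learn's MLPClassifier are all assumptions for illustration, not the study's actual configuration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# synthetic training data: green "plant" points vs. brownish "soil" points
# (hypothetical RGB cluster centers, chosen only for this illustration)
plant = rng.normal([0.2, 0.6, 0.2], 0.05, size=(200, 3))
soil = rng.normal([0.5, 0.4, 0.3], 0.05, size=(200, 3))
X = np.vstack([plant, soil])
y = np.array([1] * 200 + [0] * 200)

# three hidden layers, as in the network described in the text
clf = MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=1000,
                    random_state=0)
clf.fit(X, y)

# filter a small point cloud: keep only points classified as plant
cloud = np.vstack([rng.normal([0.2, 0.6, 0.2], 0.05, size=(10, 3)),
                   rng.normal([0.5, 0.4, 0.3], 0.05, size=(10, 3))])
mask = clf.predict(cloud) == 1
print(mask.sum(), "of", len(cloud), "points retained as plant")
```

Filtering on per-point color in this way removes pot, soil, and background geometry before the height, area, and volume traits are computed, so the downstream measurements reflect the plant alone.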


    Sorawich Pongpiyapaiboon: Data curation; formal analysis; investigation; methodology; resources; software; validation; visualization; writing—original draft; writing—review & editing. Hidenori Tanaka: Conceptualization; project administration; supervision; validation; writing—review & editing. Masatsugu Hashiguchi: Resources; writing—review & editing. Takuyu Hashiguchi: Methodology; resources; writing—review & editing. Atsushi Hayashi: Methodology; software; writing—review & editing. Takanari Tanabata: Methodology; software; writing—review & editing. Sachiko Isobe: Methodology; software; writing—review & editing. Ryo Akashi: Funding acquisition; project administration; supervision; writing—review & editing.


    A part of this study was supported by JST CREST grant number JPMJCR16O1.


      The authors declare no conflicts of interest.


      Codes used for analysis in this study are openly available on GitHub at