BACKGROUND: Leaf hairiness (pubescence) is an important plant phenotype which regulates leaf transpiration, affects sunlight penetration, and confers increased resistance or susceptibility to certain insects. Cotton accounts for 80% of global natural fibre production, and in this crop leaf hairiness also affects fibre yield and value. Currently, this key phenotype is measured visually, which is slow, laborious and operator-biased. Here, we propose a simple, high-throughput and low-cost imaging method combined with a deep-learning model, HairNet, to classify leaf images with high accuracy.
RESULTS: A dataset of approximately 13,600 leaf images from 27 genotypes of cotton was generated. Images were collected from leaves at two different positions in the canopy (leaf 3 & leaf 4), from genotypes grown in two consecutive years and in two growth environments (glasshouse & field). This dataset was used to build a 4-part deep learning model called HairNet. On the whole dataset, HairNet achieved accuracies of 89% per image and 95% per leaf. The impact of leaf selection, year and environment on HairNet accuracy was then investigated using subsets of the whole dataset. It was found that as long as examples of the year and environment tested were present in the training population, HairNet achieved very high accuracy per image (86-96%) and per leaf (90-99%). Leaf selection had no effect on HairNet accuracy, making it a robust model.
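The abstract reports both per-image and per-leaf accuracy, which implies that multiple image-level predictions are aggregated into a single leaf-level call. The exact aggregation rule is not stated here; the sketch below illustrates one plausible scheme, a simple majority vote over per-image class predictions (the function name and voting rule are illustrative assumptions, not the authors' confirmed method).

```python
from collections import Counter

def leaf_prediction(image_preds):
    """Aggregate per-image hairiness class predictions into one
    per-leaf prediction by majority vote.

    Note: this voting rule is an illustrative assumption; the
    paper reports per-image and per-leaf accuracies but this
    sketch is not the authors' exact aggregation method.
    """
    if not image_preds:
        raise ValueError("need at least one per-image prediction")
    # most_common(1) returns the (class, count) pair with the highest count
    return Counter(image_preds).most_common(1)[0][0]

# Hypothetical example: five images of one leaf, integer hairiness classes
print(leaf_prediction([3, 3, 2, 3, 4]))  # majority class -> 3
```

Under such a scheme, per-leaf accuracy can exceed per-image accuracy because occasional misclassified images are outvoted by correctly classified ones from the same leaf, consistent with the 89% vs 95% figures reported above.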
CONCLUSIONS: HairNet classifies images of cotton leaves according to their hairiness with very high accuracy. The simple imaging methodology presented in this study and the high accuracy HairNet achieves on a single image per leaf demonstrate that it is implementable at scale. We propose that HairNet replace the current visual scoring of this trait. The HairNet code and dataset can be used as a baseline to measure this trait in other species or to score other microscopic but important phenotypes.