[1]任守纲,朱勇杰,顾兴健,等.基于稀疏实例与位置感知卷积的植物叶片实时分割方法[J].江苏农业学报,2024,(03):478-489.[doi:10.3969/j.issn.1000-4440.2024.03.010]
 REN Shou-gang, ZHU Yong-jie, GU Xing-jian, et al. Real-time segmentation of plant leaves based on sparse instances and position aware convolution[J]. Jiangsu Journal of Agricultural Sciences, 2024, (03): 478-489. [doi:10.3969/j.issn.1000-4440.2024.03.010]

基于稀疏实例与位置感知卷积的植物叶片实时分割方法

江苏农业学报[ISSN:1000-4440/CN:32-1213/S]

Volume:
Issue:
2024, No. 03
Pages:
478-489
Column:
Agricultural Information Engineering
Publication date:
2024-03-30

文章信息/Info

Title:
Real-time segmentation of plant leaves based on sparse instances and position aware convolution
作者:
任守纲1,2, 朱勇杰1, 顾兴健1, 武鹏飞3, 徐焕良1
(1.南京农业大学人工智能学院,江苏南京210095;2.国家信息农业工程技术中心,江苏南京210095;3.新疆兴农网信息中心/新疆维吾尔自治区农业气象台,新疆乌鲁木齐830002)
Author(s):
REN Shou-gang1,2, ZHU Yong-jie1, GU Xing-jian1, WU Peng-fei3, XU Huan-liang1
(1.College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210095, China;2.National Engineering and Technology Center for Information Agriculture, Nanjing 210095, China; 3.Xinjiang Xingnong Network Information Center/Xinjiang Uygur Autonomous Region Agricultural Meteorological Observatory, Urumqi 830002, China)
关键词:
实例分割;计算机视觉;植物表型;叶片分割
Keywords:
instance segmentation; computer vision; plant phenotypes; leaf segmentation
CLC number:
TP391.41
DOI:
10.3969/j.issn.1000-4440.2024.03.010
摘要:
植物叶片分割在高通量植物表型数据获取任务中起着关键作用。目前,多数植物叶片分割方法专注于提高模型分割精度,却忽视模型复杂度和推理速度。针对该问题,本研究提出一种基于稀疏实例激活与有效位置感知卷积的实例分割模型(ePaCC-SparseInst),实现植物叶片实时、精确分割。在ePaCC-SparseInst 中引入1组稀疏实例激活图作为叶片对象表示方式,并使用二部图匹配算法实现预测对象与实例激活图的一一映射,从而避免了繁琐的非极大值抑制(Non-maximum suppression,NMS)运算,提高了模型的推理速度。此外,在实例分支中引入有效位置感知卷积(ePaCC)模块,在增大模型全局感受野的同时提高了模型的推理速度。在Komatsuna数据集上,ePaCC-SparseInst平均分割精度(AP)达到85.33%,每秒传输帧数达到43.52。在相同训练条件下,ePaCC-SparseInst的性能优于SparseInst、Mask R-CNN、CondInst等实例分割算法。此外在CVPPP A5数据集上,ePaCC-SparseInst较上述算法同样取得了更好的分割精度和推理速度。本研究提出的方法采用纯卷积的架构实现了叶片的实时分割,可以为在移动端或边缘设备上获取植物表型数据提供技术支持。
Abstract:
The segmentation of plant leaves plays a crucial role in high-throughput plant phenotyping data acquisition tasks. Currently, most plant leaf segmentation methods focus on improving segmentation accuracy but overlook model complexity and inference speed. To address this issue, this study proposed an instance segmentation model (ePaCC-SparseInst) based on sparse instance activation and efficient position-aware circular convolution to achieve real-time, accurate segmentation of plant leaves. In ePaCC-SparseInst, a set of sparse instance activation maps was introduced to represent leaf objects, and a bipartite graph matching algorithm was employed to establish a one-to-one mapping between predicted objects and instance activation maps, thereby avoiding the cumbersome non-maximum suppression (NMS) operation and improving inference speed. In addition, an efficient position-aware circular convolution (ePaCC) module was introduced into the instance branch, which enlarged the model's global receptive field while further improving inference speed. On the Komatsuna dataset, ePaCC-SparseInst achieved an average precision (AP) of 85.33% and an inference speed of 43.52 frames per second (FPS). Under the same training conditions, it outperformed instance segmentation algorithms such as SparseInst, Mask R-CNN and CondInst. On the CVPPP A5 dataset, ePaCC-SparseInst likewise achieved better segmentation accuracy and inference speed than these algorithms. The proposed method uses a pure convolutional architecture to achieve real-time leaf segmentation and can provide technical support for acquiring plant phenotypic data on mobile or edge devices.
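
To make the NMS-free pipeline in the abstract concrete, below is a minimal sketch of one-to-one assignment between predicted instance activations and ground-truth leaf masks via bipartite (Hungarian) matching, using scipy.optimize.linear_sum_assignment. The cost terms, the alpha weighting, and all function names are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of NMS-free, one-to-one matching between predicted instance
# activations and ground-truth leaf masks (bipartite / Hungarian matching).
# Cost terms and weights are illustrative, not the paper's exact formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment


def dice_similarity(pred_masks, gt_masks, eps=1e-6):
    """Pairwise Dice similarity between N predicted and M ground-truth masks.

    pred_masks: (N, H*W) soft masks in [0, 1]; gt_masks: (M, H*W) binary masks.
    Returns an (N, M) similarity matrix.
    """
    inter = pred_masks @ gt_masks.T                          # (N, M) overlaps
    areas = pred_masks.sum(1)[:, None] + gt_masks.sum(1)[None, :]
    return (2.0 * inter + eps) / (areas + eps)


def match_instances(pred_scores, pred_masks, gt_masks, alpha=0.8):
    """Assign each ground-truth leaf to exactly one predicted instance.

    pred_scores: (N,) foreground confidences of the N instance activations.
    """
    sim = dice_similarity(pred_masks, gt_masks)              # (N, M)
    # Combine mask quality and confidence into a single matching score.
    score = sim ** alpha * pred_scores[:, None] ** (1.0 - alpha)
    pred_idx, gt_idx = linear_sum_assignment(-score)         # maximise score
    return list(zip(pred_idx.tolist(), gt_idx.tolist()))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.random((10, 64 * 64))                        # 10 activations
    gts = (rng.random((4, 64 * 64)) > 0.7).astype(float)     # 4 leaf masks
    print(match_instances(rng.random(10), preds, gts))

Because the matching already yields one prediction per leaf, duplicate suppression (NMS) is unnecessary at inference time, which is where part of the speed gain described in the abstract comes from.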
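
The global receptive field of the ePaCC module can likewise be pictured with a rough PyTorch sketch of a position-aware circular convolution along one spatial axis, in the spirit of ParC-Net (reference [29]): a depthwise convolution whose kernel spans the whole feature map, applied with circular padding after adding a learned position embedding. The class name, fixed feature-map height, and initialization are assumptions for illustration, not the paper's ePaCC implementation.

# Rough sketch of a position-aware circular convolution along the vertical axis:
# a depthwise global-kernel convolution with circular padding and a learned
# position embedding. Illustrative only; not the paper's ePaCC module.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParCVertical(nn.Module):
    def __init__(self, channels: int, height: int):
        super().__init__()
        # One (height x 1) depthwise filter per channel: a global kernel.
        self.weight = nn.Parameter(torch.randn(channels, 1, height, 1) * 0.02)
        # Learned position embedding added before the circular convolution.
        self.pos = nn.Parameter(torch.zeros(1, channels, height, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x + self.pos                         # make the conv position-aware
        # Circular padding along H, so every output row sees the whole column.
        x = torch.cat([x, x[:, :, : h - 1, :]], dim=2)
        return F.conv2d(x, self.weight, groups=c)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(ParCVertical(64, 32)(feat).shape)      # torch.Size([2, 64, 32, 32])

A full ParC-style block would pair this vertical branch with a horizontal counterpart and pointwise convolutions; the sketch keeps only the single-axis idea that gives each output position a full-height receptive field.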

参考文献/References:

[1]ZHOU Y, SRINIVASAN S, MIRNEZAMI S V, et al. Semiautomated feature extraction from RGB images for sorghum panicle architecture GWAS[J]. Plant Physiology,2019,179(1):24-37.
[2]PIERUSCHKA R, SCHURR U. Plant phenotyping:past,present,and future[J]. Plant Phenomics,2019,2019(3):1-6.
[3]SCHARR H, MINERVINI M, FRENCH A P, et al. Leaf segmentation in plant phenotyping:a collation study[J]. Machine Vision and Applications,2016,27(4):585-606.
[4]GUO R, QU L, NIU D, et al. LeafMask:towards greater accuracy on leaf segmentation[C]. Montreal,QC,Canada:IEEE,2021.
[5]GRAND-BROCHIER M, VACAVANT A, CERUTTI G, et al. Tree leaves extraction in natural images:comparative study of preprocessing tools and segmentation methods[J]. IEEE Transactions on Image Processing,2015,24(5):1549-1560.
[6]SCHARR H, PRIDMORE T, TSAFTARIS S A. Computer vision problems in plant phenotyping[C]. Venice,Italy:IEEE,2017.
[7]UCHIYAMA H, SAKURAI S, MISHIMA M, et al. An easy-to-setup 3D phenotyping platform for KOMATSUNA dataset[C]. Venice,Italy:IEEE,2017.
[8]蒋焕煜,施经挥,任烨,等. 机器视觉在幼苗自动移钵作业中的应用[J]. 农业工程学报,2009,25(5):127-131.
[9]孙国祥,汪小旵,何国敏. 基于边缘链码信息的番茄苗重叠叶面分割算法[J]. 农业工程学报,2010,26(12):206-211.
[10]王纪章,顾容榕,孙力,等. 基于Kinect相机的穴盘苗生长过程无损监测方法[J]. 农业工程学报,2021,52(2):227-235.
[11]伍艳莲,赵力,姜海燕. 基于改进均值漂移算法的绿色作物图像分割方法[J]. 农业工程学报,2014,30(24):161-167.
[12]胡静,陈志泊,张荣国, 等. 基于鲁棒随机游走的交互式植物叶片分割[J]. 模式识别与人工智能,2018,31(10):933-940.
[13]KAN J, GU Z, MA C, et al. Leaf segmentation algorithm based on improved U-shaped network under complex background[C]. Chongqing,China:IEEE,2021.
[14]YIN X, LIU X, CHEN J, et al. Multi-leaf alignment from fluorescence plant images[C]. Steamboat Springs,CO,USA:IEEE,2014.
[15]REN M, ZEMEL R S. End-to-end instance segmentation with recurrent attention[C]. Honolulu,HI,USA:IEEE,2017.
[16]HE K, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]. Venice,Italy:IEEE,2017.
[17]乔虹,冯全,赵兵,等. 基于Mask R-CNN的葡萄叶片实例分割[J]. 林业机械与木工设备,2019,47(10):15-22.
[18]袁山,汤浩,郭亚. 基于改进Mask R-CNN模型的植物叶片分割方法[J]. 农业工程学报,2022,38(1):212-220.
[19]邢洁洁,谢定进,杨然兵,等.基于YOLOv5s的农田垃圾轻量化检测方法[J]. 农业工程学报,2022,38(19):153-161.
[20]CHENG T, WANG X, CHEN S, et al. Sparse instance activation for real-time instance segmentation[C]. New Orleans,LA,USA:IEEE,2022.
[21]HU H, GU J, ZHANG Z, et al. Relation networks for object detection[C]. Salt Lake City,UT,USA:IEEE,2018.
[22]REN S, HE K, GIRSHICK R, et al. Faster R-CNN:towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(6):1137-1149.
[23]GUO R, NIU D, QU L, et al. SOTR:segmenting objects with transformers[C]. Montreal,QC,Canada:IEEE,2021.
[24]WANG X, ZHANG R, KONG T, et al. SOLOv2: dynamic and fast instance segmentation[C]. Red Hook,NY,USA:Curran Associates Inc.,2020.
[25]BUSLAEV A, IGLOVIKOV V I, KHVEDCHENYA E, et al. Albumentations:fast and flexible image augmentations[J]. Information,2020,11(2):125.
[26]LIN T-Y, DOLLR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. Honolulu,HI,USA:IEEE,2017.
[27]ZHAO H, SHI J, QI X, et al. Pyramid scene parsing network[C]. Honolulu,HI,USA:IEEE,2017.
[28]TIAN Z, SHEN C, CHEN H, et al. FCOS:fully convolutional one-stage object detection[C]. Seoul,South Korea:IEEE,2019.
[29]ZHANG H, HU W, WANG X. ParC-Net:position aware circular convolution with merits from ConvNets and transformer[C]. Tel Aviv,Israel:Springer Nature Switzerland,2022.
[30]YU W, LUO M, ZHOU P, et al. Metaformer is actually what you need for vision[C]. New Orleans,LA,USA:IEEE,2022.
[31]HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2020,42(8):2011-2023.
[32]LEE Y, PARK J. CenterMask:real-time anchor-free instance segmentation[C]. Seattle,WA,USA:IEEE,2020.
[33]STEWART R, ANDRILUKA M, NG A Y. End-to-end people detection in crowded scenes[C]. Las Vegas,NV,USA:IEEE,2016.
[34]CAI Z, VASCONCELOS N. Cascade R-CNN: delving into high quality object detection[C]. Salt Lake City,UT,USA:IEEE,2018.

相似文献/Similar articles:

[1]李颀,王康,强华,等.基于颜色和纹理特征的异常玉米种穗分类识别方法[J].江苏农业学报,2020,(01):24.[doi:10.3969/j.issn.1000-4440.2020.01.004]
 LI Qi, WANG Kang, QIANG Hua, et al. Classification and recognition method of abnormal corn ears based on color and texture features[J]. Jiangsu Journal of Agricultural Sciences, 2020, (01): 24. [doi:10.3969/j.issn.1000-4440.2020.01.004]

备注/Memo

Received: 2023-01-13
Foundation item: National Natural Science Foundation of China (61806097)
First author: REN Shou-gang (1977-), male, from Rizhao, Shandong; Ph.D., associate professor; his research focuses on software engineering and artificial intelligence. E-mail: rensg@njau.edu.cn
Corresponding author: WU Peng-fei. E-mail: 445305370@qq.com
更新日期/Last Update: 2024-05-20