[1] TIAN Ze-wei, ZHANG Yun-wei, CHEN Yao. Extraction of quadruped motion parameters based on dual path network[J]. Jiangsu Journal of Agricultural Sciences, 2022, 38(02): 403-413. [doi: 10.3969/j.issn.1000-4440.2022.02.014]

Extraction of quadruped motion parameters based on dual path network

Jiangsu Journal of Agricultural Sciences [ISSN: 1000-4440]

Volume: 38
Issue: 2022, No. 02
Pages: 403-413
Column: Agricultural Information Engineering
Publication date: 2022-04-30

Article Information

Title:
Extraction of quadruped motion parameters based on dual path network
Author(s):
TIAN Ze-wei 1, ZHANG Yun-wei 1,2,3, CHEN Yao 1
(1. College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; 2. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China; 3. Yunnan Key Laboratory of Computer Technology Application, Kunming University of Science and Technology, Kunming 650500, China)
Keywords:
quadruped; dual path network (Dpnet); slope method; skeleton structure; motion parameters
CLC number:
TP391.4
DOI:
10.3969/j.issn.1000-4440.2022.02.014
Document code:
A
Abstract:
To support intensive, large-scale and precise management in animal husbandry, the movement posture of animals needs to be analyzed in a timely manner. Aiming at the low recognition accuracy of animal joint points in different environments, a feature extraction method based on a dual path network (Dpnet) was proposed. Firstly, a hybrid dilated convolution (HDC) layer and a parallel attention module (PAM) were added after the output branches of the high resolution network (HRnet) to identify the coordinates of key points. Secondly, the heat maps generated by the model, together with the precise circle defined in this study, were used to locate the key points more accurately, and the slope method was used to distinguish and match the joint points belonging to each of the four limbs so as to obtain the correct skeleton structure. Finally, four motion parameters (ipsilateral stride, stride frequency, joint angle and gait duty cycle) were extracted from the skeleton structure and analyzed at different movement speeds. The experimental results showed that the precision of the model in identifying and classifying the five classes of key points was 91.1%, the recall was 91.0%, and the average similarity of the five classes of key points was 87.0%, indicating that the method can effectively monitor the health status of individual dogs, cattle, sheep and other animals on farms and ranches.
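The abstract describes the post-processing only at a high level. The minimal Python sketch below (not the authors' implementation) illustrates one plausible reading of the slope-based limb matching and of how the four motion parameters could be computed from per-frame keypoints; the function names, the slope-difference tolerance and the boolean ground-contact flag foot_contact are hypothetical assumptions, and distances are left in pixels rather than calibrated units.

import numpy as np

def slope(p, q):
    # Slope of the segment p -> q in image coordinates, guarded against a zero dx.
    dx = q[0] - p[0]
    return (q[1] - p[1]) / (dx if abs(dx) > 1e-6 else 1e-6)

def match_limb(hip, knee_candidates, ankle_candidates, tol=0.5):
    # Assign knee/ankle candidates to the limb rooted at `hip` by picking the pair
    # whose upper- and lower-segment slopes are most consistent (tolerance is hypothetical).
    best, best_diff = None, float("inf")
    for knee in knee_candidates:
        for ankle in ankle_candidates:
            diff = abs(slope(hip, knee) - slope(knee, ankle))
            if diff < tol and diff < best_diff:
                best, best_diff = (knee, ankle), diff
    return best

def joint_angle(a, b, c):
    # Angle (degrees) at joint b formed by the segments b->a and b->c.
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def gait_parameters(foot_x, foot_contact, fps=30):
    # Ipsilateral stride (pixels), stride frequency (Hz) and gait duty cycle from a
    # 1-D foot trajectory and a per-frame boolean ground-contact flag.
    contact = np.asarray(foot_contact, dtype=bool)
    touchdowns = np.flatnonzero(~contact[:-1] & contact[1:]) + 1  # swing -> stance transitions
    if len(touchdowns) < 2:
        return None  # fewer than two ground contacts: no complete stride observed
    stride_frames = np.diff(touchdowns)
    stride_px = np.abs(np.diff(np.asarray(foot_x, dtype=float)[touchdowns]))
    duty = [contact[s:e].mean() for s, e in zip(touchdowns[:-1], touchdowns[1:])]
    return {"ipsilateral_stride_px": float(stride_px.mean()),
            "stride_frequency_hz": float(fps / stride_frames.mean()),
            "gait_duty_cycle": float(np.mean(duty))}

For a single frame, joint_angle(hip, knee, ankle) would give the knee angle, and over a clip gait_parameters(foot_x, foot_contact, fps) summarizes one foot's trajectory; converting the pixel stride to a physical length would additionally require a camera calibration that the sketch does not assume.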

Memo:
Received: 2021-06-19. Supported by the National Natural Science Foundation of China (51365019). First author: TIAN Ze-wei (1997- ), female, born in Daqing, Heilongjiang Province, master's student, mainly engaged in research on computer vision and intelligent systems. E-mail: 1537341112@qq.com. Corresponding author: ZHANG Yun-wei, E-mail: zhangyunwei72@gmail.com.
Last Update: 2022-05-07