Since pig vocalization is an important indicator for monitoring pig condition, pig vocalization detection and recognition using deep learning play a crucial role in the management and welfare of modern pig livestock farming. However, collecting pig sound data for deep learning model training is time-consuming and labor-intensive. Acknowledging this challenge, this study introduces a deep convolutional neural network (DCNN) architecture for pig vocalization and non-vocalization classification using real pig farm datasets. Various audio feature extraction methods, including Mel-frequency cepstral coefficients (MFCC), Mel-spectrogram, Chroma, and Tonnetz, were evaluated individually to compare their performance. This study proposes a novel feature extraction method, called Mixed-MMCT, that improves classification accuracy by integrating the MFCC, Mel-spectrogram, Chroma, and Tonnetz features. These methods were applied to extract relevant features from the pig sound datasets as input to the deep learning network. For the experiments, three datasets were collected from three actual pig farms: Nias, Gimje, and Jeongeup. Each dataset consists of 4000 three-second WAV files (2000 pig vocalization and 2000 pig non-vocalization). Various audio data augmentation techniques, including pitch-shifting, time-shifting, time-stretching, and background-noise addition, were applied to the training set to improve model performance and generalization. The performance of the predictive deep learning model was assessed using k-fold cross-validation (k = 5) on each dataset. In rigorous experiments, Mixed-MMCT achieved superior accuracy on Nias, Gimje, and Jeongeup, with rates of 99.50%, 99.56%, and 99.67%, respectively. Robustness experiments were performed to prove the effectiveness of the model by using two farm datasets as the training set and the remaining farm dataset as the testing set.
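The waveform-level augmentations named above can be sketched with plain NumPy. Pitch-shifting normally relies on a resampling or phase-vocoder library (e.g. librosa), so it is omitted from this minimal sketch; the function names, shift range, and signal-to-noise ratio below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def time_shift(y, max_shift):
    """Circularly shift the waveform by a random number of samples."""
    shift = np.random.randint(-max_shift, max_shift + 1)
    return np.roll(y, shift)

def time_stretch(y, rate):
    """Naive stretch by linear resampling; note this also alters pitch.
    A phase-vocoder stretch (e.g. librosa.effects.time_stretch) preserves pitch."""
    n_out = int(len(y) / rate)
    return np.interp(np.linspace(0, len(y) - 1, n_out), np.arange(len(y)), y)

def add_background_noise(y, snr_db):
    """Mix in white noise at the requested signal-to-noise ratio (in dB)."""
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=len(y))
    return y + noise

# A synthetic 3-second clip at 16 kHz stands in for one of the 3 s WAV files.
sr = 16000
y = np.sin(2 * np.pi * 440 * np.arange(3 * sr) / sr).astype(np.float32)

augmented = add_background_noise(time_shift(y, max_shift=sr // 10), snr_db=20)
stretched = time_stretch(y, rate=1.1)
```

Each transform keeps the augmented clip label-compatible with the original, which is why such augmentations can be applied only to the training split without leaking into evaluation.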
The average performance of Mixed-MMCT in terms of accuracy, precision, recall, and F1-score reached 95.67%, 96.25%, 95.68%, and 95.96%, respectively. All results demonstrate that the proposed Mixed-MMCT feature extraction method outperforms the other methods for pig vocalization and non-vocalization classification in real pig livestock farming.
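The core idea of Mixed-MMCT, fusing MFCC, Mel-spectrogram, Chroma, and Tonnetz into a single input representation, can be sketched as a concatenation along the feature axis. The feature dimensions used below (13 MFCCs, 128 Mel bands, 12 chroma bins, 6 Tonnetz dimensions) are common librosa defaults and the fusion-by-concatenation scheme is an assumption, since the abstract does not specify either; random arrays stand in for real extractor output.

```python
import numpy as np

# Placeholder feature matrices of shape (n_features, n_frames); in practice
# these would come from an extractor such as librosa.feature.mfcc, etc.
n_frames = 94                              # roughly 3 s of audio at typical hop settings
mfcc     = np.random.rand(13, n_frames)    # Mel-frequency cepstral coefficients
mel_spec = np.random.rand(128, n_frames)   # Mel-spectrogram bands
chroma   = np.random.rand(12, n_frames)    # pitch-class energies
tonnetz  = np.random.rand(6, n_frames)     # tonal-centroid features

# Mixed-MMCT sketch: stack the four representations along the feature axis,
# yielding one 2-D "image" per clip that a DCNN can consume directly.
mixed_mmct = np.concatenate([mfcc, mel_spec, chroma, tonnetz], axis=0)
```

The resulting (13 + 128 + 12 + 6) = 159-row matrix lets the convolutional layers learn correlations across all four representations jointly, rather than training on any single one in isolation.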