To address the problem of combining dance video with action-recognition technology, this work investigates a computer-vision-based technique for recognizing actions in dance videos. The method first applies preprocessing to the captured dance video frames (grayscale conversion, background elimination, and filtering for denoising) to obtain the motion features of the people in the video sequence. A support vector machine (SVM) is then trained on a subset of the object feature samples, and the trained model performs action classification on the remaining samples. Simulation tests on the KTH action database and self-recorded dance videos show that the method quickly and effectively recognizes the actions appearing in dance videos, with an average recognition accuracy above 85%, demonstrating the feasibility of applying the technique to dance action recognition.
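A minimal sketch of this preprocessing-plus-SVM pipeline, assuming OpenCV and scikit-learn; the foreground-histogram feature below is only a placeholder, since the abstract does not specify the exact motion descriptor, and all function names here are illustrative:

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def frame_features(frame, bg_subtractor):
    """Grayscale -> background elimination -> median-filter denoising,
    then a crude motion descriptor (foreground-intensity histogram).
    The actual feature used in the article is not specified."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = bg_subtractor.apply(gray)   # background elimination
    fg = cv2.medianBlur(fg, 5)       # filtering / denoising
    hist = cv2.calcHist([fg], [0], None, [16], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-8)

def video_features(path):
    """Average the per-frame descriptors into one clip-level vector."""
    bg = cv2.createBackgroundSubtractorMOG2()
    cap = cv2.VideoCapture(path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feats.append(frame_features(frame, bg))
    cap.release()
    return np.mean(feats, axis=0)

# X: one feature vector per labeled clip, y: action labels.
# Train on a subset of the samples, classify the rest:
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```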
2024-05-21 10:24:07 1.65MB computer vision; action recognition
Matlab RGB code
Convolutional Two-Stream Network Fusion for Video Action Recognition
This repository contains the code for our CVPR 2016 paper:
Christoph Feichtenhofer, Axel Pinz, Andrew Zisserman, "Convolutional Two-Stream Network Fusion for Video Action Recognition", in Proc. CVPR 2016.
If you find this code useful for your research, please cite our paper:
@inproceedings{feichtenhofer2016convolutional,
  title={Convolutional Two-Stream Network Fusion for Video Action Recognition},
  author={Feichtenhofer, Christoph and Pinz, Axel and Zisserman, Andrew},
  booktitle={Proc. CVPR},
  year={2016}
}
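The repository's code is in Matlab; for orientation only, here is a minimal score-level (late) fusion baseline in Python/PyTorch. Note that the paper's actual contribution is fusing the two streams at a convolutional layer rather than at the softmax scores, and every name below is hypothetical rather than taken from the repository:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoStreamLateFusion(nn.Module):
    """Toy two-stream baseline: an RGB (spatial) stream plus a stacked
    optical-flow (temporal) stream whose softmax scores are averaged.
    The CVPR 2016 paper instead fuses feature maps at a conv layer."""
    def __init__(self, num_classes=101, flow_stack=10):
        super().__init__()
        self.spatial = models.resnet18(weights=None)
        self.spatial.fc = nn.Linear(self.spatial.fc.in_features, num_classes)
        self.temporal = models.resnet18(weights=None)
        # Flow input has 2*L channels (x/y displacement for L frames).
        self.temporal.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        self.temporal.fc = nn.Linear(self.temporal.fc.in_features, num_classes)

    def forward(self, rgb, flow):
        s = torch.softmax(self.spatial(rgb), dim=1)
        t = torch.softmax(self.temporal(flow), dim=1)
        return (s + t) / 2  # score-level (late) fusion

model = TwoStreamLateFusion()
scores = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20, 224, 224))
```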
2022-05-24 22:03:16 2.68MB open-source system
slow-fast network: source code for Kaiming He's latest paper (SlowFast), with some inline annotations included.
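For orientation, SlowFast pairs a slow pathway (low frame rate, more channels) with a fast pathway (high frame rate, fewer channels) linked by lateral connections. A minimal sketch of the frame-rate split, assuming PyTorch tensors; the function name is hypothetical and the default alpha=8 follows the paper's description, not the linked repository:

```python
import torch

def split_pathways(video, alpha=8):
    """Split a clip [B, C, T, H, W] into SlowFast-style inputs:
    the fast pathway keeps every frame, the slow pathway keeps
    every alpha-th frame (alpha=8 is the paper's default ratio)."""
    fast = video
    slow = video[:, :, ::alpha]
    return slow, fast

clip = torch.randn(1, 3, 32, 224, 224)  # 32-frame RGB clip
slow, fast = split_pathways(clip)
print(slow.shape, fast.shape)  # (1,3,4,224,224) (1,3,32,224,224)
```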
3-D convolutional neural networks (3-D ConvNets) have very recently been proposed for action recognition in videos, and promising results have been achieved. However, existing 3-D ConvNets impose two "artificial" requirements that may reduce the quality of video analysis: 1) they require a fixed-size (e.g., 112×112) input video; and 2) most 3-D ConvNets require a fixed-length input (i.e., video shots with a fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-Stream 3-D ConvNet Fusion, which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. Taking a sequence of shots as input, each stream is implemented as a spatial temporal pyramid pooling (STPP) ConvNet with a long short-term memory (LSTM) or CNN-E model, and the softmax scores of the two streams are combined by late fusion. We devise the STPP ConvNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM/CNN-E model to learn a global description of the input video from these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method on action recognition in videos, and the experimental results show that it outperforms state-of-the-art methods (both 2-D and 3-D based) on three standard benchmarks (the UCF101, HMDB51, and ACT datasets).
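A minimal sketch of the pyramid-pooling step that yields fixed-length descriptors from variable-size inputs (spatial-only here; the paper's STPP also pools over time), assuming PyTorch, with illustrative pyramid levels:

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Pool a [B, C, H, W] feature map at several grid resolutions and
    concatenate the results, yielding a C * sum(l*l) vector regardless
    of the input's H and W (the idea behind SPP/STPP)."""
    pooled = [F.adaptive_max_pool2d(feat, l).flatten(1) for l in levels]
    return torch.cat(pooled, dim=1)

# Two feature maps of different spatial sizes map to the same dimension.
a = spatial_pyramid_pool(torch.randn(2, 64, 37, 53))
b = spatial_pyramid_pool(torch.randn(2, 64, 112, 112))
assert a.shape == b.shape == (2, 64 * (1 + 4 + 16))
```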
2021-09-25 11:29:08 983KB action recognition; 3D convolution