OpenPose code. Simply download the relevant library files into the package and it can be invoked and run from Cmd. For the GitHub address, see https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases
2022-03-30 09:50:34 432.63MB Openpose pose recognition human body recognition face recognition
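For example (a sketch only: the binary name and folder layout follow the Windows portable release, and the input paths are assumptions), OpenPose can be run from Cmd in the root of the extracted folder like this:

    bin\OpenPoseDemo.exe --image_dir examples\media\
    bin\OpenPoseDemo.exe --video examples\media\video.avi --face --write_json output\

The first command runs body-pose estimation on a folder of images; the second processes a video, additionally detects face keypoints, and writes the keypoints as JSON files into output\.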
Computer vision is the discipline that studies how to make machines understand the world by "seeing", and it is currently one of the hottest research areas in deep learning. Concretely, by combining cameras with computing units, a machine vision system can replace the human eye to recognize, track and measure targets in certain scenarios. This course takes you to the core of computer vision technology in an accessible way, covering projects in the vision field and how they land in practice, and provides an effective path and a solid foundation for IT practitioners who aspire to enter the deep learning / computer vision field.
A complete Matlab face recognition example from the book Digital Image Processing and Machine Vision.
2022-03-29 22:29:00 14.11MB face recognition Matlab
Part 1: Add the library libjpeg.so for ARM (cross-compiling libjpeg)
Cross-compilation toolchain version: 3.4.1, path: /usr/local/arm/3.4.1/bin. Before building libjpeg, make sure the libtool library is installed on the system, otherwise the build will fail.
1. Download the libjpeg source: ftp://ftp.uu.net/graphics/jpeg/jpegsrc.v6b.tar.gz
2. Unpack jpegsrc.v6b.tar.gz.
3. Enter the /jpeg-6b directory and configure: ./configure --host=arm-linux --enable-shared --prefix=TARGETDIR
4. In the generated Makefile, change: CC=arm-linux-gcc, AR=arm-linux-ar, AR2=arm-linux-ranlib
5. Under TARGETDIR, create the four directories man/man1, include, lib and bin.
6. Run make, then make install.
7. Copy the four header files in /TARGETDIR/include/ (jconfig.h, jerror.h, jmorecfg.h, jpeglib.h) to /usr/local/arm/3.4.1/arm-linux/include, and copy the library files in /TARGETDIR/lib (libjpeg.la, libjpeg.so, libjpeg.so.62, libjpeg.so.62.0.0) to /usr/local/arm/3.4.1/arm-linux/lib.
Note: run the following command to check that the generated libjpeg.so is an ARM build; the output below is the correct one, otherwise check the cross-compiler path and the Makefile and rebuild:
file libjpeg.so
libjpeg.so: ELF 32-bit LSB shared object, ARM, version 1 (ARM), not stripped
Note: after the steps above, run the following command to check that the library files are installed correctly:
arm-linux-gcc -print-file-name=libjpeg.so
If the output is "libjpeg.so", the library is not installed correctly; repeat step 7. If the output is "DIR/libjpeg.so", the installation is correct.
Part 2: Build the small image-capture program
The program can be found at: http://blog.chinaunix.net/u3/93660/showart_1860238.html
1. Cross-compile: arm-linux-gcc test.c -o test -I /usr/local/arm/3.4.1/arm-linux/include -L /usr/local/arm/3.4.1/arm-linux/lib
2. Copy the generated executable and the three libjpeg library files to the board and run it. OK, done!
2022-03-29 18:48:25 137KB face recognition
Develop a face recognition application in Java using the Baidu Face Recognition service on Baidu Cloud - attached resource
2022-03-29 17:41:16 106B
Face recognition: uses the PCA method, i.e., principal component analysis, to distinguish faces from non-faces.
2022-03-27 15:17:32 5KB PCA face recognition
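As a rough sketch of the idea (not the code in this package), eigenface-style PCA in Python/NumPy looks like the following; the random matrix stands in for flattened training face images, and the 10-component cut-off is an arbitrary assumption:

    import numpy as np

    # toy stand-ins for flattened grayscale face images (one image per row)
    rng = np.random.default_rng(0)
    faces = rng.normal(size=(40, 64 * 64))

    # center the data and take the top principal components via SVD
    mean_face = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean_face, full_matrices=False)
    eigenfaces = vt[:10]  # top-10 principal directions ("eigenfaces")

    # project an image onto the face subspace and measure reconstruction error;
    # a small error suggests "face", a large one "non-face"
    def face_score(image):
        coeffs = eigenfaces @ (image - mean_face)
        reconstruction = mean_face + coeffs @ eigenfaces
        return np.linalg.norm(image - reconstruction)

    print(face_score(faces[0]))

A real face/non-face decision would threshold this reconstruction error using held-out labeled data.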
The files contain horizontal stripe noise and point noise; the noise is removed via the two-dimensional Fourier transform, implemented in Matlab.
2022-03-27 12:01:55 2KB Fourier transform K-L transform PCA
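The package itself is Matlab; as a hedged illustration of the same idea in Python/NumPy, the sketch below removes synthetic horizontal-stripe noise with a notch filter in the 2-D Fourier domain (the image, stripe frequency and notch width are all made up):

    import numpy as np

    # toy image: a smooth horizontal gradient plus horizontal stripes
    # (a sinusoid varying only along the rows, 8 cycles over the height)
    h, w = 128, 128
    clean = np.tile(np.linspace(0, 1, w), (h, 1))
    rows = np.arange(h)[:, None]
    image = clean + 0.5 * np.sin(2 * np.pi * 8 * rows / h)

    # 2-D FFT with the zero frequency shifted to the centre
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # stripe noise shows up as isolated peaks on the vertical axis of the
    # spectrum; zero them with small notches while keeping the DC component
    cy, cx = h // 2, w // 2
    spectrum[cy + 6:cy + 11, cx] = 0
    spectrum[cy - 10:cy - 5, cx] = 0

    # inverse transform back to the spatial domain
    denoised = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    print("max residual error:", np.abs(denoised - clean).max())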
An excellent VC++ face recognition demo; like-minded developers are welcome to download it for reference!
2022-03-26 14:55:36 5.84MB face recognition
Human parsing has been extensively studied recently (Yamaguchi et al. 2012; Xia et al. 2017) due to its wide applications in many important scenarios. Mainstream fashion parsing models (i.e., parsers) focus on parsing high-resolution and clean images. However, directly applying parsers trained on benchmarks of high-quality samples to a particular application scenario in the wild, e.g., a canteen, airport or workplace, often gives unsatisfactory performance due to domain shift. In this paper, we explore a new and challenging cross-domain human parsing problem: taking the benchmark dataset with extensive pixel-wise labeling as the source domain, how can we obtain a satisfactory parser on a new target domain without requiring any additional manual labeling? To this end, we propose a novel and efficient cross-domain human parsing model to bridge the cross-domain differences in terms of visual appearance and environment conditions and to fully exploit commonalities across domains. Our proposed model explicitly learns a feature compensation network, which is specialized for mitigating the cross-domain differences. A discriminative feature adversarial network is introduced to supervise the feature compensation and effectively reduce the discrepancy between the feature distributions of the two domains. Besides, our proposed model also introduces a structured label adversarial network to guide the parsing results of the target domain to follow the high-order relationships of the structured labels shared across domains. The proposed framework is end-to-end trainable, practical and scalable in real applications. Extensive experiments are conducted where the LIP dataset is the source domain, and 4 different datasets without any annotations, including surveillance videos, movies and runway shows, are evaluated as target domains. The results consistently confirm the data efficiency and performance advantages of the proposed method for the challenging cross-domain human parsing problem.

Abstract—This paper presents a robust Joint Discriminative appearance model based Tracking method using online random forests and mid-level features (superpixels). To achieve superpixel-wise discriminative ability, we propose a joint appearance model that consists of two random-forest-based models, i.e., the Background-Target discriminative Model (BTM) and the Distractor-Target discriminative Model (DTM). More specifically, the BTM effectively learns discriminative information between the target object and the background. In contrast, the DTM is used to suppress distracting superpixels, which significantly improves the tracker's robustness and alleviates the drifting problem. A novel online random forest regression algorithm is proposed to build the two models. The BTM and DTM are linearly combined into a joint model to compute a confidence map. Tracking results are estimated using the confidence map, where the position and scale of the target are estimated sequentially. Furthermore, we design a model-updating strategy to adapt to appearance changes over time by discarding degraded trees of the BTM and DTM and initializing new trees as replacements. We test the proposed tracking method on two large tracking benchmarks, the CVPR2013 tracking benchmark and the VOT2014 tracking challenge. Experimental results show that the tracker runs at real-time speed and achieves favorable tracking performance compared with state-of-the-art methods. The results also suggest that the DTM improves tracking performance significantly and plays an important role in robust tracking.
2022-03-26 14:11:37 26.39MB face recognition pedestrian ReID
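As a toy illustration of the confidence-map step described in the tracking abstract above (a sketch only, not the authors' implementation: the per-superpixel scores, the mixing weight and the label map are all made-up assumptions), in Python/NumPy:

    import numpy as np

    rng = np.random.default_rng(1)

    # made-up per-superpixel scores standing in for the two random-forest
    # models: BTM separates target from background, DTM suppresses distractors
    n_superpixels = 50
    btm_scores = rng.uniform(0, 1, n_superpixels)
    dtm_scores = rng.uniform(0, 1, n_superpixels)

    # linear combination into a joint per-superpixel confidence
    alpha = 0.6  # assumed mixing weight
    confidence = alpha * btm_scores + (1 - alpha) * dtm_scores

    # made-up superpixel label map assigning each pixel to a superpixel,
    # used to spread the confidences over the image as a confidence map
    labels = rng.integers(0, n_superpixels, size=(60, 80))
    confidence_map = confidence[labels]

    # estimate the target position at the peak of the confidence map
    y, x = np.unravel_index(confidence_map.argmax(), confidence_map.shape)
    print("estimated target position:", (x, y))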
Face recognition based on the OpenCV platform; runs in any version of VS. Quick and convenient.
2022-03-26 02:41:28 3KB face recognition
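The demo itself is VC++; as a hedged sketch of the same OpenCV pipeline in Python (the input/output file names are assumptions), face detection with the bundled Haar cascade looks like this:

    import cv2

    # load OpenCV's bundled frontal-face Haar cascade
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    # read an input image and convert to grayscale for detection
    image = cv2.imread("input.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detect faces and draw a rectangle around each one
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("output.jpg", image)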