1. Vehicle and pedestrian detection with YOLOv5 in street and highway scenes. 2. The vehicle/pedestrian dataset is large and was split into two parts; this is part 1, with over 10,000 annotated images. 3. Target classes: person, car. 4. Labels are provided in both VOC (xml) and YOLO (txt) formats. 5. Ready to use for vehicle and pedestrian detection in intelligent-driving scenarios.
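The two label formats offered by such datasets can be converted into each other. A minimal sketch of a VOC-xml-to-YOLO-txt converter, assuming the annotations follow the standard Pascal VOC schema (the class list passed in is up to the user):

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_text, class_names):
    """Convert one Pascal-VOC-style XML annotation to YOLO txt lines.

    YOLO format: <class_id> <x_center> <y_center> <width> <height>,
    all normalized to [0, 1] by the image width/height.
    """
    root = ET.fromstring(xml_text)
    img_w = float(root.findtext("size/width"))
    img_h = float(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        cls = obj.findtext("name")
        if cls not in class_names:
            continue  # skip classes outside person/car etc.
        box = obj.find("bndbox")
        xmin = float(box.findtext("xmin"))
        ymin = float(box.findtext("ymin"))
        xmax = float(box.findtext("xmax"))
        ymax = float(box.findtext("ymax"))
        xc = (xmin + xmax) / 2 / img_w   # normalized box center x
        yc = (ymin + ymax) / 2 / img_h   # normalized box center y
        w = (xmax - xmin) / img_w        # normalized box width
        h = (ymax - ymin) / img_h        # normalized box height
        lines.append(f"{class_names.index(cls)} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    return lines
```

Write the returned lines to a `.txt` file with the same stem as the image to get the YOLO-side annotation.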
1. Pedestrian dataset for intelligent-driving traffic scenes, fully annotated; labels come in two formats, YOLO txt and VOC xml; over 10,000 images, ready to use. 2. Pedestrians on streets and highways. 3. Target class: person.
2022-04-06 03:11:05 945.8MB YOLO intelligent-driving traffic-scene pedestrian data
Vehicle and pedestrian dataset for street and highway scenes, for YOLO, SSD and Faster R-CNN vehicle/pedestrian detection; labels in both VOC and YOLO formats; over 10,000 annotated images. Target classes: person, car.
2022-04-06 03:09:34 954.78MB faster-RCNN vehicle-pedestrian dataset
Deep perception technology for vehicle and pedestrian detection in intelligent/autonomous driving.docx
2022-04-06 01:48:11 1.85MB
These are the D-drive files for the HOG+SVM pedestrian detection algorithm. Extract this archive to drive D and use it together with the code of the HOG+SVM pedestrian detection algorithm. Put the images to be detected into the TestData subfolder.
2022-04-05 20:54:42 18.63MB hog svm pedestrian-detection training-samples
Positive/negative sample set for face recognition: over 2,500 negative samples (preprocessed grayscale images) and over 1,000 positive samples (normalized images). The negative samples are also suitable for vehicle recognition, license-plate recognition, pedestrian detection, etc.
2022-04-03 18:31:18 58B artificial-intelligence machine-learning
To achieve accurate pedestrian positioning and navigation in complex indoor environments where satellite signals cannot be received, an indoor positioning method based on multi-source information fusion is proposed. Pedestrian dead reckoning (PDR) is performed indoors by fusing data from the smartphone's orientation sensor, while WiFi and Bluetooth signals collected in the indoor environment are used for signal-strength matching and for correcting the accumulated PDR error. To address the low accuracy of WiFi fingerprint positioning in complex indoor environments, a WiFi fingerprint positioning algorithm combining random forest (RF) classification with an improved K-nearest-neighbor (KNN) algorithm is proposed, which reduces computational complexity and improves positioning accuracy. Building on the improved WiFi fingerprint positioning algorithm and the multi-source fusion analysis, particle filtering with map constraints is further applied to improve positioning accuracy. Experimental results show that the path estimated by the multi-source fusion method combining particle filtering and map constraints is the closest to the true path.
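The KNN fingerprint-matching step described above can be sketched as a plain weighted-KNN lookup. This is an illustration only: the RF pre-classification, the PDR fusion and the particle filtering from the paper are omitted, and the fingerprint layout (position plus RSSI vector) is an assumption:

```python
import math

def knn_locate(fingerprints, observed, k=3):
    """Estimate a 2-D position by weighted K-nearest-neighbor matching
    of RSSI vectors.

    fingerprints: list of ((x, y), rssi_vector) reference points.
    observed: RSSI vector measured at the unknown location.
    """
    # Euclidean distance in signal space between observation and each reference.
    scored = []
    for pos, rssi in fingerprints:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(rssi, observed)))
        scored.append((dist, pos))
    scored.sort(key=lambda t: t[0])
    nearest = scored[:k]
    # Inverse-distance weighting; the epsilon avoids division by zero
    # when the observation matches a reference point exactly.
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return (x, y)
```

The "improved" KNN of the paper would replace the uniform candidate set with the subset selected by the random-forest classifier before this weighting step.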
Human parsing has been extensively studied recently (Yamaguchi et al. 2012; Xia et al. 2017) due to its wide applications in many important scenarios. Mainstream fashion parsing models (i.e., parsers) focus on parsing high-resolution and clean images. However, directly applying parsers trained on benchmarks of high-quality samples to a particular application scenario in the wild, e.g., a canteen, airport or workplace, often gives unsatisfactory performance due to domain shift. In this paper, we explore a new and challenging cross-domain human parsing problem: taking the benchmark dataset with extensive pixel-wise labeling as the source domain, how can we obtain a satisfactory parser on a new target domain without requiring any additional manual labeling? To this end, we propose a novel and efficient cross-domain human parsing model to bridge the cross-domain differences in terms of visual appearance and environment conditions and to fully exploit commonalities across domains. Our proposed model explicitly learns a feature compensation network, which is specialized for mitigating the cross-domain differences. A discriminative feature adversarial network is introduced to supervise the feature compensation and effectively reduce the discrepancy between the feature distributions of the two domains. Besides, our proposed model also introduces a structured label adversarial network to guide the parsing results of the target domain to follow the high-order relationships of the structured labels shared across domains. The proposed framework is end-to-end trainable, practical and scalable in real applications. Extensive experiments are conducted in which the LIP dataset is the source domain and 4 different datasets, including surveillance videos, movies and runway shows without any annotations, are evaluated as target domains. The results consistently confirm the data efficiency and performance advantages of the proposed method for the challenging cross-domain human parsing problem.
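As a rough illustration of the feature compensation idea only: the sketch below learns a single additive offset that aligns a 1-D target feature distribution with the source. The actual model trains a compensation network against an adversarial discriminator; the mean-matching loss used here is a hypothetical stand-in for that adversarial signal:

```python
def learn_compensation(source_feats, target_feats, lr=0.1, steps=200):
    """Toy 1-D stand-in for feature compensation: learn an additive offset c
    so that compensated target features (f + c) match the source feature
    distribution, here summarized by its mean."""
    c = 0.0
    src_mean = sum(source_feats) / len(source_feats)
    for _ in range(steps):
        tgt_mean = sum(f + c for f in target_feats) / len(target_feats)
        # Gradient of 0.5 * (tgt_mean - src_mean)**2 with respect to c.
        grad = tgt_mean - src_mean
        c -= lr * grad
    return c
```

In the full model the same role is played by a learned network whose output is added to the target features, pushed by the discriminator toward being indistinguishable from source features.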
Abstract—This paper presents a robust joint discriminative appearance model based tracking method using online random forests and mid-level features (superpixels). To achieve superpixel-wise discriminative ability, we propose a joint appearance model that consists of two random-forest-based models, i.e., the Background-Target discriminative Model (BTM) and the Distractor-Target discriminative Model (DTM). More specifically, the BTM effectively learns discriminative information between the target object and the background. In contrast, the DTM is used to suppress distracting superpixels, which significantly improves the tracker's robustness and alleviates the drifting problem. A novel online random forest regression algorithm is proposed to build the two models. The BTM and DTM are linearly combined into a joint model to compute a confidence map. Tracking results are estimated using the confidence map, where the position and scale of the target are estimated in order. Furthermore, we design a model updating strategy to adapt to appearance changes over time by discarding degraded trees of the BTM and DTM and initializing new trees as replacements. We test the proposed tracking method on two large tracking benchmarks, the CVPR2013 tracking benchmark and the VOT2014 tracking challenge. Experimental results show that the tracker runs at real-time speed and achieves favorable tracking performance compared with state-of-the-art methods. The results also suggest that the DTM improves tracking performance significantly and plays an important role in robust tracking.
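The linear combination of BTM and DTM scores into a per-superpixel confidence map can be sketched as below. The mixing weight `alpha` is an assumption; the abstract states the combination is linear but does not give the weights:

```python
def joint_confidence(btm_scores, dtm_scores, alpha=0.5):
    """Linearly combine per-superpixel scores of the two discriminative
    models into one confidence value per superpixel.

    btm_scores: background-vs-target scores, one per superpixel.
    dtm_scores: distractor-vs-target scores, one per superpixel.
    alpha: assumed mixing weight between the two models.
    """
    return [alpha * b + (1.0 - alpha) * d
            for b, d in zip(btm_scores, dtm_scores)]
```

The tracker would then take the location (and scale) maximizing this map as the new target estimate.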
2022-03-26 14:11:37 26.39MB face-recognition pedestrian-ReID
Purpose: implementations of some well-known cellular automata. For academic purposes only. Work in progress...
Floor field model (cellular_automaton.py)
Reference: the floor field model [Burstedde2001], "Simulation of pedestrian dynamics using a two-dimensional cellular automaton", Physica A, 295, 507-525, 2001
Usage: python cellular_automaton.py
Optional arguments:
-h, --help          show this help message and exit
-s, --ks KS         sensitivity parameter of the static floor field (default: 2)
-d, --kd KD         sensitivity parameter of the dynamic floor field (default: 1)
-n, --numPeds N     number of agents (default: 10)
-p, --plotS         plot the static floor field
--plotD             plot the dynamic floor field
--plotAvgD          plot the averaged dynamic floor field
-P, -
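In the floor field model, the `ks`/`kd` parameters above weight the static and dynamic fields in each pedestrian's movement rule: the transition probability to a neighboring cell is proportional to exp(ks*S + kd*D). A minimal sketch of that rule (occupancy, obstacle and wall factors of the full model are omitted):

```python
import math

def step_probabilities(static_field, dynamic_field, ks=2.0, kd=1.0):
    """Transition probabilities over a pedestrian's neighboring cells in the
    floor field model (Burstedde et al., 2001): p_i ∝ exp(ks*S_i + kd*D_i).

    static_field: static floor field values S_i of the candidate cells
                  (e.g. distance-to-exit potential).
    dynamic_field: dynamic floor field values D_i (virtual trace left by
                   other pedestrians).
    """
    weights = [math.exp(ks * s + kd * d)
               for s, d in zip(static_field, dynamic_field)]
    total = sum(weights)
    return [w / total for w in weights]  # normalized to sum to 1
```

Raising `ks` makes agents follow the exit potential more deterministically; raising `kd` strengthens herding along other pedestrians' traces.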
2022-03-23 16:03:53 489KB Python
Pedestrian dataset with partial occlusion
2022-03-22 09:38:24 50MB pedestrian-dataset