Machine Learning Paper Collection (PDF format).zip

Uploader: LLL_mg | Upload time: 2021-09-15 10:35:04 | File size: 74.67MB | File type: ZIP
A collection of 25 classic machine learning papers — feel free to download with points if you need them.
The "Efficient sparse coding algorithms" paper comes with accompanying code.
[1] Zheng S, Kwok J T. Follow the moving leader in deep learning[C]//Proceedings of the 34th International Conference on Machine Learning. 2017: 4110-4119.
[2] Kalai A, Vempala S. Efficient algorithms for online decision problems[J]. Journal of Computer and System Sciences, 2005, 71(3): 291-307.
[3] Kingma D P, Ba J. Adam: A method for stochastic optimization[C]//Proceedings of the International Conference on Learning Representations. 2015.
[4] Lee H, Battle A, Raina R, et al. Efficient sparse coding algorithms[C]//Advances in Neural Information Processing Systems. 2007: 801-808.
[5] Fan J, Ding L, Chen Y, et al. Factor group-sparse regularization for efficient low-rank matrix recovery[C]//Advances in Neural Information Processing Systems. 2019.
[6] Lai Z, Chen Y, Wu J, et al. Jointly sparse hashing for image retrieval[J]. IEEE Transactions on Image Processing, 2018, 27(12): 6147-6158.
[7] Zhang Z, Chen Y, Saligrama V. Efficient training of very deep neural networks for supervised hashing[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 1487-1495.
[8] Zheng W S, Gong S, Xiang T. Person re-identification by probabilistic relative distance comparison[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2011.
[9] Liao S, Hu Y, Zhu X, et al. Person re-identification by local maximal occurrence representation and metric learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 2197-2206.
[10] Liu X, Li H, Shao J, et al. Show, tell and discriminate: Image captioning by self-retrieval with partially labeled data[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 338-354.
[11] Yao T, Pan Y, Li Y, et al. Exploring visual relationship for image captioning[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 684-699.
[12] Dong C, Loy C C, He K, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.

Resource Details

(25 sub-files, 74.67MB) Machine Learning Paper Collection (PDF format).zip
paper/
- 1 Follow the moving leader in deep learning.pdf (1023.67KB)
- 2 Efficient algorithms for online decision problems.pdf (259.96KB)
- 3 A method for stochastic optimization.pdf (570.94KB)
- 4 Efficient sparse coding algorithms.pdf (1.06MB)
- Efficient sparse coding algorithms.zip (45.03KB)
- 5 Factor Group-Sparse Regularization for Efficient Low-Rank Matrix Recovery.pdf (241.83KB)
- 7 Efficient training of very deep neural networks for supervised hashing.pdf (1.58MB)
- 8 Person re-identification by probabilistic relative distance comparison.pdf (981.17KB)
- 9 Person re-identification by local maximal occurrence representation and metric learning.pdf (1.43MB)
- 10 Image captioning by self-retrieval with partially labeled data.pdf (2.16MB)
- 11 Exploring visual relationship for image captioning.pdf (2.93MB)
- 12 Image Super-Resolution Using Deep Convolutional Networks.pdf (11.63MB)
- 13 Deconvolutional networks.pdf (1.06MB)
- 14 Rich feature hierarchies for accurate object detection and semantic segmentation.pdf (6.23MB)
- 15 Girshick R. Fast R-CNN[J]. Computer Science.pdf (713.99KB)
- 16 You Only Look Once: Unified, Real-Time Object Detection.pdf (5.05MB)
- 17 Gradient-based learning applied to document recognition.pdf (867.14KB)
- 18 Reducing the dimensionality of data with neural networks.pdf (360.60KB)
- 19 Imagenet classification with deep convolutional neural networks.pdf (1.35MB)
- 20 Visualizing and understanding convolutional networks.pdf (34.56MB)
- 21 Going deeper with convolutions.pdf (1.16MB)
- 22 Group normalization.pdf (926.02KB)
- 23 Generative adversarial nets.pdf (518.05KB)
- 24 Disentangled representation learning gan for pose-invariant face recognition.pdf (930.52KB)
- 25 Variational autoencoder for deep learning of images, labels and captions.pdf (1.47MB)
