Model Pruning Papers 2015-2020 Collection | Categorized and Organized

Uploader: 31622015 | Uploaded: 2021-05-16 20:42:28 | File size: 83.13MB | File type: ZIP
A collection of model pruning papers from 2015 to 2020, downloaded and organized from the awesome-pruning list on GitHub. It covers essentially all the major papers and surveys in the field, already categorized and renamed by venue (e.g., "CVPR2020-PaperName"). Very comprehensive; highly recommended reading.

Resource Details

[{"title":"( 57 个子文件 83.13MB ) 模型剪枝2015-2020论文合集|已分类已整理","children":[{"title":"2015-2017","children":[{"title":"(NIPS 2015)Learning both Weights and Connections for Efficient Neural Networks.pdf <span style='color:#111;'> 1.03MB </span>","children":null,"spread":false},{"title":"(ICLR 2017)Pruning Filters for Efficient ConvNets.pdf <span style='color:#111;'> 4.22MB </span>","children":null,"spread":false},{"title":"(BMVC 2017 Oral)Fine-Pruning Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization.pdf <span style='color:#111;'> 2.06MB </span>","children":null,"spread":false},{"title":"(ICCV 2017)ThiNet A Filter Level Pruning Method for Deep Neural Network Compression.pdf <span style='color:#111;'> 870.62KB </span>","children":null,"spread":false},{"title":"(ICLR 2017)Faster CNNs with Direct Sparse Convolutions and Guided Pruning.pdf <span style='color:#111;'> 706.68KB </span>","children":null,"spread":false},{"title":"(NIPS 2017)Runtime Neural Pruning.pdf <span style='color:#111;'> 369.24KB </span>","children":null,"spread":false},{"title":"(NIPS 2016)Dynamic Network Surgery for Efficient DNNs.pdf <span style='color:#111;'> 1.47MB </span>","children":null,"spread":false},{"title":"(ICLR 2017)Pruning Convolutional Neural Networks for Resource Efficient Inference.pdf <span style='color:#111;'> 1.96MB </span>","children":null,"spread":false},{"title":"(CVPR 2017)Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning.pdf <span style='color:#111;'> 816.24KB </span>","children":null,"spread":false},{"title":"(ICLR 2016 Best)Deep Compression Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.pdf <span style='color:#111;'> 1.01MB </span>","children":null,"spread":false},{"title":"(NIPS 2016)ESE Efficient Speech Recognition Engine with Sparse LSTM on FPGA.pdf <span style='color:#111;'> 5.34MB </span>","children":null,"spread":false},{"title":"(NIPS 2017)Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon.pdf <span style='color:#111;'> 576.51KB </span>","children":null,"spread":false},{"title":"(NIPS 2017)Net-Trim Convex Pruning of Deep Neural Networks with Performance Guarantee.pdf <span style='color:#111;'> 1.24MB </span>","children":null,"spread":false},{"title":"(ICCV 2017)Channel Pruning for Accelerating Very Deep Neural Networks.pdf <span style='color:#111;'> 963.17KB </span>","children":null,"spread":false},{"title":"(ICCV 2017)Learning Efficient Convolutional Networks Through Network Slimming.pdf <span style='color:#111;'> 919.67KB </span>","children":null,"spread":false}],"spread":false},{"title":"2019","children":[{"title":"(ICML 2019)EigenDamage Structured Pruning in the Kronecker-Factored Eigenbasis.pdf <span style='color:#111;'> 7.43MB </span>","children":null,"spread":false},{"title":"(ICLR 2019)SNIP SINGLE -SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY.pdf <span style='color:#111;'> 785.24KB </span>","children":null,"spread":false},{"title":"(ICML 2019)Approximated Oracle Filter Pruning for Destructive CNN Width Optimization.pdf <span style='color:#111;'> 3.61MB </span>","children":null,"spread":false},{"title":"(2019)Structured Pruning of Large Language Models.pdf <span style='color:#111;'> 671.60KB </span>","children":null,"spread":false},{"title":"(CVPR 2019)On Implicit Filter Level Sparsity in Convolutional Neural Networks.pdf <span style='color:#111;'> 8.07MB </span>","children":null,"spread":false},{"title":"(2019)Adversarial Neural Pruning.pdf <span 
style='color:#111;'> 6.39MB </span>","children":null,"spread":false},{"title":"(ICLR 2019)Rethinking the value of network pruning.pdf <span style='color:#111;'> 1.73MB </span>","children":null,"spread":false},{"title":"(CVPR 2019)OICSR Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks.pdf <span style='color:#111;'> 6.41MB </span>","children":null,"spread":false},{"title":"(CVPR 2019)Partial Order Pruning for Best Speed Accuracy Trade-off in Neural Architecture Search.pdf <span style='color:#111;'> 837.03KB </span>","children":null,"spread":false},{"title":"(CVPR 2019)Variational Convolutional Neural Network Pruning.pdf <span style='color:#111;'> 1.04MB </span>","children":null,"spread":false},{"title":"(NeurIPS 2019)Network Pruning via transformable architecture search.pdf <span style='color:#111;'> 616.60KB </span>","children":null,"spread":false},{"title":"(ICLR 2019)Dynamic Channel Pruning Feature Boosting and Suppression.pdf <span style='color:#111;'> 4.50MB </span>","children":null,"spread":false},{"title":"(CVPR 2019)Towards Optimal Structured CNN Pruning via Generative Adversarial Learning.pdf <span style='color:#111;'> 1.05MB </span>","children":null,"spread":false},{"title":"(ICLR 2019 Best)The lottery hypothesis finding sparse,trainable neural networks.pdf <span style='color:#111;'> 3.82MB </span>","children":null,"spread":false},{"title":"(ICLR 2019)Dynamic Sparse Graph for Efficient Deep Learning.pdf <span style='color:#111;'> 2.00MB </span>","children":null,"spread":false},{"title":"(ICML 2019)Collaborative Channel Pruning for Deep Networks.pdf <span style='color:#111;'> 692.21KB </span>","children":null,"spread":false},{"title":"(CVPR 2019)Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure.pdf <span style='color:#111;'> 1.08MB </span>","children":null,"spread":false},{"title":"(CVPR 2019)Structured Pruning of Neural Networks with Budget-Aware Regularization.pdf <span style='color:#111;'> 3.18MB </span>","children":null,"spread":false},{"title":"(ICONIP 2019)self-adaptive network pruning.pdf <span style='color:#111;'> 599.23KB </span>","children":null,"spread":false},{"title":"(CVPR 2019 Oral)Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration.pdf <span style='color:#111;'> 1.28MB </span>","children":null,"spread":false},{"title":"(CVPR 2019)Importance Estimation for Neural Network Pruning.pdf <span style='color:#111;'> 471.31KB </span>","children":null,"spread":false},{"title":"(2019)Pruning from Scratch.pdf <span style='color:#111;'> 2.66MB </span>","children":null,"spread":false}],"spread":false},{"title":"2018","children":[{"title":"(CVPR 2018)CLIP-Q Deep Network Compression Learning by In-Parallel Pruning-Quantization.pdf <span style='color:#111;'> 671.70KB </span>","children":null,"spread":false},{"title":"(IJCAI 2018)Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks.pdf <span style='color:#111;'> 477.71KB </span>","children":null,"spread":false},{"title":"(ECCV 2018)A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers.pdf <span style='color:#111;'> 573.04KB </span>","children":null,"spread":false},{"title":"(ECCV 2018)Data-Driven Sparse Structure Selection for Deep Neural Networks.pdf <span style='color:#111;'> 793.20KB </span>","children":null,"spread":false},{"title":"(ECCV 2018)Amc Automl for model compression and acceleration on mobile devices.pdf <span style='color:#111;'> 792.64KB 
</span>","children":null,"spread":false},{"title":"(ECCV 2018)Constraint-Aware Deep Neural Network Compression.pdf <span style='color:#111;'> 780.31KB </span>","children":null,"spread":false},{"title":"(NIPS 2018)Discrimination-aware Channel Pruning for Deep Neural Networks.pdf <span style='color:#111;'> 1.92MB </span>","children":null,"spread":false},{"title":"(CVPR 2018)PackNet Adding Multiple Tasks to a Single Network by Iterative Pruning.pdf <span style='color:#111;'> 349.17KB </span>","children":null,"spread":false},{"title":"(CVPR 2018)“Learning-Compression” Algorithms for Neural Net Pruning.pdf <span style='color:#111;'> 462.16KB </span>","children":null,"spread":false},{"title":"(ECCV 2018)Coreset-Based Neural Network Compression.pdf <span style='color:#111;'> 4.02MB </span>","children":null,"spread":false},{"title":"(NIPS 2018)Frequency-Domain Dynamic Pruning for Convolutional Neural Networks.pdf <span style='color:#111;'> 987.67KB </span>","children":null,"spread":false},{"title":"(ICLR 2018)Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers.pdf <span style='color:#111;'> 327.41KB </span>","children":null,"spread":false},{"title":"(ICLR 2018)To prune, or not to prune exploring the efficacy of pruning for model compression.pdf <span style='color:#111;'> 361.86KB </span>","children":null,"spread":false},{"title":"(CVPR 2018)NISP Pruning Networks using Neuron Importance Score Propagation.pdf <span style='color:#111;'> 1.20MB </span>","children":null,"spread":false}],"spread":false},{"title":"Survey","children":[{"title":"深度网络模型压缩综述_雷杰.caj <span style='color:#111;'> 882.12KB </span>","children":null,"spread":false},{"title":"(2018 arXiv)Recent Advances in Efficient Computation of Deep Convolutional Neural Networks.pdf <span style='color:#111;'> 1.48MB </span>","children":null,"spread":false},{"title":"深度神经网络模型压缩综述_李江昀.caj <span style='color:#111;'> 324.71KB </span>","children":null,"spread":false},{"title":"(2017)Model compression as constrained optimization, with application to neural nets. general framework.pdf <span style='color:#111;'> 383.83KB </span>","children":null,"spread":false},{"title":"(2017)Model compression as constrained optimization, with application to neural nets. quantization.pdf <span style='color:#111;'> 1.37MB </span>","children":null,"spread":false},{"title":"(2017 arXiv)A Survey of Model Compression and Acceleration for Deep Neural Networks.pdf <span style='color:#111;'> 753.46KB </span>","children":null,"spread":false}],"spread":true}],"spread":true}]

Comments

  • 鹿鹿最可爱 :
    A friend reported that the 2020 papers are missing. Somehow the 2020 batch didn't make it into the upload, so the archive currently only covers 2015-2019; you can download those first. I'll upload the 2020 papers separately once they have all been published.
    2020-10-27
