pytorch_advanced: a repository containing the implementation code for the book "Learn by Doing! Deep Learning Development with PyTorch".

Uploader: 42150341 | Upload time: 2022-12-10 22:02:12 | File size: 5.29MB | File type: ZIP
Support repository for "Learn by Doing! Deep Learning Development with PyTorch".

1. Tasks and deep learning models covered in this book
- Chapter 1: Image classification and transfer learning (VGG)
- Chapter 2: Object detection (SSD)
- Chapter 3: Semantic segmentation (PSPNet)
- Chapter 4: Pose estimation (OpenPose)
- Chapter 5: Image generation with GANs (DCGAN, Self-Attention GAN)
- Chapter 6: Anomaly detection with GANs (AnoGAN, Efficient GAN)
- Chapter 7: Sentiment analysis with natural language processing (Transformer)
- Chapter 8: Sentiment analysis with natural language processing (BERT)
- Chapter 9: Video classification (3DCNN, ECO)

Details for each chapter are explained separately below.

2. Questions and corrections are managed via Issues
Questions and corrections are tracked in this repository's GitHub Issues. If you have any questions, please click here.

3. Errata
Click here for the list of errata in the book. We apologize for the errors.


Resource details

(94 files, 5.29MB)

pytorch_advanced-master/
  README.md (1.79KB)
  LICENSE (1.17KB)
  etc/
    pytorch_advanced.png (236.26KB)
  1_image_classification/
    .gitignore (84B)
    1-1_load_vgg.ipynb (374.29KB)
    1-1_load_vgg_on_GoogleColab.ipynb (388.98KB)
    1-3_transfer_learning.ipynb (343.14KB)
    1_3_transfer_learning_on_GoogleColab.ipynb (344.92KB)
    1-5_fine_tuning.ipynb (15.24KB)
    make_folders_and_data_downloads.ipynb (3.28KB)
    utils/
      __init__.py (46B)
      dataloader_image_classification.py (3.91KB)
    data/
      goldenretriever-3724972_640.jpg (85.87KB)
  2_objectdetection/
    .gitignore (88B)
    2-2-3_Dataset_DataLoader.ipynb (216.28KB)
    2-4-5_SSD_model_forward.ipynb (68.58KB)
    2-6_loss_function.ipynb (12.63KB)
    2-7_SSD_training.ipynb (103.60KB)
    2-8_SSD_inference.ipynb (716.94KB)
    2-8_SSD_inference_appendix.ipynb (13.22KB)
    make_folders_and_data_downloads.ipynb (4.73KB)
    utils/
      __init__.py (111B)
      data_augumentation.py (12.84KB)
      match.py (5.85KB)
      ssd_model.py (42.02KB)
      ssd_predict_show.py (5.65KB)
    data/
      cowboy-757575_640.jpg (88.79KB)
  3_semantic_segmentation/
    .gitignore (88B)
    3-2_DataLoader.ipynb (138.54KB)
    3-3-6_NetworkModel.ipynb (54.91KB)
    3-7_PSPNet_training.ipynb (94.96KB)
    3-8_PSPNet_inference.ipynb (423.01KB)
    3-8_PSPNet_inference_appendix.ipynb (236.90KB)
    make_folders_and_data_downloads.ipynb (3.90KB)
    utils/
      __init__.py (81B)
      data_augumentation.py (4.95KB)
      dataloader.py (5.14KB)
      pspnet.py (10.41KB)
    data/
      cowboy-757575_640.jpg (88.79KB)
  4_pose_estimation/
    .gitignore (118B)
    4-2_DataLoader.ipynb (1.21MB)
    4-3-4_NetworkModel.ipynb (146.72KB)
    4-5_TensorBoardX.ipynb (15.03KB)
    4-6_OpenPose_training.ipynb (14.45KB)
    4-7_OpenPose_inference.ipynb (1.16MB)
    make_folders_and_data_downloads.ipynb (5.19KB)
    utils/
      __init__.py (114B)
      data_augumentation.py (18.25KB)
      dataloader.py (13.13KB)
      decode_pose.py (23.15KB)
      openpose_net.py (7.47KB)
    data/
      hit-1407826_640.jpg (95.93KB)
  5_gan_generation/
    .gitignore (36B)
    5-1_2_DCGAN.ipynb (163.69KB)
    5-4_SAGAN.ipynb (247.50KB)
    make_folders_and_data_downloads.ipynb (11.28KB)
  6_gan_anomaly_detection/
    .gitignore (36B)
    6-2_AnoGAN.ipynb (280.15KB)
    6-4_EfficientGAN.ipynb (538.00KB)
    make_folders_and_data_downloads.ipynb (18.67KB)
  7_nlp_sentiment_transformer/
    .gitignore (317B)
    7-1_Tokenizer.ipynb (6.84KB)
    7-2_torchtext.ipynb (12.88KB)
    7-4_vectorize.ipynb (63.32KB)
    7-5_IMDb_Dataset_DataLoader.ipynb (54.48KB)
    7-6_Transformer.ipynb (22.30KB)
    7-7_transformer_training_inference.ipynb (70.67KB)
    make_folders_and_data_downloads.ipynb (6.56KB)
    utils/
      __init__.py (53B)
      dataloader.py (4.49KB)
      transformer.py (6.90KB)
    data/
      text_train.tsv (412B)
      text_val.tsv (412B)
      text_test.tsv (412B)
  8_nlp_sentiment_bert/
    .gitignore (74B)
    8-2-3_bert_base.ipynb (217.78KB)
    8-4_bert_IMDb.ipynb (178.38KB)
    make_folders_and_data_downloads.ipynb (7.31KB)
    utils/
      __init__.py (45B)
      bert.py (30.16KB)
      tokenizer.py (8.07KB)
  9_video_classification_eco/
    .gitignore (50B)
    9-2-3_eco.ipynb (35.51KB)
    9-4_1_kinetics_download_for_python2.ipynb (3.65KB)
    9-4_2_convert_mp4_to_jpeg.ipynb (5.33KB)
    9-4_3_ECO_DataLoader.ipynb (31.21KB)
    9-5_ECO_inference.ipynb (44.28KB)
    video_download/
      kinetics_400_label_dicitionary.csv (7.30KB)
      kinetics-400_val_8videos.csv (372B)
      environment.yml (554B)
      download.py (8.41KB)
    utils/
      __init__.py (61B)
      eco.py (16.96KB)
      kinetics400_eco_dataloader.py (9.73KB)

