stylegan2-ada-pytorch: StyleGAN2-ADA — Official PyTorch Implementation — Source Code

Uploader: 42151772 | Upload time: 2021-09-18 16:29:22 | File size: 1.12MB | File type: ZIP
StyleGAN2-ADA — Official PyTorch implementation. "Training Generative Adversarial Networks with Limited Data", Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila. Abstract: Training generative adversarial networks (GANs) with too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited-data regimes. The approach requires no changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate on several datasets that good results can be achieved with only a few thousand training images, and that StyleGAN2 results can often be matched with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited-data benchmark, and we improve the record FID from 5.59 to 2.42.
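The feedback loop behind adaptive discriminator augmentation can be sketched roughly as follows. This is a minimal illustration based on the paper's description, not the repository's actual API (the real logic lives in training/training_loop.py and training/augment.py); the function name, the default target of 0.6, and the 500 kimg adjustment speed are assumptions taken from the paper's reported defaults.

```python
# Illustrative sketch of the ADA augmentation-probability update (hypothetical
# helper, not the repo's API). The overfitting heuristic rt = E[sign(D(real))]
# approaches +1 when the discriminator is confident on every real image, i.e.
# when it is overfitting; p is nudged so that rt stays near a target value.

def ada_update(p, rt, n_images, target=0.6, ada_kimg=500):
    """Return the adjusted augmentation probability p in [0, 1].

    p        -- current probability of applying each augmentation
    rt       -- overfitting heuristic averaged over recent minibatches
    n_images -- number of real images shown since the last update
    ada_kimg -- speed: p can sweep from 0 to 1 over this many thousand images
    """
    direction = 1.0 if rt > target else -1.0   # overfitting -> augment more
    p += direction * n_images / (ada_kimg * 1000)
    return min(max(p, 0.0), 1.0)               # clamp to a valid probability
```

In the paper this update runs every few minibatches, so p rises only while the discriminator is measurably overfitting and decays back toward zero otherwise; the augmentations themselves are applied to both real and generated images before they reach the discriminator.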


Resource details

(52 files, 1.12MB)

stylegan2-ada-pytorch-main/
    train.py (23.50KB)
    .github/ISSUE_TEMPLATE/bug_report.md (988B)
    LICENSE.txt (4.32KB)
    dataset_tool.py (17.46KB)
    generate.py (5.21KB)
    metrics/
        perceptual_path_length.py (5.41KB)
        inception_score.py (1.83KB)
        precision_recall.py (3.53KB)
        __init__.py (435B)
        frechet_inception_distance.py (1.99KB)
        metric_main.py (5.58KB)
        kernel_inception_distance.py (2.25KB)
        metric_utils.py (11.53KB)
    Dockerfile (897B)
    dnnlib/
        util.py (16.24KB)
        __init__.py (478B)
    calc_metrics.py (8.14KB)
    README.md (24.95KB)
    projector.py (8.78KB)
    docker_run.sh (1.17KB)
    torch_utils/
        ops/
            upfirdn2d.h (1.79KB)
            upfirdn2d.cpp (4.45KB)
            conv2d_gradfix.py (7.50KB)
            bias_act.cpp (4.27KB)
            bias_act.h (1.25KB)
            __init__.py (438B)
            upfirdn2d.py (15.91KB)
            grid_sample_gradfix.py (3.22KB)
            fma.py (1.99KB)
            conv2d_resample.py (7.41KB)
            bias_act.cu (6.00KB)
            bias_act.py (9.81KB)
            upfirdn2d.cu (20.56KB)
        misc.py (10.74KB)
        custom_ops.py (5.51KB)
        __init__.py (438B)
        persistence.py (9.48KB)
        training_stats.py (10.46KB)
    training/
        augment.py (25.76KB)
        loss.py (7.13KB)
        dataset.py (8.35KB)
        __init__.py (435B)
        training_loop.py (21.09KB)
        networks.py (36.52KB)
    docs/
        license.html (5.47KB)
        stylegan2-ada-training-curves.png (496.07KB)
        stylegan2-ada-teaser-1024x252.png (530.71KB)
        dataset-tool-help.txt (2.26KB)
        train-help.txt (3.59KB)
    .gitignore (21B)
    legacy.py (16.12KB)
    style_mixing.py (4.78KB)

