stylegan2-pytorch: Implementation of Analyzing and Improving the Image Quality of StyleGAN (StyleGAN 2) in PyTorch (source code)

Uploader: 42130786 | Upload time: 2021-07-16 10:51:28 | File size: 52.55MB | File type: ZIP
StyleGAN 2 in PyTorch

Implementation of Analyzing and Improving the Image Quality of StyleGAN (StyleGAN 2) in PyTorch.

Notice: I have tried to stay as close to the official implementation as possible, but I may have missed some details, so please use this implementation with caution.

Requirements

Tested with:

PyTorch 1.3.1
CUDA 10.1 / 10.2

Usage

First, create an lmdb dataset:

python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH

This converts the images to JPEG and resizes them in advance. This implementation does not use progressive growing, but if you want to experiment with other resolutions later, you can pass a comma-separated list as the size argument to create a multi-resolution dataset.

Then you can train the model in a distributed setting:

python -m torch.distributed.launch --nproc

(The command is truncated in the original description.)
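The comma-separated size argument described above produces one preprocessed copy of the dataset per resolution. As a rough illustration of the idea, the sketch below parses such an argument and builds per-resolution record keys; the function names and key format are assumptions for illustration, not code taken from `prepare_data.py`:

```python
# Illustrative sketch only: parse a comma-separated --size argument
# (e.g. "128,256,512") and derive one dataset key per resolution for
# each image index. The key format here is a hypothetical convention;
# the actual prepare_data.py may store records differently.

def parse_sizes(size_arg: str) -> list[int]:
    """Split '128,256,512' into [128, 256, 512]."""
    return [int(s) for s in size_arg.split(",") if s]

def resolution_keys(sizes: list[int], index: int) -> list[bytes]:
    """Build one byte-string key per resolution for a given image index."""
    return [f"{size}-{index:05d}".encode("utf-8") for size in sizes]

sizes = parse_sizes("128,256,512")
print(sizes)                      # [128, 256, 512]
print(resolution_keys(sizes, 0))  # [b'128-00000', b'256-00000', b'512-00000']
```

Storing every resolution up front means later experiments at a smaller resolution can reuse the same lmdb file instead of re-running the preprocessing step.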

Resource details

Archive contents (49 files, 52.55MB):

stylegan2-pytorch-master/
  distributed.py (2.65KB)
  lpips/
    pretrained_networks.py (6.38KB)
    __init__.py (5.59KB)
    weights/
      v0.1/
        squeeze.pth (10.56KB)
        vgg.pth (7.12KB)
        alex.pth (5.87KB)
      v0.0/
        squeeze.pth (9.82KB)
        vgg.pth (6.58KB)
        alex.pth (5.33KB)
    networks_basic.py (7.31KB)
    base_model.py (1.58KB)
    dist_model.py (11.50KB)
  convert_weight.py (8.40KB)
  train.py (15.99KB)
  generate.py (2.22KB)
  LICENSE-LPIPS (1.35KB)
  apply_factor.py (2.59KB)
  op/
    upfirdn2d.cpp (1.31KB)
    conv2d_gradfix.py (6.45KB)
    __init__.py (89B)
    fused_bias_act.cpp (1.30KB)
    upfirdn2d.py (5.98KB)
    upfirdn2d_kernel.cu (11.79KB)
    fused_act.py (3.20KB)
    fused_bias_act_kernel.cu (2.76KB)
  swagan.py (10.92KB)
  factor_index-13_degree-5.0.png (3.15MB)
  non_leaking.py (13.68KB)
  fid.py (3.55KB)
  model.py (18.72KB)
  dataset.py (1.02KB)
  calc_inception.py (3.91KB)
  inception_ffhq.pkl (32.01MB)
  doc/
    stylegan2-church-config-f.png (1.86MB)
    sample.png (6.74MB)
    sample-metfaces.png (5.19MB)
    stylegan2-ffhq-config-f.png (4.90MB)
  LICENSE (1.05KB)
  inception.py (11.35KB)
  closed_form_factorization.py (835B)
  README.md (4.39KB)
  prepare_data.py (2.86KB)
  sample/
    .gitignore (6B)
  projector.py (6.82KB)
  ppl.py (3.76KB)
  .gitignore (1.78KB)
  checkpoint/
    .gitignore (5B)
  LICENSE-FID (11.09KB)
  LICENSE-NVIDIA (4.66KB)

