Related: I know the authors used a DCGAN implementation (a pre-trained model, it looks like), but is it known what their approach for up-scaling the generated images is? In art generation I've seen GAN output of 128x128, e.g. that is then upscaled with a super-resolution network. Is something similar being done for the "final painting", or is the GAN somehow efficient enough to do large-format output in a decent training time?
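
To make the super-resolution step concrete, here's the kind of pipeline I mean, sketched with OpenCV's dnn_superres module (assumes opencv-contrib-python and a separately downloaded ESPCN model file; all filenames are placeholders, not anything from the article):

    import cv2

    # A low-res GAN sample (placeholder filename)
    img = cv2.imread("gan_sample_128.png")

    # Pre-trained super-resolution network; the .pb file has to be
    # downloaded separately (ESPCN is one of the stock options)
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("ESPCN_x4.pb")
    sr.setModel("espcn", 4)  # 4x upscale: 128x128 -> 512x512

    cv2.imwrite("gan_sample_512.png", sr.upsample(img))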


I don't know exactly what was used to upscale here, but Progressive Growing of GANs [0] was a breakthrough last year that showed 1024x1024 output is viable.

The short explanation: start training at a low resolution, then progressively add higher-resolution layers, smoothly fading each new one in, so the network learns coarse structure first and finer detail one step at a time. This also helps avoid mode collapse. (Rough sketch of the fade-in below the references.)

The figure on page 3 of the paper [1] illustrates the architecture a bit better.

[0]: https://github.com/tkarras/progressive_growing_of_gans

[1]: https://arxiv.org/pdf/1710.10196.pdf
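
To make the fade-in concrete, here's a rough PyTorch sketch of my own (not code from the linked repo, which is TensorFlow): when a new, higher-resolution layer is added, its output is blended with the upsampled output of the existing stack, and the blend weight alpha ramps from 0 to 1 over the transition.

    import torch
    import torch.nn.functional as F

    def faded_output(low_res_rgb, high_res_rgb, alpha):
        # low_res_rgb: image from the already-trained lower-resolution stack
        # high_res_rgb: image from the newly added layer (2x the resolution)
        # alpha ramps 0 -> 1, so the new layer is introduced gradually
        # instead of disrupting what the network has already learned
        low_up = F.interpolate(low_res_rgb, scale_factor=2, mode="nearest")
        return (1 - alpha) * low_up + alpha * high_res_rgb

    # e.g. blending a 4x4 output into a new 8x8 layer halfway through fade-in
    low = torch.randn(1, 3, 4, 4)
    high = torch.randn(1, 3, 8, 8)
    out = faded_output(low, high, alpha=0.5)  # shape (1, 3, 8, 8)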


I think they've upscaled using Lanczos resampling and then done wavelet deconvolution: the result is furry with artefacts and looks just like when I push an astronomical image too far.
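
For reference, the Lanczos step is a one-liner with Pillow (filenames are placeholders; the wavelet deconvolution pass isn't shown):

    from PIL import Image

    img = Image.open("gan_output.png")  # e.g. a low-res GAN sample
    # Lanczos: a sinc-based filter, sharper than bilinear/bicubic but
    # prone to ringing artefacts when pushed this hard
    upscaled = img.resize((1024, 1024), Image.LANCZOS)
    upscaled.save("gan_output_1024.png")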


You can use neural style transfer algorithms to upscale to a decent resolution, if you use a style image similar to the DCGAN-generated output.
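
A bare-bones sketch of that idea, in the classic Gatys optimization style (assumes PyTorch and a recent torchvision; the random tensors stand in for the upsampled GAN output and the style reference, and the 1e4 style weight is an arbitrary choice):

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Frozen VGG-19 as the feature extractor
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    LAYERS = (1, 6, 11, 20, 29)  # relu1_1 .. relu5_1

    def features(x):
        feats = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in LAYERS:
                feats.append(x)
            if i == max(LAYERS):
                break
        return feats

    def gram(f):
        # Gram matrix of the feature maps: the usual "style" statistic
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    # Stand-ins: replace with the upsampled GAN output and a style image
    content = torch.rand(1, 3, 512, 512)
    style = torch.rand(1, 3, 512, 512)

    with torch.no_grad():
        content_feats = features(content)
        style_grams = [gram(f) for f in features(style)]

    # Optimize the pixels of the target image directly
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.01)
    for _ in range(300):
        opt.zero_grad()
        t_feats = features(target)
        loss = F.mse_loss(t_feats[-1], content_feats[-1])  # content term
        loss = loss + 1e4 * sum(F.mse_loss(gram(t), g)     # style term
                                for t, g in zip(t_feats, style_grams))
        loss.backward()
        opt.step()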



