Title: User-Controlled Texture Synthesis and Interpolation via Deep Variational Autoencoder
Recent progress on deep generative adversarial networks (GANs) has shown encouraging results on texture synthesis. Feedforward-based methods [1,2] notably decrease computational costs compared to optimization-based methods. Subsequent work [3] further enables the synthesis of multiple textures with a single network model. However, two open problems remain for neural network texture synthesis: user control and interpolation among multiple textures. In this work, we investigate these two problems. We propose a deep variational autoencoder (VAE) network that balances two goals: reconstruction of a given input texture and interpolation between textures. We propose a special representation of the latent code and a special training scheme that together yield high quality on both goals. We validate the effectiveness of the proposed model in terms of interpolation quality and reconstruction quality, and we demonstrate user control of the resulting texture synthesis.
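The interpolation goal above can be illustrated with a minimal sketch: encode two textures into latent codes and blend them linearly in latent space before decoding. The toy linear `encode`/`decode` functions and the matrices `W_enc`/`W_dec` below are hypothetical stand-ins for the paper's deep convolutional VAE (the reparameterization step and KL term are omitted); only the interpolation logic carries over.

```python
import numpy as np

# Hypothetical toy encoder/decoder standing in for the deep VAE;
# the real model uses deep convolutional networks.
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((8, 64))   # flattened 8x8 texture -> 8-dim latent code
W_dec = rng.standard_normal((64, 8))   # latent code -> flattened texture

def encode(texture):
    """Return the latent mean for a flattened texture (sampling omitted)."""
    return W_enc @ texture

def decode(z):
    """Map a latent code back to a flattened texture."""
    return W_dec @ z

def interpolate_textures(tex_a, tex_b, alpha):
    """Blend two textures by linear interpolation in latent space.

    alpha = 0 reconstructs tex_a; alpha = 1 reconstructs tex_b;
    intermediate values give in-between textures.
    """
    z = (1.0 - alpha) * encode(tex_a) + alpha * encode(tex_b)
    return decode(z)

tex_a = rng.standard_normal(64)
tex_b = rng.standard_normal(64)
halfway = interpolate_textures(tex_a, tex_b, 0.5)
```

Linear blending of latent codes is the standard way to interpolate with a VAE; the paper's contribution is a latent representation and training scheme that make such blends remain plausible textures.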
[1] Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution. European Conference on Computer Vision. Springer, Cham, 2016: 694-711.
[2] Li C, Wand M. Precomputed real-time texture synthesis with Markovian generative adversarial networks. European Conference on Computer Vision. Springer, Cham, 2016: 702-716.
[3] Li Y, Fang C, Yang J, et al. Universal style transfer via feature transforms. Advances in Neural Information Processing Systems. 2017: 386-396.
Connelly Barnes (Advisor), Yanjun Qi (Chair), Vicente Ordonez-Roman