Show simple item record

dc.contributor.author: Gao, Lin (en_US)
dc.contributor.author: Wu, Lei (en_US)
dc.contributor.author: Meng, Xiangxu (en_US)
dc.contributor.editor: Umetani, Nobuyuki (en_US)
dc.contributor.editor: Wojtan, Chris (en_US)
dc.contributor.editor: Vouga, Etienne (en_US)
dc.date.accessioned: 2022-10-04T06:41:27Z
dc.date.available: 2022-10-04T06:41:27Z
dc.date.issued: 2022
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14687
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14687
dc.description.abstract: Although some progress has been made in layout-to-image generation of complex scenes with multiple objects, object-level generation still suffers from distortion and poor recognizability. We argue that this is caused by the lack of feature encodings for edge information during image generation. To address these limitations, we propose a novel edge-enhanced Generative Adversarial Network for layout-to-image generation (termed EL-GAN). The feature encodings of edge information are learned from the multi-level features output by the generator and iteratively optimized along the generator's pipeline. Two new components are included at each level of the generator to enable multi-scale learning. The first is the edge generation module (EGM), which converts the generator's multi-level features into images of different scales and extracts their edge maps. The second is the edge fusion module (EFM), which integrates the feature encodings refined from the edge maps into the subsequent image generation process by modulating the parameters of the normalization layers. Meanwhile, the discriminator is fed with frequency-sensitive image features, which greatly enhances the generation quality of both the image's high-frequency edge contours and its low-frequency regions. Extensive experiments show that EL-GAN outperforms state-of-the-art methods on the COCO-Stuff and Visual Genome datasets. Our source code is available at https://github.com/Azure616/EL-GAN. (en_US)
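The abstract describes two mechanisms: extracting edge maps from intermediate generator outputs (EGM) and using edge encodings to modulate normalization parameters (EFM). The following is a minimal NumPy sketch of those two ideas, not the paper's implementation: `sobel_edges` stands in for the edge extraction (the actual EGM is a learned module), and `edge_modulated_norm` illustrates SPADE-style modulated normalization with hypothetical scalar weights `w_gamma` and `w_beta` in place of learned convolutions.

```python
import numpy as np

def sobel_edges(img):
    """Edge-map extraction via Sobel filters (illustrative stand-in
    for the paper's edge generation module)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)  # gradient magnitude as the edge map

def edge_modulated_norm(feat, edge, w_gamma, w_beta, eps=1e-5):
    """Modulated normalization in the spirit of the edge fusion module:
    normalize the feature map, then scale and shift it per pixel with
    parameters derived from the edge map (here, simple scalar weights
    rather than the learned layers of EL-GAN)."""
    mu, sigma = feat.mean(), feat.std()
    normed = (feat - mu) / (sigma + eps)
    gamma = w_gamma * edge  # per-pixel scale from the edge encoding
    beta = w_beta * edge    # per-pixel shift from the edge encoding
    return normed * (1.0 + gamma) + beta
```

A constant image yields an all-zero edge map, so the modulation reduces to plain normalization; near object boundaries, the edge map is non-zero and re-emphasizes those pixels.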
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.subject: CCS Concepts: Computing methodologies → Scene understanding; Image processing
dc.title: EL-GAN: Edge-Enhanced Generative Adversarial Network for Layout-to-Image Generation (en_US)
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Image Synthesis
dc.description.volume: 41
dc.description.number: 7
dc.identifier.doi: 10.1111/cgf.14687
dc.identifier.pages: 407-418
dc.identifier.pages: 12 pages


This item appears in the following Collection(s)

  • 41-Issue 7
    Pacific Graphics 2022 - Symposium Proceedings
