Day-to-Night Road Scene Image Translation Using Semantic Segmentation
Date
2020
Abstract
We present a semi-automated framework that translates day-time road scene images to the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), our framework does not rely on learning for the translation and thus avoids its random failures. It uses semantic annotation to extract scene elements, perceives the scene structure/depth, and applies a per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without artifacts in the translation.
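The abstract outlines a pipeline in which scene elements extracted from a semantic label map are each given their own day-to-night transformation. The following is a minimal, illustrative Python sketch of such per-element translation; the class IDs, gain/tint values, and the day_to_night helper are hypothetical, and the authors' actual method additionally exploits scene structure/depth, which is not modeled here.

```python
import numpy as np

# Hypothetical class IDs (Cityscapes-like); the paper's label set may differ.
SKY, ROAD, BUILDING, VEGETATION, VEHICLE = 0, 1, 2, 3, 4

# Per-class (brightness gain, blue tint) used to mimic night-time appearance.
# Values are illustrative only.
NIGHT_PARAMS = {
    SKY:        (0.05, 0.10),
    ROAD:       (0.25, 0.05),
    BUILDING:   (0.30, 0.05),
    VEGETATION: (0.15, 0.05),
    VEHICLE:    (0.35, 0.02),
}

def day_to_night(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Apply a per-element darkening guided by a semantic label map.

    image  -- H x W x 3 float array in [0, 1] (day-time photo)
    labels -- H x W int array of semantic class IDs
    """
    out = image.copy()
    for cls, (gain, blue_shift) in NIGHT_PARAMS.items():
        mask = labels == cls
        out[mask] *= gain                                            # darken the element
        out[mask, 2] = np.clip(out[mask, 2] + blue_shift, 0.0, 1.0)  # add a cool tint
    return out
```

Because each semantic class is handled by its own deterministic transform, the result is reproducible for a given annotation, unlike a GAN whose output may vary or fail unpredictably.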
BibTeX
@inproceedings{10.2312:pg.20201231,
booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
editor = {Lee, Sung-hee and Zollmann, Stefanie and Okabe, Makoto and Wuensche, Burkhard},
title = {{Day-to-Night Road Scene Image Translation Using Semantic Segmentation}},
author = {Baek, Seung Youp and Lee, Sungkil},
year = {2020},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-120-5},
DOI = {10.2312/pg.20201231}
}