dc.contributor.author | Nalbach, Oliver | en_US |
dc.contributor.author | Arabadzhiyska, Elena | en_US |
dc.contributor.author | Mehta, Dushyant | en_US |
dc.contributor.author | Seidel, Hans-Peter | en_US |
dc.contributor.author | Ritschel, Tobias | en_US |
dc.contributor.editor | Zwicker, Matthias and Sander, Pedro | en_US |
dc.date.accessioned | 2017-06-19T06:50:49Z | |
dc.date.available | 2017-06-19T06:50:49Z | |
dc.date.issued | 2017 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | http://dx.doi.org/10.1111/cgf.13225 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13225 | |
dc.description.abstract | In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images. | en_US |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | Computing methodologies → Neural networks | |
dc.subject | Rendering | |
dc.subject | Rasterization | |
dc.title | Deep Shading: Convolutional Neural Networks for Screen Space Shading | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Lighting and Shading | |
dc.description.volume | 36 | |
dc.description.number | 4 | |
dc.identifier.doi | 10.1111/cgf.13225 | |
dc.identifier.pages | 065-078 | |