dc.contributor.author | Ivanova, Daniela | en_US |
dc.contributor.author | Williamson, John | en_US |
dc.contributor.author | Henderson, Paul | en_US |
dc.contributor.editor | Myszkowski, Karol | en_US |
dc.contributor.editor | Niessner, Matthias | en_US |
dc.date.accessioned | 2023-05-03T06:09:54Z | |
dc.date.available | 2023-05-03T06:09:54Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14749 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14749 | |
dc.description.abstract | Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of these is an important part of preservation and dissemination of photographs of historical and cultural importance. While state-of-the-art deep learning models have shown impressive results in general image inpainting and denoising, film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no publicly available high-quality datasets of real-world analogue film damage for training and evaluation, making quantitative studies impossible. We address the lack of ground-truth data for evaluation by collecting a dataset of 4K damaged analogue film scans paired with manually restored versions produced by a human expert, allowing quantitative evaluation of restoration performance. We have made the dataset available at https://doi.org/10.6084/m9.figshare.21803304. We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even expert users find our synthetic damage indistinguishable from real damage. In addition, we demonstrate that training with our synthetically damaged dataset leads to improved artefact segmentation performance when compared to previously proposed synthetic analogue damage overlays. The synthetically damaged dataset can be found at https://doi.org/10.6084/m9.figshare.21815844, and the annotated authentic artefacts along with the resulting statistical damage model at https://github.com/daniela997/FilmDamageSimulator. Finally, we use these datasets to train and analyse the performance of eight state-of-the-art image restoration methods on high-resolution scans. We compare both methods which perform the restoration task directly on scans with artefacts, and methods which require a damage mask to be provided for the inpainting of artefacts. We modify the methods to process inputs in a patch-wise fashion so that they can operate on original high-resolution film scans. | en_US |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.title | Simulating Analogue Film Damage to Analyse and Improve Artefact Restoration on High-resolution Scans | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Image and Video Processing | |
dc.description.volume | 42 | |
dc.description.number | 2 | |
dc.identifier.doi | 10.1111/cgf.14749 | |
dc.identifier.pages | 133-148 | |
dc.identifier.pages | 16 pages | |