
dc.contributor.author: Ivanova, Daniela (en_US)
dc.contributor.author: Williamson, John (en_US)
dc.contributor.author: Henderson, Paul (en_US)
dc.contributor.editor: Myszkowski, Karol (en_US)
dc.contributor.editor: Niessner, Matthias (en_US)
dc.date.accessioned: 2023-05-03T06:09:54Z
dc.date.available: 2023-05-03T06:09:54Z
dc.date.issued: 2023
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14749
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14749
dc.description.abstract: Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of these is an important part of preservation and dissemination of photographs of historical and cultural importance. While state-of-the-art deep learning models have shown impressive results in general image inpainting and denoising, film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no publicly available high-quality datasets of real-world analogue film damage for training and evaluation, making quantitative studies impossible. We address the lack of ground-truth data for evaluation by collecting a dataset of 4K damaged analogue film scans paired with manually restored versions produced by a human expert, allowing quantitative evaluation of restoration performance. We have made the dataset available at https://doi.org/10.6084/m9.figshare.21803304. We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even expert users find our synthetic damage indistinguishable from real damage. In addition, we demonstrate that training with our synthetically damaged dataset leads to improved artefact segmentation performance compared to previously proposed synthetic analogue damage overlays. The synthetically damaged dataset can be found at https://doi.org/10.6084/m9.figshare.21815844, and the annotated authentic artefacts, along with the resulting statistical damage model, at https://github.com/daniela997/FilmDamageSimulator. Finally, we use these datasets to train and analyse the performance of eight state-of-the-art image restoration methods on high-resolution scans. We compare both methods which perform the restoration directly on scans with artefacts and methods which require a damage mask to be provided for inpainting of the artefacts. We modify the methods to process their inputs patch-wise so that they operate on the original high-resolution film scans. (en_US)
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Simulating Analogue Film Damage to Analyse and Improve Artefact Restoration on High-resolution Scans (en_US)
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Image and Video Processing
dc.description.volume: 42
dc.description.number: 2
dc.identifier.doi: 10.1111/cgf.14749
dc.identifier.pages: 133-148
dc.identifier.pages: 16 pages
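
Note: the abstract above mentions modifying the evaluated restoration methods to process inputs patch-wise so they can handle full-resolution film scans. The sketch below is a rough illustration of the general overlap-and-average tiling strategy such a modification typically uses; it is not the paper's implementation, and the function name restore_patchwise, the patch and overlap sizes, and the restore_fn callback are all hypothetical.

    import numpy as np

    def restore_patchwise(image, restore_fn, patch=512, overlap=64):
        # image: (H, W, C) array with H, W >= patch; restore_fn maps a
        # (patch, patch, C) tile to a restored tile of the same shape.
        h, w, _ = image.shape
        acc = np.zeros(image.shape, dtype=np.float64)       # summed restored tiles
        weight = np.zeros((h, w, 1), dtype=np.float64)      # per-pixel tile count
        step = patch - overlap                              # stride between tiles
        ys = list(range(0, h - patch, step)) + [h - patch]  # tile origins, clamped
        xs = list(range(0, w - patch, step)) + [w - patch]  # so the edges are covered
        for y0 in ys:
            for x0 in xs:
                tile = image[y0:y0 + patch, x0:x0 + patch]
                acc[y0:y0 + patch, x0:x0 + patch] += restore_fn(tile)
                weight[y0:y0 + patch, x0:x0 + patch] += 1.0
        return acc / weight                                 # average the overlap regions

With a trained model, restore_fn would wrap the network's forward pass (e.g. a function applying the model to one tile); averaging the overlap regions suppresses visible seams at tile boundaries.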


