Show simple item record

dc.contributor.author: Jiang, Hongda
dc.contributor.author: Wang, Xi
dc.contributor.author: Christie, Marc
dc.contributor.author: Liu, Libin
dc.contributor.author: Chen, Baoquan
dc.contributor.editor: Bermano, Amit H.
dc.contributor.editor: Kalogerakis, Evangelos
dc.date.accessioned: 2024-04-16T14:43:16Z
dc.date.available: 2024-04-16T14:43:16Z
dc.date.issued: 2024
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.15055
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf15055
dc.description.abstract: Designing effective camera trajectories in virtual 3D environments is a challenging task, even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables camera motions to be specified through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities for placing and moving cameras with characters, and dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results lack either variety or ease of control. In this paper, we propose a cinematographic camera diffusion model that uses a transformer-based architecture to handle temporality and exploits the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions. We extend this work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, thereby increasing the designers' degree of control. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and feedback gathered from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies -> Procedural animation; Artificial intelligence
dc.subject: Computing methodologies
dc.subject: Procedural animation
dc.subject: Artificial intelligence
dc.title: Cinematographic Camera Diffusion Model
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Camera Paths and Motion Tracking
dc.description.volume: 43
dc.description.number: 2
dc.identifier.doi: 10.1111/cgf.15055
dc.identifier.pages: 14 pages
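
The abstract mentions blending naturally between motions through latent interpolation. The Python sketch below is purely illustrative and not taken from the authors' code: it assumes two diffusion noise latents of shape (frames, dims) corresponding to two camera motions and blends them with spherical linear interpolation before they would be passed to a denoising model (not shown). All names, shapes, and the choice of slerp are assumptions for illustration.

import numpy as np

def slerp(z0, z1, t):
    # Spherical linear interpolation between two latents, often preferred over
    # plain linear interpolation when blending Gaussian diffusion noise.
    a, b = z0.ravel(), z1.ravel()
    cos_omega = np.clip(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1          # nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical usage: blend the initial noise latents of two camera trajectories
# (frames x dims per-frame camera parameters), then denoise each blended latent
# with the trained text-conditioned model to obtain intermediate motions.
rng = np.random.default_rng(0)
frames, dims = 120, 5                            # assumed trajectory length and parameterization
z_a = rng.standard_normal((frames, dims))        # latent that would produce motion A
z_b = rng.standard_normal((frames, dims))        # latent that would produce motion B
blends = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, num=5)]

The intent of interpolating in latent space rather than directly on camera parameters is that the denoiser maps each blended latent back onto the learned motion manifold, so the intermediate trajectories remain plausible camera motions.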