Automatic Generation of Animated Population
in Virtual Environments
D. Thalmann, N. Magnenat-Thalmann, S. Donikian
Keywords
population, crowd simulation, informed virtual environments,
autonomous agents.
Overview
The need to model virtual populations arises in many applications
of computer animation and simulation. Such applications span several
different domains: representative or autonomous agents in virtual environments,
human factors analysis, training, education, simulation-based design,
and entertainment. Reproducing the dynamic life of virtual
environments in real time is also a great challenge.
Simulating virtual environments populated with virtual but realistic
crowds requires dozens of different face and body geometries. In this
course, we will present methods that allow automatic generation of the desired
population models. In particular, we will explain a method that generates
any population group statistically calculated to satisfy given
properties. We divide the population generation module into a face
part and a body part. Each of them is based on a database, an organization
of information collected and processed from a real
population dataset. We will also explain how to automatically modify
an existing model by controlling the parameters provided. On any synthesized
model, the underlying bone and skin structure is properly adjusted,
so that the model remains completely animatable using the underlying
skeleton.
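As a minimal illustration of this statistically driven generation, the sketch below samples body parameters from population statistics and rescales a template skeleton so the synthesized model stays animatable. All names, distributions, and numbers here are illustrative assumptions, not the actual database schema or deformation operators presented in the course.

```python
import random
from dataclasses import dataclass

@dataclass
class BodyParams:
    height_cm: float
    shoulder_cm: float

# Hypothetical population statistics (mean, standard deviation), standing
# in for a database built from measurements of a real population.
POP_STATS = {"height_cm": (172.0, 8.0), "shoulder_cm": (42.0, 3.0)}

def generate_population(n, stats, rng=random):
    """Sample n individuals whose parameters follow the given statistics."""
    people = []
    for _ in range(n):
        h = rng.gauss(*stats["height_cm"])
        s = rng.gauss(*stats["shoulder_cm"])
        people.append(BodyParams(height_cm=h, shoulder_cm=s))
    return people

def scale_skeleton(template_bone_lengths, params, ref_height=172.0):
    """Adjust underlying bone lengths to the sampled height so the
    model remains animatable with the template skeleton."""
    factor = params.height_cm / ref_height
    return {bone: length * factor for bone, length in template_bone_lengths.items()}
```

A generated group of, say, 1000 such individuals will by construction reproduce the mean and spread of the source statistics, which is the sense in which the population is "statistically calculated to satisfy given properties".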
Real-time simulation requires enabling up to several hundred virtual
humans to navigate inside the mock-up. Navigation is for a real human one of the
most basic behaviours, but behavioural animation requires integrating
several technologies to reproduce it. First, to navigate, virtual people must
be able to perceive their environment, and not only geometrically:
studies in psychology and urbanism have shown that visibility and topology
are also important in the navigation task. We will show how to
automatically build a hierarchical topological map of the environment (indoor
and outdoor) from a 3D database. This structured environment can then
be used for path planning and reactive navigation of several hundred
virtual humans in real time. Another aspect of the navigation process
concerns the mental representation of the environment, which introduces differences
in behaviour between people who regularly navigate a specific area and
others who are discovering it. We will present a spatial memory model for pedestrian
navigation inside virtual cities.
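To make the idea of a topological map concrete, here is a toy sketch: regions of the environment become graph nodes, traversable connections (doors, crossings) become edges, and path planning reduces to a graph search. The graph and the region names are invented for illustration; in the course's pipeline the map is extracted automatically from the 3D database and is hierarchical rather than flat.

```python
from collections import deque

# Toy topological map: nodes are named regions, edges are connections.
TOPO_MAP = {
    "lobby":    ["corridor"],
    "corridor": ["lobby", "office", "stairs"],
    "office":   ["corridor"],
    "stairs":   ["corridor", "street"],
    "street":   ["stairs"],
}

def plan_path(topo, start, goal):
    """Breadth-first search over the topological graph: returns the
    shortest sequence of regions from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

The planned region sequence is only the coarse itinerary; reactive navigation (obstacle and pedestrian avoidance) then steers each virtual human within the current region toward the next one.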
The course will also explore aspects essential to the generation of
virtual crowds. In particular, it will present the aspects concerning
information (intentions, status, and knowledge), behaviour (innate, group,
complex, and guided), and control (programmed, autonomous, and guided).
It will emphasize essential concepts such as sensory input (vision, audition,
touch), versatile motion control, the level of artificial intelligence, and
rendering techniques. The course will also present the new challenges
in the production of real-time crowds for games and for VR systems for training
and simulation. Techniques for rendering very large numbers of Virtual
Humans will be emphasized. The course will be illustrated with many
examples from recent movies and from real-time applications in emergency
situations and cultural heritage (such as adding a virtual audience to Roman
or Greek theatres).
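The information/behaviour/control taxonomy above, together with the distance-based rendering needed for very large crowds, can be sketched in a few lines. The field names, control-mode labels, and the impostor-switch threshold below are illustrative assumptions, not the data structures of an actual crowd engine.

```python
from dataclasses import dataclass, field

@dataclass
class CrowdAgent:
    position: tuple                                  # (x, y) in metres
    control: str = "autonomous"                      # "programmed" | "autonomous" | "guided"
    intentions: list = field(default_factory=list)   # information level: goals
    knowledge: dict = field(default_factory=dict)    # information level: known facts

def select_representation(agent, camera_pos, impostor_dist=30.0):
    """Distance-based level of detail: nearby agents get the full
    deformable geometry, distant ones a pre-rendered impostor."""
    dx = agent.position[0] - camera_pos[0]
    dy = agent.position[1] - camera_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return "full_geometry" if dist < impostor_dist else "impostor"
```

Swapping distant agents to flat, pre-rendered impostors is what lets a renderer sustain hundreds of visible virtual humans in real time: only the few agents near the camera pay the full cost of skeletal deformation and textured geometry.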
Agenda
Introduction (Daniel Thalmann, 10 minutes)
- Objectives, Applications, State-of-the-Art
Creation of population (Nadia Magnenat-Thalmann, 40 minutes)
- Automatic construction of population models
- Animation ready and textured models
- Anthropometric modeling
- Individual deformation operators
Informed Virtual Environments (Stephane Donikian, 45 minutes)
- State of the art
- Path planning and reactive navigation
- Individual and collective pedestrian behaviours
- Animation chain for indoor and outdoor virtual environments
Virtual Humans models for crowds (30 minutes)
- Facial animation and motion capture (Nadia Magnenat-Thalmann)
- Motion models (Daniel Thalmann)
Crowd simulation (Daniel Thalmann, 30 minutes)
- Crowd models
- Artificial life techniques
- Impostor Rendering and Texture Generation
- The paintbrush approach
Case studies (all speakers, 15 minutes)
Conclusions (all speakers, 15 minutes)
Presenters
Daniel Thalmann is Professor and
Director of the Virtual Reality Lab (VRlab) at EPFL, Switzerland. He
is a pioneer in research on Virtual Humans. His current research interests
include real-time Virtual Humans in Virtual Reality, Networked Virtual
Environments, Artificial Life, and Multimedia. Daniel Thalmann has been
a Professor at the University of Montreal. He is co-editor-in-chief of
the Journal of Visualization and Computer Animation, and a member of the
editorial boards of The Visual Computer and three other journals. Daniel
Thalmann was Program Chair of several conferences, including IEEE VR
2000. He has also organized four SIGGRAPH courses on human animation.
Daniel Thalmann was the initiator of the Eurographics working group
on Animation and Simulation, which he co-chaired for more than 10 years.
Daniel Thalmann has published more than 250 papers in graphics, animation,
and Virtual Reality. He is co-editor of 30 books and co-author of several
books, including the recent "Avatars in Networked Virtual
Environments", published by John Wiley and Sons. He received his
PhD in Computer Science in 1977 from the University of Geneva and an
Honorary Doctorate (Honoris Causa) from the University Paul Sabatier in
Toulouse, France, in 2003.
Nadia Magnenat-Thalmann has pioneered
research into virtual humans over the last 20 years. She obtained several
Bachelor's and Master's degrees in various disciplines and
a PhD in Quantum Physics from the University of Geneva. From 1977 to
1989, she was a Professor at the University of Montreal in Canada. She
moved to the University of Geneva in 1989, where she founded MIRALab.
She has received several scientific and artistic awards for her work
in Canada and in Europe. In 1997, she was elected to the Swiss
Academy of Technical Sciences, and more recently she was named
as a Swiss personality who has contributed to the advance of science
in the 150-year history CD-ROM produced by the Swiss Confederation
Parliament (1998, Bern, Switzerland). She has been invited to give hundreds
of lectures on various topics, all related to virtual humans. Author
and co-author of numerous research papers and books, she
has directed and produced several films and real-time mixed reality
shows, among the latest CYBERDANCE (1998), FASHION DREAMS (1999),
and THE UTOPIANS (2001). She is editor-in-chief of The Visual Computer
journal, published by Springer Verlag, and editor of several other research
journals.
Stephane Donikian received a graduate
degree in Computer Science in 1989 and a PhD in Computer Science in
1992. He is currently a Research Scientist at CNRS (French National Center
for Scientific Research) and a member of the Computer Graphics Research
Team at IRISA in Rennes, France. His research interests include reactive
and cognitive behavioural animation, informed virtual environments,
scenario authoring, interactive fiction, and animation and simulation platforms.
Stephane Donikian is co-animator, with Jean-Pierre Jessel (IRIT) and
Catherine Pelachaud (Paris 8), of the French national research action
on virtual humans, and is also co-animator of the French working group
on animation and simulation. He is the author of several papers in journals
and conferences in the fields of computer graphics and autonomous agents.
This year he is a member of the program committees of EG'04, AAMAS'04,
TIDSE'04, CAVW'04, and AFRIGRAPH'04.