Programming Graphics Hardware
Cyril Zeller, Randy Fernando, Matthias Wloka, Mark Harris
Keywords
Real-time graphics, graphics pipeline, GPU, 3D API, Direct3D,
OpenGL, high-level shading language, HLSL, GLSL, Cg, graphics pipeline
bottleneck, shader.
Overview
The presentation gives the big picture for the entire tutorial. It
introduces the notions and frameworks that will be revisited in more
detail during the following presentations:
- Overview of the tutorial.
- Real-time graphics concept.
- Graphics pipeline concept.
- How the graphics pipeline maps to the PC architecture (CPU <->
GPU).
- Controlling the GPU from the CPU: The driver and 3D API (OpenGL,
Direct3D).
- Evolution of the PC graphics pipeline and 3D APIs over time, from the
fixed-function pipeline to today's programmable pipeline, with an introduction
to z-buffering, texturing, transform and lighting (T&L), multi-texturing,
anti-aliasing, and vertex and pixel shaders.
- Review of real-time graphics applications.
Controlling the GPU from the CPU: The 3D API (Cyril Zeller)
Creating a real-time graphics application starts with writing a CPU
program, based on a 3D API, that feeds the GPU with data and commands.
The presentation first outlines what such a program typically looks like
and then goes into the implementation specifics of Direct3D and OpenGL
(a minimal rendering-loop sketch follows the topic list below).
- Rendering loop skeleton
- Transferring data from the CPU to the GPU and vice-versa:
- Geometry data: rendering primitives, index buffers, Direct3D's
vertex buffers and OpenGL's vertex buffer objects
- Texture data: texture types and OpenGL's pixel buffer objects
- Resource locking
- Off-screen rendering: Direct3D's render targets and OpenGL's
superbuffers
- Rendering commands (render states, texture states, shaders, draw
call)
- Brook: an interesting alternative to Direct3D and OpenGL.
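To make this structure concrete, here is a minimal sketch of such a
rendering loop using OpenGL vertex buffer objects. The window and context
helpers (createWindowAndGLContext, windowIsOpen, swapBuffers) are
hypothetical placeholders, and an OpenGL 1.5 header or extension loader is
assumed; Direct3D's vertex buffers follow the same pattern.

    // Minimal CPU-side rendering loop (sketch): geometry is uploaded once
    // into GPU-resident buffers, then drawn every frame with a single call.
    #include <GL/gl.h>

    extern void createWindowAndGLContext();  // platform-specific, assumed
    extern bool windowIsOpen();              // platform-specific, assumed
    extern void swapBuffers();               // platform-specific, assumed

    void renderLoop(const float* vertices, int vertexBytes,
                    const unsigned short* indices, int indexBytes,
                    int indexCount)
    {
        createWindowAndGLContext();

        // Transfer geometry from CPU memory to GPU memory once
        // (vertex buffer object and index buffer).
        GLuint vbo, ibo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertices, GL_STATIC_DRAW);
        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indices,
                     GL_STATIC_DRAW);

        while (windowIsOpen()) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // Set render states and bind textures and shaders here.

            // Draw call: the GPU fetches vertices and indices directly
            // from the buffers bound above (at offset 0 in each buffer).
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);  // 3 floats per position
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
            glDisableClientState(GL_VERTEX_ARRAY);

            swapBuffers();  // present the finished frame
        }
    }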
Shader units are where most of the computation takes place to produce
the final image. In the latest generations of GPUs, they have become
full-fledged processing units that are best programmed with a high-level
language similar to C, just as on the CPU. The presentation recounts the
evolution that led to the development of Cg, HLSL, and GLSL, and then
dives into the details of these languages. Since they are very similar to
each other in most respects, the presentation describes only one of them,
pointing out language-specific differences where they apply (a sketch of
on-line shader compilation follows the topic list below).
- Evolution of GPU programming languages from assembly to high-level
languages.
- Compilation: in the application, in the driver, off-line, on-line,
compilation profiles.
- Syntax: data types, functions, operators, semantics.
- Example of a vertex shader and a pixel shader.
- HLSL FX framework: techniques, passes, render states.
- Example of an FX file from within FX Composer.
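As an illustration of on-line compilation and of a trivial vertex shader,
here is a sketch of how an application might compile and link a GLSL
shader at run time through the OpenGL 2.0 shading-language entry points (a
suitable header or extension loader is assumed); a Cg or HLSL shader would
be handled analogously through the Cg runtime or D3DX.

    // On-line compilation (sketch): the GLSL source is handed to the
    // driver as a string and compiled into GPU code at run time.
    #include <GL/gl.h>

    static const char* vertexSrc =
        "void main() {\n"
        "    // Transform the vertex to clip space; pass the color through.\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "    gl_FrontColor = gl_Color;\n"
        "}\n";

    GLuint buildVertexProgram()
    {
        GLuint shader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(shader, 1, &vertexSrc, 0);  // source string to driver
        glCompileShader(shader);                   // compiled by the driver
        GLuint program = glCreateProgram();
        glAttachShader(program, shader);
        glLinkProgram(program);
        return program;  // activate before drawing with glUseProgram(program)
    }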
To achieve high-performance rendering of high-quality images, it is
essential to know how to optimize the hardware graphics pipeline.
The presentation starts by introducing the notion of a bottleneck and
then proceeds through every stage of the pipeline, analyzing potential
bottlenecks, how to detect them, and how to remove them (a sketch of one
detection test follows the topic list below).
- Bottleneck concept.
- Overview of potential bottlenecks in a hardware graphics pipeline.
- For every pipeline stage (CPU, bus transfer, vertex shader, pixel
shader, texture bandwidth, framebuffer bandwidth): possible causes
of a bottleneck, how to detect it, and possible solutions for
removing it.
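As an example of a detection method, the sketch below times the same frame
at full and at reduced viewport size: if shrinking the viewport makes the
frame markedly faster, the bottleneck lies in per-pixel work (pixel
shading, texture or framebuffer bandwidth) rather than in the CPU or the
vertex pipeline. The renderFrame callback and the 0.6 threshold are
assumptions for illustration.

    // Bottleneck probe (sketch): scale only the number of rasterized
    // pixels and compare frame times.
    #include <GL/gl.h>
    #include <chrono>

    double frameTimeMs(void (*renderFrame)(), int width, int height)
    {
        glViewport(0, 0, width, height);  // changes per-pixel work only
        glFinish();                       // drain any pending GPU work
        auto start = std::chrono::high_resolution_clock::now();
        renderFrame();                    // hypothetical: draws one frame
        glFinish();                       // wait until the GPU is done
        auto end = std::chrono::high_resolution_clock::now();
        return std::chrono::duration<double, std::milli>(end - start).count();
    }

    bool looksPixelBound(void (*renderFrame)(), int width, int height)
    {
        double full    = frameTimeMs(renderFrame, width, height);
        double quarter = frameTimeMs(renderFrame, width / 2, height / 2);
        return quarter < 0.6 * full;      // heuristic threshold, assumed
    }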
This talk showcases the effects that current graphics processors are
capable of rendering in real time. For example, NVIDIA's latest demos
include Nalu, the mermaid, featuring life-like hair and iridescent scales
that smoothly transition into skin. The presentation describes how to
achieve these and other effects in detail. Along the way, we highlight
the latest features available on current graphics hardware, and how
to best take advantage of them.
The programmability and parallel processing power of the latest generation
of GPUs open the door to new applications in non-graphics domains.
The presentation goes through some of these, giving a detailed analysis
of how they map to the graphics pipeline and leverage its computational
horsepower.
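To give a flavor of this mapping: input arrays are typically stored in
textures, the output array is a render target, and the computation is
expressed as a pixel shader executed once per output element by drawing a
screen-aligned quad. The sketch below shows only the GLSL fragment shader
for an element-wise addition of two arrays; texture and render-target
setup follow the API patterns described earlier.

    // GPGPU mapping (sketch): one fragment = one output array element.
    static const char* addArraysSrc =
        "uniform sampler2D a;   // first input array, stored as a texture\n"
        "uniform sampler2D b;   // second input array, stored as a texture\n"
        "void main() {\n"
        "    vec4 x = texture2D(a, gl_TexCoord[0].xy);\n"
        "    vec4 y = texture2D(b, gl_TexCoord[0].xy);\n"
        "    gl_FragColor = x + y;  // written to the output render target\n"
        "}\n";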
Intended audience:
This tutorial assumes basic knowledge of programming and familiarity
with the principles of 3D computer graphics, but not necessarily with
PC graphics hardware.
Presenters' background:
Cyril Zeller works in the developer
technology group at NVIDIA, where he is involved in demo and tool
development, as well as in developer education events. Before joining
NVIDIA, Cyril developed games at Electronic Arts Inc. He received
a Ph.D. in Computer Vision from the Ecole Polytechnique, France.
Randima (Randy) Fernando has loved
computer graphics from the age of eight. Working in NVIDIA’s
Developer Technology group, he helps to teach developers how to take
advantage of the latest GPU technology. Randy has a BS in computer science
and an MS in computer graphics, both from Cornell University. He has
been published in SIGGRAPH and has worked on two books: GPU Gems:
Programming Techniques, Tips, and Tricks for Real-Time Graphics
(as the editor) and The Cg Tutorial: The Definitive Guide to Programmable
Real-Time Graphics (as a co-author with Mark Kilgard).
Matthias Wloka works in the technical
developer relations group at NVIDIA, where he collaborates with game
developers on, for example, optimizing the performance of their games.
He is also always tinkering with the latest graphics hardware to explore
the limits of interactive real-time rendering. Before joining NVIDIA,
Matthias was a game developer himself, working for GameFX/THQ Inc. He
received his M.Sc. in computer science from Brown University in 1990,
and his B.Sc. from Christian Albrechts University in Kiel, Germany,
in 1987.
Mark Harris received a BS from
the University of Notre Dame in 1998, and a PhD in computer science
from the University of North Carolina at Chapel Hill in 2003. At UNC,
Mark's research covered a wide variety of computer graphics topics,
including real-time cloud simulation and rendering, general purpose
computation on GPUs, global illumination, nonphotorealistic rendering,
and virtual environments. During his graduate studies Mark worked briefly
at Intel, iROCK Games, and NVIDIA. Mark now works with NVIDIA's Technical
Developer Relations team based in the United Kingdom.