Graphics Systems and Models (II)


Graphics Systems and Models (II). Jiang Ming, School of Mathematical Sciences, Peking University. Based on [EA], Chapter 1. Updated May 25, 2019.

Outline The Synthetic-Camera Model The Programmer’s Interface Graphics Architectures Performance Characteristics

The Synthetic-Camera Model Models of optical imaging systems lead directly to the conceptual foundation for modern 3D computer graphics. We create a computer-generated image in much the same way that an optical system forms an image, following the rules of geometric optics. This paradigm has become known as the synthetic-camera model.

[Diagram: real-world image formation, in which light from a source reflects off objects and reaches the human eye through a complicated physical process, compared with the synthetic model, in which a graphics system plays the role of the camera and images light sources and objects onto a display device viewed by the human eye.]

Bellows Camera The lens is located on the front plane of the camera, and the image is formed on the film plane at the back of the camera; the two planes are connected by flexible sides, so we can move the back of the camera independently of the front. We can emulate this process to create synthetic images.

Basic Principles First, the specification of the objects is independent of the specification of the viewer; there should be separate functions within a graphics library for specifying the objects and the viewer. Second, we can compute the image with simple mathematics. Third, we must also consider the limited size of the film.

Equivalent Views of Image Formation Equivalent images are formed on the back of the camera and on an image plane moved in front of the camera. The projection of a point on an object was discussed earlier for the pinhole camera; the view with the image formed at the back of the camera is similar to that of the pinhole camera, and we obtain the equivalent view with the image plane in front by noting the similarity of the two triangles formed by a projector through the center of the lens.

Projection Plane As a result, we move the image plane in front of the lens. We find the image of a point on an object by drawing a line, called a projector, from the point to the center of the lens, the center of projection. All projectors are rays emanating from the center of projection. The film plane that we have moved in front of the lens is called the projection plane. The image of a point is located where its projector passes through the projection plane.
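As a concrete illustration (a minimal sketch, not from the slides), suppose the center of projection is at the origin and the projection plane is at z = d. By the similar triangles noted above, a point (x, y, z) projects to (x·d/z, y·d/z, d):

    /* Minimal sketch: perspective projection of a point onto the plane z = d,
     * with the center of projection at the origin.  Assumes z != 0. */
    typedef struct { double x, y, z; } Point3;

    Point3 project(Point3 p, double d)
    {
        Point3 q;
        q.x = p.x * d / p.z;   /* similar triangles: x_p / d = x / z */
        q.y = p.y * d / p.z;
        q.z = d;               /* the image lies on the projection plane */
        return q;
    }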

Clipping Rectangle/Window [Figure: clipping (a) with the window in its initial position, and (b) with the window shifted.] As we saw, not all objects can be imaged onto the film plane/projection plane because of the limited size of the film, that is, the limited field of view. We can move this limitation to the front by placing a clipping rectangle, or clipping window, in the projection plane. This rectangle acts as a window through which a viewer, located at the center of projection, sees the world.

With the synthetic-camera model, given the location of the center of projection, the location and orientation of the projection plane, and the size of the clipping rectangle, we can determine which objects will appear in the image.

The Programmer’s Interface Application Programmer’s Interface The Pen-Plotter Model Three-dimensional APIs The Modeling-Rendering Paradigm

Application Programmer’s Interface There are numerous ways in which a user can interact with a graphics system. There are many sophisticated commercial software products with nice graphical interfaces, but some of us still need to develop graphics applications. The interface between an application program and a graphics system can be specified through a set of functions that reside in a graphics library. This specification is called the application programmer’s interface (API).

The programmer sees only the API, and is thus shielded from the details of both the hardware and the software implementation of the graphics library. The functions available through the API should match the underlying conceptual model.

The Pen-Plotter Model Historically, most early graphics systems were 2D systems. The conceptual model that they used is now referred to as the pen-plotter model. Pen plotters are still in use; they are well suited for drawing large diagrams, for example in the printing industry. Various APIs, such as LOGO, GKS, and PostScript, all have their origins in this model.

A pen-plotter produces images by moving a pen held by a gantry, a structure that can move the pen in two orthogonal directions around the paper. The plotter can raise and lower the pen as required to create the desired image. The pen-plotter model creates images similar to the process of drawing on a pad of paper. The user moves a pen around on this surface, leaving an image on the paper.

We can describe such a graphics system with two drawing functions:

    moveto(x, y);
    lineto(x, y);

Execution of the moveto function moves the pen to the location (x, y) on the paper without leaving a mark. The lineto function moves the pen to (x, y) and draws a line from the old to the new location of the pen. Once we add a few initialization and termination procedures, as well as the ability to change pens to alter the color or line thickness, we have a simple, but complete, graphics system.

The following fragment of a program in such a system

    moveto(0, 0);
    lineto(1, 0);
    lineto(1, 1);
    lineto(0, 1);
    lineto(0, 0);

generates a unit square.

The pen-plotter model does not extend well to 3D graphics systems. If we wish to use it to produce the image of a 3D object on our 2D pad, we have to figure out where to place the 2D points that are the projections of points on our 3D object. The synthetic-camera model handles projections better than the pen-plotter model. We prefer an API that allows users to work directly in the domain of their problems and to let the computer carry out the details of the projection process automatically, without the user having to make any trigonometric calculations within the application program. More importantly, users can then rely on hardware and software implementations of projections, which are far more efficient than any implementation they could write within their own programs.

Three-dimensional APIs The synthetic-camera model is the basis for a number of popular APIs, including OpenGL, PHIGS, Direct3D, VRML, and Java 3D. In order to follow the synthetic-camera model, we need functions in the API to specify: objects (geometry), the viewer, light sources, and material properties. (PHIGS: Programmer's Hierarchical Interactive Graphics System; VRML: Virtual Reality Modeling Language.)

Objects are defined by sets of vertices. For simple objects, such as a line, a rectangle, or a polygon, there is a simple relationship between a list of vertices and the object. For more complex objects, there may be multiple ways of defining the object from a set of vertices: a circle, for example, can be defined by three points on it or by its center and radius. Most APIs provide similar sets of primitive objects for the user; these primitives can usually be displayed rapidly by the hardware.

OpenGL defines primitive objects through lists of vertices. Here is an example of how a triangular polygon is defined:

    glBegin(GL_POLYGON);
        glVertex3f(0.0, 0.0, 0.0);
        glVertex3f(0.0, 1.0, 0.0);
        glVertex3f(0.0, 0.0, 1.0);
    glEnd();

By adding more vertices, we can define an arbitrary polygon. If we change the type parameter GL_POLYGON, we can use the same vertices to define a different geometric primitive: the type GL_LINE_STRIP uses the vertices to define two connected line segments, and the type GL_POINTS uses them to define three points.
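For instance, reusing the same vertex list as a line strip (a sketch using the same legacy OpenGL calls):

    glBegin(GL_LINE_STRIP);        /* same vertices, different primitive type */
        glVertex3f(0.0, 0.0, 0.0);
        glVertex3f(0.0, 1.0, 0.0);
        glVertex3f(0.0, 0.0, 1.0);
    glEnd();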

Some APIs let the user work directly in the frame buffer by providing functions that read and write pixels. Some APIs provide curves and surfaces as primitives, though these types are usually approximated by a series of simpler primitives. OpenGL provides access to the frame buffer as well as to curves and surfaces.

We can define a viewer or camera in a variety of ways. Available APIs differ both in how much flexibility they provide in camera selection and in how many different methods they allow. Looking at the bellows camera, we can identify four types of necessary specifications:
Position: The camera location, usually given by the position of the center of the lens, the center of projection (COP).
Orientation: Once we have positioned the camera, we can place a camera coordinate system with its origin at the center of projection and rotate the camera independently around the three axes of this system.
Focal length: The focal length of the lens determines the size of the image on the film plane or, equivalently, the portion of the world the camera sees.
Film plane: The back of the camera has a height h and a width w on the bellows camera, and in some APIs the orientation of the back of the camera can be adjusted independently of the orientation of the lens.
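As a hedged illustration (not part of the slides), the GLU helper functions in legacy OpenGL map naturally onto these four specifications: gluLookAt fixes the position and orientation of the camera, while gluPerspective plays the role of the focal length and film-plane proportions. All numeric values below are placeholders:

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Sketch: one common way to realize the four camera specifications
     * in legacy OpenGL/GLU.  Values are illustrative only. */
    void set_camera(double aspect)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0,          /* vertical field of view ~ focal length */
                       aspect,        /* film-plane width/height ratio         */
                       0.1, 100.0);   /* near and far clipping distances       */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(3.0, 2.0, 5.0,      /* position: the center of projection    */
                  0.0, 0.0, 0.0,      /* point the camera looks at             */
                  0.0, 1.0, 0.0);     /* up direction fixes the orientation    */
    }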

These specifications can be satisfied in various ways. One way to develop the specifications for the camera location and orientation uses a series of coordinate-system transformations. These transformations convert object positions represented in the coordinate system in which the object vertices are specified into positions in a coordinate system centered at the center of projection. This approach is useful both for implementation and for obtaining the full set of views that a flexible camera can provide.

This may require setting and adjusting many parameters, which can make it difficult to obtain a desired image. Part of the problem lies with the synthetic-camera model. Classical viewing techniques, such as those used in architecture, stress the relationship between the object and the viewer, rather than the independence that the synthetic-camera model emphasizes.

A classical two-point perspective of a cube presents a particular relationship between the viewer and the object. The OpenGL API allows us to set transformations with complete freedom; however, none of the APIs built on the synthetic-camera model (OpenGL, PHIGS, or VRML) provides functions for specifying such desired relationships between the viewer and an object directly.

A light source can be defined by its location, strength, color, and directionality. APIs provide a set of functions to specify these parameters for each source. Material properties are the characteristics, or attributes, of the objects; such properties are defined through a series of function calls at the time that each object is defined. Both the light sources and the material properties depend on the models of light-material interaction supported by the API.
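A minimal legacy-OpenGL sketch (values are illustrative placeholders, not from the slides) of how such parameters are passed through a series of function calls:

    #include <GL/gl.h>

    /* Sketch: specifying one light source and one material in legacy OpenGL.
     * All numeric values are illustrative placeholders. */
    void set_light_and_material(void)
    {
        GLfloat light_pos[]     = { 1.0f, 2.0f, 3.0f, 1.0f };  /* location        */
        GLfloat light_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };  /* strength/color  */
        GLfloat mat_diffuse[]   = { 0.8f, 0.2f, 0.2f, 1.0f };  /* object property */

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
        glLightfv(GL_LIGHT0, GL_DIFFUSE,  light_diffuse);
        glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, mat_diffuse);
    }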

The Modeling-Rendering Paradigm OpenGL allows you to write graphical application programs; the images are then generated automatically by the hardware and software implementation from the models defined in those programs. A common approach in developing realistic images is to separate the modeling of the scene from the production of the image, the rendering of the scene. Thus, the modeling and the rendering might be done with different software and hardware. Models, including the geometric objects, lights, cameras, and materials, are placed in a data structure called a scene graph that is passed to a renderer or game engine.
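One possible shape of a scene-graph node, purely as an illustration in C; real modelers and renderers use much richer structures:

    /* Illustrative only: a minimal scene-graph node.  The Geometry, Material,
     * and Light types are assumed to be defined elsewhere by the modeler. */
    typedef struct SceneNode {
        float              transform[16];   /* placement relative to the parent     */
        struct Geometry   *geometry;        /* vertices and primitives (may be NULL) */
        struct Material   *material;        /* surface attributes                    */
        struct Light      *light;           /* attached light source (may be NULL)   */
        struct SceneNode **children;        /* subtrees of the scene                 */
        int                num_children;
    } SceneNode;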

Graphics Architectures On one side of the API is the application program; on the other is some combination of hardware and software that implements the functionality of the API. Researchers have taken various approaches to developing architectures that support graphics APIs. Early graphics systems used general-purpose computers with the standard von Neumann architecture; they were so slow that refreshing even simple images would burden an expensive computer.

Display Processors Pipeline Architectures Graphics Pipelines Vertex Processing Clipping and Primitive Assembly Rasterization Fragment Processing

Display Processors The earliest attempts to build special-purpose graphics systems were concerned primarily with relieving the general-purpose computer of the task of refreshing the display continuously. These display processors had conventional architectures but included instructions to display primitives on the CRT. The main advantage was that the instructions to generate the image could be assembled once in the host and sent to the display processor, where they were stored in the display processor’s own memory as a display list or display file.
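A similar idea survives in legacy OpenGL display lists: the host assembles the drawing commands once, and the graphics system replays them from its own memory. A hedged sketch:

    #include <GL/gl.h>

    /* Sketch: compile drawing commands once, replay them on every refresh. */
    GLuint make_square_list(void)
    {
        GLuint id = glGenLists(1);
        glNewList(id, GL_COMPILE);          /* commands are stored, not executed */
            glBegin(GL_LINE_LOOP);
                glVertex2f(0.0f, 0.0f);
                glVertex2f(1.0f, 0.0f);
                glVertex2f(1.0f, 1.0f);
                glVertex2f(0.0f, 1.0f);
            glEnd();
        glEndList();
        return id;
    }

    /* Later, each time the scene is drawn:  glCallList(id); */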

[Figures: an early graphics system; the display-processor architecture.]

Pipeline Architectures The major advances in graphics architectures closely parallel the advances in workstations. Special-purpose VLSI circuits have significantly improved graphics technology, and the availability of cheap solid-state memory led to the universality of raster displays.

The most important use of custom VLSI circuits has been in creating pipeline architectures. A pipeline may not make any difference when computing a single multiplication and addition, but it makes a significant difference when we perform these operations on many sets of data. That is exactly the situation in computer graphics, where large sets of vertices must be processed in the same manner. In a complex scene, there may be thousands – even millions – of vertices defining the objects, and all of them must be processed in a similar manner to form an image in the frame buffer; this uniformity is what makes a pipeline effective.

The Graphics Pipeline There are four major steps in the image-formation process: vertex processing, clipping and primitive assembly, rasterization, and fragment processing. Here, we are content to show that these operations can be pipelined.

Vertex Processing In the first block of the pipeline, each vertex is processed independently: to carry out coordinate transformations and to compute a color for the vertex. Many of the steps in the imaging process can be viewed as transformations between representations of objects in different coordinate systems. In the synthetic-camera model, a major part of viewing is to convert the representation of objects from the coordinate system in which they are defined to a representation in the coordinate system of the camera. Another example arises when we finally put our images onto a CRT display: the internal representation of objects must eventually be expressed in the coordinate system of the display.

We can represent each change of coordinate system by a matrix. Successive changes in coordinate systems are represented by multiplying, or concatenating, the individual matrices into a single matrix. Because multiplying one matrix by another yields a third matrix, a sequence of transformations is an obvious candidate for a pipeline architecture. In addition, because the matrices used in computer graphics are always small (4 x 4), we have the opportunity to use parallelism within the transformation blocks in the pipeline. Color assignment is discussed later.
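A minimal sketch (not from the slides) of concatenating two 4 x 4 changes of coordinate system into a single matrix and applying it to a homogeneous vertex, using the column-major layout that OpenGL uses:

    /* Sketch: column-major 4 x 4 matrices; m[row + 4*col] is row `row`,
     * column `col`.  Applying A first and then B is the product C = B * A. */
    void mat4_multiply(const float a[16], const float b[16], float c[16])
    {
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row) {
                c[row + 4*col] = 0.0f;
                for (int k = 0; k < 4; ++k)
                    c[row + 4*col] += b[row + 4*k] * a[k + 4*col];
            }
    }

    /* Transform one homogeneous vertex v = (x, y, z, w) by a matrix m. */
    void mat4_transform(const float m[16], const float v[4], float out[4])
    {
        for (int row = 0; row < 4; ++row)
            out[row] = m[row]      * v[0] + m[row + 4]  * v[1]
                     + m[row + 8]  * v[2] + m[row + 12] * v[3];
    }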

Clipping and Primitive Assembly We do clipping because of the limitation that no imaging system can see the whole world at once. Cameras have film of limited size, and we can adjust their field of view by selecting different lenses. We obtain the equivalent property in the synthetic camera by placing a clipping window of limited size in the projection plane. Objects whose projections are within the window appear in the image; those that are outside do not, and are said to be clipped out.

Clipping can occur at various stages in the imaging process. For simple geometric objects, we can determine from their vertices whether or not an object is clipped out. Because clippers work with vertices, clippers can be inserted with transformers into the pipeline. Clippers can even be subdivided further into a sequence of pipelined clippers. Efficient clipping algorithms are developed in subsequent chapters.
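As a hedged sketch, the simplest per-vertex test a clipper could perform against an axis-aligned clipping window in the projection plane looks like this; real clippers, such as those discussed in subsequent chapters, also compute where primitives cross the window edges:

    /* Sketch: is a projected vertex inside an axis-aligned clipping window?
     * Coordinates and window bounds are illustrative. */
    typedef struct { float x, y; } Vertex2;

    int inside_window(Vertex2 v, float xmin, float xmax, float ymin, float ymax)
    {
        return v.x >= xmin && v.x <= xmax &&
               v.y >= ymin && v.y <= ymax;
    }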

Rasterization In general, 3D objects are kept in 3D representations as long as possible, as they pass through the pipeline. Eventually, after multiple stages of transformations and clipping, the geometry of the remaining primitives (those not clipped out) must be projected into 2D objects. We will see that we can implement this step using 4 x 4 matrices, and thus, also fit it in the pipeline.

Finally, our projected objects must be represented as pixels in the frame buffer. The scan conversion or rasterization process is discussed in subsequent chapters. Because the refreshing of the display is carried out automatically by the hardware, the details are of minor concern to the application programmer.

If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors. The rasterizer produces a set of fragments for each object. Fragments are “potential pixels”: each has a location in the frame buffer together with color and depth attributes. Vertex attributes are interpolated over objects by the rasterizer.
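A hedged sketch of what a fragment might carry, and of the linear interpolation of vertex attributes that a rasterizer performs along an edge or scan line (field names are illustrative):

    /* Illustrative only: a "potential pixel" produced by the rasterizer. */
    typedef struct {
        int   x, y;           /* location in the frame buffer   */
        float depth;          /* distance from the camera       */
        float r, g, b;        /* interpolated color attributes  */
    } Fragment;

    /* Linearly interpolate an attribute between two vertices; t in [0, 1]. */
    float lerp(float a, float b, float t)
    {
        return a + t * (b - a);
    }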

Fragment Processing Fragments are processed to determine the color of the corresponding pixel in the frame buffer. Colors can be determined by texture mapping or by interpolation of vertex colors. A fragment may be blocked by other fragments that are closer to the camera; eliminating such fragments is hidden-surface removal.
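A minimal z-buffer sketch of hidden-surface removal at the fragment stage, reusing the illustrative Fragment structure from the previous sketch; the buffer names and the depth convention (smaller depth means closer to the camera) are assumptions:

    /* Sketch: keep a fragment only if nothing drawn so far is closer.
     * depth_buffer and color_buffer are illustrative frame-buffer arrays
     * of size width * height. */
    void process_fragment(Fragment f, float *depth_buffer,
                          float (*color_buffer)[3], int width)
    {
        int i = f.y * width + f.x;
        if (f.depth < depth_buffer[i]) {       /* closer than what is stored?  */
            depth_buffer[i]    = f.depth;      /* record the new nearest depth */
            color_buffer[i][0] = f.r;          /* write the fragment's color   */
            color_buffer[i][1] = f.g;
            color_buffer[i][2] = f.b;
        }
    }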

Performance Characteristics There are two fundamentally different types of processing in our architecture. At the front end there is geometric processing, based on processing vertices through the various clippers and transformers. This processing is ideally suited for pipelining, and it usually involves floating-point calculations. Examples: the SGI geometry engine; the Intel i860. The latency of a system is the time required for a single datum to pass through the system.

Beginning with rasterization and including many features that we discuss later, processing involves a direct manipulation of bits in the frame buffer. This back-end processing is fundamentally different from front-end processing. We implement it more effectively using architectures that have the ability to move blocks of bits quickly.

The overall performance of a system is characterized by how fast we can move geometric entities through the pipeline and by how many pixels per second we can alter in the frame buffer. Consequently, the fastest graphics workstations are characterized by geometric pipelines at the front end and bit processors at the back end.