
1 Graphics Systems and Models (II)
Jiang Ming, School of Mathematical Sciences, Peking University. Based on [EA], Chapter 1. Last updated: Saturday, May 25, 2019, 2:15:48.

2 Outline
The Synthetic-Camera Model
The Programmer’s Interface
Graphics Architectures
Performance Characteristics

3 The Synthetic-Camera Model
Models of optical imaging systems lead directly to the conceptual foundation for modern 3D computer graphics. We create a computer-generated image in much the same way that an optical system forms an image, based on geometric optics. This paradigm has become known as the synthetic-camera model.

4 A Conceptual Model
[Diagram. Real world: light sources, objects, real light, a complicated physical process, the human eye. Synthetic model: camera, light source, objects, graphics system, display device, the human eye.]

5 We can emulate this process to create synthetic images.
Bellows Camera. The lens is located on the front plane of the camera. The image is formed on the film plane at the back of the camera. Both planes are connected by flexible sides. We can move the back of the camera independently of the front of the camera. We can emulate this process to create synthetic images.

6 Basic Principles
First, the specification of the objects is independent of the specification of the viewer; there should be separate functions within a graphics library for specifying the objects and the viewer. Second, we can compute the image with simple mathematics. Third, we must also consider the limited size of the film.

7 Equivalent Views of Image Formation
Images can be formed equivalently on the back of the camera and on an image plane moved in front of the camera. The projection of a point on the object was discussed earlier. The view on the left is similar to that of the pinhole camera on the right; we obtain the view on the right by noting the similarity of the two triangles.
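As an illustration of the similar-triangles argument (the notation here is an assumption, not taken from the slides): place the center of projection at the origin and the projection plane at z = d in front of it. A point (x, y, z) then projects to

\[
\frac{x_p}{d} = \frac{x}{z}, \qquad \frac{y_p}{d} = \frac{y}{z}
\quad\Longrightarrow\quad
x_p = \frac{d\,x}{z}, \qquad y_p = \frac{d\,y}{z}.
\]

With the film plane at z = -d behind the pinhole instead, the same triangles give x_p = -dx/z and y_p = -dy/z: an image of the same size, but inverted.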

8 Projection Plane As a result, we move the image plane in front of the lens. We find the image of a point on the object by drawing a line, a projector, from the point to the center of the lens, the center of projection. All projectors are rays emanating from the center of projection. The film plane that we have moved in front of the lens is called the projection plane. The image of a point is located where the projector passes through the projection plane.

9 Clipping Rectangle/Window
Clipping (a) with window in initial position, and (b) with window shifted. As we saw, not all objects can be imaged onto the film plane/projection plane because of the limited size of the film, that is, the limited field of view. We can move this limitation to the front by placing a clipping rectangle, or clipping window, in the projection plane. This rectangle acts as a window through which a viewer, located at the center of projection, sees the world.

10 With the synthetic-camera model, given the location of the center of projection, the location and orientation of the projection plane, and the size of the clipping rectangle, we can determine which objects will appear in the image.
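A minimal sketch of that decision for a single point, assuming the same simple setup as in the earlier formula (center of projection at the origin, projection plane at z = d) and an axis-aligned clipping rectangle; the names and types are hypothetical, not from any particular API:

typedef struct { float x, y, z; } Point3;
typedef struct { float xmin, xmax, ymin, ymax; } ClipRect;

/* Project p onto the plane z = d through the center of projection at the
   origin, then test the result against the clipping rectangle. */
int point_appears_in_image(Point3 p, float d, ClipRect win)
{
    if (p.z <= 0.0f)              /* behind (or at) the center of projection */
        return 0;
    float xp = d * p.x / p.z;     /* where the projector meets the projection plane */
    float yp = d * p.y / p.z;
    return xp >= win.xmin && xp <= win.xmax &&
           yp >= win.ymin && yp <= win.ymax;
}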

11 The Programmer’s Interface
Application Programmer’s Interface The Pen-Plotter Model Three-dimensional APIs The Modeling-Rendering Paradigm

12 Application Programmer’s Interface
There are numerous ways that a user can interact with a graphics system. There are many sophisticated commercial software products with nice graphical interfaces. Some of us still need to develop graphics applications. The interface between an application program and a graphics system can be specified through a set of functions that resides in a graphics library. The specifications are called the application programmer’s interface (API).

13 The programmer sees only the API, and is thus shielded from the details of both the hardware and software implementation of the graphics library. The functions available through the API should match the underlying conceptual model.

14 The Pen-Plotter Model Historically, most early graphics systems were 2D systems. The conceptual model that they used is now referred to as the pen-plotter model. Pen-plotters are still in use; they are well suited for drawing large diagrams in the printing industry. Various APIs, such as LOGO, GKS, and PostScript, all have their origins in this model.

15 A pen-plotter produces images by moving a pen held by a gantry, a structure that can move the pen in two orthogonal directions over the paper. The plotter can raise and lower the pen as required to create the desired image. The pen-plotter model creates images in much the same way as drawing on a pad of paper: the user moves a pen around on this surface, leaving an image on the paper.

16 We can describe such a graphics system with two drawing functions
moveto(x, y);
lineto(x, y);
Execution of the moveto function moves the pen to the location (x, y) on the paper without leaving a mark. The lineto function moves the pen to (x, y) and draws a line from the old location of the pen to the new one. Once we add a few initialization and termination procedures, as well as the ability to change pens to alter the color or line thickness, we have a simple but complete graphics system.

17 The following fragment of a program in such a system
moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);
generates the output on the right (a unit square).
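A minimal sketch of how these two functions might be implemented on top of a lower-level line-drawing routine; draw_line is a hypothetical helper, and the state handling is simplified:

static double cur_x = 0.0, cur_y = 0.0;      /* current pen position */

void draw_line(double x0, double y0, double x1, double y1);  /* assumed to exist */

void moveto(double x, double y)
{
    /* Lift the pen and move it: remember the new position, draw nothing. */
    cur_x = x;
    cur_y = y;
}

void lineto(double x, double y)
{
    /* Draw from the old position to (x, y), then make (x, y) current. */
    draw_line(cur_x, cur_y, x, y);
    cur_x = x;
    cur_y = y;
}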

18 The pen-plotter model does not extend well to 3D graphics systems.
If we wish to use the pen-plotter model to produce the image of a 3D object on our 2D pad, we have to figure out where to place 2D points that are projections of points on our 3D object. The synthetic-camera model handles projections better than the pen-plotter model. We prefer an API that allows users to work directly in the domain of their problems and to use the computer to carry out the details of the projection process automatically, without the user having to make any trigonometric calculations within the application program. More importantly, users can rely on hardware and software implementations of projections, which are far more efficient than any implementation the user could write within the application program.

19 Three-dimensional APIs
The synthetic-camera model is the basis for a number of popular APIs, including OpenGL, PHIGS, Direct3D, VRML, and Java 3D. To implement the synthetic-camera model, we need functions in the API to specify: objects (geometry), the viewer, light sources, and material properties. PHIGS: Programmer's Hierarchical Interactive Graphics System. VRML: Virtual Reality Modeling Language.

20 Objects are defined by sets of vertices
Objects are defined by sets of vertices. For simple objects, such as a line, rectangle, or polygon, there is a simple relationship between a list of vertices and the object. For more complex objects, there may be multiple ways of defining the object from a set of vertices; a circle, for example, can be defined by three points or by its center and radius. Most APIs provide similar sets of primitive objects for the user. Those primitives can usually be displayed rapidly by the hardware.

21 OpenGL defines primitive objects through lists of vertices.
Here is an example of how a triangular polygon is defined:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
By adding more vertices, we can define an arbitrary polygon. If we change the type parameter GL_POLYGON, we can use the same vertices to define a different geometric primitive. The type GL_LINE_STRIP uses the vertices to define two connected line segments, and the type GL_POINTS uses them to define three points.
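As an aside, the circle mentioned two slides back can be approximated in the same vertex-list style. The following is only a sketch using the legacy immediate-mode calls shown above, with the segment count n chosen by the caller:

#include <math.h>
#include <GL/gl.h>

/* Approximate a circle of radius r centered at (cx, cy) in the z = 0 plane
   by an n-sided closed line loop. */
void draw_circle(float cx, float cy, float r, int n)
{
    glBegin(GL_LINE_LOOP);
    for (int i = 0; i < n; ++i) {
        float theta = 2.0f * 3.14159265f * (float)i / (float)n;
        glVertex3f(cx + r * cosf(theta), cy + r * sinf(theta), 0.0f);
    }
    glEnd();
}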

22 Some APIs let the user work directly in the frame buffer by providing functions that read and write pixels. Some APIs provide curves and surfaces as primitives, though these types are usually approximated by a series of simpler primitives. OpenGL provides access to the frame buffer, as well as curves and surfaces.
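A small sketch of direct frame-buffer access with the legacy OpenGL pixel functions; it assumes a window-aligned 2D orthographic projection is already in place so that raster positions can be given in pixel-like coordinates:

#include <GL/gl.h>

/* Read a 64 x 64 block of RGB pixels from the lower-left corner of the
   frame buffer and write it back at (100, 100). */
void copy_block(void)
{
    unsigned char pixels[64 * 64 * 3];
    glReadPixels(0, 0, 64, 64, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glRasterPos2i(100, 100);
    glDrawPixels(64, 64, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}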

23 We can define a viewer or camera in a variety of ways.
Available APIs differ both in how much flexibility they provide in camera selection and in how many different methods they allow. Looking at the camera given here, we can identify four types of necessary specifications:
Position: the camera location is usually given by the position of the center of the lens, the center of projection (COP).
Orientation: once we have positioned the camera, we can place a camera coordinate system with its origin at the center of projection; we can then rotate the camera independently around the three axes of this system.
Focal length: the focal length of the lens determines the size of the image on the film plane or, equivalently, the portion of the world the camera sees.
Film plane: the back of the camera has a height h and a width w on the bellows camera, and in some APIs the orientation of the back of the camera can be adjusted independently of the orientation of the lens.
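As a sketch of how these specifications might look in the legacy OpenGL/GLU interface (the numeric values are arbitrary examples, not taken from the slides):

#include <GL/gl.h>
#include <GL/glu.h>

void set_camera(void)
{
    /* Field of view (related to focal length) and the shape of the "film". */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);   /* fovy, aspect, near, far */

    /* Position and orientation: eye point, look-at point, and up direction. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,     /* eye: the center of projection */
              0.0, 0.0, 0.0,     /* point the camera looks toward */
              0.0, 1.0, 0.0);    /* up vector */
}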

24 These specifications can be satisfied in various ways.
One way to develop the specifications for the camera location and orientation uses a series of coordinate-system transformations. These transformations convert object positions represented in the coordinate system that specifies object vertices into object positions in a coordinate system centered at the center of projection. This approach is useful both for implementation and for obtaining the full set of views that a flexible camera can provide.

25 This approach may require setting and adjusting many parameters, which can make it difficult to obtain a desired image. Part of the problem lies with the synthetic-camera model itself. Classical viewing techniques, such as those used in architecture, stress the relationship between the object and the viewer, rather than the independence that the synthetic-camera model emphasizes.

26 The OpenGL API allows us to set transformations with complete freedom.
A classical two-point perspective of a cube presents a particular relationship between the viewer and the object. The OpenGL API allows us to set transformations with complete freedom; however, none of the APIs built on the synthetic-camera model (OpenGL, PHIGS, or VRML) provides functions for specifying such a desired relationship between the viewer and an object directly.

27 A light source can be defined by its location, strength, color, and directionality.
APIs provide a set of functions to specify these parameters for each source. Material properties are the characteristics, or attributes, of the objects; such properties are defined through a series of function calls at the time that each object is defined. Both the light sources and the material properties depend on the models of light-material interactions supported by the API.
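A brief sketch of such function calls in legacy OpenGL; the parameter values are made-up examples, and the exact set of properties depends on the lighting model in use:

#include <GL/gl.h>

void set_light_and_material(void)
{
    GLfloat light_pos[]   = { 1.0f, 2.0f, 3.0f, 1.0f };   /* location */
    GLfloat light_color[] = { 1.0f, 1.0f, 1.0f, 1.0f };   /* strength and color */
    GLfloat mat_diffuse[] = { 0.8f, 0.2f, 0.2f, 1.0f };   /* a reddish surface */

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_color);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
}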

28 The Modeling-Rendering Paradigm
OpenGL allows you to write graphical application programs; the images are generated automatically by the hardware and software implementation from the models defined in those programs. A common approach in developing realistic images is to separate the modeling of the scene from the production of the image, the rendering of the scene. Thus, modeling and rendering might be done with different software and hardware. Models, including the geometric objects, lights, cameras, and materials, are placed in a data structure called a scene graph that is passed to a renderer or game engine.

29 Graphics Architectures
On one side of the API is the application program; on the other, some combination of hardware and software implements the functionality of the API. Researchers have taken various approaches to developing architectures that support graphics APIs. Early graphics systems used general-purpose computers with the standard von Neumann architecture; these systems were so slow that refreshing even simple images would burden an expensive computer.

30 Display Processors
Pipeline Architectures
Graphics Pipelines
Vertex Processing
Clipping and Primitive Assembly
Rasterization
Fragment Processing

31 Display Processors The earliest attempts to build special-purpose graphics systems were concerned primarily with relieving the general-purpose computer from the task of refreshing the display continuously. These display processors had conventional architectures, but included instructions to display primitives on the CRT. The main advantage was that the instructions to generate the image could be assembled once in the host and sent to the display processor, where they were stored in the display processor's own memory as a display list or display file.

32 [Figures: an early graphics system; a display-processor architecture.]

33 Pipeline Architectures
The major advances in graphics architectures closely parallel the advances in workstations. Special-purpose VLSI circuits have significantly advanced graphics technology, and the availability of cheap solid-state memory led to the universality of raster displays.

34 The most important use of custom VLSI circuits has been in creating pipeline architectures.
This architecture may not make any difference when computing a single multiplication and addition, but it makes a significant difference when we carry out these operations for many sets of data. That is exactly the situation in computer graphics, where large sets of vertices must be processed in the same manner. In a complex scene, there may be thousands, even millions, of vertices defining the objects. Because we must process all these vertices in the same manner to form an image in the frame buffer, they are natural candidates for pipelined and parallel processing.
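A plain-C sketch of the situation: the same small computation (here a 4 x 4 transformation of a homogeneous vertex) is repeated over a large array, which is exactly the kind of workload a pipeline or parallel hardware exploits. The type and function names are hypothetical:

typedef struct { float x, y, z, w; } Vertex4;

/* Apply the same 4 x 4 matrix m (row-major) to every vertex in the array. */
void transform_vertices(const float m[4][4], Vertex4 *v, int count)
{
    for (int i = 0; i < count; ++i) {
        Vertex4 p = v[i];
        v[i].x = m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3]*p.w;
        v[i].y = m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3]*p.w;
        v[i].z = m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3]*p.w;
        v[i].w = m[3][0]*p.x + m[3][1]*p.y + m[3][2]*p.z + m[3][3]*p.w;
    }
}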

35 The Graphics Pipeline There are four major steps in the image-formation process: vertex processing, clipping and primitive assembly, rasterization, and fragment processing. Here, we are content to show that these operations can be pipelined.

36 Vertex Processing In the first block of the pipeline, each vertex is processed independently: to carry out coordinate transformations and to compute a color for each vertex. Many of the steps in the imaging process can be viewed as transformations between representations of objects in different coordinate systems. In the synthetic-camera model, a major part of viewing is converting the representation of objects from the coordinate system in which they are defined to a representation in the coordinate system of the camera. Another example arises when we finally put our images onto a CRT display: the internal representation of objects eventually must be expressed in terms of the coordinate system of the display.

37 We can represent each change of coordinate system by a matrix
We can represent each change of coordinate system by a matrix. Successive changes in coordinate systems are represented by multiplying, or concatenating, the individual matrices into a single matrix. Because multiplying one matrix by another yields a third matrix, a sequence of transformations is an obvious candidate for a pipeline architecture. In addition, because the matrices used in computer graphics are always small (4 x 4), we have the opportunity to use parallelism within the transformation blocks in the pipeline. Color assignment is discussed later.
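A minimal sketch of concatenation in plain C (row-major storage is an assumption made here for readability; OpenGL itself stores matrices column-major):

/* out = a * b, so applying out is the same as applying b first and then a. */
void mat4_multiply(const float a[4][4], const float b[4][4], float out[4][4])
{
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            out[i][j] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[i][j] += a[i][k] * b[k][j];
        }
    }
}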

38 Clipping and Primitive Assembly
We do clipping because of the limitation that no imaging system can see the whole world at once. Cameras have film of limited size, and we can adjust their field of view by selecting different lenses. We obtain the equivalent property in the synthetic camera by placing a clipping window of limited size in the projection plane. Objects whose projections are within the window appear in the image; those that are outside do not and are said to be clipped out.

39 Clipping can occur at various stages in the imaging process.
For simple geometric objects, we can determine from their vertices whether or not an object is clipped out. Because clippers work with vertices, clippers can be inserted with transformers into the pipeline. Clippers can even be subdivided further into a sequence of pipelined clippers. Efficient clipping algorithms are developed in subsequent chapters.
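A toy sketch of that subdivision for a single 2D point: each window edge becomes its own stage, so the stages could run back to back as a pipeline. Real clippers handle line segments and polygons, not just points, and the names here are hypothetical:

typedef struct { float x, y; } Point2;

static int clip_left  (Point2 p, float xmin) { return p.x >= xmin; }
static int clip_right (Point2 p, float xmax) { return p.x <= xmax; }
static int clip_bottom(Point2 p, float ymin) { return p.y >= ymin; }
static int clip_top   (Point2 p, float ymax) { return p.y <= ymax; }

/* Each test could be a separate block in a pipeline of clippers. */
int survives_clipping(Point2 p, float xmin, float xmax, float ymin, float ymax)
{
    return clip_left(p, xmin) && clip_right(p, xmax) &&
           clip_bottom(p, ymin) && clip_top(p, ymax);
}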

40 Rasterization In general, 3D objects are kept in 3D representations as long as possible, as they pass through the pipeline. Eventually, after multiple stages of transformations and clipping, the geometry of the remaining primitives (those not clipped out) must be projected into 2D objects. We will see that we can implement this step using 4 x 4 matrices, and thus, also fit it in the pipeline.
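A hedged example of such a matrix: with the center of projection at the origin and the projection plane at z = d (the same convention as in the earlier sketch, not necessarily the book's), one simple perspective-projection matrix in homogeneous coordinates is

\[
M = \begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 1/d & 0
\end{pmatrix},
\qquad
M \begin{pmatrix} x\\ y\\ z\\ 1 \end{pmatrix}
= \begin{pmatrix} x\\ y\\ z\\ z/d \end{pmatrix}
\;\sim\;
\begin{pmatrix} d\,x/z\\ d\,y/z\\ d\\ 1 \end{pmatrix},
\]

where the final step is the perspective division by the fourth (w) component, recovering the projected point (dx/z, dy/z, d).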

41 Finally, our projected objects must be represented as pixels in the frame buffer.
The scan conversion or rasterization process is discussed in subsequent chapters. Because the refreshing of the display is carried out automatically by the hardware, the details are of minor concern to the application programmer.

42 The rasterizer produces a set of fragments for each object
If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors. The rasterizer produces a set of fragments for each object. Fragments are "potential pixels": each has a location in the frame buffer and carries color and depth attributes. Vertex attributes are interpolated over objects by the rasterizer.
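A rough sketch of what a fragment might carry, written as a plain C struct; the exact attributes vary by implementation, and the names here are hypothetical:

/* A fragment: a "potential pixel" produced by the rasterizer. */
typedef struct {
    int   x, y;          /* location in the frame buffer */
    float r, g, b, a;    /* interpolated color */
    float depth;         /* interpolated depth, used for hidden-surface removal */
} Fragment;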

43 Fragment Processing Fragments are processed to determine the color of the corresponding pixel in the frame buffer. Colors can be determined by texture mapping or by interpolation of vertex colors. Fragments may be blocked by other fragments closer to the camera; removing such hidden fragments is called hidden-surface removal.
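A minimal sketch of hidden-surface removal with a depth buffer: a fragment updates the frame buffer only if it is closer to the camera than what is already stored there. The buffer layout and names are assumptions for illustration:

typedef struct {
    int    width, height;
    float *depth;          /* one depth value per pixel */
    unsigned char *rgb;    /* three bytes per pixel */
} FrameBuffer;

/* Write the fragment only if it is nearer than the stored depth
   (smaller value = nearer, in this sketch). */
void process_fragment(FrameBuffer *fb, int x, int y, float depth,
                      unsigned char r, unsigned char g, unsigned char b)
{
    int i = y * fb->width + x;
    if (depth < fb->depth[i]) {
        fb->depth[i] = depth;
        fb->rgb[3*i + 0] = r;
        fb->rgb[3*i + 1] = g;
        fb->rgb[3*i + 2] = b;
    }
}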

44 Performance Characteristics
There are two fundamentally different types of processing in our architecture. At the front end, there is geometric processing, based on processing vertices through the various clippers and transformers. This processing is ideally suited for pipelining, and it usually involves floating-point calculations. Examples: the SGI geometry engine; the Intel i860. Latency of the system: the time for a single datum to pass through the system.

45 Beginning with rasterization and including many features that we discuss later, processing involves a direct manipulation of bits in the frame buffer. This back-end processing is fundamentally different from front-end processing. We implement it more effectively using architectures that have the ability to move blocks of bits quickly.

46 The overall performance of a system is characterized by
how fast we can move geometric entities through the pipeline, and how many pixels per second we can alter in the frame buffer. Consequently, the fastest graphics workstations are characterized by pipelines at the front ends and bit processors at the back ends.

