1
Course code: 10CS65 | Computer Graphics and Visualization
Unit 1: Introduction
Department: Computer Science and Engineering
2
INTRODUCTION
Applications of computer graphics
A graphics system
Images: physical and synthetic
Imaging systems
The synthetic camera model
The programmer's interface
Graphics architectures
Programmable pipelines
Performance characteristics
Graphics programming
The Sierpinski gasket
Programming two-dimensional applications
3
Overview of what we will cover
Computer graphics – what is it?
A graphics overview
Graphics theory
A graphics software system: OpenGL
Our approach will be top-down: we want you to start writing application programs that generate graphical output as quickly as possible.
4
Computer Graphics
Computer graphics deals with all aspects of creating images with a computer:
Hardware – CPUs, GPUs
Software – OpenGL, DirectX
Applications
5
Computer Graphics
Using a computer as a rendering tool for the generation (from models) and manipulation of images is called computer graphics; more precisely, image synthesis.
6
Applications of computer graphics
The development of computer graphics has been driven by the needs of the user community and by advances in hardware and software. The main application areas:
Display of information
Design
Simulation and animation
User interfaces
7
1. Display of Information
8
2. Design
9
3. Simulations: Games!
10
4. User Interfaces
11
Applications of computer graphics
Computer-Aided Design: for engineering and architectural systems, etc. Objects may be displayed in a wireframe outline form. A multi-window environment is also favored for producing various zooming scales and views. Animations are useful for testing performance.
Presentation Graphics: to produce illustrations that summarize various kinds of data. Beyond 2D, 3D graphics are good tools for reporting more complex data.
Computer Art: painting packages are available. With a cordless, pressure-sensitive stylus, artists can produce electronic paintings that simulate different brush strokes, brush widths, and colors. Photorealistic techniques, morphing, and animation are very useful in commercial art. Film requires 24 frames per second; video monitors require 30 frames per second.
12
Applications of computer graphics
Entertainment: motion pictures, music videos, TV shows, and computer games.
Education and Training: training with computer-generated models of specialized systems, such as the training of ship captains and aircraft pilots.
Visualization: analyzing scientific, engineering, medical, and business data or behavior; converting data to visual form helps us understand massive volumes of data efficiently.
Image Processing: applying techniques to modify or interpret existing pictures; widely used in medical applications.
Graphical User Interfaces: multiple windows, icons, and menus allow a computer setup to be utilized more efficiently.
13
A graphics system
A graphics system has five main elements:
Input devices
Processor
Memory
Frame buffer
Output devices
14
Pixels and the Frame Buffer
A picture is produced as an array (raster) of picture elements (pixels), stored collectively in the frame buffer. Properties of the frame buffer:
Resolution – the number of pixels in the frame buffer.
Depth, or precision – the number of bits used for each pixel; e.g., a 1-bit-deep frame buffer allows 2 colors, while an 8-bit-deep frame buffer allows 256 colors.
A frame buffer is implemented either with special types of memory chips or as part of system memory. In simple systems the CPU does both normal and graphical processing. Graphics processing takes specifications of graphical primitives from the application program and assigns values to the pixels in the frame buffer; this is also known as rasterization or scan conversion.
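As a quick worked example of how resolution and depth combine (a sketch of mine, not from the slides; the helper name is hypothetical):

#include <stdio.h>

/* Bytes of frame-buffer memory implied by a resolution and a depth.
   depth_bits is the precision per pixel (e.g. 1, 8, or 24). */
unsigned long framebuffer_bytes(int width, int height, int depth_bits)
{
    return (unsigned long)width * height * depth_bits / 8;
}

int main(void)
{
    /* A 1280 x 1024 frame buffer at 24 bits per pixel: 3,932,160 bytes. */
    printf("%lu bytes\n", framebuffer_bytes(1280, 1024, 24));
    return 0;
}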
15
A graphics system
Block diagram of an interactive graphics system.
16
Output Devices
Main parts of a CRT:
Electron gun – emits an electron beam, which strikes the phosphor coating to emit light.
Deflection plates – control the direction of the beam; the output of the computer is converted by digital-to-analog converters to voltages across the x and y deflection plates.
Refresh rate – to view a flicker-free image, the image on the screen has to be retraced by the beam at a high rate (modern systems operate at 85 Hz).
Two types of refresh:
Noninterlaced display – pixels are displayed row by row at the refresh rate.
Interlaced display – odd rows and even rows are refreshed alternately.
17
The cathode-ray tube (CRT)
18
Shadow-Mask CRT
19
Shadow-Mask CRT
Just behind the phosphor-coated face of the CRT there is a metal plate, the shadow mask, pierced with small round holes in a triangular pattern. The shadow-mask tube uses three electron guns, grouped in a triangle or delta, responsible for the red, green, and blue components of the light output of the CRT. The deflection system of the CRT operates on all three electron beams simultaneously, bringing all three to the same point of focus on the shadow mask. Where the three beams encounter holes in the mask, they pass through and strike the phosphor. The phosphor is laid down very carefully in groups of three spots (one red, one green, and one blue) under each hole in the mask, so that each spot is struck only by electrons from the appropriate gun. The effect of the mask is thus to "shadow" the spots of red phosphor from all but the red beam, and likewise for the green and blue phosphor spots. We can therefore control the light output in each of the three component colors by modulating the beam current of the corresponding gun.
20
Images: Physical and Synthetic
Computer graphics generates pictures with two aims:
to create realistic images
to create images very close to "traditional" imaging methods
The usual approach is to construct raster images from simple 2D entities – points, lines, polygons – defining objects based upon this 2D representation. Because such functionality is supported by most present computer graphics systems, we will learn to create images this way rather than extend a limited model.
21
Objects and Viewers
The image-formation process involves two entities:
The object
The viewer
The object exists in space independent of any image-formation process and of any viewer.
22
Graphic – A Projection Model
Diagram: a synthetic camera applies projection, shading, and lighting models to a 3D object to produce the output image.
23
What Now!
Both the object and the viewer exist in a 3D world; however, the image they define is 2D. Image formation: the object plus the viewer's specifications yield an image. Looking ahead: in Chapter 2 we use OpenGL to build simple objects; Chapter 9 treats interactive objects and their relations with each other.
24
Objects, Viewers & Cameras
In a camera system, the object and the viewer exist in 3D Euclidean space (E3), while the image is formed:
in the human visual system (HVS) – on the retina
in the film plane – if a camera is used
Objects and viewers are in E3; pictures are in E2. The transformation from E3 to E2 is a projection.
25
Light and Images
Much information was missing from the preceding picture:
We have yet to mention light! If there were no light sources, the objects would be dark and there would be nothing visible in our image.
We have not mentioned how color enters the picture, nor what effects the different kinds of surfaces have on the objects.
26
Lights & Images
Light sources are characterized by:
position
monochromatic or color emission
If light sources were not used, the scene would be very dark and flat; shadows and reflections are very important for realistic perception. Geometric optics is used for light modeling.
27
Imitating real life
Taking a more physical approach, we can start with the following arrangement: the details of the interaction between light and the surfaces of the object determine how much light enters the camera.
28
Light properties
Light is a form of electromagnetic radiation; the visible spectrum spans roughly 350 – 780 nm.
29
Trace a Ray!
Ray tracing builds an imaging model by following light from a source:
a ray is a semi-infinite line that emanates from a point and "travels" to infinity in a particular direction
only a portion of these infinite rays contributes to the image on the film plane of the camera
surfaces may be diffusing, reflecting, or refracting
30
Ray Tracing
Ray tracing is an image-formation technique based on these ideas, and it can form the basis for producing computer-generated images. In practice the process must be reversed:
for each pixel, an intensity must be computed, and all contributions must be taken into account
a ray is "followed" in the opposite direction, from the eye into the scene; when it intersects a surface, it is split into two rays
contributions from light sources and reflections from other surfaces are accumulated
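The following is a rough C sketch of that reversed process, not code from the course; all the types and helpers (nearest_hit, shade_local, reflect_ray, refract_ray) are hypothetical stubs standing in for real scene queries:

#include <stdbool.h>

#define MAX_DEPTH 5

typedef struct { double r, g, b; } color_t;
typedef struct { double o[3], d[3]; } ray_t;   /* origin, direction */
typedef struct { bool reflective, refractive; double kr, kt; } hit_t;

static const color_t BACKGROUND = {0.0, 0.0, 0.0};

/* Hypothetical scene queries, stubbed so the sketch compiles on its own. */
static bool nearest_hit(ray_t r, hit_t *h) { (void)r; (void)h; return false; }
static color_t shade_local(hit_t h) { (void)h; return BACKGROUND; }
static ray_t reflect_ray(ray_t r, hit_t h) { (void)h; return r; }
static ray_t refract_ray(ray_t r, hit_t h) { (void)h; return r; }
static color_t add(color_t a, color_t b) { return (color_t){a.r + b.r, a.g + b.g, a.b + b.b}; }
static color_t scale(double k, color_t c) { return (color_t){k * c.r, k * c.g, k * c.b}; }

/* Follow a ray backward from the eye into the scene: at each surface hit,
   add the contribution of the light sources, then split the ray into a
   reflected and a refracted part and recurse. */
color_t trace(ray_t ray, int depth)
{
    hit_t h;
    if (depth > MAX_DEPTH || !nearest_hit(ray, &h))
        return BACKGROUND;                 /* the ray leaves the scene */

    color_t c = shade_local(h);            /* direct light contributions */
    if (h.reflective)
        c = add(c, scale(h.kr, trace(reflect_ray(ray, h), depth + 1)));
    if (h.refractive)
        c = add(c, scale(h.kt, trace(refract_ray(ray, h), depth + 1)));
    return c;
}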
31
The Pinhole Camera
A pinhole camera is a box with a small hole in the center of one side and the film on the opposite side. In the old days you could actually take pictures with it.
32
Pinhole Camera
Diagram: a box with a small hole in one side; the film plane lies at z = -d.
33
Pinhole Camera
The point (x_p, y_p, -d) is the projection of the point (x, y, z) through the pinhole onto the film plane z = -d:
x_p = -x/(z/d),  y_p = -y/(z/d)
The angle of view, or field, of the camera is the angle subtended at the pinhole by the film; for film of height h it is 2 tan^-1(h/(2d)). The ideal pinhole camera has an infinite depth of field.
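A small C sketch of these projection equations (the function project is my own illustration, not part of any API mentioned here):

#include <stdio.h>

/* Project (x, y, z) through a pinhole at the origin onto the film
   plane z = -d:  x_p = -x/(z/d),  y_p = -y/(z/d). */
void project(double x, double y, double z, double d,
             double *xp, double *yp)
{
    *xp = -x / (z / d);
    *yp = -y / (z / d);
}

int main(void)
{
    double xp, yp;
    project(1.0, 2.0, -4.0, 2.0, &xp, &yp);   /* a point at z = -4, d = 2 */
    printf("(%g, %g, %g)\n", xp, yp, -2.0);   /* prints (0.5, 1, -2)      */
    return 0;
}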
34
The Human Visual System
Our extremely complex visual system has all the components of a physical imaging system, such as a camera or a microscope.
35
Human Visual System - HVS
The rods and cones are excited by electromagnetic energy in the 350 – 780 nm range. The sizes of the rods and cones determine the resolution of the HVS – our visual acuity. The sensors in the human eye do not react uniformly to light energy at different wavelengths.
36
Human Visual System - HVS
37
Human Visual System
The HVS response differs for single-frequency red, green, and blue light. The relative brightness response at different frequencies is known as the Commission Internationale de l'Eclairage (CIE) standard observer curve; it matches the sensitivity of the monochromatic sensors used in black-and-white film and video cameras. The HVS is most sensitive to green.
38
Human Visual System
There are three different types of cones in the HVS, sensitive to blue, green, and yellow light; the third is often reported as red for compatibility with cameras and film.
39
Synthetic Camera Model
An image generated by the computer on the basis of an optical system follows the synthetic-camera model:
the viewer behind the camera can move the back of the camera – a change of the distance d, i.e., additional flexibility
object and viewer specifications are independent – they are handled by different functions within a graphics library
40
Synthetic Camera Model
The object specification is independent of the viewer specification; in a graphics library we therefore expect separate functions for specifying objects and the viewer. We can compute the image using simple trigonometric calculations. In the usual figure, (a) shows the situation with a real camera and (b) the mathematical model, in which the image plane is moved in front of the camera: the center of projection is the center of the lens, and the projection plane is the film plane.
41
Synthetic Camera Model
Not all objects can be seen: the viewing angle imposes a limit. The solution is a clipping rectangle, or clipping window, placed in front of the camera; part (b) of the figure shows the case where the clipping rectangle is shifted aside, so that only part of the scene is projected.
42
Some Adjustments
Symmetry in projections: move the image plane in front of the lens.
43
Constraints: clipping. We must also consider the limited size of the image.
44
The Programmer’s Interface
There are numerous ways for a user to interact with a graphics system using input devices – pads, mice, keyboards, etc. Coordinate systems may be oriented differently: canvas style versus OpenGL, for example.
45
Application Programmer’s Interfaces
What is an API? Why do you want one? The API functionality should match the conceptual model; the synthetic-camera model is used by APIs such as OpenGL, PHIGS, Direct3D, Java 3D, and VRML.
46
Pen Plotter Model – a typical example of "sequential access"
Two basic functions for drawing:
moveto(x, y) – pen up
lineto(x, y) – pen down

moveto(0,0); lineto(1,0); lineto(1,1); lineto(0,1); lineto(0,0); { draws a square }
moveto(0,1); lineto(0.5,1.866); lineto(1.5,1.866); lineto(1.5,0.866); lineto(1,0);
moveto(1,1); lineto(1.5,1.866); { completes a cube drawn in oblique projection }
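One plausible realization of the two functions in C, keeping the current pen position as the only state (draw_line_segment is a hypothetical device primitive, stubbed so the sketch stands alone):

/* The pen-plotter model keeps one piece of state: the current pen position. */
static double cur_x = 0.0, cur_y = 0.0;

/* Hypothetical device primitive. */
static void draw_line_segment(double x0, double y0, double x1, double y1)
{
    (void)x0; (void)y0; (void)x1; (void)y1;   /* device-specific drawing */
}

void moveto(double x, double y)   /* pen up: just move the pen */
{
    cur_x = x;
    cur_y = y;
}

void lineto(double x, double y)   /* pen down: draw from the current position */
{
    draw_line_segment(cur_x, cur_y, x, y);
    cur_x = x;
    cur_y = y;
}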
47
Three-Dimensional APIs – the Synthetic-Camera Model
If we are to follow the synthetic-camera model, we need functions in the API to specify:
Objects
The viewer
Light sources
Material properties
48
Objects
Objects are defined by points or vertices, line segments, polygons, etc., which are combined to represent complex objects. API primitives are displayed rapidly on the hardware. The usual API primitives are:
points
line segments
polygons
text
The following code fragment defines a triangle in OpenGL:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
49
The Viewer
Camera specification in APIs:
position – usually the center of the lens
orientation – a camera coordinate system with its origin at the center of the lens; the camera can rotate about those three axes
focal length of the lens – determines the size of the image on the film, in effect the viewing angle
film plane – the camera's film has a height and a width
50
Application Programmer’s Interface
Two coordinate systems are used:
world coordinates, where the object is defined
camera coordinates, where the image is to be produced
A transformation converts between the two coordinate systems; alternatively, the camera can be specified directly with functions such as gluLookAt(cam_x, cam_y, cam_z, look_at_x, look_at_y, look_at_z, ...) and gluPerspective(field_of_view, ...), as sketched below.
Lights have a location, strength, color, and directionality. Material properties are attributes of objects; the observed visual properties of objects are determined jointly by material and light properties.
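For example, a camera specification using the OpenGL utility library might look like the following sketch; the numeric values are illustrative, not prescribed by the slides:

#include <GL/glu.h>

void set_camera(void)
{
    /* Projection: 60-degree vertical field of view, 4:3 aspect ratio,
       near and far clipping distances. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0);

    /* Viewing: camera at (0, 0, 5) looking at the origin, y-axis up. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    /* eye position  */
              0.0, 0.0, 0.0,    /* look-at point */
              0.0, 1.0, 0.0);   /* up direction  */
}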
51
Lights
Light sources can be defined by their location, strength, color, and directionality. APIs provide a set of functions to specify these parameters for each source.
52
Materials
Material properties are characteristics, or attributes, of the objects. Such properties are usually defined through a series of function calls at the time each object is defined.
53
Sequence of Images
In Chapter 2 we begin our detailed discussion of the OpenGL API. Color Plates 1 through 8 show what is possible with available hardware and a good API; images like these are not difficult to generate.
54
Modeling - Rendering Paradigm
In many applications, modeling is separated from the production of an image, i.e., rendering (CAD systems, animation, etc.). In this case the modeling software and hardware may differ from the renderer's; the connection between the two parts can be simple, or highly complex using distributed environments.
55
Graphics Architectures
On one side of the API is the application program; on the other is some combination of hardware and software that implements the functionality of the API. Researchers have taken various approaches to developing architectures that support graphics APIs.
56
Early Systems: general-purpose computers with a single processing unit
In the early days of computer graphics, computers were so slow that refreshing even simple images, containing a few hundred line segments, would burden an expensive computer. Early graphics systems:
the CRT had only the basic capability to generate line segments connecting two points
vector-based with refreshing – the number and length of line segments were limited
a light pen was often used for manipulation
in systems with a storage CRT, the whole picture had to be redrawn if anything changed
57
Display Processors
The earliest attempts to build special-purpose graphics systems were concerned primarily with relieving the host of the task of refreshing the display. Display processors:
follow the standard architecture, with added capabilities to display primitives
the picture is composed at the host; the display processor's memory holds a display list containing the primitives to be displayed
58
Pipeline Architecture
The major advances in graphics architectures closely parallel the advances in workstations. For computer graphics applications, the most important use of custom VLSI circuits has been in creating pipeline architectures. A simple arithmetic pipeline computing a + b * c illustrates the idea: while the addition of a and (b * c) is being performed, the next product b * c is computed in parallel. Pipelining enables a significant speedup, and a similar approach can be used for processing geometric primitives as well.
59
Pipeline
If we think in terms of processing the geometry of our objects to obtain an image, we can use a block diagram with four major steps in the geometric pipeline:
1. Vertex processing – transformations such as scaling, rotation, translation, mirroring, shearing, etc.; homogeneous coordinates and matrix operations are used for these geometric transformations
2. Clipping – removal of those parts that are outside the viewing field
3. Rasterization
4. Fragment processing
60
Transformations
Many of the steps in the imaging process can be viewed as transformations between representations of objects in different coordinate systems, for example from the system in which the object was defined to the system of the camera. We can represent each change of coordinate systems by a matrix, and successive changes by multiplying (or concatenating) the individual matrices into a single matrix; a minimal sketch follows. All of this is discussed in Chapter 4.
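As a preview of that machinery, here is a minimal 4 x 4 concatenation in C (a sketch of mine; row-major storage is an assumption, not something the slides specify):

/* Concatenate two 4 x 4 transformations: c = a * b (row-major).
   Applying c to a point is equivalent to applying b first, then a. */
void mat4_multiply(const double a[4][4], const double b[4][4],
                   double c[4][4])
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < 4; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}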
61
Clipping
Why do we clip? Primitives that fall outside the viewing window cannot appear in the image, so there is no point in processing them further. Efficient clipping algorithms are developed in Chapter 7.
62
Projection
In general, three-dimensional objects are kept in three dimensions as long as possible as they pass through the pipeline. Eventually, though, they must be projected into two-dimensional objects. There are various projections we can implement; we shall see in Chapter 5 that this step can be carried out with 4 x 4 matrices and thus also fits into the pipeline.
63
Rasterization
Finally, our projected objects must be represented as pixels in the frame buffer; that is, we must squash our 3D objects into a 2D pixel representation. We discuss this scan-conversion, or rasterization, process in Chapter 7; a small sketch follows.
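To make scan conversion concrete before Chapter 7, here is a classic DDA line rasterizer sketch (my illustration; set_pixel is a hypothetical frame-buffer write, not an OpenGL call):

#include <stdlib.h>
#include <math.h>

/* Hypothetical frame-buffer write. */
static void set_pixel(int x, int y) { (void)x; (void)y; }

/* DDA scan conversion: step one pixel at a time along the longer axis,
   advancing the other coordinate by a constant fraction. */
void dda_line(int x0, int y0, int x1, int y1)
{
    int steps = abs(x1 - x0) > abs(y1 - y0) ? abs(x1 - x0) : abs(y1 - y0);
    double x = x0, y = y0;
    double dx = steps ? (x1 - x0) / (double)steps : 0.0;
    double dy = steps ? (y1 - y0) / (double)steps : 0.0;

    for (int i = 0; i <= steps; i++) {
        set_pixel((int)lround(x), (int)lround(y));
        x += dx;
        y += dy;
    }
}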
64
Performance Characteristics
There are two fundamentally different types of processing:
Front end – geometric processing, based on the processing of vertices; ideally suited to pipelining, and usually involving floating-point calculations.
Back end – direct manipulation of bits in the frame buffer; ideally suited to parallel bit processors.
65
Graphics Programming
The OpenGL API will be used for development. It is sufficient to let you program many interesting two- and three-dimensional problems and to familiarize you with the basic graphics concepts. Programs execute without modification on a three-dimensional system, and the functionality is sufficient for writing sophisticated 2D programs without user interaction. We start with an example of a simple application.
66
Sierpinski Gasket – a typical example of "random access"
Three vertices (x1, y1), (x2, y2), (x3, y3) are given.
1. Pick an initial point at random inside the triangle.
2. Select one of the three vertices at random.
3. Find the point halfway between the initial point and the randomly selected vertex.
4. Display this new point by putting some sort of marker, such as a small circle, at its location.
5. Replace the initial point with this new point.
6. Return to step 2.
67
Sierpinski Gasket
A possible form of the program might be:

main()
{
    initialize_the_system();
    for (some_number_of_points) {
        pt = generate_a_point();
        display_the_point(pt);
    }
    cleanup();
}

The final program in OpenGL will be almost that simple, though slightly different.
68
GL Specifications
Multiple forms for functions: vertex functions have the general form glVertex[n][t][v], where:
n signifies the number of dimensions (2, 3, or 4)
t denotes the data type: i – integer (32 bits), f – float (32 bits), d – double (64 bits), b – signed char (8 bits), s – short (16 bits), and ub, us, ui – the unsigned variants
v, if present, indicates that the values are supplied through a pointer to an array
A few equivalent calls are shown below.
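For instance, the following calls are all instances of this naming scheme (in practice they appear between glBegin and glEnd):

GLfloat p[3] = {1.0f, 2.0f, 3.0f};

glBegin(GL_POINTS);
glVertex2i(5, 10);              /* n = 2, t = i: two 32-bit integers */
glVertex3f(1.0f, 2.0f, 3.0f);   /* n = 3, t = f: three 32-bit floats */
glVertex3fv(p);                 /* v: the same vertex via a pointer  */
glEnd();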
69
GL Specifications: OpenGL types
GLfloat and GLint are used instead of the float and int data types of C or Pascal. The header files contain definitions such as:
#define GLfloat float
so data types can be changed simply. Examples:
glVertex2i(GLint x, GLint y) – position specification in 2D
glVertex3f(GLfloat x, GLfloat y, GLfloat z) – position specification in 3D
70
GL Specifications
To store a 3D vertex, an array GLfloat vertex[3] can be declared and passed as glVertex3fv(vertex). Using vertices, a variety of objects can be defined; the vertices are grouped together with
glBegin(argument); ... glEnd();
where argument specifies the geometric type defined by the vertices.
71
GL specifications glBegin(GL_LINES); glVertex2f(x1,y1); glVertex2f(x2,y2); glEnd( ); /* specifies a line segment */ glBegin(GL_POINTS); glEnd( ); /* specifies two points */ Others possible arguments will be specified latter Consider a drawing area 500x500, lower left hand corner (0,0) and Sierpinski gasket to be drawn
72
Sierpinski Gasket & GL specifications
typedef GLfloat point2[2];   /* defines a point as a two-dimensional array */

void display(void)
{
    /* an arbitrary triangle */
    point2 vertices[3] = {{0.0, 0.0}, {250.0, 500.0}, {500.0, 0.0}};
    static point2 p = {75.0, 50.0};   /* set to any initial point */
    int j, k;
    int rand();   /* standard random number generator */

    for (k = 0; k < 5000; k++) {
        j = rand() % 3;   /* pick a random vertex from 0, 1, 2 */
        p[0] = (p[0] + vertices[j][0]) / 2;
        p[1] = (p[1] + vertices[j][1]) / 2;
        /* display the new point */
        glBegin(GL_POINTS);
        glVertex2fv(p);
        glEnd();
    }
    glFlush();   /* points are to be displayed as soon as possible */
}
73
Sierpinski Gasket & GL specifications
It is still necessary:
to write the core program
to specify the color of the drawing
to specify where on the screen the picture will appear
to specify how large the image will be
to create an area of the screen for our image – a window
to decide how much of our infinite drawing pad will appear on the screen
to decide how long the image will remain on the screen
These are important issues that will be resolved later.
74
Image generated by the program
Coordinate Systems
The user works in:
physical coordinates
device (DC) / raster coordinates – canvas style, integer values
world coordinates (WC) – float values, with a transformation defined to device coordinates (the window-viewport transformation)
Advanced mapping introduces a normalized device coordinate system (NDC): WC → NDC, then NDC → DC for each device.
75
Summary
In this chapter we have set the stage for our top-down development of computer graphics. Computer graphics is a method of image formation that should be related to classical methods, in particular to cameras. Our next step is to explore the application side of computer graphics programming: let's start programming with the OpenGL API!