Geometric Objects and Transformations (I)

Geometric Objects and Transformations (I)
Ming Jiang, School of Mathematical Sciences, Peking University
Based on [EA], Chapter 3. Last updated: Wednesday, April 17, 2019, 11:55:23.

Mathematics of Computer Graphics
How to represent basic geometric types
How to convert between various representations
OpenGL functions

Outline
Scalars, Points and Vectors
3D Primitives
Coordinate Systems and Frames
Modeling a Colored Cube

3D Primitives
In 3D, objects have a third degree of freedom: they are not restricted to a single plane.
3D curves
3D surfaces
Volumetric objects

Complexity and Efficiency
The mathematics for 3D objects can become complex, yet we still want efficient implementations. This requirement restricts the 3D objects that can be supported on existing graphics systems.

Three Features
Modern graphics systems do best at rendering triangular, or other flat, polygons. Three features characterize 3D objects that fit well with existing graphics hardware and software:
The objects are described by their surfaces and can be thought of as being hollow. We need only 2D primitives to model 3D objects, because a surface is a 2D rather than a 3D entity.
The objects can be specified through a set of vertices in 3D. We can use a pipeline architecture to process vertices at high rates.
The objects either are composed of, or can be approximated by, flat convex polygons. Planar polygons avoid ambiguity about the interior in 3D. Complex surfaces can be tessellated into triangular polygons.

Coordinate Systems
Three linearly independent vectors define a coordinate system. Any vector can be represented uniquely as a linear combination of the three basis vectors. Vectors have direction and magnitude, but lack a position attribute.

Frames
A frame consists of an origin and a set of basis vectors. Any vector can be represented uniquely as a linear combination of the basis vectors, and any point can be represented uniquely as the origin plus such a combination (see the sketch below). Both vectors and points are geometric objects that exist independently of any frame, but we have to work with their representations in a particular reference system.
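A sketch of these two representations, in the notation of [EA], Chapter 3, with basis vectors v_1, v_2, v_3 and frame origin P_0 (the coefficient names are assumptions about the original slide):

    w = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3            (a vector)
    P = P_0 + \beta_1 v_1 + \beta_2 v_2 + \beta_3 v_3          (a point)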

Changes of Coordinate Systems
In OpenGL, we define our objects using the coordinate system or frame that is natural for the application, known as the world or user frame. We then need to find out how objects appear to the camera; this is done by the model-view matrix. Hence, we must determine how the representation of a vector or point changes when we change the coordinate system or frame.

Transformation for Vectors
Suppose that (v_1, v_2, v_3) and (u_1, u_2, u_3) are two bases. Each vector of the new basis can be written in terms of the old one, which defines a 3x3 change-of-basis matrix that transforms the representation of any vector from one basis to the other (see the sketch below).
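A sketch of the relations this slide refers to, again in the notation of [EA] (the \gamma coefficients and the matrix M are assumed names):

    u_i = \gamma_{i1} v_1 + \gamma_{i2} v_2 + \gamma_{i3} v_3,   i = 1, 2, 3,   i.e.   u = M v   with   M = [\gamma_{ij}].

If a and b are the representations of the same vector w in the bases (v_1, v_2, v_3) and (u_1, u_2, u_3), then w = a^T v = b^T u = b^T M v, and therefore a = M^T b.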

Homogeneous Coordinates
Given a frame (v_1, v_2, v_3, P_0), a point P is represented, using matrix multiplication with the frame, by a 4D row matrix whose last component is 1, and a vector by a 4D row matrix whose last component is 0 (see the sketch below). The 4D row matrix is called the homogeneous-coordinate representation of the point (or vector) in this frame.
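In this notation the homogeneous-coordinate representations read (a sketch, following [EA]):

    P = [\beta_1 \;\; \beta_2 \;\; \beta_3 \;\; 1] \, [v_1 \;\; v_2 \;\; v_3 \;\; P_0]^T,
    w = [\alpha_1 \;\; \alpha_2 \;\; \alpha_3 \;\; 0] \, [v_1 \;\; v_2 \;\; v_3 \;\; P_0]^T,

so a point carries a 1 in the fourth component and a vector carries a 0.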

Transformation for Frames
Suppose that (v_1, v_2, v_3, P_0) and (u_1, u_2, u_3, Q_0) are two frames. The basis vectors and origin of the new frame can be expressed in terms of the old frame, which defines a 4x4 matrix that transforms homogeneous-coordinate representations from one frame to the other (see the sketch below).
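A sketch of the frame-to-frame relation, extending the 3x3 case above (notation assumed, following [EA]): expressing the new frame (u_1, u_2, u_3, Q_0) in the old frame (v_1, v_2, v_3, P_0),

    u_i = \gamma_{i1} v_1 + \gamma_{i2} v_2 + \gamma_{i3} v_3,   i = 1, 2, 3,
    Q_0 = \gamma_{41} v_1 + \gamma_{42} v_2 + \gamma_{43} v_3 + P_0,

defines the 4x4 matrix

    M = \begin{pmatrix} \gamma_{11} & \gamma_{12} & \gamma_{13} & 0 \\ \gamma_{21} & \gamma_{22} & \gamma_{23} & 0 \\ \gamma_{31} & \gamma_{32} & \gamma_{33} & 0 \\ \gamma_{41} & \gamma_{42} & \gamma_{43} & 1 \end{pmatrix},

and the homogeneous representations a (old frame) and b (new frame) of the same point or vector satisfy a = M^T b.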

Homogeneous Coordinates
Homogeneous coordinates provide a unified representation of vectors, points, and transformations as matrix multiplications. Modern hardware implements homogeneous-coordinate operations directly, using parallelism to achieve high performance. The price is that we have to work in 4D to solve 3D problems. The main reason for using homogeneous coordinates, however, is the perspective viewing transformation.

Working with Representations
In OpenGL we work with representations in the world frame. Changes of frame take place when transformation functions are invoked. The transformation matrix can be found easily when we work with coordinate representations.

Finding the Transformation Matrix
Changes of representation from an old frame to a new one are given by a matrix equation. To find that matrix, consider the inverse problem first: the matrix built from the representation of the new frame in terms of the old frame converts representations in the new frame to representations in the original frame.

Hence, the representation of a new frame in terms of an old frame provides the matrix that converts representations in the new frame to those in the old one ("new in old" gives "new to old"), and its inverse converts in the other direction (a short restatement follows).
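Stated with the matrix M sketched above (an assumption about the slide's notation): if b is a homogeneous representation in the new frame and a the representation of the same point or vector in the old frame, then a = M^T b, and conversely b = (M^T)^{-1} a.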

Frames in OpenGL
There are two frames in OpenGL: the camera frame and the world frame. The model-view matrix positions the world frame relative to the camera frame; it converts the homogeneous-coordinate representations of points and vectors in the world frame to their representations in the camera frame.
[Figures: the camera and world frames in the default position, and after applying the model-view matrix.]

Camera Frame
The y-direction: the up direction of the camera.
The negative z-direction: the direction the camera is pointing.
The x-direction: completes a right-handed frame.
Changes of frame are represented by model-view matrices, which convert representations in the world frame to those in the camera frame.

Example: Moving the Camera
To move the camera away from the objects, or the objects away from the camera, we either move the camera frame relative to the world frame or position the world frame relative to the camera frame (see the code sketch below).
[Figure: the model-view matrix relates the world frame to the camera frame.]
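As an illustration, in the legacy fixed-function API this "move the world away from the camera" step might be written as follows; d is a hypothetical distance variable, while the calls themselves (glMatrixMode, glLoadIdentity, glTranslatef) are standard OpenGL 1.x:

    glMatrixMode(GL_MODELVIEW);    /* subsequent matrix calls modify the model-view matrix */
    glLoadIdentity();              /* start from the default: world and camera frames coincide */
    glTranslatef(0.0f, 0.0f, -d);  /* slide the world frame d units along -z, away from the camera */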

Modeling a Colored Cube
To draw a rotating cube, we need to perform the following tasks:
Modeling
Converting to the camera frame
Clipping
Projecting
Removing hidden surfaces
Rasterizing
Example: cube.exe

Modeling a Cube
There are a number of ways to model a cube. At the end of the pipeline, the hardware processes it as an object consisting of 8 vertices. We use a surface-based model: the cube is modeled with the 6 polygons that define its faces, called its facets.

Inward and Outward Faces
We have to be careful about the order in which we specify the vertices of a 3D polygon. Each polygon has two faces. We call a face outward facing if its vertices are traversed in counterclockwise order when the face is viewed from the outside, as determined by its normal direction. Each face is defined by 4 ordered vertices; in our example it is important to define the order as 0, 3, 2, 1, rather than as 0, 1, 2, 3.

Vertices of the cube in cube.c: GLfloat vertices[][3]
Facets of the cube in cube.c: void polygon(int a, int b, int c, int d)
The cube in cube.c: void colorcube(void)
(A sketch of these three pieces follows.)
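A minimal sketch of how these three pieces might look, assuming the conventional unit-cube layout of [EA]'s cube example; the exact coordinates and face ordering in cube.c may differ, and colors[][3] is the per-vertex color array introduced on the next slide:

    GLfloat vertices[][3] = {          /* the 8 corners of a cube centered at the origin */
        {-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0},
        {-1.0,-1.0, 1.0}, {1.0,-1.0, 1.0}, {1.0,1.0, 1.0}, {-1.0,1.0, 1.0}
    };

    void polygon(int a, int b, int c, int d)   /* draw one facet from 4 ordered vertex indices */
    {
        glBegin(GL_POLYGON);
            glColor3fv(colors[a]); glVertex3fv(vertices[a]);
            glColor3fv(colors[b]); glVertex3fv(vertices[b]);
            glColor3fv(colors[c]); glVertex3fv(vertices[c]);
            glColor3fv(colors[d]); glVertex3fv(vertices[d]);
        glEnd();
    }

    void colorcube(void)                       /* the 6 outward-facing facets */
    {
        polygon(0,3,2,1);
        polygon(2,3,7,6);
        polygon(0,4,7,3);
        polygon(1,2,6,5);
        polygon(4,5,6,7);
        polygon(0,1,5,4);
    }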

The Color Cube
We assign a color to each vertex. Colors in cube.c: GLfloat colors[][3]. There are many ways to use the vertex colors to fill a facet, i.e., to interpolate color across a polygon. The most common method is bilinear interpolation.
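A sketch of such a color array, one RGB triple per vertex; the particular values are illustrative rather than necessarily those of cube.c:

    GLfloat colors[][3] = {
        {0.0,0.0,0.0}, {1.0,0.0,0.0}, {1.0,1.0,0.0}, {0.0,1.0,0.0},
        {0.0,0.0,1.0}, {1.0,0.0,1.0}, {1.0,1.0,1.0}, {0.0,1.0,1.0}
    };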

Bilinear Interpolation
Final program: cube.c, cube.exe

How Is Color Interpolated?
void glHint(GLenum target, GLenum hint);
Controls certain aspects of OpenGL behavior. The target parameter indicates which behavior is to be controlled. The hint parameter can be:
GL_FASTEST, to indicate that the most efficient option should be chosen;
GL_NICEST, to indicate the highest-quality option;
GL_DONT_CARE, to indicate no preference.
The interpretation of hints is implementation-dependent; an OpenGL implementation can ignore them entirely. (OpenGL Programming Guide, 7th edition, p. 268.)
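For example, an application that prefers perspective-correct interpolation can request it as shown below; since hints are only advisory, the implementation may still fall back to linear interpolation:

    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);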

The GL_PERSPECTIVE_CORRECTION_HINT target parameter refers to how color values and texture coordinates are interpolated across a primitive: either linearly in screen space (a relatively simple calculation) or in a perspective-correct manner (which requires more computation). Often, systems perform linear color interpolation because the results, while not technically correct, are visually acceptable; however, in most cases, textures require perspective-correct interpolation to be visually acceptable. Thus, an OpenGL implementation can choose to use this parameter to control the method used for interpolation. OpenGL Programming Guide, 7th edition, Page 268.

One algorithm that achieves the required behavior (linear interpolation) is to triangulate a polygon (without adding any vertices) and then treat each triangle individually as already discussed. A scan-line rasterizer that linearly interpolates data along each edge and then linearly interpolates data across each horizontal span from edge to edge also satisfies the restrictions. (OpenGL specification 3.1, p. 99.)
Example: two-spheres.c

Vertex Arrays
Vertex arrays allow us to define a data structure using vertices and pass this structure to the implementation. When the objects need to be drawn, we can ask OpenGL to traverse the structure with just a few function calls. There are 3 steps in using vertex arrays:
Enable the functionality of vertex arrays;
Tell OpenGL where and in what format the arrays are;
Render the object.

OpenGL allows 6 different types of arrays: vertex, color, color index, normal, texture coordinate, and edge flag, corresponding to 6 items that can be set between a glBegin and a glEnd. For the color cube, we only need colors and vertices. We enable the corresponding arrays by
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
The arrays are the same as before: cubev.c

We tell OpenGL where the arrays are:
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(3, GL_FLOAT, 0, colors);
The first 3 arguments state that the elements are 3D vertices and colors, stored as floats, and that the elements are tightly packed (stride = 0). Now we have to provide, in our data structure, the information about the relationship between the vertices and the faces of the cube. This is done by specifying an array that holds the 24 ordered vertex indices for the 6 faces: GLubyte cubeIndices[] in cubev.c (a sketch follows).
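A sketch of such an index array, consistent with the outward-facing face ordering used in the colorcube() sketch earlier; the exact ordering in cubev.c may differ:

    GLubyte cubeIndices[] = {   /* 4 indices per face, 6 faces, 24 indices in total */
        0, 3, 2, 1,
        2, 3, 7, 6,
        0, 4, 7, 3,
        1, 2, 6, 5,
        4, 5, 6, 7,
        0, 1, 5, 4
    };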

We use the following function to render the cube:
void glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid *indices);
mode: the kind of primitive to render, e.g., GL_POLYGON or GL_QUADS;
count: the number of indices to render;
type: the data type of the values in the index array;
indices: a pointer to the first index to use.
One way to render the cube is
for (i = 0; i < 6; i++)
    glDrawElements(GL_POLYGON, 4, GL_UNSIGNED_BYTE, &cubeIndices[4*i]);
We can also render the cube with the single function call
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
Final program: cubev.c, cubev.exe