Introduction to Computer Graphics EEL 5771-001 PPT3: Attributes of Graphics Primitives Karthick Rajendran U3116-6141
Introduction Color and Grey Scale Line Attributes Pen and Brush Options Line Style Curve Attributes Fill-Area Attributes Scan-Line Polygon Fill Scan-Line Fill Methods Wire-Frame Methods Character Attributes Anti-aliasing
Attributes of Graphics Primitives : Introduction Any parameter that affects the way a graphics primitive is displayed is referred to as an attribute parameter. Some attribute parameters are size, color, style, font, and orientation. There are two ways to incorporate attribute options into a graphics package: extend the parameter list associated with each output primitive function to include the appropriate attributes, or maintain a system list of current attribute values and use separate functions to set those attributes.
Introduction (Ctd.) A few attributes of various primitives are shown below.
Color and Grey Scale A basic attribute for all primitives is color. Various color and intensity-level options can be made available to a user, depending on the capabilities and design objectives of a particular system. There are a number of color models in common use – RGB – CMY – HSB. We generally use RGB colors in OpenGL. Colors are represented by color codes, which are positive integers. Color information is stored either directly in the frame buffer or in a separate color table, with the pixel values used as indices into that table.
Color : RGB Colors RGB colors are specified by giving the amount of red, green, and blue needed to create the color. RGB is an additive color model, like mixing colored lights: – No light at all is black. – Putting all the colors together at full intensity makes white. – This model matches what happens in a CRT, where each pixel emits some amount of red, green, and blue light. Color values can be given as integers from 0 to 255 or as floating-point numbers from 0 to 1. RGBA colors have a fourth value (alpha) that represents transparency.
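As a small illustration (a minimal sketch, not taken from the slides), the current RGB or RGBA drawing color in OpenGL can be set with glColor3f/glColor4f using floating-point components in the range 0.0 to 1.0:

/* Minimal sketch: setting the current drawing color in OpenGL. */
void setExampleColors(void)
{
    glColor3f(1.0f, 0.0f, 0.0f);        /* pure red                            */
    glColor3f(1.0f, 1.0f, 1.0f);        /* red + green + blue at full = white  */
    glColor4f(0.0f, 0.0f, 1.0f, 0.5f);  /* RGBA: blue with alpha = 0.5         */
}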
Color : Color Tables A color table is useful when your frame buffer doesn’t have enough bits to support all the colors you want to use. If you need 24-bit color values and have only 8 bits per pixel, you make a table with 256 entries, each of which is a 24-bit color. For each pixel, the frame buffer holds an index into the table, which needs only 8 bits. By switching between different color tables you can use more than 2^8 = 256 colors in your graphics. Color-index mode is used in OpenGL to work with color tables. The figure illustrates a color look-up table.
Color : Color Tables 8-Bit Color: 8-bit color graphics is a method of storing image information in a computer's memory or in an image file such that each pixel is represented by one 8-bit byte. The maximum number of colors that can be displayed at any one time is 256.
Color : Color Tables 24-Bit Color (True Color): True color uses 24 bits for the three RGB components. It provides a method of representing and storing graphical-image information (especially in computer processing) in an RGB color space such that a very large number of colors, shades, and hues can be displayed in an image, as in high-quality photographic images or complex graphics. Usually, true color is defined to mean 256 shades each of red, green, and blue, for a total of 2^24 = 16,777,216 color variations. The human eye can discriminate up to about ten million colors. "True color" can also refer to an RGB display mode that does not need a color look-up table.
Color : Color Look-up Table A color look-up table (CLUT) is a mechanism used to transform a range of input colors into another range of colors. It can be a hardware device built into an imaging system or a software function built into an image-processing application. The hardware color look-up table converts the logical color (pseudo-color) numbers stored in each pixel of video memory into physical colors, normally represented as RGB triplets, that can be displayed on a computer monitor.
Color : Color Look-up Table A common example would be a palette of 256 colors; that is, the number of entries is 256, and each entry is addressed by an 8-bit pixel value. These 8 bits are known as the color depth, bit depth, or bits per pixel (bpp). The 256-color palette is shown below:
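A minimal sketch of how an 8-bit indexed frame buffer resolves pixel values through a 256-entry look-up table (the buffer dimensions and names here are illustrative assumptions):

#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;     /* one 24-bit palette entry           */

RGB     palette[256];                        /* the color look-up table            */
uint8_t frameBuffer[480][640];               /* 8 bits per pixel: a palette index  */

/* Resolve the physical color of a pixel through the look-up table. */
RGB pixelColor(int x, int y)
{
    return palette[frameBuffer[y][x]];
}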
Gray Scale Level A greyscale image is one in which the value of each pixel is a single sample; that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. When an RGB color setting specifies an equal amount of red, green, and blue, the result is some shade of gray. Values close to 0 for the color components produce dark gray, and higher values near 1.0 produce light gray.
Gray Scale Level Applies to monitors that have no color. Shades of grey run from white through light grey and dark grey to black. Color codes are mapped onto grayscale codes. 2 bits give 4 levels of grayscale; 8 bits per pixel allow 256 levels. Dividing the color code by 256 gives a value in the range 0 to 1. Example: a color code of 118 on a color display maps to the grayscale value 118/256 = 0.46, a light gray.
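The mapping described above can be written as a one-line function; the luminance-weighted variant shown second is a common alternative and is an addition here, not part of the slide:

/* Grayscale value from a display color code (0-255), as described above. */
float colorCodeToGray(int colorCode)
{
    return (float)colorCode / 256.0f;     /* e.g. 118 / 256 = 0.46 (light gray) */
}

/* Common alternative (assumption): weight RGB components by perceived luminance. */
float rgbToGray(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}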
Line Attributes The attributes for a line are color, width, and style. For wide lines we have to worry about what the ends and intersections look like. Lines can be drawn in OpenGL using GL_LINES in glBegin or as the fill mode for a polygon; in addition, there are GL_LINE_STRIP and GL_LINE_LOOP. The current color is used. Line types: Solid. Dotted – very short dashes with spacing equal to or greater than the dash itself. Dashed – displayed by generating an inter-dash spacing; the pixel counts for the dash span and inter-dash span are specified by a mask, e.g. 111100011110001111.
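A minimal sketch of immediate-mode OpenGL line drawing with the width attribute set (the vertex coordinates are arbitrary examples):

void drawLines(void)
{
    glColor3f(1.0f, 1.0f, 0.0f);             /* the current color is used          */
    glLineWidth(3.0f);                       /* line-width attribute               */

    glBegin(GL_LINES);                       /* each vertex pair is one segment    */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
    glEnd();

    glBegin(GL_LINE_STRIP);                  /* connected polyline                 */
        glVertex2f(-0.5f,  0.5f);
        glVertex2f( 0.0f,  0.0f);
        glVertex2f( 0.5f,  0.5f);
    glEnd();
}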
Line Attributes Displaying Thick Lines. Thick (wide) lines can be displayed in several ways. One way is to draw multiple 1-pixel-wide lines next to each other to get the desired width.
Line Attributes Replace each pixel in a line by either a horizontal or vertical span of the appropriate number of pixels. Use the slope to decide which direction the span goes. Figure out the vertices of a filled rectangle of the right height and width with the line centered inside it.
Line Attributes Use a brush pattern. Ensuring proper ending of thick lines: Thick lines are drawn with (a) butt caps, (b) round caps, and (c) projecting square caps.
Line Attributes For Lines that intersect: With square end caps, the intersection between two wide lines can look messy if we don’t do something to correct it. Most graphics packages allow you to specify a miter join, a round join or a bevel join.
Line Styles Sometimes we want something other than solid lines, for example dashed or dotted lines. One common approach is to specify a pixel mask that indicates how many contiguous pixels are on to make the dashes and how many are off to make the spaces between them. For example, 11110000 gives dashes and spaces of the same length, while 11111000 gives dashes that are longer than the spaces. Note that when counting pixels like this, the dashes will appear to have different lengths for different orientations. A dashed line could also be drawn as a sequence of line segments.
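In OpenGL the pixel mask for dashed lines is a 16-bit stipple pattern; a minimal sketch (the pattern value is just an example):

void drawDashedLine(void)
{
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1, 0x00FF);      /* repeat factor 1, mask 0000000011111111:
                                      equal-length dashes and gaps             */
    glBegin(GL_LINES);
        glVertex2f(-0.8f, 0.0f);
        glVertex2f( 0.8f, 0.0f);
    glEnd();
    glDisable(GL_LINE_STIPPLE);
}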
Pen and Brush Options Pen and Brush The selected "pen" or "brush" determines how a line is drawn. Pens and brushes have size, shape, color, and pattern attributes. A pixel mask can be applied to both.
Pen and Brush Options Pen and Brush : Common Examples
Curved Attributes Curve attributes are basically the same as for lines, except that the different approaches lead to slightly different appearances. Thicker curves can be produced by: 1. Plotting additional pixels 2. Filling the space between two concentric circles 3. Using a thicker pen or brush Using horizontal and vertical pixel spans depending on the slope: (Fig.) Circular arc of width 4 plotted with pixel spans.
Curved Attributes Filling the space between two concentric circles. Use a brush (square in this case):
Curved Attributes Examples of Common Pen shapes
Fill-Area Attributes Three ways to fill an area : 1. with nothing 2. with a solid color 3. with a pattern - either apply the pattern to the region or blend it with the background color.
Fill-Area Attributes Three basic fill styles are: 1. hollow with a color border – the interior color is the same as the background
Fill-Area Attributes 2. filled with a solid color – the fill color extends up to and including the border 3. pattern – the fill pattern is controlled by a separate pattern table
Fill-Area Attributes Area fill with logic operators.
Introduction to Polygons Different types of polygons: simple convex, simple concave, non-simple (self-intersecting), and polygons with holes. (Figure: convex, concave, and self-intersecting polygons.)
Introduction to Polygons Convex: a region S is convex iff for any x1 and x2 in S, the straight line segment connecting x1 and x2 is also contained in S. The convex hull of an object S is the smallest convex set H such that S is contained in H. Rendering: scan-line area fill The algorithm (Hearn, Baker, Carithers, Computer Graphics) Scan-line algorithms Purpose: given a set of (2D) vertex coordinates for a polygon, fill the area surrounded by the polygon. Polygons can be filled with a uniform colour or a texture.
Scan Line Algorithm - Outline Scan conversion is essential in rendering, i.e. the conversion of geometric entities into image pixels. It is used, for example, in – display of polygons – hidden-surface removal – texture mapping For each scan line (each y coordinate): Compute the x coordinates of the intersections of the current scan line with all edges. Sort these edge intersections by increasing x value. Group the edge intersections into pairs (vertex intersections require special processing). Fill in the pixels on the scan line between each pair of x values.
Scan Line Algorithm 1 Build the edge table and initialise the scan-line list 2 Find the first non-empty bucket (ymin) in the edge table 3 While the current ymin is not greater than the topmost y coordinate of the polygon 3.1 merge the edge-table bucket for the current ymin into the scan-line list, sorted on increasing xmin 3.2 fill pixels between pairs of rounded xmin values 3.3 remove from the scan-line list the edges whose ymax = current scan line 3.4 increment xmin by the increment (1/m) for the remaining edges in the scan-line list 3.5 re-sort the scan-line list on increasing xmin 3.6 increment the current scan line ymin to give the next scan line
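A compact sketch of step 3.2 for one scan line, filling between pairs of sorted intersections; findIntersections, sortAscending and setPixel are assumed helpers, not part of the slides:

#include <math.h>

#define MAX_INTERSECTIONS 64                 /* illustrative upper bound */

void fillScanLine(int y)
{
    float xs[MAX_INTERSECTIONS];
    int   n = findIntersections(y, xs);      /* x of each edge crossed by scan line y */

    sortAscending(xs, n);                    /* left-to-right order                   */

    for (int i = 0; i + 1 < n; i += 2)       /* fill between each pair of crossings   */
        for (int x = (int)ceilf(xs[i]); x <= (int)floorf(xs[i + 1]); x++)
            setPixel(x, y);
}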
Scan Line Polygon Fill Algorithms A standard output primitive in a general graphics package is a solid-color or patterned polygon area. There are two basic approaches to area filling on raster systems: (1) determine the overlap intervals for scan lines that cross the area; (2) start from a given interior point and paint outward from that point until we encounter the boundary. The first approach is mostly used in general graphics packages; the second approach is used in applications with complex boundaries and in interactive painting systems. (Figure: successive scan lines y_k and y_k+1 crossing a polygon edge at (x_k, y_k) and (x_k+1, y_k+1).)
Scan Line Polygon Fill Algorithm (Figure: interior pixels along a scan line passing through a polygon area, with edge intersections at x = 10, 14, 18, and 24.) For each scan line crossing a polygon, the algorithm locates the intersection points of the scan line with the polygon edges. These intersection points are then sorted from left to right, and the corresponding frame-buffer positions between each intersection pair are set to the specified color.
Scan Line Polygon Fill Algorithm In the example on the previous slide, the four pixel intersections define the spans from x = 10 to x = 14 and from x = 18 to x = 24. Some scan-line intersections at polygon vertices require special handling: a scan line passing through a vertex intersects two polygon edges at that position, adding two points to the list of intersections for the scan line. In the figure, scan line y intersects five polygon edges, whereas scan line y' intersects four edges although it also passes through a vertex. The intersections along y' correctly identify the interior pixel spans, but scan line y needs some extra processing.
Scan line Polygon Fill Algorithm One way to resolve this is to shorten some polygon edges so as to split those vertices that should be counted as a single intersection. When the endpoint y coordinates of the two edges are monotonically increasing, the y value of the upper endpoint of the current edge is decreased by 1. When the endpoint y values are monotonically decreasing, we decrease the y coordinate of the upper endpoint of the edge following the current edge.
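The vertex test behind this rule can be sketched as follows (a hypothetical helper; prevY and nextY are the y coordinates of the vertices before and after the shared vertex along the polygon boundary):

int vertexIntersectionCount(int prevY, int vY, int nextY)
{
    /* Monotonic across the vertex: count it once (in practice, shorten the
       upper edge by one scan line instead of counting twice). */
    if ((prevY < vY && vY < nextY) || (prevY > vY && vY > nextY))
        return 1;
    /* Local minimum or maximum: keep both edge intersections. */
    return 2;
}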
Scan Line Polygon Fill Algorithm Adjusting endpoint y values for a polygon as we process edges in order around the polygon perimeter. The edge currently being processed is indicated as a solid line. In (a), the y coordinate of the upper endpoint of the current edge is decreased by 1. In (b), the y coordinate of the upper endpoint of the next edge is decreased by 1.
Otherwise, the shared vertex represents a local extremum (minimum or maximum) on the polygon boundary, and the two edge intersections with the scan line passing through that vertex can both be added to the intersection list. Figure: intersection points along scan lines that pass through polygon vertices. Scan line y generates an odd number of intersections, but scan line y' generates an even number of intersections that can be paired to identify the interior pixel spans correctly.
The scan conversion algorithm works as follows: Intersect each scan line with all edges. Sort the intersections in x. Use the parity of the intersections to determine in/out. Fill the "in" pixels. Special cases to be handled: Horizontal edges should be excluded. For vertices lying on scan lines, count twice when the slope direction changes; shorten the edge by one scan line when there is no change in slope direction. Coherence between scan lines tells us that edges that intersect scan line y are likely to intersect y + 1, and that x changes predictably from scan line y to y + 1.
We use two data structures: the Edge Table (ET) and the Active Edge List (AEL). Traverse the edges to construct the edge table: eliminate horizontal edges, and add each remaining edge to the linked list for the scan line corresponding to its lower vertex, storing y_upper: the last scan line to consider for the edge x_lower: the starting x coordinate of the edge 1/m: the x increment per scan line, computed from the edge endpoints. Construct the active edge list during scan conversion: the AEL is a linked list of the edges active on the current scan line y, and each entry holds x_lower (the edge's intersection with the current y) and 1/m (the x increment). The active edges are kept sorted by x.
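A sketch of the per-edge record used by both the edge table and the active edge list, with field names following the slide:

typedef struct Edge {
    int    yUpper;       /* y_upper: last scan line on which the edge is considered    */
    float  xLower;       /* x_lower: x at the lower endpoint, later the current
                            intersection with scan line y                              */
    float  invSlope;     /* 1/m = (x1 - x0) / (y1 - y0), the per-scan-line x increment */
    struct Edge *next;   /* linked-list pointer (edge-table bucket or AEL)             */
} Edge;

/* Advancing from scan line y to y+1: for every active edge,
       e->xLower += e->invSlope;
   edges with yUpper == y are removed, and the list is re-sorted on xLower. */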
Scan Line Fill
Scan Line Fill
Scan Line Fill
Scan Line Fill – Running the Algorithm
Scan Line Fill – Running the Algorithm
Scan Line Fill – Running the Algorithm
Scan line Fill Method – Using a Stack – Sample program
//The scanline fill algorithm using our own stack routines, faster
void floodFillScanlineStack(int x, int y, int newColor, int oldColor)
{
    if(oldColor == newColor) return;
    emptyStack();

    int y1;
    bool spanLeft, spanRight;

    if(!push(x, y)) return;

    while(pop(x, y))
    {   // process one seed point per iteration
Scan line Fill Method – Using a Stack – Sample Program
        y1 = y;
        // move up to the top of the run of oldColor pixels in this column
        while(y1 >= 0 && screenBuffer[x][y1] == oldColor) y1--;
        y1++;
        spanLeft = spanRight = 0;
        // walk down the column, recoloring and seeding the neighbouring columns
        while(y1 < h && screenBuffer[x][y1] == oldColor)
        {
            screenBuffer[x][y1] = newColor;
            if(!spanLeft && x > 0 && screenBuffer[x - 1][y1] == oldColor)
            {
                if(!push(x - 1, y1)) return;
                spanLeft = 1;
            }
            else if(spanLeft && x > 0 && screenBuffer[x - 1][y1] != oldColor)
            {
                spanLeft = 0;
            }
Scan line Fill Method – Using a Stack – Sample Program
            if(!spanRight && x < w - 1 && screenBuffer[x + 1][y1] == oldColor)
            {
                if(!push(x + 1, y1)) return;
                spanRight = 1;
            }
            else if(spanRight && x < w - 1 && screenBuffer[x + 1][y1] != oldColor)
            {
                spanRight = 0;
            }
            y1++;
        }   // end of column walk
    }       // end of while(pop(x, y))
}
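The listing relies on emptyStack(), push() and pop(), which are not shown on the slides. A minimal sketch of such stack routines (the capacity and storage layout are assumptions) could look like this, with pop() returning coordinates through reference parameters so the pop(x, y) calls above work unchanged:

#define STACK_SIZE 1048576                  // illustrative capacity; each point uses 2 slots

static int stack[STACK_SIZE];
static int stackPointer = 0;

void emptyStack() { stackPointer = 0; }

bool push(int x, int y)
{
    if (stackPointer + 2 > STACK_SIZE) return false;   // stack full
    stack[stackPointer++] = x;
    stack[stackPointer++] = y;
    return true;
}

bool pop(int &x, int &y)
{
    if (stackPointer == 0) return false;               // stack empty
    y = stack[--stackPointer];
    x = stack[--stackPointer];
    return true;
}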
Wire Frame Methods A wire-frame model is a visual presentation of a three-dimensional (3D) or physical object used in 3D computer graphics. It is created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using straight lines or curves. The object is projected onto a display screen by drawing lines at the location of each edge. The term wire frame comes from designers using metal wire to represent the three-dimensional shape of solid objects. Sample rendering of a wire-frame cube, icosahedron, and approximate sphere
Wire Frame Methods The following sets of images show a wireframe version, a wireframe version with hidden line removal, and a solid polygonal representation of the same object.
Character Attributes The appearance of displayed characters is controlled by attributes such as font, size, color, and orientation. Font style – underline, boldface, italics, outline, shadow (plain, bold, italic, underline, double underline, outline, shadow) Color Character spacing Height-to-width ratio can sometimes be manipulated The orientation of the text is specified by defining the up vector
Character Attributes Text can run horizontally or vertically, and forward or backward. The alignment of the text as a whole is given relative to some reference point (left, center, right, top, bottom).
Anti Aliasing What is Aliasing and Anti-Aliasing? Aliasing occurs when a continuous signal is discretized; this distortion of information due to low-frequency sampling (undersampling) is called aliasing. We can improve the appearance of displayed raster lines by applying antialiasing methods that compensate for the undersampling process. Anti-aliasing is the term for methods that reduce the unwanted aliasing artifacts.
Aliasing Aliasing explained through a set of images: the appearance of a texture-mapped triangle depends on the mapping between the triangle outline and the texture map.
Aliasing If the mapping is not one-to-one, significant artifacts can occur. Even when it is nearly one-to-one, artifacts can appear
Jaggies or Staircasing "Jaggies" is an informal name for artifacts that arise from poorly representing continuous geometry on a discrete 2D grid of pixels. Jaggies, also called staircase effects, are a manifestation of sampling error and loss of information (aliasing of high-frequency components by low-frequency ones).
Examples of Aliasing If the mapping is not one-to-one, significant artifacts can occur. Even when it is nearly one-to-one, artifacts can appear
Examples of Aliasing (additional example images)
Causes of Aliasing Aliasing is caused by undersampling: if we don't sample a continuous signal at a high enough rate, we won't have a good representation of the signal.
When Does Spatial Aliasing Occur? During image synthesis: when sampling a continuous (geometric) model to create a raster image; example: scan converting a line or polygon. Sampling: converting a continuous signal to a discrete signal. During image processing and image synthesis: when resampling a picture, as in image warping or texture mapping. Resampling: sampling a discrete signal at a different sampling rate. Example: "zooming" a picture from nx by ny pixels to s·nx by s·ny pixels. s > 1 is called upsampling or interpolation and can lead to a blocky appearance if point sampling is used; s < 1 is called downsampling or decimation and can lead to moiré patterns and jaggies.
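A sketch of the point-sampled ("nearest-neighbour") upsampling case mentioned above, for an integer zoom factor s; the single-channel buffer layout is an assumption:

void upsamplePointSampled(const unsigned char *src, int nx, int ny,
                          unsigned char *dst, int s)
{
    /* Each destination pixel simply copies the nearest source pixel,
       which is what produces the blocky appearance. */
    for (int y = 0; y < ny * s; y++)
        for (int x = 0; x < nx * s; x++)
            dst[y * nx * s + x] = src[(y / s) * nx + (x / s)];
}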
Anti - Aliasing : Sampling Representation, sampling, and reconstruction To understand (and reduce) aliasing, we need to understand the process of sampling. Representation in computer graphics: The original signal can be continuous … e.g., equation of a line, description of a scene. Sampling is done by the renderer. Reconstruction is done by the display and the eye.
Anti - Aliasing : Reconstruction Reconstructing a signal from sampled data Reconstruction approximates the signal between sample points The digitized signal must be reconstructed to present a continuous signal to the observer. There are many ways to perform the reconstruction
Anti - Aliasing : Reconstruction Reconstructing a signal from sampled data
Anti - Aliasing : Reconstruction Example A 2d Image with a poor sampling rate
Anti - Aliasing : Reconstruction Example Reconstructing a 2D image using one of the reconstruction filters
Anti - Aliasing
Anti - Aliasing
Anti – Aliasing : Challenges The analysis of nonuniform sampling and reconstruction remains challenging. In computer graphics the reconstruction filter is mostly determined by properties of the display device and our eyes, so we don't have much control over it: it depends on the physics of the display device, on the imaging properties of the eye, and on the image processing that goes on in the visual cortex of the brain. On the sampling side, analog filtering that avoids high-frequency aliasing while completely attenuating a frequency band less than two octaves above it is also quite a challenge.
Anti – Aliasing Computer Graphics System
Anti – Aliasing Anti-aliasing in the computer graphics system Enhance the renderer to help prevent aliasing and reduce aliasing artifacts
Anti – Aliasing : Graphical Objects The anti-aliasing approach depends on the form of the input data: graphical objects (e.g., lines, polygons, curved surface patches) or images (e.g., texture maps, environment maps). Graphical objects are generally specified mathematically. A line can be specified by its endpoints (x0,y0) and (x1,y1): (x(t), y(t)) = (1-t)(x0,y0) + t(x1,y1). A triangle can be specified by its three vertices (x0,y0), (x1,y1), (x2,y2), or by the edge-function tests F0(x,y) > 0, F1(x,y) > 0, F2(x,y) > 0. A sphere can be specified by its radius R and its center point (xc,yc,zc): (x-xc)^2 + (y-yc)^2 + (z-zc)^2 = R^2.
Anti – Aliasing Graphical objects (e.g., lines, polygons) Graphical objects are generally specified mathematically Input signal may have infinite frequency components e.g., from abrupt transitions from black to white
Anti – Aliasing Graphical objects may have infinite frequency components e.g., across edges of a polygon.
Anti – Aliasing How do you band-limit a piece of geometry? Render the image at a higher sampling rate and then filter the rendered image to reduce aliasing artifacts - Supersampling Apply a filtering to the geometry as we render - Analytic / Coverage-based filtering Use a different mathematical representation so shapes don’t have a sharp transition from outside to inside - Distance-based antialiasing
Anti Aliasing
Area Sampling Shade pixels according to the area covered by the thickened line. This is unweighted area sampling; a rough approximation is obtained by dividing each pixel into a finer grid of sub-pixels.
Unweighted Area Sampling Consider a line as having thickness (all good drawing programs do this). Consider pixels as little squares, and fill each pixel according to the proportion of its square covered by the line. Other variations weight the contribution according to where in the square the primitive falls. (Figure: fractional pixel coverage values along an antialiased line.)
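Coverage can be approximated with the finer-grid idea from the previous slide: test an n x n grid of sub-pixel sample points and take the hit fraction. insidePrimitive() is an assumed point-in-primitive test:

float pixelCoverage(int px, int py, int n)
{
    int hits = 0;
    for (int j = 0; j < n; j++)
        for (int i = 0; i < n; i++) {
            float x = px + (i + 0.5f) / n;   /* sub-sample position inside the pixel */
            float y = py + (j + 0.5f) / n;
            if (insidePrimitive(x, y)) hits++;
        }
    return (float)hits / (n * n);            /* coverage fraction in [0, 1]          */
}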
Alpha-based Anti-Aliasing Rather than setting the intensity according to coverage, set the alpha value according to coverage: the pixel gets the line color, but with alpha <= 1. This supports the correct drawing of primitives one on top of the other: draw back to front, and composite each primitive over the existing image. Only some hidden-surface-removal algorithms support this. (Figure: fractional pixel coverage values along an antialiased line.)
Super-sampling Sample at a higher resolution than required for display, and filter the image down. There are issues of which samples to take and how to average them. 4 to 16 samples per pixel is typical. Samples might be on a uniform grid, randomly positioned, or other variants. The number of samples can be adapted.
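A sketch of the filtering step of supersampling: the scene is rendered at k times the display resolution and each k x k block of samples is box-filtered (averaged) down to one display pixel. The flat single-channel buffer layout is an assumption:

void downsampleBox(const float *hires, float *lores, int loW, int loH, int k)
{
    for (int y = 0; y < loH; y++)
        for (int x = 0; x < loW; x++) {
            float sum = 0.0f;
            for (int j = 0; j < k; j++)
                for (int i = 0; i < k; i++)
                    sum += hires[(y * k + j) * (loW * k) + (x * k + i)];
            lores[y * loW + x] = sum / (float)(k * k);   /* average of the k*k samples */
        }
}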
Unweighted Area Sampling A primitive cannot affect the intensity of a pixel if it does not intersect the pixel. Equal areas cause equal intensity, regardless of the distance from the pixel center to the area: unweighted sampling colors two pixels identically when the primitive cuts the same area through the two pixels. Intuitively, a pixel cut through the center should be more heavily weighted than one cut along a corner.
Weighted Area Sampling A weighting function W(x,y) specifies the contribution that a primitive passing through the point (x,y) makes to the pixel's intensity, as a function of the distance of (x,y) from the pixel center. (Figure: intensity contribution W(x,y) plotted against distance from the pixel center.)
Antialiasing in OpenGL OpenGL calculates a coverage value for each fragment based on the fraction of the pixel square on the screen that it would cover. In RGBA mode, OpenGL multiplies the fragment's alpha value by its coverage. The resulting alpha value is used to blend the fragment with the corresponding pixel already in the frame buffer.
Antialiasing (Cont.)
Enabling Antialiasing Antialiasing is enabled using the glEnable() command; we can enable the GL_POINT_SMOOTH or GL_LINE_SMOOTH modes. In RGBA mode, you must also enable blending, using GL_SRC_ALPHA as the source factor and GL_ONE_MINUS_SRC_ALPHA as the destination factor. Using a destination factor of GL_ONE instead will make intersection points a little brighter.
Antialiasing Lines
void init(void)
{
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    draw_lines_here();   /* placeholder for the actual line-drawing code */
}
Antialiasing Example The following example demonstrates anti-aliasing using blending and line smoothing. The program renders a wire-frame dodecahedron and lets the user view it through the standard model-viewing controls, using the keyboard for axis positioning and the arrow and Page Up/Down keys for incrementing or decrementing the eye-point position. Pressing 'a' toggles antialiasing, pressing 'b' toggles between two different alpha-blending settings, and pressing 'l' varies the thickness of the object's lines.
Antialiasing Example Without anti-aliasing:
Antialiasing Example With anti-aliasing:
Antialiasing Example
#include <stdio.h>
#ifdef __FLAT__
#include <windows.h>
#endif
#include <gl/glut.h>
#include <stdlib.h>

static float rotAngle = 0;
static GLboolean antialiasFlag = GL_FALSE;
GLfloat nearClip, farClip;
GLdouble eyex = 0.0, eyey = 0.0, eyez = 15.0;

void init()
{
    GLfloat maxObjectSize = 3.0;
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_FLAT);
    nearClip = 1.0;
    farClip = nearClip + 80 * maxObjectSize;
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glLineWidth(4.5);
    glEnable(GL_DEPTH_TEST);
}
Antialiasing Example
void blendFuncCycle(GLvoid)
{
    static int whichBlendFunc = 0;
    whichBlendFunc = (whichBlendFunc + 1) % 2;
    switch (whichBlendFunc) {
        case 0:  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); break;
        case 1:  glBlendFunc(GL_SRC_ALPHA, GL_ONE); break;
        default: break;
    }
}

void drawAntialiasedObjects(void)
{
    static GLfloat yellow[] = { 1.0, 1.0, 0.0, 1.0 };
    if (antialiasFlag == GL_TRUE) {
        glEnable(GL_LINE_SMOOTH);
        glEnable(GL_BLEND);
    }
    glPushMatrix();
    glColor4fv(yellow);
    glScalef(2.0, 2.0, 2.0);
    glutWireDodecahedron();
    glPopMatrix();
    if (antialiasFlag == GL_TRUE) {
        glDisable(GL_LINE_SMOOTH);
        glDisable(GL_BLEND);
    }
}
Antialiasing Example
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(eyex, eyey, eyez, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    drawAntialiasedObjects();
    glFlush();
}

void reshape(int width, int height)
{
    GLdouble aspect;
    glViewport(0, 0, width, height);
    aspect = (GLdouble) width / (GLdouble) height;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                 /* reset before applying the new projection */
    gluPerspective(45.0, aspect, nearClip, farClip);
    glMatrixMode(GL_MODELVIEW);
}

void setEyePoint(void)
{
    eyex = 0.0; eyey = 0.0; eyez = 15.0;
}
Antialiasing Example
void specialkeys(int key, int x, int y)
{
    switch (key) {
        case GLUT_KEY_LEFT:      eyex--; break;
        case GLUT_KEY_RIGHT:     eyex++; break;
        case GLUT_KEY_DOWN:      eyey--; break;
        case GLUT_KEY_UP:        eyey++; break;
        case GLUT_KEY_PAGE_UP:   eyez++; break;
        case GLUT_KEY_PAGE_DOWN: eyez--; break;
        case GLUT_KEY_HOME:      setEyePoint(); break;
        case GLUT_KEY_END:       exit(0); break;
        default: break;
    }
    glutPostRedisplay();
}

void keyboard(unsigned char key, int x, int y)
{
    static float lineWidth = 0.5;
    switch (key) {
        case 'a':
        case 'A':
            antialiasFlag = !antialiasFlag;
            if (antialiasFlag == GL_TRUE) {
                glEnable(GL_POINT_SMOOTH);
                glEnable(GL_LINE_SMOOTH);
Antialiasing Example
            } else {
                glDisable(GL_POINT_SMOOTH);
                glDisable(GL_LINE_SMOOTH);
            }
            break;
        case 'b':
            blendFuncCycle();
            break;
        case 'l':
            lineWidth += 0.5;
            if (lineWidth >= 8.0) lineWidth = 0.5;
            glLineWidth(lineWidth);
            break;
        case 'x': eyex = 12.0; eyey = 0.0;  eyez = 0.0;  break;
        case 'y': eyex = 0.05; eyey = 12.0; eyez = 0.0;  break;
        case 'z': eyex = 0.0;  eyey = 0.0;  eyez = 12.0; break;
        case 27:
        case 'e':
        case 'E': exit(0); break;
        default: break;
    }
    glutPostRedisplay();
}
Antialiasing Example
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    /* a depth buffer is requested because init() enables GL_DEPTH_TEST */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutCreateWindow(argv[0]);
    init();
    glutReshapeFunc(reshape);
    glutKeyboardFunc(keyboard);
    glutDisplayFunc(display);
    glutSpecialFunc(specialkeys);
    glutMainLoop();
    return 0;
}
References
Introduction – text book and images:
https://root.cern.ch/basic-graphics-primitives
http://cs.boisestate.edu/~tcole/cs341/fall04/vg/6ch4.pdf
Color & grey scale:
https://en.wikipedia.org/wiki/8-bit_color
http://dmp.dijonyellow.com/digital_photography/Color.htm
https://en.wikipedia.org/wiki/Grayscale
http://www.codeproject.com/Questions/327689/how-to-convert-coloured-image-to-grayscale-in-C-or
http://www.songho.ca/dsp/luminance/luminance.html
https://root.cern.ch/basic-graphics-primitives
http://up.edu.ps/ocw/repositories/upinar_data%2022008/upinar_data%2022008/580/slides/
http://www.slideshare.net/1lakshmi1/attributes-of-outputprimitives-44298566
References
Line attributes and scan-line fill:
http://cs.boisestate.edu/~tcole/cs341/fall04/vg/6ch4.pdf
http://www.cs.utexas.edu/~fussell/courses/cs324e2003/hear50265_ch04.pdf
https://www.google.com/search?q=pen+and+brushes+pixel+pattern&es_sm=93&biw=1366&bih=667&source=lnms&tbm=isch&sa=X&ved=0CAYQ_AUoAWoVChMIhNHqifL9xwIV07IeCh21dAXy#tbm=isch&q=pen+and+brushes+pattern+example+graphics
http://www.cs.bham.ac.uk/~vvk201/Teach/Graphics/14_ScanLineFill.pdf
The animations are copied from Prof. Harriet Fell's lecture slides, College of Computer and Information Science, Northeastern University.
http://lodev.org/cgtutor/floodfill.html#Scanline_Floodfill_Algorithm_With_Stack
https://en.wikipedia.org/wiki/Wire-frame_model
http://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/VisibleSurfaces.html
References https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB4QFjAAahUKEwil4bn-w_7HAhWDXR4KHY4yBC0&url=http%3A%2F%2Fpeople.sju.edu%2F~swei%2Fcsc341%2FSlides%2Fpolygonfill.ppt&usg=AFQjCNFbFN5cH5rsPOFicB8vpCPWal5iGA&sig2=EkSxQoJxAddWVQjoVsC08A http://www.cs.tufts.edu/~sarasu/courses/comp175-2009fa/pdf/comp175-08-antialiasing.pdf http://classes.cec.wustl.edu/~cse452/lectures/lect03_image_processing.pdf https://sisu.ut.ee/sites/default/files/imageprocessing/files/aliasing.pdf