Homework 1

A Letter Recognition Task (Frey & Slate, 1991)

Instances: 20,000 unique letter images, generated by randomly distorting pixel images of the 26 uppercase letters from 20 different commercial fonts. Each image is ~45x45 pixels; each pixel is either black ("on") or white ("off").

Features: 16 attributes computed from the pixel-array image:

1. The horizontal position, counting pixels from the left edge of the image, of the center of the smallest rectangular box that can be drawn with all "on" pixels inside the box.
2. The vertical position, counting pixels from the bottom, of the above box.
3. The width, in pixels, of the box.
4. The height, in pixels, of the box.
5. The total number of "on" pixels in the character image.
6. The mean horizontal position of all "on" pixels, relative to the center of the box and divided by the width of the box. This feature has a negative value if the image is "left-heavy," as would be the case for the letter L.

Etc. See their paper (linked from the class website).
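For concreteness, here is a minimal sketch of how the first five box features might be computed from a binary pixel array. This is an illustration of the idea only, not Frey & Slate's actual code; the function name and the coordinate conventions (row 0 at the top, vertical position counted from the bottom) are assumptions.

```python
import numpy as np

def box_features(img):
    """Illustrative sketch (not Frey & Slate's code): features 1-5
    from a 2D binary array where 1 = "on" pixel and row 0 is the top."""
    rows, cols = np.nonzero(img)
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    x_box = (left + right) / 2.0                    # 1. horizontal center of box
    y_box = img.shape[0] - 1 - (top + bottom) / 2.0 # 2. vertical center, from the bottom
    width = right - left + 1                        # 3. box width in pixels
    height = bottom - top + 1                       # 4. box height in pixels
    onpix = len(rows)                               # 5. total number of "on" pixels
    return x_box, y_box, width, height, onpix
```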

Feature scaling: each feature is scaled to be in the range 0-15.

Data format: each row of letter-recognition.data corresponds to a single instance (i.e., letter image). The first "value" on the row is the letter category; the next 16 values are the feature values.

T,2,8,3,5,1,8,13,0,6,6,10,8,0,8,0,8
I,5,12,3,7,2,10,5,5,4,13,3,9,2,8,4,10
D,4,11,6,8,6,10,6,2,6,10,3,7,3,7,3,9
N,7,11,6,6,3,5,9,4,6,4,4,10,6,10,2,8
G,2,1,3,1,1,8,6,6,6,6,5,9,1,7,5,10
S,4,11,5,8,3,8,8,6,9,5,6,6,0,8,9,7
B,4,2,5,4,4,8,7,6,6,7,6,6,2,8,7,10
A,1,1,3,2,1,8,2,2,2,8,2,8,1,6,2,7
J,2,2,4,4,2,10,6,2,6,12,4,8,1,6,1,7
M,11,15,13,9,7,13,2,6,2,12,1,9,8,1,1,8

Class distribution (instances per letter):
A 789   B 766   C 736   D 805   E 768   F 775   G 773   H 734   I 755
J 747   K 739   L 761   M 792   N 783   O 753   P 803   Q 783   R 758
S 748   T 796   U 813   V 764   W 752   X 787   Y 786   Z 734
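As a starting point, the file can be read with a few lines of Python. This is a minimal sketch; the function name is mine, and it assumes letter-recognition.data sits in the working directory.

```python
import numpy as np

def load_letter_data(path="letter-recognition.data"):
    """Parse the data file: first field is the letter label,
    the remaining 16 fields are integer features in 0-15."""
    labels, features = [], []
    with open(path) as f:
        for line in f:
            parts = line.strip().split(",")
            labels.append(parts[0])
            features.append([int(v) for v in parts[1:]])
    return np.array(labels), np.array(features, dtype=float)

labels, X = load_letter_data()
print(labels.shape, X.shape)   # (20000,) (20000, 16)
```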

Your mission:

(1) Train 25 perceptrons: A vs. B, A vs. C, A vs. D, etc. Use ~1/2 of the data for training and ~1/2 for testing. Train each perceptron using the perceptron learning rule with "stochastic gradient descent". [Note: I haven't tried this yet with this data set, so I don't know how well these will do.]
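Below is a minimal sketch of the perceptron learning rule with per-example ("stochastic") updates. The learning rate, epoch count, initialization range, and bias handling are all illustrative assumptions, not requirements of the assignment.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, eta=0.2, seed=0):
    """Perceptron learning rule with per-example updates.
    X: (n, 16) feature matrix; y: labels in {-1, +1}.
    Sketch only -- epochs, eta, and init range are illustrative."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([np.ones((len(X), 1)), X])   # fold bias into w[0]
    w = rng.uniform(-0.05, 0.05, size=Xb.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):      # reshuffle each epoch
            pred = 1 if Xb[i] @ w > 0 else -1
            if pred != y[i]:                    # update only on mistakes
                w += eta * y[i] * Xb[i]
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return np.where(Xb @ w > 0, 1, -1)
```

For the A-vs-B task, for example, keep only the rows labeled A or B, map A to +1 and B to -1, and split those rows roughly in half for training and testing.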

(2) For each perceptron, create a "confusion matrix" with the results on the test data.
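For a two-letter task the confusion matrix is a 2x2 table of (actual, predicted) counts. A minimal sketch, assuming the {-1, +1} label convention from the training sketch above:

```python
import numpy as np

def confusion_matrix_2x2(y_true, y_pred):
    """Rows = actual class, columns = predicted class.
    Returns [[TN, FP], [FN, TP]], with -1 as the negative class."""
    tn = np.sum((y_true == -1) & (y_pred == -1))
    fp = np.sum((y_true == -1) & (y_pred == +1))
    fn = np.sum((y_true == +1) & (y_pred == -1))
    tp = np.sum((y_true == +1) & (y_pred == +1))
    return np.array([[tn, fp], [fn, tp]])
```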

(3) For each perceptron, compute "precision" and "recall".
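Precision and recall follow directly from the confusion-matrix counts: precision = TP / (TP + FP) and recall = TP / (TP + FN). A sketch, reusing the matrix layout above:

```python
def precision_recall(cm):
    """cm: 2x2 confusion matrix [[TN, FP], [FN, TP]] as above."""
    (tn, fp), (fn, tp) = cm
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```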

(4) For each perceptron, create an "ROC" curve.
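One way to trace an ROC curve for a perceptron is to sweep the decision threshold across the raw scores w·x on the test set (instead of the fixed threshold 0), recording the true-positive and false-positive rates at each threshold. A sketch under that assumption:

```python
import numpy as np

def roc_points(y_true, scores):
    """Sweep a threshold over the raw scores; return (FPR, TPR) lists.
    y_true in {-1, +1}; scores are real-valued (e.g., Xb @ w)."""
    pos, neg = np.sum(y_true == 1), np.sum(y_true == -1)
    fpr, tpr = [], []
    for t in np.sort(np.unique(scores))[::-1]:   # high threshold -> low
        y_pred = np.where(scores >= t, 1, -1)
        tpr.append(np.sum((y_pred == 1) & (y_true == 1)) / pos)
        fpr.append(np.sum((y_pred == 1) & (y_true == -1)) / neg)
    return fpr, tpr
```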

(5) Optional: do the same tasks with multilayer neural networks, trained with backpropagation.

(6) Even more optional: train a single neural network with 26 output units to recognize the letter category of an image.
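If you attempt the 26-output version, one convenient route is an off-the-shelf multilayer perceptron such as scikit-learn's MLPClassifier, which trains with backpropagation. The hidden-layer size and other settings below are illustrative assumptions, not requirements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the data (same parsing idea as the earlier sketch).
raw = np.loadtxt("letter-recognition.data", delimiter=",", dtype=str)
y, X = raw[:, 0], raw[:, 1:].astype(float) / 15.0   # scale features to [0, 1]

# ~1/2 train, ~1/2 test, as in the pairwise tasks.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# One hidden layer of 50 units -- an assumption, not part of the assignment.
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=200, random_state=0)
clf.fit(X_train, y_train)
print("26-class test accuracy:", clf.score(X_test, y_test))
```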