Design of Experiments CHM 585 Chapter 15.


Experimentation is one of the most important methods in the Quality Movement -- the quest for continuous improvement in our products and processes. If you can measure aspects of quality in your product, and if you have factors under your control, then you can perform an experiment to find out which factor settings result in the best quality product.

Experimentation is usually expensive, so you want to get the most information from the fewest runs in the experiment.

One approach to finding optimal factor settings is to pick starting values and then adjust one factor at a time, fiddling with it until the response seems to improve. But even then the response sometimes gets worse because of random or external factors, so it is hard to tell whether a change actually helped.

Then, adjusting another factor improves the response more. But after adjusting that factor, the first factor's supposedly optimal setting is no longer optimal, because changing the second factor has affected how the first factor works (an interaction). And there are dozens of potential factors that might affect the product. Each fiddle with a new control sends you back to adjusting all the other factors.

Hundreds of runs. Little progress. No comprehensive understanding of the whole process after much effort.

Suppose that you first experiment with Factor1 and make runs 1 through 5 that improve the response. But when you complete run 6, the response is worse. So, for the next run, you backtrack to the value of Factor1 at point 5 and decide that Factor1 = .9 is the best when holding Factor2 constant. Now you hold Factor1 constant and do run 7, but the response is worse. So you experiment with Factor2 and make run 8 for a better response and 9 for worse. You again backtrack and find at point 10 that Factor2 = -.38 is its optimal value, holding Factor1 constant. Have you found the optimum? No. These are all conditional optimum values that depend on holding one factor constant, and do not yield the global optimum.
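To see why this one-factor-at-a-time search can miss the true optimum when factors interact, consider a small simulation. The response function below is hypothetical, chosen only because it contains an interaction term:

```python
import numpy as np

# Hypothetical response surface with an interaction term (the -3*f1*f2 part).
def response(f1, f2):
    return -(f1 - 1)**2 - (f2 - 1)**2 - 3 * f1 * f2

grid = np.linspace(-2, 2, 81)

# One-factor-at-a-time: optimize Factor1 with Factor2 held at 0,
# then Factor2 with Factor1 held at its "optimal" value.
f2 = 0.0
f1 = grid[np.argmax(response(grid, f2))]
f2 = grid[np.argmax(response(f1, grid))]
ofat_best = response(f1, f2)

# Search over both factors jointly.
F1, F2 = np.meshgrid(grid, grid)
joint_best = response(F1, F2).max()

print(ofat_best, joint_best)
```

Because of the interaction, the one-factor-at-a-time result is only a conditional optimum; the joint search finds a substantially better response.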

A more scientific approach to the problem is to reason out all the factors that might affect the response. After choosing the most promising 12 of them for investigation, computer software can help select a screening design that needs only 16 runs. Then perform the runs and enter the responses into the computer. The analysis quickly identifies the three most important factors. Next, the software constructs a response-surface design requiring 20 runs. When that analysis is done, you have a complete understanding of the response surface, know what the optimal settings are, and know what the variability in the process is.
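As an illustration of how such a screening design can be constructed: a 16-run design for 12 two-level factors (a 2^(12-8) fractional factorial) can be built from a full factorial in 4 base factors, with the remaining 8 factor columns assigned to interaction columns. A minimal sketch, in which the generator choices are illustrative only (real software selects generators to maximize resolution):

```python
from itertools import product
from math import prod

# Full factorial in 4 base factors: 2**4 = 16 runs.
base_runs = list(product([-1, 1], repeat=4))

# Assign each of the 8 extra factors to a product of base columns
# (illustrative generators; not an optimized choice).
generators = [(0, 1), (0, 2), (0, 3), (1, 2),
              (1, 3), (2, 3), (0, 1, 2), (0, 1, 3)]

# Each run becomes a row of 12 entries, each -1 (lo) or +1 (hi).
design = [run + tuple(prod(run[i] for i in g) for g in generators)
          for run in base_runs]
```

Every column of the resulting matrix is balanced (equal numbers of lo and hi settings), which is what lets 16 runs screen 12 factors.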

Experimental Designs are used to identify or screen the important factors affecting a process, and to develop empirical models of processes. Design of Experiment techniques enable teams to learn about process behavior by running a series of experiments in which a maximum amount of information is learned in a minimum number of runs. The tradeoff between the amount of information gained and the number of runs is known before running the experiments. A typical plant Designed Experiment has 3 factors, each set at two levels - typically the maximum and minimum settings for each factor. A Designed Experiment with 3 factors, each at 2 levels, is called a 2³ factorial experiment (or Taguchi L8 experiment) and requires 8 runs, as follows:

run number   Factor A   Factor B   Factor C
    1           lo         lo         lo
    2           hi         lo         lo
    3           lo         hi         lo
    4           hi         hi         lo
    5           lo         lo         hi
    6           hi         lo         hi
    7           lo         hi         hi
    8           hi         hi         hi
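A run matrix like this can be generated programmatically. A minimal sketch (Python is assumed here; any language works):

```python
from itertools import product

# Generate all 2**3 = 8 combinations of lo/hi for Factors A, B, C.
# Reversing each tuple makes Factor A vary fastest, the standard run order.
runs = [r[::-1] for r in product(['lo', 'hi'], repeat=3)]

for number, (a, b, c) in enumerate(runs, start=1):
    print(number, a, b, c)
```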

Bromination of acetone

Dithionite reduction

Evolutionary Operation (EVOP) combines small variable perturbations and numerous replications of every process adjustment with statistical analysis. Every process contains random fluctuations (noise), typically caused by factors such as raw-material variations, equipment deterioration, and instrument corrections. Because the process response to a variable change and the random noise may be of comparable magnitude, replication allows the noise to average out so the true effects of the variable changes can be determined.
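The value of replication can be illustrated with a small simulation; the effect size and noise level below are hypothetical, chosen so the noise is comparable to the effect:

```python
import random

random.seed(0)
true_effect = 0.3   # small shift caused by the variable change
noise_sd = 1.0      # process noise of comparable magnitude

def run(setting):
    # One process run at the given setting (0 = current, 1 = perturbed).
    return setting * true_effect + random.gauss(0, noise_sd)

# A single run per setting: the effect is buried in the noise.
single_estimate = run(1) - run(0)

# Many replicates: the noise averages out (its sd shrinks by 1/sqrt(n)).
n = 400
avg_hi = sum(run(1) for _ in range(n)) / n
avg_lo = sum(run(0) for _ in range(n)) / n
replicated_estimate = avg_hi - avg_lo

print(single_estimate, replicated_estimate)
```

The replicated estimate lands close to the true effect, while the single-run estimate is dominated by noise.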

The Biological Neuron The most basic element of the human brain is a specific type of cell, which provides us with the ability to remember, think, and apply previous experience to our every action. These cells are known as neurons; each neuron can connect with up to 200,000 other neurons. The power of the brain comes from the sheer number of these basic components and the multiple connections between them.

All natural neurons have four basic components: dendrites, soma, axon, and synapses. Basically, a biological neuron receives inputs from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result.
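This receive-combine-nonlinearity-output description maps directly onto the standard artificial neuron model. A minimal sketch, in which the particular weights, bias, and sigmoid activation are illustrative choices:

```python
import math

# A crude artificial neuron: a weighted sum of inputs (the "combine" step)
# followed by a nonlinear activation, here a sigmoid squashing to (0, 1).
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

out = neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=-0.3)
```

The weighted sum plays the role of the dendrites and soma; the activation output is what travels down the axon to other neurons.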

The brain basically learns from experience. Neural networks are sometimes called machine learning algorithms because adjusting their connection weights (training) causes the network to learn the solution to a problem. The strength of the connection between two neurons is stored as a weight value for that specific connection, and the system learns new knowledge by adjusting these connection weights. The learning ability of a neural network is determined by its architecture and by the algorithmic method chosen for training.
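Weight adjustment from examples can be shown with the simplest possible network: a single perceptron learning the OR function. The learning rate and epoch count below are arbitrary choices:

```python
# Training examples for logical OR: ((input1, input2), target).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights, adjusted during training
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # training epochs
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # error drives the weight update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err
```

After training, the learned weights classify all four input patterns correctly; the "knowledge" of OR lives entirely in `w` and `b`.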