Viscoplastic Models for Polymeric Composite

Mentee: Chris Rogan, Department of Physics, Princeton University, Princeton, NJ 08544
Mentors: Marwan Al-Haik & M.Y. Hussaini, School of Computational Science, Florida State University, Tallahassee, FL 32306

Part 1: Explicit Model - A Micromechanical Viscoplastic Model

Explicit Model: Viscoplastic Model Proposed by Gates and Sun

ε_t = ε_e + ε_p,  ε_e = σ/E,  ε_p = A σ^n

The elastic portion of the strain is determined by Hooke's Law, where E is Young's modulus. The plastic portion of the strain is represented by the non-linear term ε_p = A σ^n, where A and n are material constants found from experimental data.

Gates, T.S., Sun, C.T., 1991. An elastic/viscoplastic constitutive model for fiber reinforced thermoplastic composites. AIAA Journal 29 (3), 457–463.

Explicit Model

dε_t/dt = dε_e/dt + dε_p/dt
The total strain rate is composed of elastic and plastic components.

dε_e/dt = (dσ/dt)/E
The elastic portion of the strain rate is the elastic component of the strain differentiated with respect to time.

dε_vp/dt = dε_vp′/dt + dε_vp″/dt
The plastic component of the strain rate is further divided into two viscoplastic terms.

Explicit Model

dε_vp′/dt = A n σ^(n−1) (dσ/dt)
The first component of the plastic strain rate is the plastic strain differentiated with respect to time.

dε_vp″/dt = ((σ − σ*)/K)^(1/m)
The second component utilizes the concept of 'overstress', σ − σ*, where σ* is the quasistatic stress and σ is the dynamic stress. K and m are material constants found from experimental data.

Tensile Tests Figure 1

Methodology

First, the tensile test data (above) was used to determine the material constants A, n and E for each temperature. E was calculated first, by fitting the linear portion of the tensile test curve to the elastic component of the equation, as shown in Figure 2. Next, the constants A and n were calculated by plotting Log(ε − σ/E) vs. Log(σ) and extracting n and A from a linear fit as the slope and y-intercept respectively. Figure 3 displays the resulting model's fit to the experimental data.

ε = σ/E + A σ^n

A-n

Log(ε − σ/E) = n Log(σ) + Log(A)

Figure 2  Figure 3  Figure 4
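
A minimal sketch (not from the original slides) of this fitting procedure, using NumPy with illustrative synthetic data standing in for the measured tensile curves; the material constants and the elastic-region cutoff below are assumptions chosen for demonstration only.

import numpy as np

# Illustrative synthetic tensile data (assumed values, not the measured curves).
E_true, A_true, n_true = 9.0e9, 2.0e-25, 2.5
stress = np.linspace(1e6, 60e6, 50)                    # sigma [Pa]
strain = stress / E_true + A_true * stress**n_true     # eps = sigma/E + A*sigma^n

# 1) E from a linear fit to the low-stress (elastic) portion of the curve.
elastic = stress < 15e6                                # assumed cutoff for the linear region
E = np.polyfit(strain[elastic], stress[elastic], 1)[0]

# 2) Plastic strain eps_p = eps - sigma/E, then a linear fit in log-log space:
#    Log(eps - sigma/E) = n*Log(sigma) + Log(A)
eps_p = strain - stress / E
mask = eps_p > 0                                       # keep points with measurable plastic strain
n, logA = np.polyfit(np.log10(stress[mask]), np.log10(eps_p[mask]), 1)
A = 10**logA
print(E, n, A)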

Table 1

Load Relaxation Tests

The data from the load relaxation tests was used to determine the temperature-dependent material constants K and m. For each temperature, the load relaxation test was conducted at 6 different stress levels, as shown in Figure 4.

Curve Fitting of Load Relaxation

Figure 5

First, the data from each strain level at each temperature was isolated. The noise in the data was eliminated to ensure that the stress is monotonically decreasing, as dictated by the physical model (Figure 5). The data was then fit to two different trend functions: an exponential and a ninth-order polynomial (Figures 6 and 7).
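
A possible sketch of this fitting step, assuming NumPy/SciPy and an illustrative relaxation curve in place of the measured data; the particular exponential form below is an assumption, since the slides do not give it.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative relaxation curve (assumed shape, not the measured data).
t = np.linspace(0.0, 600.0, 200)                 # time [s]
sigma = 40e6 + 15e6 * np.exp(-t / 120.0)         # monotonically decreasing stress [Pa]

# Exponential trend: sigma(t) = a + b*exp(-t/c)
def exp_trend(t, a, b, c):
    return a + b * np.exp(-t / c)

p_exp, _ = curve_fit(exp_trend, t, sigma, p0=(sigma[-1], sigma[0] - sigma[-1], 100.0))

# Polynomial trend of order 9, fitted on normalized time to avoid ill-conditioning.
t_n = t / t.max()
p_poly = np.polyfit(t_n, sigma, 9)
sigma_poly = np.polyval(p_poly, t_n)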

0 = dε/dt = (dσ/dt)/E + ((σ − σ*)/K)^(1/m)
⇒ Log(−(dσ/dt)/E) = (1/m) Log(σ − σ*) − (1/m) Log(K)

Figure 6  Figure 7

From the exponential fits, the constants K and m were calculated by plotting Log(−(dσ/dt)/E) vs. Log(σ − σ*) and calculating the linear fit, as shown in Figures 8 and 9. The tabulated material constants for each temperature are listed below.

Table 2
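
One way the log-log fit for K and m could be carried out; a sketch under assumed values of E and σ*, with an illustrative smoothed curve standing in for the exponential fits.

import numpy as np

# Assumed inputs: smoothed relaxation curve sigma(t), modulus E, quasistatic stress sigma_star.
E, sigma_star = 9.0e9, 38e6
t = np.linspace(0.0, 600.0, 200)
sigma = sigma_star + 17e6 * np.exp(-t / 120.0)   # illustrative fitted curve

dsigma_dt = np.gradient(sigma, t)                # dsigma/dt < 0 during relaxation
y = np.log10(-dsigma_dt / E)
x = np.log10(sigma - sigma_star)

# Log(-(dsigma/dt)/E) = (1/m)*Log(sigma - sigma*) - (1/m)*Log(K)
slope, intercept = np.polyfit(x, y, 1)
m = 1.0 / slope
K = 10 ** (-intercept * m)
print(m, K)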

 =  */E + A(  *) n Figure 8 Figure 9 For each temperature and strain level, the quasistatic stress was found by solving the above non- linear equation using Newton’s method. The quasistaitc stress values are displayed in Table 1. Table 1

Simulation of Explicit Model

−(dσ/dt)/E = ((σ − σ*)/K)^(1/m)

The total strain rate is zero during the load relaxation test, leading to the differential equation above. The explicit model solution was generated by solving this differential equation using the fourth-order Runge-Kutta method. Different step sizes were experimented with, and an example solution is shown in Figure 10.

Figure 10
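
A sketch of the fourth-order Runge-Kutta integration of this differential equation; the constants in the example call are illustrative assumptions, not the tabulated values.

def relax_rk4(sigma0, sigma_star, E, K, m, dt, steps):
    """Integrate dsigma/dt = -E*((sigma - sigma*)/K)**(1/m) with classical RK4."""
    def f(s):
        over = max(s - sigma_star, 0.0)          # overstress cannot go negative
        return -E * (over / K) ** (1.0 / m)
    s, history = sigma0, [sigma0]
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s += dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
        history.append(s)
    return history

# Example call with assumed constants (the step size dt is one of the values to experiment with).
curve = relax_rk4(sigma0=55e6, sigma_star=38e6, E=9.0e9, K=2.0e8, m=0.2, dt=1.0, steps=600)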

Part 2: Implicit Model - Generalizing an Implicit Stress Function Using Neural Networks

Neural Networks (NN)

The Implicit Model consists of creating an implicit, generalized stress function that depends on vectors of temperature, strain-level and time data. A generalized neural network and the one specific to this model are shown in Figure 11. A neural network consists of nodes connected by links. Each node is a processing element that takes weighted inputs from other nodes, sums them, and maps this sum through an activation function; the result becomes the neuron's output. This output is then propagated along all the links exiting the neuron to subsequent neurons. Each link has a weight value by which traveling outputs are multiplied.
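
A minimal sketch of the forward pass just described, mapping (temperature, strain level, time) to a stress output; the sigmoid activation and the layer sizes are assumptions, since the slide does not state them.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Propagate an input vector through fully connected layers:
    weighted sum at each node, sigmoid activation in the hidden layers, linear output."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)                   # weighted sum, then activation
    return weights[-1] @ a + biases[-1]          # linear output (predicted stress)

# Example: 3 inputs (temperature, strain level, time) -> 10 -> 30 -> 1 output.
rng = np.random.default_rng(0)
sizes = [3, 10, 30, 1]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(np.array([0.5, 0.3, 0.1]), weights, biases))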

Procedures for NN

Based on the three phases of neural network functionality (training, validation and testing), the data sets from the load relaxation tests were split into three parts. The data sets for three temperatures were set aside for testing. The other five temperatures were used for training, excluding five specific combinations of temperature and strain level that were reserved for validation.

Pre-processing

Before training, the data vectors were put into random order and normalized.
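
The normalization equation itself is not reproduced in this transcript; the sketch below assumes a common min-max scaling, which may differ from the equation actually used.

import numpy as np

def minmax_normalize(X):
    """Scale each column of X to [0, 1]; also return the (min, max) pairs
    needed to undo the scaling on network outputs."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo), (lo, hi)

# Shuffle, then normalize (random data standing in for the load relaxation vectors).
rng = np.random.default_rng(1)
data = rng.random((100, 4))                      # columns: temperature, strain level, time, stress
data = data[rng.permutation(len(data))]
data_n, (lo, hi) = minmax_normalize(data)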

Training NN

Training a feed-forward backpropagating neural network consists of presenting the network with the vectorized training data set each epoch. Each individual vector's inputs (temperature, strain level, time) are propagated through the network, and the output is compared with the vector's experimental output through the error function. Training the network consists of minimizing this error function in weight space, adjusting the network's weights using unconstrained local optimization methods. An example of a training session's graph is shown in Figure 12, in this case using a gradient descent method with a variable learning rate and momentum term to minimize the error function.

Figure 12
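
A sketch of the weight-update rule described (gradient descent with a momentum term and a variable learning rate); the error surface here is a stand-in quadratic, and the adaptation factors are assumptions rather than the settings used in the study.

import numpy as np

def train(grad_fn, w, lr=0.1, momentum=0.9, lr_up=1.05, lr_down=0.7, epochs=200):
    """Gradient descent with momentum and a simple variable learning rate:
    grow lr while the error keeps falling, shrink it (and reset momentum) otherwise."""
    v = np.zeros_like(w)
    prev_err = np.inf
    for _ in range(epochs):
        err, g = grad_fn(w)
        if err > prev_err * 1.04:
            lr *= lr_down                        # error rose noticeably: shrink the learning rate
            v = np.zeros_like(v)                 # and drop the accumulated momentum
        else:
            lr *= lr_up                          # error fell: cautiously increase the learning rate
        v = momentum * v - lr * g                # momentum smooths the descent direction
        w = w + v
        prev_err = err
    return w

# Stand-in quadratic error surface in place of the network's error over the training set.
def quad_grad(w):
    return 0.5 * np.sum(w**2), w

print(train(quad_grad, np.array([2.0, -3.0])))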

2 Hidden Layers NN

The architecture of the neural network is difficult to decide. Research by Hornik et al. (1989) shows that a multilayer feedforward network, such as one with two hidden layers, can approximate any function, although it gives no indication of how many neurons to put in each hidden layer. Too many neurons cause 'overfitting': the network essentially memorizes the training data and becomes a look-up table, causing it to perform poorly on the validation and testing data it has not seen before. Too few neurons lead to poor performance on all of the data.

Hornik, K., Stinchcombe, M., White, H., 1989. Multilayer feedforward networks are universal approximators. Neural Networks 2, 359–366.

Error Surface

Figure 13 shows the resulting mean squared error performance values for neural networks with different numbers of neurons in each hidden layer after 1000 epochs of training.

Figure 13
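
A sketch of such an exhaustive search over the two hidden-layer sizes; train_and_score is a hypothetical stand-in for training a [3, n1, n2, 1] network for 1000 epochs and returning its mean squared error.

import numpy as np

def train_and_score(n1, n2):
    """Placeholder: would train the network and return its MSE; here a made-up
    surface with a minimum near (10, 30) is used purely for illustration."""
    return (n1 - 10)**2 / 100.0 + (n2 - 30)**2 / 900.0 + 0.05

n1_range, n2_range = range(1, 16), range(1, 36)
mse = np.array([[train_and_score(n1, n2) for n2 in n2_range] for n1 in n1_range])
best = np.unravel_index(mse.argmin(), mse.shape)
print("best architecture:", n1_range[best[0]], n2_range[best[1]])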

Figure 14  Figure 15

Figures 14 and 15 display similar data, except that only random data points are used in the neuron space and a cubic interpolation is employed in order to distinguish trends. As Figure 15 shows, there appears to be a minimum in the region of about 10 neurons in the first hidden layer and 30 in the second. A minimum did in fact occur with a [ ] network.

Genetic Algorithm (GA) Pruning

A genetic algorithm was used to search for an optimal network architecture. Based on the results of the earlier exhaustive methods, a domain of 1 to 15 and 1 to 35 was used for the number of neurons in the first and second hidden layers respectively. A population of random networks in this domain was generated, with each network encoded as a binary chromosome. The probability of a particular network's survival is a linear function of its rank in the population. Stochastic remainder selection without replacement was used for population selection. For crossover, a two-point crossover of the chromosomes' reduced surrogates was used, as shown in Figure 16.

Figure 16
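
A simplified sketch of the GA ingredients named above (binary encoding of the two layer sizes, survival probability linear in rank, two-point crossover). Plain rank-proportional sampling is used here in place of stochastic remainder selection and reduced surrogates, and the fitness is a made-up stand-in for the training error.

import numpy as np

rng = np.random.default_rng(2)
BITS1, BITS2 = 4, 6                              # 4 bits -> up to 15 neurons; 6 bits, clipped to 35

def decode(chrom):
    """Binary chromosome -> (neurons in hidden layer 1, neurons in hidden layer 2)."""
    n1 = int("".join(map(str, chrom[:BITS1])), 2) or 1
    n2 = int("".join(map(str, chrom[BITS1:])), 2) or 1
    return min(n1, 15), min(n2, 35)

def rank_probabilities(errors):
    """Survival probability as a linear function of rank (lowest error -> highest probability)."""
    order = np.argsort(errors)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(errors), 0, -1)
    return ranks / ranks.sum()

def two_point_crossover(a, b):
    i, j = sorted(rng.choice(len(a), size=2, replace=False))
    return np.concatenate([a[:i], b[i:j], a[j:]])

# Illustrative generation step with a made-up fitness standing in for the training error.
pop = rng.integers(0, 2, size=(20, BITS1 + BITS2))
errors = np.array([abs(decode(c)[0] - 10) + abs(decode(c)[1] - 30) for c in pop], float)
probs = rank_probabilities(errors)
parents = pop[rng.choice(len(pop), size=2, p=probs, replace=False)]
print(decode(two_point_crossover(parents[0], parents[1])))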

GA-Pruning

This method allows pruning not only of neurons but also of links, as each layer of neurons is not necessarily fully connected to the next, and connections between non-adjacent layers are permitted. The genetic algorithm was run with varying parameter values and two different objective functions: one seeking to minimize only the training performance error of the networks, and another minimizing both the performance error and the number of neurons and links. Figure 17 displays an optimal network when only the performance error is considered; Figure 18 shows an optimal network when the number of neurons and links is also taken into account.

Figure 17  Figure 18

GA-Performance

Figure 19 shows the results of an exhaustive architecture search over a smaller domain than before, with the first arrow pointing to a minimum that coincides with the network architecture displayed in Figure 17.

Figure 19

Results of NN Implicit Model

A network architecture of [ ] was used for the training and testing of the neural networks. Several different minimization algorithms were tested and compared for training the network; they are listed in Figures 20 and 21, which display the training performance error and gradient over 1000 epochs.

Figure 20  Figure 21

Training, Validation & Testing of Final NN Structure

Figure 22 shows the testing, validation and training performance for the gradient descent algorithm, while Figure 23 shows a linear least-squares regression between the experimental data and the network outputs for the Polak-Ribière conjugate gradient method.

Figure 22  Figure 23

Comparing Explicit and Implicit Models

Figure 24 displays the final performance of both models compared to the experimental data. The quasi-Newton BFGS algorithm was used for the Implicit model, as it performed the best. The Implicit model ultimately outperformed the Explicit model, and required only the load relaxation data to generate its solution.

Figure 24

Conclusion

The Implicit model (NN + GA) ultimately outperformed the Explicit model (Gates and Sun), and required only the load relaxation data to generate its solution.