Parallel Computing Technique for EM Modeling
makai, School of Electronic Information Engineering, Tianjin University (天津大学电子信息工程学院)

Contents
1. Background
2. Basic theory of parallel computing
3. An example
4. Plan

Background
Artificial neural network (ANN) techniques are recognized as a powerful tool for electromagnetic (EM)-based modeling and design optimization of microwave passive components. An ANN can learn EM responses as functions of geometrical variables through an automated training process, and the trained ANN can then serve as an accurate and fast model during design optimization.

Background (references)
- "Efficient design optimization of microwave circuits using parallel computational methods," European Microwave Conference, Amsterdam, The Netherlands, Nov.
- "Parallel automatic model generation technique for microwave modeling," in IEEE MTT-S Int. Microw. Symp. Dig., Honolulu, Hawaii, June 2007.

Basic theory of parallel computing

OpenMP Programming Model
- Shared memory model: Uniform Memory Access (UMA) and Non-Uniform Memory Access (NUMA)
- OpenMP is designed for multi-processor/multi-core, shared-memory machines.

MPI (Message Passing Interface) Programming Model
- Distributed memory model: distributed-memory architecture and hybrid distributed/shared-memory architecture
- MPI was designed for distributed-memory architectures.
- Basic MPI concept: a set of processes executing in parallel, each with its own separate address space.

Parallel computing environment
Figure 1: A Windows HPC Server cluster of workstations

Performance Measurement for Parallel Computing
- Speedup factor: S(p) = t_s / t_p, where t_s is the execution time on one core and t_p is the execution time on p cores.
- Efficiency: E = S(p) / p, usually given as a percentage: E = S(p) / p x 100%.

Parallel Algorithm Design Process (PCAM)
Problem -> Partitioning -> Communication -> Agglomeration -> Mapping
Figure 2: The PCAM process

An example of parallel computing

An example of parallel computing
- Problem description: Sum =

Figure 3: Data flow chart

Comparison of the running results
Table 1: Parallel on one computer. Columns: Core, Time (s), Speedup, Efficiency (%).

Comparison of the running results
Table 2: Parallel on two computers. Columns: Core, Time (s), Speedup, Efficiency (%).

Comparison of the running results
Table 3: Parallel on three computers. Columns: Core, Time (s), Speedup, Efficiency (%).
