PhD Lunchtime Seminars are "weekly lunch meetings aimed at bringing together the students, researchers and professors in our and other departments to discuss our work".

Particular[ly] sharp[y] — wait, no em-dash: Particular[ly] sharp[y]. Cristian Dittamo, Università di Pisa, 27 March 2007.

Agenda: What are the problems? A possible solution. Future work.

What are the problems? Code tangling: a non-functional aspect does not contribute to defining the computation, but to how it is performed (e.g., parallel aspects). Exploiting parallelism demands knowledge of parallel programming: models, compilers, libraries. Axiom nr. 0: programmers are lazy. Meanwhile, hardware keeps evolving along Moore's law.

Hardware evolution (the slide shows only a figure of the hardware trend).

Parallel programming approaches:
- Parallel languages: HPF (High Performance Fortran): shared memory, SPMD, data-parallel programming. POP-C++: an extension of C++ integrating distributed objects, several remote method invocation semantics, and resource requirements.
- Parallel compilers: SUIF (Stanford University): automatically translates sequential C/Fortran programs into parallel C code, for shared and distributed memory.
- Parallel libraries: MPI: message passing, SPMD/MIMD. OpenMP: shared memory, SPMD; supported by the Intel C++ compiler and GNU gcc v4.
- Skeletons: programmers qualitatively express parallelism at the source level by instantiating and composing a set of pre-defined parallelism-exploitation patterns (skeletons).

Example – MPI (1/2): the classic "cpi" program, which approximates pi by numerical integration.

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double f(double a) { return (4.0 / (1.0 + a*a)); }

int main(int argc, char *argv[]) {
    /* variable declarations */
    int n, myid, numprocs, namelen, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);

    n = atoi(argv[1]);   /* number of intervals */
    fprintf(stdout, "Process %d of %d is on %s\n", myid, numprocs, processor_name);
    fflush(stdout);

    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    h = 1.0 / (double) n;
    sum = 0.0;
    /* each process sums a strided subset of the intervals */
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double)i - 0.5);
        sum += f(x);
    }
    mypi = h * sum;

Example – MPI (2/2):

    /* combine the partial sums on process 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0) {
        printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}

Example – OpenMP (1/1): the same computation with an OpenMP parallel for and a sum reduction.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double f(double a) { return (4.0 / (1.0 + a*a)); }

int main(int argc, char *argv[]) {
    int n, i;
    double PI25DT = 3.141592653589793238462643;
    double pi, h, sum, x;

    n = atoi(argv[1]);   /* number of intervals */
    h = 1.0 / n;
    sum = 0.0;
    /* iterations are split among threads; each keeps a private x and a
       private partial sum, which the reduction combines at the end */
    #pragma omp parallel for private(x) reduction(+:sum)
    for (i = 1; i <= n; i++) {
        x = h * (i - 0.5);
        sum += (4.0 / (1.0 + x*x));
    }
    pi = h * sum;
    printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
    return 0;
}

Agenda: What are the problems? A possible solution. Future work.

A possible solution: a layered stack. At the top sits Particular, built on [a]C#, CodeBricks, and parallel models; below, a virtual machine (CLR / Mono / SSCLI "Rotor") executing CLI files; below that, the operating system (Windows, Linux); at the bottom, the hardware: uni-processor, multi-core, multi-processor, or multi-computer.

Without a virtual machine (figure): each application (app1, app2, app3) ships with its own runtime (RT1, RT2, RT3) layered directly on the OS and hardware.

A new layer (figure): a single virtual machine (CLR / Mono / Rotor) is interposed between the applications (app1, app2, app3) and the OS and hardware, and is shared by all of them.

.NET: the goal is to support program interoperability at all levels. Two essential elements of the platform are the Common Language Runtime (CLR) and a class library (the Framework) that complements the CLR. The main services provided by .NET are a common type system, a garbage collector, and an execution environment.

Common Language Infrastructure (CLI): The Common Type System (CTS) provides a rich type system that supports the types and operations found in many programming languages. The Common Language Specification (CLS) describes what compiler implementers must do for their languages to integrate well with other languages. Metadata: the CLI uses metadata to describe and reference the types defined by the CTS; metadata is stored (that is, persisted) in a way that is independent of any particular programming language. The Virtual Execution System (VES) is responsible for loading and running programs written for the CLI. Ref. ECMA-335.

Virtual Execution System: the VES provides the services needed to execute managed code and data, using the metadata to connect separately generated modules together at runtime (late binding). The CLI machine model uses an evaluation stack: instructions that copy values from memory to the evaluation stack are "loads"; instructions that copy values from the stack back to memory are "stores". The VES implements and enforces the CTS model, and it loads and runs programs written for the CLI. Ref. ECMA-335.
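To make the load/store discipline concrete, here is a small illustrative C# fragment (not from the slides), with the stack-based IL a compiler typically emits for it shown in comments:

class StackDemo {
    static int Sum() {
        int a = 2, b = 3;
        // "int c = a + b;" typically compiles to:
        //   ldloc.0   // load a onto the evaluation stack
        //   ldloc.1   // load b onto the evaluation stack
        //   add       // pop both operands, push a + b
        //   stloc.2   // store the top of the stack into c
        int c = a + b;
        return c;
    }
}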

Compiling source code (figure from "Applied .NET Framework Programming", J. Richter). Compilers targeting the CLR exist for: C#, Visual Basic, JScript, Alice, APL, COBOL, Component Pascal, Eiffel, Fortran, Haskell, Mercury, ML, Mondrian, Oberon, Perl, Python, RPG, Scheme, and Smalltalk.

Execution (figure: managed code is turned into native code at run time). Ref. "Applied .NET Framework Programming", J. Richter.

How I am made: Reflection is the ability of a program to access a description of itself. A system may support reflection at different levels, from simple information on types (C++ RTTI) to reflecting the entire structure of the program; another dimension of reflection is whether a program is allowed to read or to change itself. The CLR supports an extensible model of reflection at the type-system level: CLI files contain the definitions of types annotated with their own description (metadata).

Reflection: a CLI file = data + metadata. Metadata is static and cannot be changed at runtime, so its only overhead is space. Metadata is crucial for dynamic loading as well as for other core services (remoting, serialization, reflection, and so on). A program can access metadata using the Reflection API.
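As a minimal sketch (illustrative, not from the slides) of reading metadata through the Reflection API in C#:

using System;
using System.Reflection;

class ReflectionDemo {
    static void Main() {
        // Inspect the metadata of the currently executing assembly
        Assembly asm = Assembly.GetExecutingAssembly();
        foreach (Type t in asm.GetTypes()) {
            Console.WriteLine("Type: {0}", t.FullName);
            // List the methods each type declares itself
            foreach (MethodInfo m in t.GetMethods(BindingFlags.DeclaredOnly |
                                                  BindingFlags.Instance | BindingFlags.Static |
                                                  BindingFlags.Public | BindingFlags.NonPublic))
                Console.WriteLine("  Method: {0}", m.Name);
        }
    }
}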

Custom annotations: the CLR (and C#) allows metadata to be extended with custom information. The abstraction provided is the custom attribute. Each element of the type system can be labeled with attributes; they are attached to the metadata and can be searched using the Reflection API. A programmer can annotate a program with this information, and another program can exploit it to manage the annotated one (e.g., web services).
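A small example of the mechanism (the attribute and method names are hypothetical, not part of Particular): defining a custom attribute and searching for it through the Reflection API.

using System;
using System.Reflection;

// A hypothetical custom attribute marking methods as parallelizable
[AttributeUsage(AttributeTargets.Method)]
class ParallelizableAttribute : Attribute {
    public int Workers;
}

class Worker {
    [Parallelizable(Workers = 4)]
    public void Compute() { /* ... */ }
}

class Finder {
    static void Main() {
        foreach (MethodInfo m in typeof(Worker).GetMethods()) {
            // Search the metadata for our annotation
            object[] attrs = m.GetCustomAttributes(typeof(ParallelizableAttribute), false);
            if (attrs.Length > 0)
                Console.WriteLine("{0} is marked parallelizable", m.Name);
        }
    }
}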

CodeBricks: a library that provides (a) a technique for expressing staged computations, targeted at modern execution environments (JVM / CLR): code fragments are introduced into programs as first-class values that can be composed by means of an operator called Bind; (b) meta-programming support in the runtime environment; (c) an execution model for programs based on this mechanism. The CLIFile reader provides an abstract view of the IL code as an array. Ref. "Multi-stage and Meta-programming Support in Strongly Typed Execution Engines", A. Cisternino, PhD thesis.
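The slides show no CodeBricks code; as a rough analogy only (this is not the CodeBricks API, which composes actual IL fragments rather than delegates), Bind-style composition treats code values as first class and binds one fragment into another:

using System;

class BindAnalogy {
    delegate double Fragment(double x);

    // "Bind" one fragment into another, yielding a new first-class
    // code value (analogy only; names are hypothetical)
    static Fragment Bind(Fragment outer, Fragment inner) {
        return delegate(double x) { return outer(inner(x)); };
    }

    static void Main() {
        Fragment square  = delegate(double x) { return x * x; };
        Fragment plusOne = delegate(double x) { return x + 1; };
        Fragment composed = Bind(square, plusOne);  // (x + 1)^2
        Console.WriteLine(composed(3));             // prints 16
    }
}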

[a]C#: extends the original language by allowing the use of custom attributes inside a method body. How can a tool retrieve annotations from an assembly? The [a]C# run-time provides operations to manipulate the CIL instructions within the scope of an annotation: Extrusion extrudes the annotation by generating a new method whose body and arguments are, respectively, the annotated code and the free variables of the annotation; Injection inserts code immediately before and after an annotation; Replacement replaces the annotated code with the specified code.

Example of [a]C# code:

public void MyMethod() {
    [Annotation1] {
        int i = 0, j = 1;
        [Annotation2] {
            int z = 2;
            i = j + z;
        }
    }
}
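The output of extrusion is not shown on the slides; a hypothetical sketch of what extruding [Annotation2] above could produce is a method whose parameters are the free variables of the annotated block (i is assigned, j is only read):

// Hypothetical extruded method (name and signature are illustrative)
void Extruded_Annotation2(ref int i, int j) {
    int z = 2;    // locals declared inside the annotation stay local
    i = j + z;    // writes to free variables flow back through ref
}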

The [a]C# compiler (acsc) translates MyMethod into IL in which each annotation scope is bracketed by calls into the [a]C# runtime (ACS.Annotation::Begin/End, taking the annotation index as argument):

.custom instance void Annotation1::.ctor
.custom instance void Annotation2::.ctor
.locals init (int32 V0, int32 V1, int32 V2)
  nop
  ldc.i4.0
  call void [acsrun]ACS.Annotation::Begin(int32)   // enter [Annotation1]
  nop
  ldc.i4.0
  stloc V0                                         // i = 0
  ldc.i4.1
  stloc V1                                         // j = 1
  ldc.i4.1
  call void [acsrun]ACS.Annotation::Begin(int32)   // enter [Annotation2]
  nop
  ldc.i4.2
  stloc V2                                         // z = 2
  ldloc V1
  ldloc V2
  add
  stloc V0                                         // i = j + z
  ldc.i4.1
  call void [acsrun]ACS.Annotation::End(int32)     // exit [Annotation2]
  nop
  ldc.i4.0
  call void [acsrun]ACS.Annotation::End(int32)     // exit [Annotation1]
  nop
  ret

Particular: the Analyser performs IL code analysis (finding the annotations) and rewriting, producing two versions: multi-threaded and multi-process. The annotated method is rewritten so that MyMethod simply delegates to a generated master, which packages the free variables of the annotation into a state object and hands the workers a pointer to the extruded code:

.method public instance void MyMethod()
  nop
  ldarg.0
  callvirt instance void Master0()
  ret

.method public instance void Master0()
  .locals init (int32 V0, int32 V1, class state0 V2)
  nop
  ldc.i4.0
  stloc V0
  ldc.i4.0
  stloc V1
  newobj instance void state0::.ctor()
  stloc.s V2
  ldloc.s V2
  ldloc V0
  stfld float64 state0::s1      // capture first free variable
  ldloc.s V2
  ldloc V1
  stfld float64 state0::s2      // capture second free variable
  ldarg.0
  ldftn instance void Worker_0(object)
  ...
  ret
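In C# terms, the rewrite corresponds to something like the following (an illustrative reconstruction mirroring the names in the IL listing, not the actual generated code):

using System.Threading;

class state0 {
    public double s1;   // first captured free variable
    public double s2;   // second captured free variable
}

class Rewritten {
    void Worker_0(object arg) {
        state0 s = (state0)arg;
        // the extruded annotated code would run here, on s.s1 and s.s2
    }

    public void MyMethod() {
        Master0();          // the original body now just delegates
    }

    void Master0() {
        int V0 = 0, V1 = 0;
        state0 s = new state0();
        s.s1 = V0;          // stfld state0::s1
        s.s2 = V1;          // stfld state0::s2
        // hand the worker its state (the multi-process version would
        // instead dispatch to a remote server)
        ThreadPool.QueueUserWorkItem(Worker_0, s);
    }
}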

Architecture: File.acs is compiled by acsc (against ParallelAnnotation.dll) into File.exe; the Analyser then takes File.exe and produces the rewritten ParallelVersion.dll alongside it.

Example of parallelization: a Mandelbrot renderer whose row loop is annotated for Particular.

public void Mandelbrot(Complex z1, Complex z2, int xsteps, int ysteps) {
    // variable declarations and initialization
    [Parallel] {
        int block = ysteps / number_of_worker;
        for (int count = 0; count < number_of_worker; count++) {
            int start = block * count;
            [Process] {
                for (int i = 0; i < xsteps; i++) {
                    for (int j = start; j < start + block; j++) {
                        // Draw the Mandelbrot fractal
                    }
                }
            }
        }
    }
}
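Particular performs this rewrite at the IL level; purely as an illustration of what the multi-threaded version amounts to (hand-written C#, not the generated code; number_of_worker is assumed to be a field):

using System.Threading;

class MandelbrotSketch {
    const int number_of_worker = 4;   // assumed constant from the slide

    static void RenderBand(int start, int block, int xsteps) {
        for (int i = 0; i < xsteps; i++)
            for (int j = start; j < start + block; j++) {
                // Draw the Mandelbrot fractal (omitted)
            }
    }

    public static void MandelbrotParallel(int xsteps, int ysteps) {
        int block = ysteps / number_of_worker;
        Thread[] workers = new Thread[number_of_worker];
        for (int count = 0; count < number_of_worker; count++) {
            int start = block * count;   // each worker renders one band of rows
            workers[count] = new Thread(delegate() {
                RenderBand(start, block, xsteps);
            });
            workers[count].Start();
        }
        foreach (Thread t in workers)
            t.Join();                    // wait for all bands to finish
    }
}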

Example of parallelization: the build pipeline. The annotated sources are first compiled by acsc:

acsc.exe /out:.\bin\Debug\Mandelbrot.exe /r:C:\Partizione_D\Projects\codebricks\ACS\test\CParallel\CParallelAnnotation\bin\Debug\CParallelAnnotation.dll MandelGraph.acs Program.cs Form1.cs Form1.Designer.cs CQueue.cs Dialog.cs

This produces the sequential Mandelbrot.exe. The Analyser then generates the parallel versions from it:

Analyser.exe -a C:\Partizione_D\Projects\MandelbrotPar\Mandelbrot\bin\Debug\Mandelbrot.exe -m genMandel -t thread

produces MandelbrotParallel.dll (multi-threaded), while

Analyser.exe -a C:\Partizione_D\Projects\MandelbrotPar\Mandelbrot\bin\Debug\Mandelbrot.exe -m genMandel -t process hosts.txt

produces MandelbrotParallel.dll together with Rem_MandelbrotParallelServer0.exe / Rem_MandelbrotParallelServer0.config and Rem_MandelbrotParallelServer1.exe / Rem_MandelbrotParallelServer1.config (multi-process). In both cases the generated assembly is verified with peverify.

Conclusion: Particular generates parallel code from an annotated sequential program. The process is programmer-driven and gives good results. Advantages: it is cross-platform, the transformations happen at the binary level, and debugging is supported.

Agenda: What are the problems? A possible solution. Future work.

Future work: on the implementation side, a scheduler, more parallel models, communication, and synchronization; on the theory side, a formal specification of the bytecode rewriting.