
1 PhD Lunchtime Seminars "are weekly lunch meetings aimed at bringing together the students, researchers and professors in our and other departments to discuss our work" http://www.di.unipi.it/~nicotra/wiki/index.php/Main_Page

2 Particular[ly] sharp[y] - Cristian Dittamo, Università di Pisa, 27 March 2007

3 Agenda
What are the problems?
A possible solution
Future work

4 What are the problems?
- Code tangling: a non-functional aspect does not contribute to defining the computation, but to how it is performed (e.g. parallel aspects).
- Knowledge of parallel programming: models, compilers, libraries.
- Axiom nr. 0: programmers are lazy.
- Hardware evolution: Moore's law.

5 Hardware evolution

6 Parallel programming approaches
- Parallel languages
  - HPF (High Performance Fortran): shared memory, SPMD, data-parallel programming
  - POP-C++: an extension of C++ integrating distributed objects, several remote method invocation semantics, and resource requirements
- Parallel compilers
  - SUIF (Stanford University): automatically translates sequential C/Fortran programs into parallel C code, shared and distributed memory
- Parallel libraries
  - MPI: message passing, SPMD/MIMD
  - OpenMP: shared memory, SPMD; Intel C++ compiler, GNU gcc v.4
- Skeletons
  - Programmers qualitatively express parallelism at the source level, instantiating and composing a set of pre-defined parallelism exploitation patterns/skeletons.

7 Example – MPI (1/2)
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double f(double);
double f(double a) { return (4.0 / (1.0 + a*a)); }

int main(int argc, char *argv[])
{
    // ... variable declaration ...
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    n = atoi(argv[1]);
    fprintf(stdout, "Process %d of %d is on %s\n", myid, numprocs, processor_name);
    fflush(stdout);
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    h = 1.0 / (double) n;
    sum = 0.0;
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double)i - 0.5);
        sum += f(x);
    }
    mypi = h * sum;

8 Example – MPI (2/2)
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0) {
        printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}

9 Example – OpenMP (1/1)
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double f(double);
double f(double a) { return (4.0 / (1.0 + a*a)); }

int main(int argc, char *argv[])
{
    int n, i;
    double pi, h, sum, x;
    const double PI25DT = 3.141592653589793238462643;
    n = atoi(argv[1]);
    h = 1.0 / n;
    sum = 0.0;
    #pragma omp parallel for private(x) reduction(+:sum)
    for (i = 1; i <= n; i++) {
        x = h * (i - 0.5);
        sum += (4.0 / (1.0 + x*x));
    }
    pi = h * sum;
    printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
    fflush(stdout);
    return 0;
}

10 Agenda
What are the problems?
A possible solution
Future work

11 A possible solution: a layered stack. Particular and [a]C# sit on top of Code Bricks, the parallel models, and the CLIFile reader; below them a virtual machine [CLR / Mono / SSCLI (Rotor)] runs on an operating system [Windows, Linux]; the hardware can be a uni-processor, multi-core, multi-processor, or multi-computer system.

12 Without a virtual machine: each application (app1, app2, app3) carries its own runtime (RT1, RT2, RT3) on top of the operating system and hardware.

13 A new layer: the applications (app1, app2, app3) share a single virtual machine (CLR / Mono / Rotor) layered between them and the operating system and hardware.

14 .NET
The goal is to support program interoperability at all levels.
Two essential elements of the platform are:
- the Common Language Runtime (CLR);
- a class library (the Framework) which complements the CLR.
The main services supported by .NET are:
- a common type system;
- a garbage collection system;
- an execution environment.

15 Common Language Infrastructure (CLI)
- The Common Type System (CTS) provides a rich type system that supports the types and operations found in many programming languages.
- The Common Language Specification (CLS) describes what compiler implementers must do in order for their languages to integrate well with other languages.
- Metadata: the CLI uses metadata to describe and reference the types defined by the CTS. Metadata is stored (that is, persisted) in a way that is independent of any particular programming language.
- The Virtual Execution System (VES) is responsible for loading and running programs written for the CLI.
Ref. ECMA 335
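As a small illustration of the CTS/CLS distinction (a minimal sketch of my own, not taken from ECMA 335; the Calculator class and its members are hypothetical), the standard CLSCompliant attribute lets the C# compiler check that a library's public surface uses only types that every CLS language must support:

using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // CLS-compliant: int (System.Int32) is a CTS type every CLS-conforming language supports.
    public int Add(int a, int b) { return a + b; }

    // uint (System.UInt32) is a valid CTS type but is not part of the CLS, so the compiler
    // would warn about this public signature; marking it [CLSCompliant(false)] silences the
    // warning and documents the limitation for callers written in other languages.
    [CLSCompliant(false)]
    public uint AddUnsigned(uint a, uint b) { return a + b; }
}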

16 Virtual Execution System (VES)
- Provides the services needed to execute managed code and data, using the metadata to connect separately generated modules together at runtime (late binding).
- The CLI machine model uses an evaluation stack: instructions that copy values from memory to the evaluation stack are "loads"; instructions that copy values from the stack back to memory are "stores".
- Implements and enforces the CTS model.
- Loads and runs programs written for the CLI.
Ref. ECMA 335
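To make the load/store model concrete, here is a tiny C# fragment of my own (not from the slides), with comments sketching the CIL that a standard C# compiler emits for the assignment when no optimizations are applied:

class StackDemo
{
    static int Main()
    {
        int i = 0, j = 1, z = 2;
        // The assignment below compiles roughly to:
        //   ldloc j   (load: copy j from its local slot onto the evaluation stack)
        //   ldloc z   (load: copy z onto the stack)
        //   add       (pop both operands, push their sum)
        //   stloc i   (store: pop the result back into local i)
        i = j + z;
        return i;   // 3
    }
}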

17 Compiling source code
Compilers for: C#, Visual Basic, JScript, Alice, APL, COBOL, Component Pascal, Eiffel, Fortran, Haskell, Mercury, ML, Mondrian, Oberon, Perl, Python, RPG, Scheme, and Smalltalk.
Ref. "Applied .NET Framework Programming", J. Richter

18 Execution (figure: IL code is compiled to native code at run time)
Ref. "Applied .NET Framework Programming", J. Richter

19 How I am made
- Reflection is the ability of a program to access a description of itself.
- A system may support reflection at different levels: from simple information on types (C++ RTTI) to reflecting the entire structure of the program.
- Another dimension of reflection is whether a program is allowed to read or change itself.
- The CLR supports an extensible model of reflection at the type-system level.
- CLI files contain definitions of types annotated with their description (metadata).

20 Reflection
- CLI = Data + Metadata
- Metadata are static and cannot be changed at runtime, so the only overhead is in terms of space.
- Metadata are crucial to support dynamic loading as well as other core services (e.g. remoting, serialization, reflection, and so on).
- A program can access metadata using the reflection API.
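For example (a minimal sketch of my own; the program name and the idea of loading an arbitrary assembly from the command line are illustrative assumptions), a program can open a CLI file and walk its metadata through the reflection API:

using System;
using System.Reflection;

class MetadataDump
{
    static void Main(string[] args)
    {
        // Load the CLI file named on the command line and enumerate its type metadata.
        Assembly asm = Assembly.LoadFrom(args[0]);
        foreach (Type t in asm.GetTypes())
        {
            Console.WriteLine(t.FullName);
            // List only the public instance methods declared by the type itself.
            foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
                Console.WriteLine("  " + m.ReturnType.Name + " " + m.Name);
        }
    }
}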

21 Custom annotations
- The CLR (and C#) allows metadata to be extended with custom information.
- The abstraction provided is the custom attribute.
- Each element of the type system can be labelled with attributes.
- These attributes are attached to the metadata and can be searched using the reflection API.
- A programmer can annotate a program with this information, and another program can exploit it to manage the annotated one.
- Ex: Web services
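A minimal plain C# sketch of my own (TraceAttribute, Worker and Finder are hypothetical names; this shows ordinary attributes on declarations, not the in-body annotations that [a]C# adds later):

using System;
using System.Reflection;

// A custom attribute: user-defined metadata attached to a program element.
[AttributeUsage(AttributeTargets.Method)]
class TraceAttribute : Attribute { }

class Worker
{
    [Trace]
    public void DoWork() { /* ... */ }
    public void Other() { }
}

class Finder
{
    static void Main()
    {
        // Another program (or tool) can search for the annotation via the reflection API.
        foreach (MethodInfo m in typeof(Worker).GetMethods())
            if (m.GetCustomAttributes(typeof(TraceAttribute), false).Length > 0)
                Console.WriteLine("Annotated method: " + m.Name);
    }
}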

22 CodeBricks
- A library that provides:
  - a technique suitable to express staged computations, targeted at modern execution environments (JVM / CLR): code fragments are introduced in programs as first-class values that can be composed by means of an operator called Bind;
  - meta-programming support in the runtime environment;
  - an execution model for programs based on this mechanism.
- CLIFile reader: provides an abstract view of the IL code as an array.
- Ref. "Multi-stage and Meta-programming Support in Strongly Typed Execution Engines", A. Cisternino, PhD thesis
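To convey the flavour of Bind without using the actual CodeBricks API (which works on IL and generates specialized methods), here is a purely conceptual C# sketch of my own in which a "code fragment" is just a delegate and binding fixes one of its arguments:

using System;

class BindSketch
{
    // Conceptual stand-in for Bind: fix the first argument of a two-argument fragment,
    // obtaining a new one-argument fragment. CodeBricks performs the analogous composition
    // at the IL level and can compile the result into a real method.
    static Func<double, double> BindFirst(Func<double, double, double> code, double value)
    {
        return x => code(value, x);
    }

    static void Main()
    {
        Func<double, double, double> scaleAndShift = (a, x) => a * x + 1.0;
        Func<double, double> timesThreePlusOne = BindFirst(scaleAndShift, 3.0);  // 'a' fixed to 3
        Console.WriteLine(timesThreePlusOne(2.0));   // prints 7
    }
}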

23 [a]C#
- [a]C# extends the original language by allowing the use of custom attributes inside a method body.
- How can a tool retrieve annotations from an assembly?
- The [a]C# run-time provides operations to manipulate the CIL instructions within the scope of annotations:
  - Extrusion: generates a new method whose body and arguments are, respectively, the annotated code and the free variables of the annotation;
  - Injection: inserts code immediately before and after an annotation;
  - Replacement: replaces the annotated code with the specified code.

24 Example of [a]C# code
public void MyMethod() {
    [Annotation1] {
        int i = 0, j = 1;
        [Annotation2] {
            int z = 2;
            i = j + z;
        }
    }
}

25 [a]C# compiler (acsc)
Source:
public void MyMethod() {
    [Annotation1] {
        int i = 0, j = 1;
        [Annotation2] {
            int z = 2;
            i = j + z;
        }
    }
}
Generated CIL:
.custom instance void Annotation1::.ctor
.custom instance void Annotation2::.ctor
.locals init (int32 V0, int32 V1, int32 V2)
nop
ldc.i4.0
call void [acsrun]ACS.Annotation::Begin(int32)
nop
ldc.i4.0
stloc V0
ldc.i4.1
stloc V1
ldc.i4.1
call void [acsrun]ACS.Annotation::Begin(int32)
nop
ldc.i4.2
stloc V2
ldloc V1
ldloc V2
add
stloc V0
ldc.i4.1
call void [acsrun]ACS.Annotation::End(int32)
nop
ldc.i4.0
call void [acsrun]ACS.Annotation::End(int32)
nop
ret

26 Particular
- IL code analysis: find the annotations
- Rewriting: two versions, multi-threaded and multi-process
Input CIL: the annotated MyMethod shown on the previous slide.
Rewritten CIL:
.method public instance void MyMethod()
nop
ldarg.0
callvirt instance void Master0()
ret
.method public instance void Master0()
.locals init (int32 V0, int32 V1, class state0 V2)
nop
ldc.i4.0
stloc V0
ldc.i4.1
stloc V1
newobj instance void state0::.ctor()
stloc.s V2
ldloc.s V2
ldloc V0
stfld float64 state0::s1
ldloc.s V2
ldloc V1
stfld float64 state0::s2
ldarg.0
ldftn instance void Worker_0(object)
...
ret

27 Architecture
File.acs is compiled by acsc (referencing ParallelAnnotation.dll) into File.exe; the Analyser then takes File.exe and generates ParallelVersion.dll.

28 Example of parallelization
public void Mandelbrot(Complex z1, Complex z2, int xsteps, int ysteps)
{
    // variables declaration and initialization
    [Parallel] {
        int block = ysteps / number_of_worker;
        for (int count = 0; count < number_of_worker; count++) {
            int start = block * count;
            [Process] {
                for (int i = 0; i < xsteps; i++) {
                    for (int j = start; j < start + block; j++) {
                        // Draw the Mandelbrot fractal
                    }
                }
            }
        }
    }
}

29 Example of parallelization
Compiling with acsc:
C:> C:\Partizione_D\Projects\codebricks\ACS\test\CParallel\acsc.exe /out:.\bin\Debug\Mandelbrot.exe /r:C:\Partizione_D\Projects\codebricks\ACS\test\CParallel\CParallelAnnotation\bin\Debug\CParallelAnnotation.dll MandelGraph.acs Program.cs Form1.cs Form1.Designer.cs CQueue.cs Dialog.cs
acsc produces Mandelbrot.exe (sequential code plus the Parallel annotations). The Analyser then generates:
- the thread-based parallel version: MandelbrotParallel.dll
- the process-based parallel version: MandelbrotParallel.dll plus Rem_MandelbrotParallelServer0.exe + Rem_MandelbrotParallelServer0.config and Rem_MandelbrotParallelServer1.exe + Rem_MandelbrotParallelServer1.config
Batch script for the thread version:
@echo off
cls
echo ---------------------------------------------
echo Compile Mandelbrot fractal renderer project
echo ---------------------------------------------
cd C:\Partizione_D\Projects\MandelbrotPar\Mandelbrot
call make2.bat
echo ...DONE
echo.
echo ---------------------------------------------
echo Generate Parallel version using threads
echo ---------------------------------------------
cd C:\Partizione_D\Projects\codebricks\ACS\test\CParallel\Analyser\bin\Debug
Analyser.exe -a C:\Partizione_D\Projects\MandelbrotPar\Mandelbrot\bin\Debug\Mandelbrot.exe -m genMandel -t thread
echo ...DONE
echo.
echo ---------------------------------------------
echo Verify assembly
echo ---------------------------------------------
peverify C:\Partizione_D\Projects\MandelbrotPar\Mandelbrot\bin\Debug\MandelbrotParallel.dll
echo ...DONE
The batch script for the process version is identical, except that the Analyser is invoked with -t process hosts.txt.

30 Conclusion
- Parallel code generation from annotated sequential programs
- Programmer-driven
- Good results
- Advantages: cross-platform, transformations at the binary level, debugging

31 Agenda
What are the problems?
A possible solution
Future work

32 Future work
- Implementation: scheduler, more parallel models, communication, synchronization
- Formal specification: bytecode rewriting

