Slide-1: High Performance Embedded Computing Software Initiative (HPEC-SI)
Dr. Jeremy Kepner, MIT Lincoln Laboratory
MITRE / AFRL / MIT Lincoln Laboratory; www.hpec-si.org
This work is sponsored by the High Performance Computing Modernization Office under Air Force Contract F19628-00-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.

Slide-2: Outline. Introduction (DoD Need, Program Structure); Software Standards; Parallel VSIPL++; Future Challenges; Summary

Slide-3: Overview - High Performance Embedded Computing (HPEC) Software Initiative
Challenge: Transition advanced software technology and practices into major defense acquisition programs.
HPEC Software Initiative programs: Applied Research, Development, Demonstration (building on DARPA advanced research).
Demonstration targets include the Common Imagery Processor (CIP) and the Enhanced Tactical Radar Correlator (ETRAC) for ASARS-2, spanning shared-memory server and embedded multi-processor platforms.

Slide-4: Why Is DoD Concerned with Embedded Software?
Estimated DoD expenditures for embedded signal and image processing hardware and software ($B). Source: “HPEC Market Study,” March 2001.
COTS acquisition practices have shifted the burden from “point design” hardware to “point design” software. Software costs for embedded systems could be reduced by one-third with improved programming models, methodologies, and standards.

Slide-5: Issues with Current HPEC Development - Inadequacy of Software Practices & Standards
High Performance Embedded Computing is pervasive through DoD applications (NSSN, AEGIS, Rivet Joint, Standard Missile, Predator, Global Hawk, U-2, JSTARS, MSAT-Air, P-3/APS-137, F-16, MK-48 Torpedo):
- Airborne Radar Insertion program: 85% software rewrite for each hardware platform
- Missile common processor: processor board costs < $100K, software development costs > $100M
- Torpedo upgrade: two software rewrites required after changes in hardware design
System development/acquisition stages (4 years): System Tech. Development, System Field Demonstration, Engineering/Manufacturing Development, Insertion to Military Asset; meanwhile the signal processor evolves from 1st gen. through 6th gen.
Today, embedded software is: not portable, not scalable, difficult to develop, expensive to maintain.

Slide-6: Evolution of Software Support Towards “Write Once, Run Anywhere/Anysize”
The timeline (1990, 2000, 2005) contrasts DoD software development with COTS development as the stack evolves from application plus vendor software to application, middleware, and vendor software layers built on embedded software standards.
- Application software has traditionally been tied to the hardware.
- Many acquisition programs are developing stove-piped middleware “standards.”
- Open software standards can provide portability, performance, and productivity benefits and support “Write Once, Run Anywhere/Anysize.”

Slide-7: Overall Initiative Goals & Impact
Goals: Performance (1.5x), Portability (3x), Productivity (3x), through an HPEC Software Initiative cycle of Demonstrate, Develop, Prototype around object-oriented open standards that are interoperable and scalable.
Metrics: Portability = reduction in lines-of-code to change, port, or scale to a new system; Productivity = reduction in overall lines-of-code; Performance = computation and communication benchmarks.
Program Goals:
- Develop and integrate software technologies for embedded parallel systems to address portability, productivity, and performance
- Engage acquisition community to promote technology insertion
- Deliver quantifiable benefits

Slide-8: HPEC-SI Path to Success
Benefit to DoD Programs: reduces software cost & schedule; enables rapid COTS insertion; improves cross-program interoperability; basis for improved capabilities.
Benefit to DoD Contractors: reduces software complexity & risk; easier comparisons/more competition; increased functionality.
Benefit to Embedded Vendors: lower software barrier to entry; reduced software maintenance costs; evolution of open standards.
The HPEC Software Initiative builds on proven technology, business models, and better software practices.

Slide-9: Organization
Partnership with ODUSD(S&T), Government Labs, FFRDCs, Universities, Contractors, Vendors, and DoD programs; over 100 participants from over 20 organizations.
Executive Committee: Dr. Charles Holland PADUSD(S+T), RADM Paul Sullivan N77, ...
Government Lead: Dr. Rich Linderman AFRL.
Technical Advisory Board: Dr. Rich Linderman AFRL, Dr. Richard Games MITRE, Mr. John Grosh OSD, Mr. Bob Graybill DARPA/ITO, Dr. Keith Bromley SPAWAR, Dr. Mark Richards GTRI, Dr. Jeremy Kepner MIT/LL.
Demonstration: Dr. Keith Bromley SPAWAR, Dr. Richard Games MITRE, Dr. Jeremy Kepner MIT/LL, Mr. Brian Sroka MITRE, Mr. Ron Williams MITRE, ...
Development: Dr. James Lebak MIT/LL, Dr. Mark Richards GTRI, Mr. Dan Campbell GTRI, Mr. Ken Cain MERCURY, Mr. Randy Judd SPAWAR, ...
Applied Research: Mr. Bob Bond MIT/LL, Mr. Ken Flowers MERCURY, Dr. Spaanenburg PENTUM, Mr. Dennis Cottel SPAWAR, Capt. Bergmann AFRL, Dr. Tony Skjellum MPISoft, ...
Advanced Research: Mr. Bob Graybill DARPA.

Slide-10: Outline. Introduction; Software Standards (Standards Overview, Future Standards); Parallel VSIPL++; Future Challenges; Summary

Slide-11: Emergence of Component Standards
The HPEC Initiative builds on completed research and existing standards and libraries. Within a parallel embedded processor (nodes P0-P3 with a node controller, system controller, consoles, and other computers), the standards map as follows:
- Control communication: CORBA, HP-CORBA
- Data communication: MPI, MPI/RT, DRI
- Computation: VSIPL, VSIPL++, ||VSIPL++
Definitions: VSIPL = Vector, Signal, and Image Processing Library; ||VSIPL++ = Parallel Object-Oriented VSIPL; MPI = Message Passing Interface; MPI/RT = MPI Real-Time; DRI = Data Reorganization Interface; CORBA = Common Object Request Broker Architecture; HP-CORBA = High Performance CORBA.

Slide-12: The Path to Parallel VSIPL++
Phase 1 - Demonstration: Existing Standards (VSIPL, MPI); Development: Object-Oriented Standards (VSIPL++ prototype); Applied Research: Unified Computation/Communication Library. Goal: demonstrate insertions into fielded systems (e.g., CIP) and demonstrate 3x portability.
Phase 2 - Demonstration: Object-Oriented Standards (VSIPL++); Development: Unified Comp/Comm Library (Parallel VSIPL++ prototype); Applied Research: Fault tolerance. Goal: high-level code abstraction, reduce code size 3x.
Phase 3 - Demonstration: Unified Comp/Comm Library (Parallel VSIPL++, the world's first parallel object-oriented standard); Development: Fault tolerance; Applied Research: Self-optimization. Goal: a unified embedded computation/communication standard, demonstrate scalability.
Status: first demo successfully completed; VSIPL++ v0.5 spec completed; VSIPL++ v0.1 code available; Parallel VSIPL++ spec in progress; high performance C++ demonstrated.

Slide-13: Working Group Technical Scope
Development (VSIPL++) and Applied Research (Parallel VSIPL++) topics:
- MAPPING (task/pipeline parallel)
- Reconfiguration (for fault tolerance)
- Threads
- Reliability/Availability
- Data Permutation (DRI functionality)
- Tools (profiles, timers, ...)
- Quality of Service
- MAPPING (data parallelism)
- Early binding (computations)
- Compatibility (backward/forward)
- Local Knowledge (accessing local data)
- Extensibility (adding new functions)
- Remote Procedure Calls (CORBA)
- C++ Compiler Support
- Test Suite
- Adoption Incentives (vendor, integrator)

Slide-14: Overall Technical Tasks and Schedule
Gantt chart of tasks from FY01 through FY08 (near, mid, and long term), each moving through Applied Research, Development, and Demonstrate phases:
- VSIPL (Vector, Signal, and Image Processing Library)
- MPI (Message Passing Interface)
- VSIPL++ (Object Oriented): v0.1 Spec, v0.1 Code, v0.5 Spec & Code, v1.0 Spec & Code
- Parallel VSIPL++: v0.1 Spec, v0.1 Code, v0.5 Spec & Code, v1.0 Spec & Code
- Fault Tolerance / Self-Optimizing Software
Demonstrations: CIP Demo, Demo 2, Demo 3, Demo 4, Demo 5, Demo 6.

Slide-15: HPEC-SI Goals vs. 1st Demo Achievements
- Performance: goal 1.5x, achieved 2x
- Portability: goal 3x, achieved 10x+
- Productivity: goal 3x, achieved 6x*
Notes: Portability - zero code changes required; Productivity - DRI code 6x smaller vs. MPI (est.*); Performance - 3x reduced cost or form factor.
HPEC Software Initiative cycle: Demonstrate, Develop, Prototype; object-oriented open standards, interoperable & scalable.

Slide-16: Outline. Introduction; Software Standards; Parallel VSIPL++ (Technical Basis, Examples); Future Challenges; Summary

Slide-17: Parallel Pipeline
The stages of the signal processing algorithm are mapped onto a parallel computer as a pipeline:
- Beamform: X_OUT = w * X_IN
- Detect: X_OUT = |X_IN| > c
- Filter: X_OUT = FIR(X_IN)
Mapping is data parallel within stages and task/pipeline parallel across stages.
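For reference, the three stage operations can be written as plain serial loops. This is only an illustrative sketch; the function names, the direct-form FIR loop, and the std::vector types are assumptions for the example, not code from the slide.

#include <complex>
#include <cstddef>
#include <vector>

using cvec = std::vector<std::complex<float>>;

// Filter: X_OUT = FIR(X_IN), a direct-form FIR convolution with the given taps.
cvec fir(const cvec& x, const cvec& taps) {
    cvec y(x.size());
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < taps.size() && k <= n; ++k)
            y[n] += taps[k] * x[n - k];
    return y;
}

// Beamform: X_OUT = w * X_IN, a single complex weight applied to every sample.
cvec beamform(const cvec& x, std::complex<float> w) {
    cvec y(x.size());
    for (std::size_t n = 0; n < x.size(); ++n)
        y[n] = w * x[n];
    return y;
}

// Detect: X_OUT = |X_IN| > c, threshold the magnitude of each sample.
std::vector<bool> detect(const cvec& x, float c) {
    std::vector<bool> y(x.size());
    for (std::size_t n = 0; n < x.size(); ++n)
        y[n] = std::abs(x[n]) > c;
    return y;
}

In the parallel version, each of these loops is split across processors (data parallel within a stage), while different processors run different stages on successive data frames (task/pipeline parallel across stages).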

Slide-18: Types of Parallelism
The example pipeline (Input -> FIR Filters -> Beamformer 1 and 2 -> Detector 1 and 2, coordinated by a Scheduler) illustrates four types of parallelism: task parallel, pipeline, round robin, and data parallel.

Slide-19: Current Approach to Parallel Code
Algorithm and mapping are expressed together in the code. With two processors per stage (Proc 1, 2 -> Stage 1; Proc 3, 4 -> Stage 2):

while (!done) {
  if (rank() == 1 || rank() == 2) stage1();
  else if (rank() == 3 || rank() == 4) stage2();
}

Adding processors to Stage 2 (Proc 5, 6) requires changing the code:

while (!done) {
  if (rank() == 1 || rank() == 2) stage1();
  else if (rank() == 3 || rank() == 4 || rank() == 5 || rank() == 6) stage2();
}

Algorithm and hardware mapping are linked, so the resulting code is non-scalable and non-portable.

Slide-20: Scalable Approach - Lincoln Parallel Vector Library (PVL)
The same A = B + C code runs under a single-processor mapping or a multi-processor mapping; only the maps change:

#include
void addVectors(aMap, bMap, cMap) {
  Vector<Complex<Float> > a('a', aMap, LENGTH);
  Vector<Complex<Float> > b('b', bMap, LENGTH);
  Vector<Complex<Float> > c('c', cMap, LENGTH);
  b = 1;
  c = 2;
  a = b + c;
}

Single-processor and multi-processor code are the same; maps can be changed without changing software; high-level code is compact.

Slide-21: C++ Expression Templates and PETE
For the expression A = B + C * D, the expression-template type is the parse tree
BinaryNode<OpAssign, Vector, BinaryNode<OpAdd, Vector, BinaryNode<OpMultiply, Vector, Vector>>>
For A = B + C, evaluation proceeds as:
1. Pass B and C references to operator+.
2. Create the expression parse tree.
3. Return the expression parse tree.
4. Pass the expression tree reference to operator=.
5. Calculate the result and perform the assignment.
Only references (B&, C&) are copied; parse trees, not vectors, are created. Expression templates enhance performance by allowing temporary variables to be avoided.
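The mechanism can be illustrated with a stripped-down sketch. This is not PETE or VSIPL++ code; the Vec and AddNode names and the double element type are invented for the example.

#include <cstddef>
#include <vector>

template <class L, class R>
struct AddNode {                 // one node of the expression parse tree
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assignment walks the parse tree element by element: one loop, no temporaries.
    template <class Expr>
    Vec& operator=(const Expr& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// operator+ only builds a parse-tree node holding references; no vector is created.
template <class L, class R>
AddNode<L, R> operator+(const L& l, const R& r) { return {l, r}; }

int main() {
    Vec a(8), b(8, 1.0), c(8, 2.0), d(8, 3.0);
    a = b + c + d;   // type: AddNode<AddNode<Vec, Vec>, Vec>, evaluated in one pass
    return a[0] == 6.0 ? 0 : 1;
}

The assignment operator is where the single pass over the data happens, which is why the slide's step 5 (calculate result and perform assignment) follows the tree-building steps 1 through 4.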

Slide-22: PETE Linux Cluster Experiments
(Charts: relative execution time vs. vector length, 8 to 131072, for A = B + C, A = B + C * D, and A = B + C * D / E + fft(F).)
PVL with VSIPL has a small overhead; PVL with PETE can surpass VSIPL.

Slide-23: PowerPC AltiVec Experiments
Software technologies compared (benchmark expressions: A = B + C, A = B + C * D, A = B + C * D + E * F, A = B + C * D + E / F):
- AltiVec loop: C for-loop, direct use of AltiVec extensions; assumes unit stride and vector alignment.
- VSIPL (vendor optimized): C, AltiVec-aware VSIPro Core Lite (www.mpi-softtech.com); no multiply-add; cannot assume unit stride or vector alignment.
- PETE with AltiVec: C++ PETE operators, indirect use of AltiVec extensions; assumes unit stride and vector alignment.
Results:
- The hand-coded loop achieves good performance, but is problem specific and low level.
- Optimized VSIPL performs well for simple expressions, worse for more complex expressions.
- PETE-style array operators perform almost as well as the hand-coded loop and are general, can be composed, and are high-level.

Slide-24: Outline. Introduction; Software Standards; Parallel VSIPL++ (Technical Basis, Examples); Future Challenges; Summary

Slide-25:
A = sin(A) + 2 * B;
Generated code (no temporaries):
for (index i = 0; i < A.size(); ++i)
  A.put(i, sin(A.get(i)) + 2 * B.get(i));
Apply inlining to transform to:
for (index i = 0; i < A.size(); ++i)
  Ablock[i] = sin(Ablock[i]) + 2 * Bblock[i];
Apply more inlining to transform to:
T* Bp = &(Bblock[0]);
T* Aend = &(Ablock[A.size()]);
for (T* Ap = &(Ablock[0]); Ap < Aend; ++Ap, ++Bp)
  *Ap = fmadd(2, *Bp, sin(*Ap));
Or apply PowerPC AltiVec extensions.
Each step can be automatically generated; the optimization level is whatever the vendor desires.

Slide-26: BLAS zherk Routine
BLAS = Basic Linear Algebra Subprograms. For a Hermitian matrix M, conjug(M) = M^t. zherk performs a rank-k update of Hermitian matrix C: C <- alpha * A * conjug(A)^t + beta * C.

VSIPL code:
A = vsip_cmcreate_d(10,15,VSIP_ROW,MEM_NONE);
C = vsip_cmcreate_d(10,10,VSIP_ROW,MEM_NONE);
tmp = vsip_cmcreate_d(10,10,VSIP_ROW,MEM_NONE);
vsip_cmprodh_d(A,A,tmp);       /* A*conjug(A)^t */
vsip_rscmmul_d(alpha,tmp,tmp); /* alpha*A*conjug(A)^t */
vsip_rscmmul_d(beta,C,C);      /* beta*C */
vsip_cmadd_d(tmp,C,C);         /* alpha*A*conjug(A)^t + beta*C */
vsip_cblockdestroy(vsip_cmdestroy_d(tmp));
vsip_cblockdestroy(vsip_cmdestroy_d(C));
vsip_cblockdestroy(vsip_cmdestroy_d(A));

VSIPL++ code (also parallel):
Matrix<complex<double> > A(10,15);
Matrix<complex<double> > C(10,10);
C = alpha * prodh(A,A) + beta * C;

Slide-27: Simple Filtering Application

int main ()
{
  using namespace vsip;
  const length ROWS = 64;
  const length COLS = 4096;
  vsipl v;
  FFT<Matrix<complex<float> >, complex<float>, FORWARD, 0, MULTIPLE, alg_hint ()>
    forward_fft (Domain (ROWS,COLS), 1.0);
  FFT<Matrix<complex<float> >, complex<float>, INVERSE, 0, MULTIPLE, alg_hint ()>
    inverse_fft (Domain (ROWS,COLS), 1.0);
  const Matrix<complex<float> > weights (load_weights (ROWS, COLS));
  try {
    while (1)
      output (inverse_fft (forward_fft (input ()) * weights));
  }
  catch (std::runtime_error) {
    // Successfully caught access outside domain.
  }
}

Slide-28: Explicit Parallel Filter

#include
using namespace VSIPL;
const int ROWS = 64;
const int COLS = 4096;
int main (int argc, char **argv)
{
  Matrix<complex<float> > W (ROWS, COLS, "WMap"); // weights matrix
  Matrix<complex<float> > X (ROWS, COLS, "WMap"); // input matrix
  load_weights (W);
  try {
    while (1) {
      input (X);                      // some input function
      Y = IFFT (mul (FFT(X), W));
      output (Y);                     // some output function
    }
  }
  catch (Exception &e) { cerr << e << endl; }
}

Slide-29: Multi-Stage Filter (main)

using namespace vsip;
const length ROWS = 64;
const length COLS = 4096;
int main (int argc, char **argv)
{
  sample_low_pass_filter<complex<float> > LPF;
  sample_beamform<complex<float> > BF;
  sample_matched_filter<complex<float> > MF;
  try {
    while (1)
      output (MF(BF(LPF(input ()))));
  }
  catch (std::runtime_error) {
    // Successfully caught access outside domain.
  }
}

Slide-30: Multi-Stage Filter (low pass filter)

template <typename T>
class sample_low_pass_filter {
public:
  sample_low_pass_filter()
    : FIR1_(load_w1 (W1_LENGTH), FIR1_LENGTH),
      FIR2_(load_w2 (W2_LENGTH), FIR2_LENGTH)
  { }
  Matrix<T> operator () (const Matrix<T>& Input)
  {
    Matrix<T> output(ROWS, COLS);
    for (index row = 0; row < ROWS; row++)
      output.row(row) = FIR2_(FIR1_(Input.row(row)).second).second;
    return output;
  }
private:
  FIR<T> FIR1_;
  FIR<T> FIR2_;
};

Slide-31: Multi-Stage Filter (beam former)

template <typename T>
class sample_beamform {
public:
  sample_beamform() : W3_(load_w3 (ROWS,COLS)) { }
  Matrix<T> operator () (const Matrix<T>& Input) const
  {
    return W3_ * Input;
  }
private:
  const Matrix<T> W3_;
};

Slide-32: Multi-Stage Filter (matched filter)

template <typename T>
class sample_matched_filter {
public:
  sample_matched_filter()
    : W4_(load_w4 (ROWS,COLS)),
      forward_fft_ (Domain (ROWS,COLS), 1.0),
      inverse_fft_ (Domain (ROWS,COLS), 1.0)
  {}
  Matrix<T> operator () (const Matrix<T>& Input) const
  {
    return inverse_fft_ (forward_fft_ (Input) * W4_);
  }
private:
  const Matrix<T> W4_;
  FFT<Matrix<complex<float> >, complex<float>, complex<float>, FORWARD, 0, MULTIPLE, alg_hint()> forward_fft_;
  FFT<Matrix<complex<float> >, complex<float>, complex<float>, INVERSE, 0, MULTIPLE, alg_hint()> inverse_fft_;
};

Slide-33: Outline. Introduction; Software Standards; Parallel VSIPL++; Future Challenges (Fault Tolerance, Self Optimization, High Level Languages); Summary

Slide-34: Dynamic Mapping for Fault Tolerance
On the parallel processor, the input task X_IN initially runs on Map 0 (nodes 0,1) and the output task X_OUT on Map 2 (nodes 1,3). When a node fails, the input task is remapped to Map 1 (nodes 0,2), which uses the spare node. Switching processors is accomplished by switching maps; no change to the algorithm is required.
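A minimal sketch of the idea, assuming a hypothetical Map type (a list of node IDs) and task object; these are illustrative stand-ins, not the PVL or VSIPL++ API:

#include <vector>

// Hypothetical map: just the set of node IDs an object is distributed over.
struct Map {
    std::vector<int> nodes;
};

// Hypothetical task: the algorithm reads its placement from 'map' and nothing else.
struct InputTask {
    Map map;
    void remap(const Map& m) { map = m; }   // recovery = swap the map
    // run() would distribute X_IN over map.nodes; the algorithm itself never changes.
};

int main() {
    InputTask input{ Map{{0, 1}} };          // Map 0: nodes 0,1
    Map map1{{0, 2}};                        // Map 1: nodes 0,2 (node 2 is the spare)

    bool node1_failed = true;                // assume some monitor reported the failure
    if (node1_failed)
        input.remap(map1);                   // switch maps; no algorithm change
    return 0;
}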

Slide-35: Dynamic Mapping Performance Results
(Chart: relative time vs. data size.) Good dynamic mapping performance is possible.

Slide-36: Optimal Mapping of Complex Algorithms
Application: Input -> Low Pass Filter (FIR1 with weights W1, FIR2 with weights W2) -> Beamform (multiply by weights W3) -> Matched Filter (FFT, multiply by W4, IFFT) -> Output.
Hardware: workstation, embedded multi-computer, PowerPC cluster, embedded board, Intel cluster.
Each hardware target has a different optimal map; the process of mapping the algorithm to the hardware needs to be automated.

Slide-37: Self-Optimizing Software for Signal Processing (S3P)
Goal: find Min(latency | #CPU) and Max(throughput | #CPU). S3P selects the correct optimal mapping, with excellent agreement between S3P-predicted and achieved latencies and throughputs.
(Charts: latency in seconds and throughput in frames/sec vs. #CPU (4-8) for small (48x4K) and large (48x128K) problem sizes, annotated with the selected mappings, e.g., 1-1-1-1, 1-1-2-1, 1-2-2-2, 2-2-2-2.)
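The selection step the slide describes can be pictured with a small sketch; the Candidate struct, the helper functions, and the numbers below are illustrative assumptions, not S3P's actual search or the measured values from the charts.

#include <limits>
#include <string>
#include <vector>

struct Candidate { std::string mapping; int cpus; double latency; double throughput; };

Candidate best_for_latency(const std::vector<Candidate>& c, int cpu_budget) {
    Candidate best{"", 0, std::numeric_limits<double>::infinity(), 0.0};
    for (const auto& m : c)                          // Min(latency | #CPU)
        if (m.cpus <= cpu_budget && m.latency < best.latency) best = m;
    return best;
}

Candidate best_for_throughput(const std::vector<Candidate>& c, int cpu_budget) {
    Candidate best{"", 0, 0.0, 0.0};
    for (const auto& m : c)                          // Max(throughput | #CPU)
        if (m.cpus <= cpu_budget && m.throughput > best.throughput) best = m;
    return best;
}

int main() {
    // Example candidates in the slide's stage-by-stage notation; the latency and
    // throughput values are made up for the sketch.
    std::vector<Candidate> c = {
        {"1-1-2-1", 5, 0.22, 12.0},
        {"1-2-2-1", 6, 0.18, 15.0},
        {"1-2-2-2", 7, 0.15, 19.0},
    };
    Candidate lat = best_for_latency(c, 7);
    Candidate thr = best_for_throughput(c, 7);
    return (lat.mapping == "1-2-2-2" && thr.mapping == "1-2-2-2") ? 0 : 1;
}

In S3P the candidate latencies and throughputs come from measurement and prediction rather than hard-coded numbers, which is why the slide emphasizes the agreement between predicted and achieved values.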

Slide-38: High Level Languages
A Parallel Matlab Toolbox would sit between high-performance Matlab applications (DoD sensor processing, DoD mission planning, scientific simulation, commercial applications) and parallel computing hardware, providing both the user interface and the hardware interface.
- Parallel Matlab need has been identified: HPCMO (OSU).
- Required user interface has been demonstrated: Matlab*P (MIT/LCS), PVL (MIT/LL).
- Required hardware interface has been demonstrated: MatlabMPI (MIT/LL).
The Parallel Matlab Toolbox can now be realized.

Slide-39: MatlabMPI Deployment (speedup)
- Maui: image filtering benchmark (300x on 304 CPUs)
- Lincoln: signal processing (7.8x on 8 CPUs), radar simulations (7.5x on 8 CPUs), hyperspectral (2.9x on 3 CPUs)
- MIT: LCS Beowulf (11x Gflops on 9 duals), AI Lab face recognition (10x on 8 duals)
- Other: Ohio St. EM simulations, ARL SAR image enhancement, Wash U hearing aid simulations, So. Ill. benchmarking, JHU digital beamforming, ISL radar simulation, URI heart modeling
The rapidly growing MatlabMPI user base demonstrates the need for parallel Matlab; scaling to 300 processors has been demonstrated.
(Chart: performance in Gigaflops vs. number of processors, image filtering on the IBM SP at the Maui Computing Center.)

Slide-40: Summary
HPEC-SI expected benefit: open software libraries, programming models, and standards that provide portability (3x), productivity (3x), and performance (1.5x) benefits to multiple DoD programs.
Invitation to participate:
- DoD program offices with signal/image processing needs
- Academic and government researchers interested in high performance embedded computing
- Contact: KEPNER@LL.MIT.EDU

Slide-41: The Links
- High Performance Embedded Computing Workshop: http://www.ll.mit.edu/HPEC
- High Performance Embedded Computing Software Initiative: http://www.hpec-si.org/
- Vector, Signal, and Image Processing Library: http://www.vsipl.org/
- MPI Software Technologies, Inc.: http://www.mpi-softtech.com/
- Data Reorganization Initiative: http://www.data-re.org/
- CodeSourcery, LLC: http://www.codesourcery.com/
- MatlabMPI: http://www.ll.mit.edu/MatlabMPI

