On High Performance Computing and Grid Activities at Vilnius Gediminas Technical University (VGTU)
Dr. Vadimas Starikovičius, VGTU, Parallel Computing Laboratory
Dr. Dalius Mažeika, VGTU, Parallel Computing Laboratory
2nd NGN Workshop, Estonian Academy of Sciences, Tallinn, 19-21 January 2005

Outline
• Brief history of high performance computing at VGTU
• Parallel Computing Laboratory (PCL)
• Computational resources at VGTU
• PCL research activities
• GRID activities at VGTU
• Future plans

Brief history
• First PC cluster (2 PCs) was built using Linux and PVM.
• High Performance Computing project was supported by the Lithuanian Government and IBM.
• IBM SP2 was installed and an IBM RS/6000 cluster was built.
• First PhD thesis on parallel computing was defended; a toolkit for sequential-code parallelization was developed.
• Parallel Computing Laboratory at VGTU was established.
• PC cluster (20 CPUs) was built (Rpeak = 28 GFlop/s).
• PC cluster was expanded to 36 CPUs (Rpeak = 130 GFlop/s).
• GRID testbed (VGTU-KUT-BGM).

Parallel Computing Laboratory (PCL)
Main activities:
• Maintenance and development of computing systems
• Offering free access to computational facilities to all Lithuanian universities and research institutes
• Consulting, expertise, training and educational activities
• Research in the area of high-performance computers and parallel algorithms
Staff: 5 persons (1 professor, 3 doctors, 1 programmer)

Parallel computing systems at VGTU
IBM SP named Daumantas
– 4 Thin nodes with High Performance Switch
– Specification of a node:
   • RISC POWER2 120 MHz processor
   • 128 MB RAM
   • 4.5 GB SCSI-2 HDD
   • 110 MB/s Enhanced Switch Adapter
   • 155 Mb/s ATM adapter
– 36.4 GB SSA disk array
– AIX, POE v.2.4 (MPI implementation from IBM)
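For illustration, the kind of MPI code such a system runs; this is a minimal generic sketch, not code from the VGTU installation:

    /* hello_mpi.c - minimal MPI program; builds against any MPI
       implementation, including IBM's POE MPI on AIX. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks  */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

On the SP this would typically be compiled with IBM's mpcc wrapper and launched under POE, e.g. "poe ./hello -procs 4"; those command names are recalled from IBM documentation of that era, not taken from the slides.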

Parallel computing systems at VGTU
Self-made PC cluster named Vilkas (36 CPUs)
– Specifications of the 16 nodes:
   • Intel Pentium 4 Prescott HT
   • 1 GB 400 MHz DDRAM
   • 300 GB SATA HDD
   • Gigabit Ethernet NIC
– Specifications of the 10 SMP nodes:
   • Dual Intel Pentium III (Tualatin) 1.4 GHz, 512 KB L2
   • 1 GB 266 MHz DDRAM
   • 80 GB ATA/133 HDD
   • Gigabit Ethernet NIC
– Total: 36 (16 + 2×10) CPUs, 20 GB RAM, 5.6 TB HDD
– Peak performance Rpeak = 130 GFlop/s
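As a rough consistency check, peak performance follows from

    R_{peak} = N_{CPU} \times f_{clock} \times n_{FLOP/cycle}

Assuming one floating-point operation per cycle on the Pentium III (an assumption, not a figure from the slides), the 20 Pentium III CPUs alone give 20 × 1.4 GHz × 1 = 28 GFlop/s, matching the Rpeak quoted for the 20-CPU cluster on the history slide; the 16 Pentium 4 nodes contribute the remaining ~102 GFlop/s.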

PCL research activities
Development of parallel algorithms for modelling industrial problems:
– Nonlinear optics problems
– Multiphase flow in porous media
– Parallel Discrete Element Method for flows of granular materials
Development of parallelization tools:
– Master-slave automatic parallelization toolkit (see the sketch after this list)
– Tool for parallelization of Branch and Bound algorithms
– Parallel C++ Arrays toolkit
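To make the master-slave pattern concrete, here is a minimal hand-written MPI sketch of what such a toolkit automates; the task (squaring an integer) and all names are illustrative, not the toolkit's actual output or API:

    /* master_slave.c - minimal master-slave work distribution in MPI. */
    #include <mpi.h>
    #include <stdio.h>

    #define NTASKS   100
    #define TAG_WORK 1
    #define TAG_STOP 2

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {            /* master: hand out tasks, collect results */
            int sent = 0, done = 0, result;
            MPI_Status st;
            for (int s = 1; s < size && sent < NTASKS; s++) {
                MPI_Send(&sent, 1, MPI_INT, s, TAG_WORK, MPI_COMM_WORLD);
                sent++;
            }
            while (done < sent) {   /* refill each slave as results arrive */
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD, &st);
                done++;
                if (sent < NTASKS) {
                    MPI_Send(&sent, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                    sent++;
                }
            }
            for (int s = 1; s < size; s++)   /* shut the slaves down */
                MPI_Send(&sent, 1, MPI_INT, s, TAG_STOP, MPI_COMM_WORLD);
            printf("master: %d tasks completed\n", done);
        } else {                    /* slave: loop until the stop tag arrives */
            int task, result;
            MPI_Status st;
            while (1) {
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP)
                    break;
                result = task * task;        /* placeholder for real work */
                MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

The point of the pattern: the master refills whichever slave returns a result first, so the load balances automatically across heterogeneous nodes such as the Pentium 4 and SMP Pentium III machines above.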

VGTU GRID
Built using the Globus Toolkit
Motivation:
– basic building block of most GRID middleware
– to get experience: building from source, configuring services step by step
– possibility to include AIX and Windows machines
– to test web-services-based middleware (Web Services vs. pre-WS services)
Plans: Condor-G, MPICH-G2, GENIUS
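For a flavour of the pre-WS side of that comparison, a classic GRAM job description in RSL notation; a sketch recalled from GT2-era documentation, with a hypothetical resource contact:

    & (executable = /bin/hostname)
      (count = 4)

Such a description would be submitted with something like "globusrun -o -r cluster.vgtu.lt/jobmanager-pbs" plus the RSL; exact commands and flags vary between Globus Toolkit versions, so treat this as an assumption rather than the testbed's actual configuration.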

GRID testbed
– VGTU: 36-CPU PC cluster (GT 3.2)
– KUT: 12-CPU PC cluster (GT 3.2)
– BGM: 22-CPU Itanium2 cluster (NPACKage, GT 2.2.4)
– C&A: 1 Gbps network

Future plans
• Participate in LithuanianGRID, forming BalticGRID, NorduGRID
• Extend computational resources using EU funds for Lithuanian studies and science infrastructure
• Extend activities and staff of PCL

Thank you for your attention…