Lunarc history: 1986-1988 IBM 3090 150/VF; 1988-1991 IBM 3090 170S/VF; 1991-1997 workstations, IBM RS/6000; 1994-1997 IBM SP2, 8 processors; 1998 Origin 2000.

Presentation transcript:

Lunarc history
– 1986-1988: IBM 3090 150/VF
– 1988-1991: IBM 3090 170S/VF
– 1991-1997: Workstations, IBM RS/6000
– 1994-1997: IBM SP2, 8 processors
– 1998: Origin 2000, 46 processors, MIPS R-series
– Origin 2000, 100 processors, R12000, 300 MHz
– 2000: Origin 2000, 116 processors, R12000, 300 MHz
– 2000: Beowulf cluster with 40 AMD 1.1 GHz CPUs
– Some of the Origin 2000 processors were relocated to NSC
– A 64-processor AMD Athlon cluster (WhenIm64); Intel P4 processors added (Toto7)

Current hardware
Husmodern, cluster
– 32 nodes, 1.1 GHz AMD Athlon
WhenIm64/Toto7, clusters
– 65 nodes, AMD 1900+
– 128 nodes, Intel P4
– File server, login nodes, etc.
Ask, SGI Origin 2000
– 48 nodes, R12000, 300 MHz, 12 GB

Current hardware

About Lunarc
Current staff
– 1.3 FTE
Future administration
– 2.5 FTE (minimum, depending on contract formulations)

Current users
Core groups
– Theoretical Chemistry, Physical Chemistry 2, Structural Mechanics
Other large users
– Fluid Mechanics, Fire Safety Engineering, Physics
New groups
– Inflammation Research, Biophysical Chemistry, Astronomy

Current users

Lunarc web
– User registration
– System information
– System usage
– Job submission?

Using clusters
Log in
– Use ssh, unix tools, etc.
mkdir proj
sftp/scp
vi/joe submit script
– Submit script documentation
Queue management
– qsub script
Transfer result files back
– sftp/scp
For many users this is a straightforward process (sketched below), so why do we get so many questions?
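The steps above can be illustrated end to end. Below is a minimal sketch of the workflow from a user's own machine, assuming a PBS-style batch system behind qsub; the host name cluster.example.org, the proj directory, the submit script contents, and my_program are hypothetical placeholders, not Lunarc's actual configuration.

    # 1. Log in to the cluster and create a project directory (hypothetical host)
    ssh user@cluster.example.org
    mkdir proj
    exit

    # 2. Copy the input file and submit script over from the local machine
    scp input.dat submit.sh user@cluster.example.org:proj/

    # 3. submit.sh is edited with vi/joe; a minimal PBS-style script
    #    (one node, one hour of wall-clock time) might contain:
    #      #!/bin/sh
    #      #PBS -l nodes=1,walltime=01:00:00
    #      cd $PBS_O_WORKDIR
    #      ./my_program input.dat > output.dat

    # 4. Submit the job to the queue and check its status
    ssh user@cluster.example.org
    cd proj
    qsub submit.sh
    qstat -u $USER

    # 5. When the job has finished, transfer the result files back
    exit
    scp user@cluster.example.org:proj/output.dat .

Hiding these steps behind a browser form is presumably the motivation for the web portal described on the following slides.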

Web portal for our clusters

– Good knowledge about local circumstances
– Traditional users -> clusters -> grids
– User interface
– Grid of clusters