
1 Tier 3 and Computing facility @ Delhi
Satyaki Bhattacharya, Kirti Ranjan (CDRST, University of Delhi)

2 HPC facility in the department
- 32-node HP blade-based cluster (c-Class BL460 blades)
- Intel Xeon E5450 processors (2 x quad-core), 3 GHz, 80 W
- 32 GB RAM per node
- 12 x 450 GB storage
- Gigabit + InfiniBand connectivity
- We can use a good part of it
- MOAB and Torque for cluster management, job scheduling, and resource management (see the job-submission sketch below)
- Not connected to the 10 Mbps MPLS link
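
Since the cluster uses Torque for resource management and MOAB for scheduling, work is normally run as batch jobs submitted with qsub. The following is a minimal sketch of such a submission, assuming Python is available on the head node; the queue name, job name, resource requests, and the placeholder workload script are illustrative assumptions, not the department's actual configuration.

```python
# Minimal sketch: submit a batch job to a Torque/MOAB-managed cluster.
# Queue name, job name, and resource requests below are assumptions.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#PBS -N test_job
#PBS -q batch
#PBS -l nodes=1:ppn=8
#PBS -l walltime=02:00:00
# Job name and queue above are illustrative; nodes=1:ppn=8 requests one
# BL460 node with all 8 cores (2 x quad-core).
cd $PBS_O_WORKDIR
echo "Running on $(hostname)"
# ./run_analysis.sh   <- placeholder for the actual workload
"""

def submit(script_text: str) -> str:
    """Write the job script to a temporary file and submit it with qsub."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    # On success, qsub prints the job id (e.g. "123.headnode") to stdout.
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Submitted job:", submit(JOB_SCRIPT))
```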

3 Cluster components

4 Cluster Components

5 Tier 3 status
- We have tendered for very similar systems
- Rack-mount 1U servers instead of blades
- Same processor configuration as the department cluster
- From Sun or HP (DL160 G5 or X4150)
- Fewer nodes, but dedicated; they will be connected to the MPLS link
- Similar amount of storage
- Purchase is at an advanced stage
- In installation and operation we will gain from our experience with the existing cluster

6 GRID connectivity status
- The existing 2 Mbps direct link was upgraded to 10 Mbps in December '08
- Dr. Kirti Ranjan asked for a demonstration of the bandwidth through real data transfer
- In the 2nd week of March, ERNET demonstrated up to 4 Mbps link speed by connecting to CDAC, Mumbai (using Infovista); Kirti/Sushil ran the tests on the DU side (see the throughput-test sketch below)
- Mr. Dhekne has commented that while the link gives us the possibility of a pipe (or VPN) up to the ERNET PoP, the actual transfer rate can depend on server speed, packet route (number of hops), and overall backbone capacity
- Mr. Dhekne also pointed out that until February '09 the TIFR-CERN link had no GEANT peering, which meant long packet routes; ERNET says there is no bottleneck
- We would like to know about test results from other institutes
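
The kind of "real data transfer" check mentioned above can be approximated by timing a file download and reporting the effective rate. The sketch below is only an illustration of that idea, not the Infovista/ERNET test itself; the test URL is a placeholder and would have to point at a suitable remote server.

```python
# Rough throughput check: download a test file and report the rate in Mbps.
# TEST_URL is a placeholder assumption, not an actual ERNET/CDAC endpoint.
import time
import urllib.request

TEST_URL = "http://example.org/testfile.bin"  # placeholder; use a real test server

def measure_throughput(url: str, chunk_size: int = 64 * 1024) -> float:
    """Return the observed download rate in Mbps (megabits per second)."""
    total_bytes = 0
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    rate = measure_throughput(TEST_URL)
    print(f"Effective throughput: {rate:.2f} Mbps")
    # On a 10 Mbps MPLS link the observed rate will typically be lower,
    # depending on server speed, packet route (hops), and backbone load.
```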

7 Co-location of people in CMS Centres (slide from Lucas Taylor, "CMS Centres Worldwide: A New Collaborative Infrastructure", CHEP 2009, Prague)
- A CMS Centre @ My Institute is a highly visible local CMS focal point
- Status and monitoring displays to follow CMS operations
- Computing consoles for students, postdocs, and faculty to work together (physical co-location of people)
- Video links to CERN and other institutes (virtual co-location of people)
- Outreach displays
- Examples: LHC @ FNAL, CMS Centre @ DESY

8 CMS Centres Worldwide: A New Collaborative Infrastructure. Lucas Taylor, Northeastern University; Erik Gottschalk, Fermilab (CHEP 2009, Prague)

9 Extra Slides

