slide 1: Experiences with NMI R2 Grids Software at Michigan
Shawn McKee
April 8, 2003
Internet2 Spring Meeting
slide 2: Outline
- Apps tested at Michigan
- A little about our environment and motivations
- Experiences with each application
slide 3: Grid Components Tested at Michigan
- Globus
- Condor-G
- NWS
- KX509
- GSI OpenSSH
- GridConfig
slide 4: MGRID – www.mgrid.umich.edu
- A center to develop, deploy, and sustain an institutional grid at Michigan
- Many groups across the University participate in compute/data/network-intensive research grants – increasingly, Grid is the solution
  - ATLAS, NPACI, NEESGrid, Visible Human, NFSv4, NMI
- MGRID allows work on common infrastructure instead of custom solutions
slide 5: MGRID Goals
- Provide participating units the knowledge, support, and framework to deploy Grid technologies
- Provide a test bench for existing and emerging Grid technologies
- Coordinate activities with the national Grid community (GGF, GlobusWorld, …)
- Provide a context for the University to invest in computational and other Grid resources
slide 6: Data Grids for High Energy Physics
[Diagram: the tiered LHC computing model – CERN online system and Tier 0+1 offline farm/computer center (~25 TIPS), Tier 1 national centers (BNL, France, Italy, UK), Tier 2 regional centers, Tier 3 institutes (~0.25 TIPS), and Tier 4 physicist workstations, with links ranging from ~PByte/sec at the detector and ~2.5 Gbps wide-area paths down to 100-1000 Mbits/sec at institutes. CERN/Outside resource ratio ~1:2; Tier0:Tier1:Tier2 ~1:1:1. Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels.]
slide 7: ATLAS Grid Testbed (US)
- 10 sites
  - University groups: BU, IU, UM, NM, OU, SMU, UTA
  - Labs: ANL, BNL, LBNL
- 15-20 users
- All sites: Globus & Condor, AFS, ATLAS software release
- Dedicated resources
- Accounts for most users on all machines
- Applications:
  - Monte Carlo production with legacy code
  - Athena-controlled Monte Carlo
slide 8: Globus Experiences
- We had already been using Globus since V1.1.3 for our work on the US ATLAS testbed
- The NMI release was nice because the GPT packaging made installation trivial
- There were some issues with configuration and coexistence:
  - We had to create a separate NMI gatekeeper so as not to impact our production grid users
- No major issues found… Globus just worked
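For reference, a minimal sketch of the kind of gatekeeper smoke test implied here, assuming the GT2-era client tools (grid-proxy-init, globus-job-run) are installed and a valid proxy already exists. The gatekeeper contact string is a hypothetical example, not an actual Michigan host.

```python
# Minimal sketch: run a test job through a GT2-era Globus gatekeeper.
# The contact string below is hypothetical; substitute your own host and
# jobmanager (e.g. "jobmanager-fork" or "jobmanager-condor").
import subprocess

GATEKEEPER = "nmi-gatekeeper.example.edu/jobmanager-fork"  # hypothetical host

def run_test_job(contact: str = GATEKEEPER) -> str:
    """Run /bin/hostname on the remote resource via globus-job-run.

    Assumes the Globus client tools are on PATH and a proxy has already
    been created with grid-proxy-init.
    """
    result = subprocess.run(
        ["globus-job-run", contact, "/bin/hostname"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(run_test_job())
```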
slide 9: Condor-G
- Condor was already in use at our site and across our testbed
- Installing Condor-G over existing Condor installations produced some problems:
  - Part of the difficulty was not understanding the details of the differences between Condor and Condor-G
  - A file ($LOG/.schedd_address) was owned by root rather than the condor user, which "broke" Condor-G; resolved via the testbed support list
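To make the Condor vs. Condor-G distinction concrete, here is a sketch of a Condor-G submission of that era: a submit description in the "globus" universe, pointed at a Globus gatekeeper, handed to condor_submit. The contact string and file names are hypothetical, and later Condor versions use different keywords (grid universe / grid_resource), so treat this as an illustration rather than a recipe.

```python
# Sketch: generate a Condor-G (globus universe) submit description and
# submit it with condor_submit. Assumes Condor-G is installed, condor_submit
# is on PATH, and a valid Globus proxy exists. All names are hypothetical.
import subprocess
import textwrap

SUBMIT_FILE = "hostname.sub"  # hypothetical file name

submit_description = textwrap.dedent("""\
    universe            = globus
    globusscheduler     = nmi-gatekeeper.example.edu/jobmanager-fork
    executable          = /bin/hostname
    transfer_executable = false
    output              = hostname.out
    error               = hostname.err
    log                 = hostname.log
    queue
""")

with open(SUBMIT_FILE, "w") as f:
    f.write(submit_description)

subprocess.run(["condor_submit", SUBMIT_FILE], check=True)
```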
slide 10: Network Weather Service (NWS)
- Installation was trivial via GPT (server/client bundles)
- An interesting product for us; we have done significant work on monitoring
- NWS advantages:
  - Easy to automate network testing, once you understand the configuration details
  - Prediction of future resource values is fairly unique and potentially useful for grid scheduling
- NWS disadvantages:
  - Difficult user interface (relatively obscure syntax for accessing measured/predicted data)
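The scheduling value of NWS lies in the forecast, not just the raw measurement. The toy sketch below illustrates that idea only; it is not the NWS API or its actual forecaster suite (NWS runs several predictors and picks the one with the lowest recent error), and the bandwidth series is made up.

```python
# Toy illustration of NWS-style forecasting: predict the next value of a
# measured resource (e.g. TCP bandwidth in Mbit/s) from its recent history.
# This is NOT NWS code; it only sketches the concept of forecasting from
# periodic measurements.
from collections import deque

class MovingAverageForecaster:
    """Predict the next measurement as the mean of the last `window` samples."""

    def __init__(self, window: int = 10):
        self.samples = deque(maxlen=window)

    def update(self, value: float) -> None:
        self.samples.append(value)

    def forecast(self) -> float:
        if not self.samples:
            raise ValueError("no measurements yet")
        return sum(self.samples) / len(self.samples)

# Hypothetical bandwidth series between two testbed hosts (Mbit/s).
measurements = [92.0, 88.5, 95.1, 90.3, 87.9, 93.4]
f = MovingAverageForecaster(window=5)
for m in measurements:
    f.update(m)
print(f"predicted next bandwidth: {f.forecast():.1f} Mbit/s")
```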
slide 11: KX509
- This application was developed at Michigan and, until recently, was used only at the testbed level
- Michigan is a Kerberos site; MGRID wants to use KX509 for all "certificates" within campus
- We were unable to get KX509 to work University-wide at Michigan
  - The problem was a bad "CREN" root certificate, complicated by insufficient error checking/handling in the Globus code
  - We should have a "fixed" root certificate shortly
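A root-certificate problem like this typically surfaces when the KX509-issued credential fails to verify against the CA files that Globus trusts. The sketch below shows that kind of check using openssl verify; the file paths are hypothetical (the trusted-CA directory is conventionally /etc/grid-security/certificates, but the CREN file name and the user credential location vary by site), and if an intermediate KCA certificate is involved it would need to be supplied as well.

```python
# Sketch of the kind of check that exposes a bad root certificate: verify
# that a KX509-issued user certificate chains to the CA certificate the
# Globus installation trusts. Paths below are hypothetical examples.
import subprocess

CA_FILE   = "/etc/grid-security/certificates/cren-root.pem"  # hypothetical file name
USER_CERT = "usercert.pem"  # hypothetical path to the KX509-issued certificate

result = subprocess.run(
    ["openssl", "verify", "-CAfile", CA_FILE, USER_CERT],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
if result.returncode != 0:
    print("certificate does not verify against the installed root CA")
```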
slide 12: GSI OpenSSH
- A useful program that extends PKI functionality to OpenSSH
- Allows "automatic" interactive login for proxy holders, based on Globus grid-mapfile entries
- Simple to install; in principle a superset of OpenSSH on the server end
- We had a problem with a conflict among the dynamic libraries it installs on a non-NMI host
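The mapping that drives these "automatic" logins is the Globus grid-mapfile: one line per certificate subject (DN), mapped to a local account, conventionally kept at /etc/grid-security/grid-mapfile. The lookup sketched below is only an illustration of that mapping; the DNs and account names are made up.

```python
# Toy sketch of the grid-mapfile lookup behind GSI OpenSSH logins: map a
# certificate subject (DN) to a local account. Entries are hypothetical.
GRID_MAPFILE_LINES = [
    '"/C=US/O=Example Grid/OU=umich.edu/CN=Jane Physicist" janep',
    '"/C=US/O=Example Grid/OU=umich.edu/CN=Atlas Prod Service" atlasprod',
]

def local_account_for(dn: str, lines=GRID_MAPFILE_LINES):
    """Return the local username mapped to a certificate DN, or None."""
    for line in lines:
        subject, sep, account = line.rpartition('" ')
        if sep and subject.lstrip('"') == dn:
            return account
    return None

print(local_account_for("/C=US/O=Example Grid/OU=umich.edu/CN=Jane Physicist"))
```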
slide 13: GridConfig
- We tested GridConfig to determine how useful such a system would be for our needs
- General impression: a potentially useful tool. We would like to see:
  - Improved configuration-checking capability
  - More awareness of application interactions and configuration dependencies between applications
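To illustrate the kind of cross-application configuration checking wished for here, the toy sketch below (not GridConfig code) flags settings that two grid components are assumed to have to agree on; the component names, keys, and values are entirely hypothetical.

```python
# Toy illustration (not GridConfig itself) of a cross-application consistency
# check: report settings that two components must agree on but do not.
# All component names, keys, and values are hypothetical.
configs = {
    "gatekeeper": {"gridftp_port": 2811, "host_cert": "/etc/grid-security/hostcert.pem"},
    "client":     {"gridftp_port": 2812},  # deliberately inconsistent
}

# Pairs of (component, key) expected to hold the same value.
must_match = [(("gatekeeper", "gridftp_port"), ("client", "gridftp_port"))]

for (comp_a, key_a), (comp_b, key_b) in must_match:
    a = configs[comp_a].get(key_a)
    b = configs[comp_b].get(key_b)
    if a != b:
        print(f"MISMATCH: {comp_a}.{key_a}={a!r} vs {comp_b}.{key_b}={b!r}")
```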
slide 14: Conclusions
- The applications were all easy to install via GPT
- Configuration details are still not that easy, but tools like GridConfig should help in the long term
- We hope to do much more detailed testing with future releases and are already planning to build our applications and the MGRID environment on the NMI release