Planning the LCG Fabric at CERN
openlab TCO Workshop, November 11th 2003
Tony.Cass@CERN.ch
Fabric Area Overview
–Infrastructure: electricity, cooling, space
–Network
–Batch system (LSF, CPU server)
–Storage system (AFS, CASTOR, disk server)
–Purchase, hardware selection, resource planning
–Installation, configuration and monitoring, fault tolerance
–Prototype, testbeds
–Benchmarks, R&D, architecture
–Automation, operation, control
Coupling of components through hardware and software. GRID services!?
Agenda
–Building Fabric
–Batch Subsystem
–Storage Subsystem
–Installation and Configuration
–Monitoring and Control
–Hardware Purchase
Building Fabric — I
B513 was constructed in the early 1970s and the machine room infrastructure has evolved slowly over time.
–Like the eye, the result is often not ideal…
Current Machine Room Layout
Problem: the normabarres (power distribution bars) run one way, the services run the other…
Building Fabric — I (continued)
With the preparations for LHC we have the opportunity to remodel the infrastructure.
Future Machine Room Layout
–18m double rows of racks: 12 shelf units or 36 19” racks
–9m double rows of racks for critical servers
–Aligned normabarres
–Capacity: 528 box PCs (105kW), 1440 1U PCs (288kW), 324 disk servers (120kW?); the implied per-unit power is checked below.
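As a sanity check on the layout figures, the per-unit power draw they imply can be computed directly. The totals are from the slide; only the division is ours:

```python
# Back-of-the-envelope check of the per-unit power draw implied by the
# slide's figures (totals from the slide, per-unit values derived here).
configs = {
    "box PC":      (528, 105_000),   # (units, total watts)
    "1U PC":       (1440, 288_000),
    "disk server": (324, 120_000),   # total marked "(?)" on the slide
}

for name, (units, total_w) in configs.items():
    print(f"{name:12s}: {total_w / units:4.0f} W/unit")

# box PC      :  199 W/unit
# 1U PC       :  200 W/unit
# disk server :  370 W/unit
```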
Building Fabric — I (continued)
–Arrange services in clear groupings associated with power and network connections.
»Clarity for general operations, plus ease of service restart should there be any power failure.
–Isolate critical infrastructure such as networking, mail and home directory services.
–Clear monitoring of the planned power distribution system.
Just “good housekeeping”, but we expect to reap the benefits during LHC operation.
Building Fabric — II
Beyond good housekeeping, though, there are building fabric issues that are intimately related to recurrent equipment purchases.
–Raw power: we can support a maximum equipment load of 2.5MW. Does the recurrent additional cost of blade systems avoid investment in additional power capacity?
–Power efficiency: early PCs had power factors of ~0.7 and generated high levels of 3rd harmonics. Fortunately, we now see power factors of 0.95 or better, avoiding the need to install filters in the PDUs. Will this continue? (See the power-factor arithmetic below.)
–Many sites need to install 1U or 2U rack-mounted systems for space reasons. This is not a concern for us at present but may become so eventually.
»There is a link here to the previous point: the small power supplies for 1U systems often have poor power factors.
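Why the power factor matters for a room of fixed capacity: the distribution infrastructure must be sized for apparent power (kVA), while the machines only do useful work with the real power (kW). A minimal illustration, using the 2.5MW figure and the two power factors quoted above:

```python
# Apparent power (kVA) that the distribution infrastructure must carry
# to deliver a given real load (kW) at a given power factor.
def apparent_power_kva(real_load_kw: float, power_factor: float) -> float:
    return real_load_kw / power_factor

load_kw = 2500  # maximum equipment load from the slide
for pf in (0.70, 0.95):
    print(f"PF {pf:.2f}: {apparent_power_kva(load_kw, pf):5.0f} kVA")

# PF 0.70:  3571 kVA  -> infrastructure oversized by ~43%
# PF 0.95:  2632 kVA  -> only ~5% headroom needed
```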
Agenda
–Building Fabric
–Batch Subsystem
–Storage Subsystem
–Installation and Configuration
–Monitoring and Control
–Hardware Purchase
Fabric Architecture
Increasing level of complexity, with physical and logical coupling of hardware and software at each level:
–CPU, disk: the basic devices.
–PC: motherboard, backplane, bus, integrating devices (memory, power supply, controller, …); operating system, drivers.
–Storage unit: storage tray, NAS server, SAN element.
–Cluster: network (Ethernet, Fibre Channel, Myrinet, …), hubs, switches, routers; batch system, load balancing, control software, hierarchical storage systems.
–World-wide cluster: wide area network; Grid middleware.
Batch Subsystem
Looking purely at batch system issues, TCO is reduced as the efficiency of node usage increases. What are the dependencies?
–The load characteristics
»Not much we in IT can do here!
–The batch scheduler
»LSF is pretty good here, fortunately.
–Chip technology
»Take hyperthreading, for example. Tests have shown that, for HEP codes at least, hyperthreading wastes 20% of the system performance when running two tasks on a dual-processor machine, and there are no clear benefits to running with hyperthreading enabled when running three tasks. What is the outlook here?
–Processors/box
»At present, a single 100baseT NIC would support the I/O load of a quad-processor CPU server. Quad-processor boxes would halve the cost of the networking infrastructure, but they come at a hefty price premium (Xeon MP vs. Xeon DP, heftier chassis, …), and total system memory becomes an issue. What is the outlook here?
–The operating system
»Linux is getting better, but features such as processor affinity would be nice (see the sketch below), not least because of the relationship to hyperthreading.
–Others?
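For illustration, this is the kind of processor-affinity control the list asks for. The interface shown (os.sched_setaffinity) is what later Linux kernels and Python expose; it was not available in the stock kernels of 2003, so treat this purely as a sketch of the desired functionality:

```python
import os

# Pin the current process to one CPU so the scheduler does not migrate
# it, and so a batch job does not share a hyperthreaded core with
# another job. Linux-only interface (Python 3.3+).
def pin_to_cpu(cpu: int) -> None:
    os.sched_setaffinity(0, {cpu})   # pid 0 == the calling process

if __name__ == "__main__":
    pin_to_cpu(0)
    print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```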
Agenda
–Building Fabric
–Batch Subsystem
–Storage Subsystem
–Installation and Configuration
–Monitoring and Control
–Hardware Purchase
Storage Subsystem
Simple building blocks (sketched in code below):
–“desktop+” node == CPU server
–CPU server + larger case + 6×2 disks == disk server
–CPU server + Fibre Channel interface + tape drive == tape server
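The building-block idea can be made concrete: every server type is the same commodity node plus a small delta. A toy model mirroring the list above (the class and field names are ours, purely illustrative):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    """A server type, expressed as the common 'desktop+' node plus extras."""
    name: str
    extras: List[str] = field(default_factory=list)

# The three building blocks from the slide.
cpu_server  = Server("CPU server")
disk_server = Server("disk server", ["larger case", "6x2 disks"])
tape_server = Server("tape server", ["Fibre Channel interface", "tape drive"])

for s in (cpu_server, disk_server, tape_server):
    print(f"{s.name} = " + " + ".join(["desktop+ node"] + s.extras))
```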
Storage Subsystem — Disk Storage
TCO: maximise available online capacity within a fixed budget (material and personnel).
–IDE-based disk servers are much cheaper than high-end SAN servers. But are we spending too much time on maintenance?
»Yes, at present, but we need to analyse carefully the reasons for the current load.
·The complexities of the Linux drivers seem under control, but the numbers have exploded. And are some problems related to a particular batch of hardware?
–Where is the optimum? Switching to Fibre Channel disks would reduce capacity by a factor of ~5.
»Naively, buy, say, 10% extra systems to cover failures. Sadly, this is not as simple as for CPU servers; active data on a failed server must be reloaded elsewhere.
»Always have duplicate data? That means purchasing 2× the required space. Still cheaper than SAN (a rough comparison is sketched below)? How does this relate to…
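To make the "still cheaper than SAN?" question concrete, here is the shape of the comparison. Only the factor-of-5 capacity reduction comes from the slide; the prices are invented placeholders to show the method, not real quotes:

```python
# Cost per usable terabyte under three strategies. All prices are
# invented placeholders; only the ~5x capacity factor for Fibre
# Channel disks comes from the slide.
def cost_per_usable_tb(cost_per_raw_tb: float, overhead: float) -> float:
    """overhead = raw capacity purchased per usable TB."""
    return cost_per_raw_tb * overhead

ide_raw = 10.0           # assumed cost units per raw TB of IDE storage
fc_raw  = ide_raw * 5    # factor ~5 from the slide
print("IDE, 10% spare systems :", cost_per_usable_tb(ide_raw, 1.1))
print("IDE, fully duplicated  :", cost_per_usable_tb(ide_raw, 2.0))
print("Fibre Channel / SAN    :", cost_per_usable_tb(fc_raw, 1.0))
# Even with full duplication (2x), the IDE solution costs 20 units per
# usable TB against 50 for FC, provided the extra admin load stays small.
```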
Storage System — Tapes
The first TCO question is “Do we need them?” Disk storage costs are dropping…
Disk Price/Performance Evolution (chart)
Storage System — Tapes (continued)
Disk storage costs are dropping… but:
–Disk servers need system administrators; idle tapes sitting in a tape silo don’t.
–With a disk-only solution, we need storage for at least twice the total data volume to ensure no data loss.
–Server lifetime is 3–5 years, so data must be copied periodically.
»This is also an issue for tape, but the lifetime of a disk server is probably still less than the lifetime of a given tape media format.
The assumption today is that tape storage will be required. (A rough disk-versus-tape comparison is sketched below.)
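A toy version of the disk-versus-tape argument. Every figure below is an assumed placeholder; what comes from the slide is the structure: the 2× duplication factor, periodic re-copying over the server lifetime, and the administration term:

```python
# Ten-year archive cost for one petabyte, disk-only vs. tape.
# ALL prices and effort figures below are invented for illustration.
YEARS, PB = 10, 1.0

def disk_only(cost_per_pb, server_life_yrs, admin_cost_per_yr):
    replacements = YEARS / server_life_yrs   # periodic re-copying
    duplication = 2.0                        # >= 2x volume for safety
    return (PB * cost_per_pb * duplication * replacements
            + admin_cost_per_yr * YEARS)

def tape(media_per_pb, drives_and_robot, admin_cost_per_yr):
    return PB * media_per_pb + drives_and_robot + admin_cost_per_yr * YEARS

print("disk-only:", disk_only(cost_per_pb=1000, server_life_yrs=4,
                              admin_cost_per_yr=100))
print("tape     :", tape(media_per_pb=300, drives_and_robot=500,
                         admin_cost_per_yr=20))
# disk-only: 6000.0   tape: 1000.0  (arbitrary units; the point is the
# duplication and replacement multipliers, and the admin term)
```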
Storage System — Tapes (continued)
Tape robotics is easy.
–Bigger means better cost/slot.
Tape drives: high end vs. LTO.
–TCO issue: LTO drives are cheaper than high-end IBM and STK drives, but are they reliable enough for our use?
»c.f. the IDE disk server area.
The real problem, though, is tape media.
–The vast majority of the data is accessed rarely but must be stored for a long period. There is strong pressure to select a solution that minimises an overall cost dominated by tape media. (See the sketch below.)
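A sketch of why media comes to dominate as the archive grows: the media and slot terms scale with the data volume, while the drive count is roughly fixed. All unit costs here are invented; only that structure reflects the argument above:

```python
# Total tape-storage cost as the archive grows. Invented unit costs;
# only the structure (media and slots scale with volume, drives are
# near-fixed) reflects the argument on the slide.
def tape_cost(petabytes, gb_per_cartridge=200, media_cost=100,
              n_drives=20, drive_cost=20_000, cost_per_slot=50):
    cartridges = petabytes * 1_000_000 / gb_per_cartridge
    return (cartridges * media_cost        # dominates as volume grows
            + n_drives * drive_cost        # roughly fixed
            + cartridges * cost_per_slot)  # robot slots; bigger = cheaper

for pb in (1, 5, 20):
    total = tape_cost(pb)
    media_share = (pb * 1_000_000 / 200 * 100) / total
    print(f"{pb:3d} PB: total {total:12,.0f}, media share {media_share:.0%}")
# At 1 PB media is ~43% of the total; at 20 PB it is ~65% and rising.
```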
Storage System — Managed Storage
Should CERN build or buy software systems? How do we measure the value of a software system?
–Initial cost:
»Build: staff time to create the required functionality.
»Buy: initial purchase cost of the system as delivered, plus staff time to install and configure it for CERN.
–Ongoing cost:
»Build: staff time to maintain the system and add extra functionality.
»Buy: licence/maintenance cost plus staff time to track releases.
·Extra functionality that we consider useful may or may not arrive.
Our choices:
–Batch system: buy LSF.
–Managed storage system: build CASTOR.
We use this model as we move on to consider system management software. (A toy version of the cost model follows.)
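The cost model above reduces to two small functions; filling in real staff and licence figures for a given system is what makes the LSF and CASTOR decisions comparable. The example numbers below are placeholders, not actual CERN costs:

```python
# N-year cost of a software system under the build-vs-buy model above.
# Example figures are placeholders, not actual CERN costs.
def build_cost(initial_staff_yrs, yearly_staff_yrs, staff_cost, years=5):
    return (initial_staff_yrs + yearly_staff_yrs * years) * staff_cost

def buy_cost(purchase, install_staff_yrs, yearly_licence,
             tracking_staff_yrs, staff_cost, years=5):
    return (purchase + yearly_licence * years
            + (install_staff_yrs + tracking_staff_yrs * years) * staff_cost)

STAFF = 100  # cost of one staff-year, arbitrary units
print("build:", build_cost(3.0, 1.0, STAFF))           # CASTOR-like shape
print("buy  :", buy_cost(150, 0.5, 40, 0.25, STAFF))   # LSF-like shape
# build: 800.0   buy: 525.0 -- the balance shifts with each parameter,
# which is why the answer differed for the batch and storage systems.
```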
Agenda
–Building Fabric
–Batch Subsystem
–Storage Subsystem
–Installation and Configuration
–Monitoring and Control
–Hardware Purchase
Installation and Configuration
Reproducibility and guaranteed homogeneity of system configuration is a clear way to minimise ongoing system management costs. A management framework is required that can cope with the number of systems we expect.
We faced the same issues as we moved from mainframes to RISC systems. The vendor solutions offered then were tied to their hardware, so we developed our own solution. Is a vendor framework acceptable if we have a homogeneous park of Linux systems?
–Being honest, why have we built our own again?
Installation and Configuration (continued)
Installation and configuration is only part of the overall computer centre management:
ELFms Architecture (diagram)
–Node
–Configuration System
–Installation System
–Monitoring System
–Fault Management System
(A schematic of the control loop follows.)
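The essential idea of the architecture is that every subsystem sees the same node through a common notion of desired state. The sketch below is our reading of the diagram; the interfaces are invented for illustration and are not the real ELFms APIs:

```python
# Schematic of an ELFms-style control loop: the configuration system
# holds the desired state, monitoring reports the actual state, and
# fault management reconciles the two. All interfaces invented here.
class ConfigurationSystem:
    def desired_state(self, node: str) -> dict:
        return {"kernel": "2.4.21", "lsf": "running"}

class MonitoringSystem:
    def actual_state(self, node: str) -> dict:
        return {"kernel": "2.4.21", "lsf": "stopped"}

class FaultManagement:
    def reconcile(self, node, cfg, mon):
        actual = mon.actual_state(node)
        diff = {k: v for k, v in cfg.desired_state(node).items()
                if actual.get(k) != v}
        for key, value in diff.items():
            print(f"{node}: repair '{key}' -> {value}")

FaultManagement().reconcile("lxbatch001", ConfigurationSystem(),
                            MonitoringSystem())
# lxbatch001: repair 'lsf' -> running
```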
Installation and Configuration (continued)
Systems provided by vendors cannot (yet) be integrated into such an overall framework, and there is still a tendency to differentiate products on the basis of management software rather than raw hardware performance.
–This is a problem for us, as we cannot guarantee to always buy brand X rack-mounted servers or blade systems.
–In short, life is not so different from the RISC-system era.
Agenda
–Building Fabric
–Batch Subsystem
–Storage Subsystem
–Installation and Configuration
–Monitoring and Control
–Hardware Purchase
Monitoring and Control
Assuming that there are clear interfaces, why not integrate a commercial monitoring package into our overall architecture? Two reasons:
–No commercial package meets (or met) our requirements in terms of, say, long-term data storage and access for analysis.
»This could be considered self-serving: we produce requirements that justify a build rather than a buy decision.
–Experience has shown, repeatedly, that monitoring frameworks require effort to install and maintain, but don’t deliver the sensors we require.
»Vendors haven’t heard of LSF, let alone AFS.
»A good reason! (A sketch of a home-grown sensor follows.)
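Part of the argument is that a home-grown sensor is genuinely small. The sketch below samples an LSF queue with the standard bqueues command and appends timestamped records to a flat file for long-term analysis; the queue name and log path are hypothetical, and the column position parsed from the bqueues output is an assumption about its layout:

```python
import subprocess
import time

# Minimal home-grown monitoring sensor: sample the number of pending
# jobs in an LSF queue via the standard `bqueues` command and append a
# timestamped record for long-term analysis.
def sample_pending(queue: str) -> int:
    out = subprocess.run(["bqueues", queue], capture_output=True,
                         text=True, check=True).stdout.splitlines()
    return int(out[1].split()[8])   # assumed: 9th column is PEND

def run_sensor(queue: str, logfile: str, interval_s: int = 60) -> None:
    while True:
        with open(logfile, "a") as f:
            f.write(f"{time.time():.0f} {queue} {sample_pending(queue)}\n")
        time.sleep(interval_s)

# Example (hypothetical queue and path):
# run_sensor("lhcb_prod", "/var/log/sensors/lsf_pending.log")
```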
Hardware Management System
A specific example of the integration problem: workflows must interface to local procedures for, e.g., LAN address allocation. Can we integrate a vendor solution? Do complete solutions exist? (A hypothetical workflow is sketched below.)
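The interface problem in miniature: a new-hardware workflow has to call out to local procedures at the right points. The sketch below is entirely hypothetical (step names and the network-database call are placeholders), but it shows the hook a closed vendor workflow would have to expose:

```python
# Hypothetical new-hardware workflow. The interesting line is the LAN
# address allocation step, which must call into CERN's own network
# database -- the integration point a vendor system rarely exposes.
def allocate_lan_address(hostname: str) -> str:
    # Placeholder for the call into the local network database.
    return "137.138.x.x"

def install_workflow(hostname: str) -> None:
    steps = [
        ("register in inventory", lambda: None),
        ("allocate LAN address",  lambda: allocate_lan_address(hostname)),
        ("install OS",            lambda: None),
        ("add to batch system",   lambda: None),
    ]
    for name, action in steps:
        print(f"{hostname}: {name}")
        action()

install_workflow("lxbatch042")  # hypothetical host name
```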
Console Management
Done poorly now; we will do better.
–TCO issue: do the benefits of a single console management system outweigh the costs of developing our own?
–How do we integrate vendor-supplied racks of preinstalled systems?
Agenda
–Building Fabric
–Batch Subsystem
–Storage Subsystem
–Installation and Configuration
–Monitoring and Control
–Hardware Purchase
Hardware Purchase
The issue at hand: how do we work within our purchasing procedures to purchase equipment that minimises our total cost of ownership?
At present, we eliminate vast areas of the multi-dimensional space by assuming we will rely on ELFms for system management and CASTOR for data management. A simplified[!!!] view:
–CPU: white box vs. 1U vs. blades; install ourselves or buy ready-packaged.
–Disk: IDE vs. SAN; level of vendor integration.
HELP! Can we benefit from management software that comes with ready-built racks of equipment in a multi-vendor environment? (The option space is enumerated below.)
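The "multi-dimensional space" can be written down directly: even the simplified view already yields two dozen combinations to cost, which is why pruning assumptions (ELFms, CASTOR) are needed. The dimension values are from the list above; the enumeration is just illustrative:

```python
from itertools import product

# The simplified purchase option space from the slide. The pruning
# assumptions (ELFms for system management, CASTOR for data
# management) remove the vendor-management-software dimensions.
cpu_form      = ["white box", "1U", "blades"]
cpu_packaging = ["install ourselves", "ready packaged"]
disk_tech     = ["IDE", "SAN"]
disk_integration = ["bare", "vendor integrated"]

space = list(product(cpu_form, cpu_packaging, disk_tech, disk_integration))
print(len(space), "combinations, e.g.:")
for combo in space[:3]:
    print(" ", combo)
# 24 combinations -- each needing its own TCO estimate before purchase.
```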