1
Building on the BIRN Workshop
BIRN Systems Architecture Overview
Philip Papadopoulos – BIRN CC, Systems Architect
2
Key Systems Challenges
Large-scale data is distributed on a national scale:
How do you easily locate what you want?
How do you translate it into what your software tools understand?
Where do you analyze it?
How do you move it efficiently?
How do you secure it to properly limit and log access?
The underlying software systems are complex: how effectively can this complexity be hidden?
Software technology continually evolves, and BIRN must adapt.
Goal: provide a systems "cookie-cutter" for adding new, secured resources to form a federation.
3
A View of BIRN Federated Data (BIRN CC)
[Diagram: the BIRN CC federating several site databases (Mouse DB-A through Mouse DB-D) holding MRI, EM, histology, and two-photon image data; example query: "Give me an index of all DAT-KO striatum images."]
Federated data may be in a variety of representations: databases, image files, simulation files, flat text files, …
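To make the federation idea concrete, here is a minimal sketch of how a query like the one in the diagram could be fanned out to per-site adapters and merged. This is illustrative only, not the BIRN mediator API; the adapter functions, field names, and records are hypothetical.

    # Illustrative sketch of a federated query fan-out (not the BIRN mediator API).
    # Each hypothetical site adapter translates the query terms into its local
    # schema and returns records in a shared vocabulary.

    def query_site_a(terms):
        # Placeholder site adapter; real adapters would query a remote database.
        return [{"site": "Mouse DB-A", "modality": "EM", "genotype": "DAT-KO", "region": "striatum"}]

    def query_site_b(terms):
        return [{"site": "Mouse DB-B", "modality": "MRI", "genotype": "DAT-KO", "region": "striatum"}]

    def federated_index(terms, adapters):
        """Fan the query out to every site adapter and merge the results."""
        results = []
        for adapter in adapters:
            results.extend(adapter(terms))
        return results

    if __name__ == "__main__":
        hits = federated_index({"genotype": "DAT-KO", "region": "striatum"},
                               [query_site_a, query_site_b])
        for record in hits:
            print(record)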
4
BIRN Data Components
Per site: raw, processed, and annotated data products; access policy for locally held data (authorization).
Distributed:
Distributed file system: location by file name.
Simple data federation (BIRN Virtual Data Grid, SRB): location indexed by content; collection management (see the catalog sketch below).
Integration across databases (Mediator): translation among data vocabularies.
Authentication and encryption: Grid Security Infrastructure (GSI) as a base.
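A rough illustration of "location indexed by content," the SRB-style virtual data grid idea: a catalog maps descriptive attributes to physical replicas, so tools ask for data by what it is rather than where it lives. The sketch below is a toy in-memory version; the attribute names, logical names, and storage URLs are hypothetical, not SRB's actual catalog schema.

    # Toy metadata catalog: locate data by descriptive attributes rather than by
    # path. Purely illustrative of the virtual-data-grid idea; all names and
    # URLs below are made-up examples.

    catalog = [
        {"logical_name": "subj01_mri.img",
         "attributes": {"modality": "MRI", "genotype": "DAT-KO"},
         "replicas": ["srb://siteA/rack1/subj01_mri.img"]},
        {"logical_name": "subj02_em.img",
         "attributes": {"modality": "EM", "genotype": "wild-type"},
         "replicas": ["srb://siteC/rack1/subj02_em.img"]},
    ]

    def locate(**wanted):
        """Return replica locations for entries whose attributes match 'wanted'."""
        matches = []
        for entry in catalog:
            if all(entry["attributes"].get(k) == v for k, v in wanted.items()):
                matches.extend(entry["replicas"])
        return matches

    print(locate(modality="MRI", genotype="DAT-KO"))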
5
BIRN Core Software Infrastructure: Distributed Resources
BIRN builds on evolving community standards for middleware.
Adds significant new capability in allowing disparate data to be queried.
Integrates domain-specific tools so they are aware of a much larger data and resource space.
Utilizes commodity hardware and Internet2 for baseline connectivity.
6
Hardware and Software Provisioning
BIRN Coordinating Center provides BIRN-wide services:
Portal server, metadata catalog, and Public Key Infrastructure (PKI) certificate server; all can be replicated for resiliency and performance (see the availability sketch below).
Network and remote-resource monitoring.
Software integration and the complete software suite.
BIRN testbed resources: a BIRN "rack" providing access control, data storage, and network monitoring.
Existing resources form a solid backbone for extension.
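A minimal sketch of what checking those replicated services might look like, assuming each service answers on a TCP port. The host names and ports are placeholders, not real BIRN endpoints, and this is not the Coordinating Center's actual monitoring code.

    # Minimal availability check for replicated BIRN-wide services.
    # Host names and ports below are hypothetical examples.
    import socket

    SERVICES = {
        "portal":           [("portal1.example.org", 443), ("portal2.example.org", 443)],
        "metadata catalog": [("mcat1.example.org", 5432)],
        "PKI certificate":  [("ca.example.org", 443)],
    }

    def reachable(host, port, timeout=3.0):
        """True if a TCP connection to host:port succeeds within 'timeout' seconds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, replicas in SERVICES.items():
        up = [f"{h}:{p}" for h, p in replicas if reachable(h, p)]
        print(f"{name}: {len(up)}/{len(replicas)} replicas reachable {up}")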
7
Getting Started: BIRN Rack Assembly
The initial rack of equipment was prescribed, but designed from commodity components.
Treated as a cluster of specialized configurations; the BIRN CC assembled, tested, shipped, and installed each rack.
[Rack diagram: N2400 NAS (1-10 TB), Cisco 4006 switch, DL380 grid point-of-presence server, DL380 network-statistics server, GigE network probe, APC UPS.]
8
Monitoring what is out there
Baseline system monitoring uses Ganglia (http://metabirn.nbirn.net) and gives a live snapshot of standard system metrics: CPU load, network load, and disk utilization.
Also need to monitor data utilization (currently 2.6M files, 2.1 terabytes).
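Ganglia's gmond daemon publishes its metric report as XML over a TCP socket (port 8649 by default), so a one-shot snapshot like the one described above can be pulled with a short script. The sketch below assumes a reachable gmond host; the host name is a placeholder and metric names can vary with the Ganglia configuration.

    # Pull a one-shot metric snapshot from a Ganglia gmond daemon, which serves
    # an XML report over TCP (port 8649 by default). The host is a placeholder.
    import socket
    import xml.etree.ElementTree as ET

    def gmond_snapshot(host, port=8649, wanted=("load_one", "bytes_in", "bytes_out", "disk_free")):
        """Return {host: {metric: value}} for the metrics named in 'wanted'."""
        chunks = []
        with socket.create_connection((host, port), timeout=5.0) as sock:
            while True:
                data = sock.recv(65536)
                if not data:
                    break
                chunks.append(data)
        tree = ET.fromstring(b"".join(chunks))
        report = {}
        for host_el in tree.iter("HOST"):
            metrics = {m.get("NAME"): m.get("VAL")
                       for m in host_el.iter("METRIC") if m.get("NAME") in wanted}
            report[host_el.get("NAME")] = metrics
        return report

    if __name__ == "__main__":
        print(gmond_snapshot("gmond.example.org"))  # placeholder host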
9
Provisioning a Consistent Set of Software
The Rocks cluster toolkit provides an automated baseline configuration for all racks and CC services:
Downloadable CD set to build Linux systems; the configuration is easily transferred to new hardware.
Uses the NSF Middleware Initiative (NMI) software for common grid services.
Hundreds of clusters (over 40 TF aggregate) deployed worldwide.
We have created a BIRN "roll" CD that integrates BIRN domain tools (e.g. 3DSlicer, LONI Pipeline, FreeSurfer) plus database (Oracle) and SRB configuration.
Commodity hardware and our Linux-based software stack let us replicate racks "cookie-cutter" style (see the consistency-check sketch below).
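One way to make the "cookie-cutter" claim checkable: if every rack was built from the same Rocks and BIRN roll CDs, their installed-package manifests should agree. The sketch below compares manifest files collected beforehand (for example, the output of rpm -qa saved per rack); the collection step and file names are assumptions, and this is not a Rocks utility.

    # Compare software manifests collected from several racks and report
    # packages that are not present everywhere. Manifest files are assumed
    # inputs (one package name per line), e.g. saved 'rpm -qa' output.
    import sys

    def load_manifest(path):
        """Read one package name per line from a saved package listing."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def report_drift(paths):
        """Print, per rack manifest, how many packages are not common to all racks."""
        manifests = {p: load_manifest(p) for p in paths}
        common = set.intersection(*manifests.values())
        for path, packages in sorted(manifests.items()):
            extra = sorted(packages - common)
            print(f"{path}: {len(packages)} packages, {len(extra)} not on every rack")
            for pkg in extra[:10]:    # show a short sample of any drift
                print(f"    {pkg}")

    if __name__ == "__main__":
        if len(sys.argv) < 3:
            sys.exit("usage: check_drift.py rackA_manifest.txt rackB_manifest.txt [...]")
        report_drift(sys.argv[1:])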
10
Tough enough to keep on ticking
Brigham and Women's Surgical Planning Lab tried to "flood" a BIRN rack. The rack was OK.