
1 Capabilities and Process Application Integration

2 Goals
- Explain the importance of a single software stack for all BIRN servers
- Describe the software integration, deployment and upgrade processes (in no particular order)
- Discuss possible extensions and improvements

3 BIRN-CC Software Integration Responsibilities
- System architecture
- Development and integration
  - System software
  - Application software
- Deployment
  - Reliability
  - Scalability
  - Consistency
  - Flexibility

4 BIRN-CC Responsibilities (cont'd)
- Maintenance and support for a continually growing community
  - 24/7 production-quality grid
  - Monitoring
  - Troubleshooting
  - Help desk
  - Training

5 Applications/Servers
- BIRN 1.0
  - GPOP
  - GComp
  - NAS
  - SRB
  - Oracle
- BIRN 2.0 added
  - Nagios
  - Condor
  - Scientific applications
- BIRN 3.0 added
  - More sciapps
  - Oracle
  - Postgres
  - Tomcat
  - Gridsphere
  - HID
  - Mediator
  - XNAT (in progress)
  - Xwiki (in progress)

6 Sources for BIRN Software
- Source code, RPMs and tarballs from:
  - Participant CVS/SVN repositories
  - Participant websites
  - The open-source community
- Configuration instructions from:
  - Make scripts
  - Hand-edited config files
  - Website instructions
  - The BIRN knowledge base

7 Currently Integrated on GComp: AFNI, AIR, Brains2, Caret, Freesurfer, FSL, LDDMM, LONI, Mipav, Slicer

8 If Done by Hand at 25+ BIRN Sites: CHAOS!

9 The BIRN Software Stack to the Rescue
- Every application/server gets:
  - The same software version
  - The same software configuration
  - The same security
- Fewer man-hours spent:
  - Debugging
  - Propagating fixes
- Synergy

10 RPMs - a Cornerstone in the Path to Sanity
Each RPM can (a sketch of such scriptlets follows below):
- Create users
- Unpack tarballs
- Modify configuration files
- Adjust firewalls
- Set up databases
- Start/stop services
- Make changes based on "zone", "site", etc.
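As an illustration only, a minimal %pre/%post scriptlet pair for a hypothetical brains2 package might look like the following; the account name, service name and config path are assumptions, not taken from the actual BIRN spec files.

    # hypothetical spec-file excerpt -- names and paths are illustrative only
    %pre
    # create a dedicated service account if it does not already exist
    /usr/sbin/useradd -r -s /sbin/nologin brains2 2>/dev/null || :

    %post
    # adjust a config file for this deployment "zone" and (re)start the service
    sed -i 's/^ZONE=.*/ZONE=production/' /etc/brains2/brains2.conf
    /sbin/chkconfig --add brains2
    /sbin/service brains2 restart >/dev/null 2>&1 || :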

11 The RPM Database
The RPM database on each server knows:
- What has been installed
- What functionality is available, and from where
Examples:
    [root@ncmir-gcomp ~]# rpm -qa | grep slicer
    slicer-2.5.1-0
    [root@ncmir-gcomp ~]# rpm -q --requires slicer
    /usr/local/bin/tclsh
    libGL.so.1
    [root@ncmir-gcomp ~]# rpm -q --whatprovides libGL.so.1
    xorg-x11-Mesa-libGL-6.8.2-1.EL.13.25.1
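Two more stock rpm queries are handy for the same kind of detective work; these are standard rpm options, shown here against the slicer package from the listing above.

    # list the files a package installed (first few lines only)
    rpm -ql slicer | head
    # show which installed package owns an arbitrary file on disk
    rpm -qf /usr/local/bin/tclsh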

12 Specific Examples for BIRN Sciapps
- 3DSlicer: RPM built from a tarball
- Brains2: installs an RPM from the site; creates a symbolic link
- Caret: RPM built from a .zip file; permissions corrected
- LDDMM: RPM created from a tarball; environment variables set
- MIPAV: RPM built from a spec file and tarball provided by the site; verification script
    [root@ncmir-gcomp ~]# for i in afni air brains2 caret freesurfer fsl lddmm loni mipav slicer ; do rpm -qa | grep -i $i ; done
    AFNI-2006_03_21_1314-0
    brains2-RedHatEnterpriseWS4-20060327-032445
    caret-5.50-1
    freesurfer-3.0.2-0
    fsl-3.3.5-0
    lddmm-1.0.1-3
    LONI_Pipeline_Client-1-0
    LONI_Pipeline_Server-1-0
    mipav-1.59-1
    slicer-2.5.1-0
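The MIPAV verification script mentioned above is site-provided and not reproduced in the slides; independent of it, rpm's own verify mode can confirm that an installed package's files still match what the RPM database recorded.

    # verify installed files (size, checksum, permissions, ...) against the RPM database
    rpm -V mipav
    # no output means everything still matches the package metadata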

13 YUM (Yellowdog Updater, Modified)
- Alternative to up2date from RedHat
- Used for automated software updates
- Relies on the RPM database
    [root@ncmir-gcomp ~]# for i in afni air brains2 caret freesurfer fsl lddmm loni mipav slicer ; do yum list | grep -i $i ; done
    AFNI.i386                          2006_03_21_1314-0   installed
    brains2-RedHatEnterpriseWS4.i386   20060327-032445     installed
    caret.i386                         5.50-1              installed
    freesurfer.i386                    3.0.2-0             installed
    fsl.i386                           3.3.5-0             installed
    lddmm.i386                         1.0.1-4             installed
    LONI_Pipeline_Client.i386          1-0                 installed
    LONI_Pipeline_Server.i386          1-0                 installed
    mipav.i386                         1.59-1              installed
    slicer.i386                        2.5.1-0             installed
    slicer.i386                        2.6-0               birncentral-dev-
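For context, these are the standard yum invocations one would use against such a repository; the package names come from the listing above, but whether BIRN admins ran exactly these commands is an assumption.

    # see which installed packages have newer versions in the repository
    yum check-update
    # update a single package (e.g. slicer) from the configured repository
    yum -y update slicer
    # install a package that is not yet on this server
    yum -y install caret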

14 YUM-my Features
- Specific packages can be excluded temporarily (see the yum.conf sketch below)
- Packages can be marked "installonly" and never updated
- The repository is the same one used for installation of new servers
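A minimal sketch of how those two features are typically expressed in /etc/yum.conf; the package names here are illustrative and the exact BIRN configuration is not shown in the slides.

    [main]
    # temporarily skip updates for specific packages (globs allowed)
    exclude=slicer* freesurfer*
    # packages listed here are only ever installed, never updated in place
    installonlypkgs=kernel kernel-smp lddmm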

15 Software Standards: Development & Integration
- CVS & SRB for version control facilitate collaborative development
- Automated build and deployment mechanisms for repeatability and scalability
- Flexible development environment to meet changing requirements
- Continually improving infrastructure in terms of repeatability, tracking, monitoring, automation, and scalability

16 Features of the Software Build Process
- "Normal" source code comes from CVS
- Large sources, especially binaries, come from SRB
- Automatic tagging during the build (see the tagging sketch below)
- Tags match the RPM version info
- Updates are automatically added to YUM
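As a hedged illustration of what "automatic tagging" could look like at the command level: the tag and module names below are hypothetical, constructed to mirror the RPM name-version-release convention used elsewhere in the deck; the real build scripts are not shown in the slides.

    # tag the checked-out source so the CVS tag mirrors the RPM version-release
    cvs tag slicer-2_5_1-0
    # later, rebuild exactly what shipped by checking out that tag
    cvs checkout -r slicer-2_5_1-0 slicer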

17

18 Integrating/Updating via RPM
- Commit the RPM to the BIRN CVS repository
- Where it goes matters:
  - Which roll (sciapps, freesurfer, ...)
  - Which architecture(s) (i386, noarch, ia64, ...)
- The basename can't change
- If integrating a new package, also add nodes and graph files
- Example for brains2 (commit commands sketched below):
  - BIRN/rocks/src/roll/sciapps/RPMS/i386/brains2-RedHatEnterpriseWS4-20060327-032445.i386.rpm
  - BIRN/rocks/src/roll/sciapps/nodes/brains2.xml
  - BIRN/rocks/src/roll/sciapps/graphs/default/sciapps.xml
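A sketch of the corresponding CVS commands for the brains2 example, assuming a checked-out BIRN tree; the -kb flag keeps CVS from mangling the binary RPM. The exact workflow the BIRN-CC used may differ.

    cd BIRN/rocks/src/roll/sciapps
    # add the binary RPM (-kb = treat as binary) and the node file
    cvs add -kb RPMS/i386/brains2-RedHatEnterpriseWS4-20060327-032445.i386.rpm
    cvs add nodes/brains2.xml
    # the graph file already exists and only needs the edited copy committed
    cvs commit -m "Integrate brains2 20060327 into the sciapps roll"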

19 Updating/Integrating from CVS/SVN/SRB
- Import/update/upload the source:
  - CVS/SVN source repositories
  - Large tarballs, especially binaries, go into SRB
- Update the BIRN CVS files:
  - Makefile
  - version.mk
  - *.spec.in
- Examples:
  - Slicer, for a tarball kept in SRB
  - Gridsphere, for source kept in CVS/SVN
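On the SRB side, the Scommands client is the usual interface; a sketch of pushing a large tarball into SRB might look like the following. The collection path is invented for illustration and the real BIRN collection layout is not shown in the slides.

    # authenticate to SRB, upload the tarball into a collection, then verify
    Sinit
    Sput slicer-2.5.1.tar.gz /home/birn.birnproject/builds/slicer/
    Sls /home/birn.birnproject/builds/slicer/
    Sexit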

20 Updating BIRN CVS Files
- Create/update the Makefile:
  - Checks out/downloads the source
  - Prepares it for build
  - Holds the build rules for the spec.in file, if they are not in the spec.in file itself
- Create/update version.mk (see the sketch below):
  - Sets up variables used in building and tagging
  - Carries a version number to track source changes
  - Carries a revision number to track spec.in changes
  - Important for YUM updating
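A minimal sketch of what such a version.mk might contain; the variable names follow the usual Rocks roll convention (NAME, VERSION, RELEASE), but the actual BIRN file is not reproduced in the slides.

    # version.mk -- hypothetical contents
    NAME    = slicer
    VERSION = 2.5.1
    # bump VERSION when the upstream source changes;
    # bump RELEASE when only the spec.in changes, so YUM still sees an update
    RELEASE = 0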

21 Updating BIRN CVS Files (cont'd)
- Create/update *.spec.in (a bare-bones sketch follows):
  - Standard boilerplate/template to start from
  - %pre, %install and %post sections
  - Sets up the requirements for building and running
  - http://fedora.redhat.com/docs/drafts/rpm-guide-en/
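A bare-bones sketch of a spec.in, with the @VERSION@/@RELEASE@ placeholders assumed to be substituted from version.mk at build time; the package name, file layout and placeholder syntax are illustrative, not copied from a real BIRN spec. The Requires line reuses the dependencies shown by rpm -q --requires slicer earlier.

    # hypothetical slicer.spec.in -- illustrative only
    Name: slicer
    Version: @VERSION@
    Release: @RELEASE@
    Summary: 3D Slicer packaged for the BIRN software stack
    License: see upstream
    Group: Applications/Science
    Source0: slicer-%{version}.tar.gz
    Requires: /usr/local/bin/tclsh, libGL.so.1
    BuildRoot: %{_tmppath}/%{name}-%{version}-root

    %description
    3D Slicer, repackaged as an RPM for BIRN GComp servers.

    %prep
    %setup -q

    %install
    mkdir -p $RPM_BUILD_ROOT/usr/local/slicer
    cp -a . $RPM_BUILD_ROOT/usr/local/slicer/

    %files
    %defattr(-,root,root)
    /usr/local/slicer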

22 Software Deployment Paradigm
Software at different levels of maturity is deployed to independent, isolated areas:
- Development
  - Supports short-term specialized testing (network issues, database upgrades, etc.)
  - Integrates testbed software prior to beta testing
- Staging
  - Provides an area for complete, end-to-end testing
  - Allows demonstration of the latest software development efforts
    - Prior to production deployment
    - Without disruption to production systems
- Production
  - Provides a reliable, repeatable, consistent infrastructure for collaboration among neuroscience researchers

23

24 BIRN Uses Rocks
- Efficient, reliable, repeatable, scalable deployment
- Systems can be reinstalled, reconfigured or duplicated quickly and easily
- Consistent software installation and configuration without sacrificing flexibility
- Standardized software delivery process
- Server functionality is organized into rolls
  - Reusable
  - Generic or specific
- Feedback is incorporated into new Rocks releases by the SDSC Rocks team

25 Kicking Off a Server Installation
- Choose rolls
- Partition & format the disk (/export is preserved)
- Enter network information & hostname
- Select the operating environment (dev/stage/prod)
- Select the timezone
- Enter the root password
- Walk away

26 What Is Rocks?
- Server management software
- Cluster management software
- Server functionality is organized into rolls
  - Reusable
  - Generic or specific

27 What the Blazes Is a Roll?!
A roll is a complete, reusable set of software (RPMs) needed to fulfill the functionality definition of the roll.
Examples:
- CentOS: provides the RedHat O/S
- ganglia or nagios: monitoring tools
- condor or srb: generic Condor / SRB
- birncondor or birnsrb: BIRN-specific modifications
- sciapps: BIRN-specific neuroscience tools

28 Server Functionality Defined by Roll Selection
The BIRN Gridsphere Portal roll list includes:
- CentOS, birn, area51, nagios, etc.
- java (to compile the Gridsphere code)
- tomcat (portlet container)
- gridsphere (basic)
  - gridsphere and gridportlets RPMs
- birnportal (adds BIRN-specific functions)
  - birnportal, birnportlets, gama RPMs
- srbportlets (adds SRB-specific functions)
- postgres (database)

29 Contacts
- birn-systems@nbirn.net
- bcc-systems@nbirn.net
- Vicky Rowley, vrowley@ucsd.edu
- Kennon Kwok, kkwok@ncmir.ucsd.edu
THE END

