ALICE GSI Compile Cluster

ALICE GSI Compile Cluster Kilian Schwarz

icecream compile cluster

setup (permanent location still needed)

start scheduler and monitor on the server (currently lxb485, an LSF7 test machine not used in production):

scheduler -d
icemon &

... then start the client daemons on the WNs (currently lxb480-lxb485, LSF7 test machines). If the nodes should take load, the daemons must be started as "root":

lxb483:~# whoami
root
lxb483:~# iceccd -s lxb485 -d
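The two-step startup above can be sketched as a small role-based launcher. The hostnames (lxb485 as scheduler, lxb480-lxb485 as workers) come from the slides; the helper function itself is a hypothetical wrapper that only prints the commands, since actually running them requires the icecream binaries and root rights:

```shell
#!/bin/sh
# Hypothetical launcher sketch for the icecream setup described above.
# It only prints the command(s) a given role would run; it does not
# start anything itself.
start_cmd() {
  # $1: "scheduler" or "worker"
  case "$1" in
    scheduler)
      # run on the scheduler host (currently lxb485)
      printf '%s\n' "scheduler -d" "icemon &" ;;
    worker)
      # run as root on each WN that should take load
      echo "iceccd -s lxb485 -d" ;;
  esac
}

start_cmd scheduler
start_cmd worker
```

A node is added to the pool simply by running the worker command on it; the scheduler picks it up automatically.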

alice icecream compile cluster

icecream usage

go to any machine and start your environment:

cd /misc/cbmsoft/tools/icecc/sbin
./iceccd -s lxb485.gsi.de -d
export PATH=/misc/cbmsoft/tools/icecc/bin:$PATH

==> which gcc
/misc/cbmsoft/tools/icecc/bin/gcc

if needed, create and distribute a toolchain environment:

cd /misc/cbmsoft/tools/icecc-0.9.2/client
./icecc-create-env /usr/bin/gcc /usr/bin/g++
mv f46ba338131fd19eaea5c67998223c91.tar.gz lxb255.tar.gz
export ICECC_VERSION=/tmp/lxb255.tar.gz
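The client-side environment boils down to two exports: the icecc wrapper directory must come first in PATH so that "gcc" resolves to the wrapper, and ICECC_VERSION must point at the packaged toolchain tarball. A minimal sketch, using the paths from the slide:

```shell
#!/bin/sh
# Client-side environment sketch: prepend the icecc compiler wrappers
# to PATH and point ICECC_VERSION at the toolchain tarball created
# with icecc-create-env.
ICECC_BIN=/misc/cbmsoft/tools/icecc/bin
PATH=$ICECC_BIN:$PATH
export PATH
export ICECC_VERSION=/tmp/lxb255.tar.gz

# sanity check: the first PATH entry must be the wrapper directory,
# otherwise "gcc" would still resolve to the local compiler
echo "${PATH%%:*}"
```

Because only PATH ordering is involved, undoing this (falling back to purely local compiles) is just a matter of opening a fresh shell.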

icecream usage (2)

cd $ALICE_ROOT
gmake -j100

compile all of AliRoot in 3 minutes
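The -j100 deliberately oversubscribes the local machine, since icecream ships most compile jobs to the remote workers. A common rule of thumb (an assumption, not stated in the slides) is to set the job count to roughly the total number of remote cores times a small factor to hide network latency:

```shell
#!/bin/sh
# Hypothetical rule of thumb for picking the gmake -j value:
# worker nodes times cores per node, times 2 for latency hiding.
NODES=6        # lxb480-lxb485
CORES=8        # assumed cores per worker node
JOBS=$((NODES * CORES * 2))
echo "gmake -j$JOBS"
```

The exact factor matters little in practice; what matters is that the value is far larger than the local core count, so the remote slots stay busy.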

cluster growth

the compile cluster can grow dynamically: feel free to add more resources, as long as you have root rights on your machine
all architectures (Debian Sarge, Etch, 32/64-bit, Mac, ...) can be added
a permanent location for the main servers still has to be found
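Since mixed architectures can join, each client needs a toolchain tarball matching its own platform before compiling remotely. A sketch of selecting one by platform; the tarball names are hypothetical examples, the real ones come out of icecc-create-env:

```shell
#!/bin/sh
# Sketch: pick a toolchain tarball per architecture before joining
# the cluster. Tarball names are illustrative placeholders only.
pick_env() {
  case "$(uname -s)-$(uname -m)" in
    Linux-x86_64) echo linux64.tar.gz ;;
    Linux-i?86)   echo linux32.tar.gz ;;
    Darwin-*)     echo mac.tar.gz ;;
    *)            echo generic.tar.gz ;;
  esac
}

echo "ICECC_VERSION would be /tmp/$(pick_env)"
```

With one tarball per platform, Sarge, Etch, 32/64-bit and Mac nodes can all serve jobs without the submitting host caring which worker picks them up.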