Cluster Computing
Javier Delgado
Grid Enablement of Scientific Applications
Professor S. Masoud Sadjadi
Essence of a Beowulf

Hardware:
- One head/master node
- (Several) compute nodes
- Interconnection modality (e.g. Ethernet)

Software:
- Parallel programming infrastructure
- Scheduler (optional)
- Monitoring application (optional)
Scheduling

- Multiple users fighting for resources = bad; don't allow them to allocate resources directly
- Computer users are greedy; let the system allocate resources
- Users like to know job status without having to keep an open session
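For example, with a PBS-style scheduler (qsub/qstat as provided by PBS/Torque; the script name and resource values below are illustrative):

    #!/bin/bash
    # myjob.pbs -- illustrative PBS batch script
    #PBS -N hello_cluster        # job name
    #PBS -l nodes=4:ppn=2        # request 4 nodes, 2 processors each
    #PBS -l walltime=00:10:00    # maximum run time

    cd $PBS_O_WORKDIR            # start where the job was submitted from
    mpirun ./my_mpi_program      # launch the parallel program on the allocated nodes

Submit with "qsub myjob.pbs"; the scheduler queues the job, runs it when nodes free up, and "qstat" reports status without an open session.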
Cluster Solutions

- Do-it-yourself (DIY)
- OSCAR
- Rocks
- Pelican HPC (formerly Parallel Knoppix)
- Microsoft Windows CCE
- OpenMosix (closed March 2008)
- Clustermatic (no activity since 2005)
DIY Cluster

Advantages:
- Control
- Learning experience

Disadvantages:
- Control
- Administration
DIY Cluster How-To Outline

- Hardware requirements
- Head node deployment
- Core software requirements
- Cluster-specific software configuration
- Adding compute nodes
Hardware Requirements

- Several commodity computers: CPU/motherboard, memory, Ethernet card, hard drive (recommended, in most cases)
- Network switch
- Cables, etc.
Software Requirements – Head Node

Core system:
- System logger, core utilities, mail, etc.
- Linux kernel with Network File System (NFS) server support

Additional packages:
- Secure Shell (SSH) server
- iptables (firewall)
- nfs-utils
- portmap
- Network Time Protocol (NTP)
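A minimal sketch of pulling these pieces in (assuming a Red Hat-style distribution of the era; package and service names vary by distro):

    # Install the cluster-related services on the head node
    yum install nfs-utils portmap ntp openssh-server

    # Confirm the running kernel was built with NFS server support
    grep NFSD /boot/config-$(uname -r)

    # Start the NFS services and enable them at boot
    service portmap start && service nfs start
    chkconfig portmap on && chkconfig nfs on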
Software Requirements – Head Node (cont.)

Additional packages (cont.):
- inetd/xinetd – for FTP, Globus, etc.
- Message Passing Interface (MPI) implementation
- Scheduler – PBS, SGE, Condor, etc.
- Ganglia – simplified cluster "health" monitoring (dependency: Apache web server)
Initial Configuration

- Share the /home directory
- Configure firewall rules (both sketched below)
- Configure networking
- Configure SSH
- Create the compute node image
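A minimal sketch of the /home export and firewall steps (assuming the head node reaches the compute nodes on eth1 over the private network 192.168.1.0/24; both are assumptions, adjust to your topology):

    # /etc/exports -- export /home to the compute nodes
    /home 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)

    # Re-export and verify
    exportfs -ra
    showmount -e localhost

    # Firewall: trust the internal cluster interface, filter the public one
    iptables -A INPUT -i eth1 -j ACCEPT
    iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT   # allow SSH in
    iptables -A INPUT -i eth0 -j DROP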
Building the Cluster

- Install the compute node image on each compute node:
  - Manually
  - PXE boot (pxelinux, Etherboot, etc.)
  - Red Hat Kickstart
  - etc.
- Configure host name, NFS, etc.... for each node! (see the sketch below)
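Configuring every node by hand is where the tedium lives. A hedged illustration (assuming eight nodes named node1..node8, hypothetical names, already reachable over SSH from the head node):

    # Repeat basic configuration for every compute node, one at a time
    for i in $(seq 1 8); do
        ssh root@node$i "hostname node$i"     # set the host name
        ssh root@node$i "echo 'head:/home /home nfs defaults 0 0' >> /etc/fstab"
        ssh root@node$i "mount -a"            # mount the shared /home
    done

This repetition is exactly what the automated solutions below take off your hands.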
Maintenance

- Software updates on the head node require matching updates on the compute nodes
- Failed nodes must be temporarily removed from the head node's configuration files
Building the Cluster

But what if my boss wants a 200-node cluster?
- Monster.com, OR
- come up with your own automation scheme, OR
- use OSCAR or Rocks
Cluster Solutions

- Do-it-yourself (DIY)
- OSCAR
- Rocks
- Pelican HPC (formerly Parallel Knoppix)
- Microsoft Windows CCE
- OpenMosix (closed March 2008)
- Clustermatic (no activity since 2005)
OSCAR

- Open Source Cluster Application Resources
- Fully integrated software bundle to ease deployment and management of a cluster
- Provides:
  - Management wizard
  - Command-line tools
  - System Installation Suite
Overview of Process

1. Install an OSCAR-approved Linux distribution
2. Install the OSCAR distribution
3. Create node image(s)
4. Add nodes
5. Start computing
OSCAR Management Wizard

- Download/install/remove OSCAR packages
- Build a cluster image
- Add/remove cluster nodes
- Configure networking
- Reimage or test a node with the Network Boot Manager
OSCAR Command-Line Tools

- Everything the wizard offers
- yume – update node packages
- C3 (Cluster Command and Control) tools – cluster-wide versions of common commands, executed concurrently
  - Example 1: copy a file from the head node to all visualization nodes
  - Example 2: execute a script on all compute nodes (both sketched below)
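A hedged sketch of both examples (assuming C3 is configured as in the example c3.conf a few slides below; the file and script names are illustrative, and C3 node ranges can restrict either command to a subset such as the visualization nodes):

    # Example 1: push a file from the head node to all nodes
    cpush /etc/hosts /etc/hosts

    # Example 2: run a script on all compute nodes concurrently
    cexec /shared/scripts/update.sh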
C3 List of Commands

- cexec: execute any standard command on all cluster nodes
- ckill: terminate a user-specified process
- cget: retrieve files or directories from all cluster nodes
- cpush: distribute files or directories to all cluster nodes
- cpushimage: update the system image on all cluster nodes, using an image captured by the SystemImager tool
List of Commands (cont.)

- crm: remove files or directories
- cshutdown: shut down or restart all cluster nodes
- cnum: return a node range number based on node name
- cname: return node names based on node ranges
- clist: return all clusters and their type in a configuration file
Example c3 Configuration

    # /etc/c3.conf
    ##
    # describes cluster configuration
    ##
    cluster gcb {
        gcb.fiu.edu          # head node
        dead placeholder     # change command line to 1 indexing
        compute-0-[0-8]      # first set of nodes
        exclude 5            # offline node in the range (killed by J. Figueroa)
    }
OPIUM

- The OSCAR Password Installer and User Management tool
- Synchronizes user accounts
- Sets up passwordless SSH
- Periodically checks for changes in passwords
SIS (System Installation Suite)

- Installs Linux systems over a network
- Image-based: allows different images for different nodes
- Nodes can be booted from network, floppy, or CD
Cluster Solutions

- Do-it-yourself (DIY)
- OSCAR
- Rocks
- Pelican HPC (formerly Parallel Knoppix)
- Microsoft Windows CCE
- OpenMosix (closed March 2008)
- Clustermatic (no activity since 2005)
Rocks

Disadvantages:
- Tight coupling of software
- Highly automated

Advantages:
- Highly automated... but also flexible
Rocks

The following slides are property of UC Regents.

[Screenshot walkthrough of the Rocks frontend installation; the only caption that survives extraction: "Determine number of nodes"]
Rocks Installation Simulation

Slides courtesy of David Villegas and Dany Guevara
[Screenshot walkthrough of the simulated Rocks frontend installation]
Installation of Compute Nodes

- Log into the frontend node as root
- At the command line, run:

    insert-ethers
Installation of Compute Nodes (cont.)

- Turn on the compute node
- Select PXE boot, or insert the Rocks CD and boot from it
Cluster Administration

- Command-line tools
- Image generation
- Cluster troubleshooting
- User management
Command-Line Tools

- cluster-fork – execute a command on all nodes (serially)
- cluster-kill – kill a process on all nodes
- cluster-probe – get information about cluster status
- cluster-ps – query nodes for a running process by name
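A hedged usage sketch (command names as listed above; the process name is illustrative):

    # Run a command on every compute node, one after another
    cluster-fork uptime

    # Look for a runaway process by name, then kill it everywhere
    cluster-ps mpirun
    cluster-kill mpirun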
Image Generation

- Basis: Red Hat Kickstart file, plus XML flexibility and dynamic features (i.e. support for "macros")
- Image location: /export/home/install
- Customization: rolls and extend-compute.xml
- Command: rocks-dist (rebuild cycle sketched below)
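A hedged sketch of the rebuild cycle (paths as listed above; the rocks-dist subcommand and kickstart script path follow older Rocks releases, so verify against your version):

    # After editing extend-compute.xml, rebuild the distribution
    cd /export/home/install
    rocks-dist dist

    # Reinstall a node so it picks up the new image (node name illustrative)
    ssh compute-0-0 '/boot/kickstart/cluster-kickstart'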
Image Generation

[Diagram of the Rocks image generation process]
Source: http://www.rocksclusters.org/rocksapalooza/2007/dev-session1.pdf
Example: Visualization Node

Goal: make a regular node a visualization node

Procedure:
1. Figure out what packages to install
2. Determine what configuration files to modify
3. Modify extend-compute.xml accordingly (sketched after the package list)
4. (Re-)deploy the nodes
Figure Out Packages

- X Windows related: X, fonts, display manager
- Display wall: XDMX, Chromium, SAGE
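A hedged sketch of step 3 (the site-profiles layout under /export/home/install follows the Rocks customization convention; the version subdirectory and package names are illustrative):

    cd /export/home/install/site-profiles/4.3/nodes
    cp skeleton.xml extend-compute.xml   # start from the provided skeleton

    # Inside extend-compute.xml, request the extra packages, e.g.:
    #   <package>xorg-x11</package>
    #   <package>xorg-x11-fonts</package>
    # then rebuild the distribution with rocks-dist as shown earlier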
Modify Config Files

- X configuration: xorg.conf, xinitrc
- Display manager configuration
User Management

- Rocks directory: /var/411
- Common configuration files:
  - autofs-related
  - /etc/group, /etc/passwd, /etc/shadow
  - /etc/services, /etc/rpc
  - All encrypted
- Helper command: rocks-user-sync (example below)
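A hedged sketch of adding a user cluster-wide (the user name is illustrative; the helper command is as named above):

    # Create the account on the frontend as usual
    useradd jdoe
    passwd jdoe

    # Propagate the change to the compute nodes via the 411 service
    rocks-user-sync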
Start Computing

- Rocks is now installed
- Choose an MPI runtime: MPICH, OpenMPI, LAM/MPI
- Start compiling and executing (first run sketched below)
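A minimal sketch of the first compile-and-run (assuming an MPI runtime's mpicc/mpirun wrappers are on the PATH; the source file and machine file names are illustrative):

    # Compile an MPI program with the compiler wrapper
    mpicc -O2 -o hello hello_mpi.c

    # Run 8 processes across the nodes listed in a machine file
    mpirun -np 8 -machinefile ~/machines ./hello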
Pelican HPC

Live CD for instant cluster creation

Advantages:
- Easy to use
- A lot of built-in software

Disadvantages:
- Not persistent
- Difficult to add software
Microsoft Solutions

Windows Server 2003 Compute Cluster Edition (CCE):
- Microsoft Compute Cluster Pack (CCP)
- Microsoft MPI (based on MPICH2)
- Microsoft Scheduler
Microsoft CCE

Advantages:
- Using Remote Installation Services (RIS), compute nodes can be added simply by turning them on
- May be better for those familiar with the Microsoft environment

Disadvantages:
- Expensive
- Only for 64-bit architectures
- Proprietary
- Limited application base
References

- http://pareto.uab.es/mcreel/PelicanHPC/
- http://pareto.uab.es/mcreel/ParallelKnoppix/
- http://www.gentoo.org/doc/en/hpc-howto.xml
- http://www.clustermatic.org
- http://www.microsoft.com/windowsserver2003/ccs/default.aspx
- http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/ch-nfs.html
- portmap man page
- http://www.rocksclusters.org/rocksapalooza
- http://www.gentoo.org/doc/en/diskless-howto.xml
- http://www.openclustergroup.org