Beowulf Cluster
Jon Green, Jay Hutchinson, Scott Hussey
Mentor: Hongchi Shi
Goals and Objectives
To design and build a successful Beowulf cluster
Test its abilities
Prove its validity as a computing environment for solving real-world problems
What is a Beowulf Cluster?
A vague concept
–What type of hardware?
–Size thresholds
Used to describe a group of networked computing nodes that work toward a common task.
Background – The Name and History
In 1994 Thomas Sterling and Don Becker, working at Goddard Space Flight Center, built a cluster consisting of 16 Intel 486 PCs connected by channel-bonded Ethernet.
The name Beowulf was derived from the Old English epic poem.
Aspects of Designing a Beowulf Cluster
Low-latency communication network
Low cost
Diminishing returns
Application suitability
Hardware specialization
The Gargleblaster
GIVE US AN A!
Background – Cost Effectiveness
Historically, Beowulf clusters have used:
–Open-source Unix distributions, e.g. Linux
–Low-cost off-the-shelf computers, e.g. PCs
–Low-cost network components, e.g. 10/100 Mbit Ethernet
Cost Associated with Supercomputing
Standard supercomputer: $10,000/GFLOPS
–U.S. Dept. of Energy – ASCI Project
Beowulf supercomputer: less than 1/10 of the cost
–KLAT2 costs $650/GFLOPS
Our Hardware
10 SGI Indy client nodes
–R4600 133 MHz MIPS processor
–96 MB RAM
–10 Mbit Ethernet
1 Intel-architecture gateway
–Allows for multiple network interfaces
–Gives external networks access to the cluster
Our Hardware
Ethernet switch
–Improves network performance
Our Software
Linux-MIPS distribution
–Kernel 2.2.x
PVM – Parallel Virtual Machine (see the sketch below)
GCC – GNU Compiler Collection
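To illustrate what running on PVM looks like, here is a minimal sanity-check program of our own, not code from the project. It assumes PVM 3.x is installed with the pvm3.h header and the libpvm3 library available (compile with e.g. gcc check.c -lpvm3); the file name check.c is hypothetical.

    /* check.c - minimal PVM sanity check (illustrative sketch only).
       Enrolls in the virtual machine and reports how many hosts and
       architectures the local pvmd currently knows about. */
    #include <stdio.h>
    #include "pvm3.h"

    int main(void)
    {
        int nhost, narch;
        struct pvmhostinfo *hostp;
        int tid = pvm_mytid();              /* enroll this process in PVM */

        if (tid < 0) {
            fprintf(stderr, "could not enroll in PVM\n");
            return 1;
        }
        pvm_config(&nhost, &narch, &hostp); /* query virtual machine layout */
        printf("task t%x sees %d host(s), %d architecture(s)\n",
               tid, nhost, narch);
        pvm_exit();                         /* leave the virtual machine */
        return 0;
    }

Run on the gateway after pvmd has been started on each node, it should report all eleven hosts (the 10 Indys plus the gateway); if it reports only one, a node has not joined the virtual machine.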
Constraints
1. Money
–Beowulf philosophy is low cost
–We are college students
2. Familiarity with operating environment
–IRIX is not a standard Unix
–Linux is a common Beowulf environment
3. Indy does not support >10 Mbit Ethernet
Alternate Solution #1
Dr. Shi’s NT machines
Disadvantages
–Cost of NT
–Small number of nodes
–Cannot be dedicated to the cluster
Alternate Solution #2
Personal machines
Disadvantages
–Not homogeneous
–Inconvenient
Alternate Solution #3
Dr. Pal’s O2s
Disadvantages
–Cost of IRIX
–Distaste for IRIX
–Cannot be dedicated to the cluster
Testing Methods
Network latency (ping-pong sketch below)
CPU performance
Scalability of design (e.g. 4 nodes vs. 8 nodes)
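A common way to measure network latency on a cluster like this is a ping-pong test: two tasks bounce a small message back and forth and the average round-trip time is reported. The sketch below is our illustrative version, not the team's actual tool; it assumes PVM 3.x, that the compiled binary (here named pingpong, a hypothetical name) is installed where pvmd can find it (e.g. $PVM_ROOT/bin/$PVM_ARCH), and an arbitrary message tag of 1.

    /* pingpong.c - round-trip latency sketch (illustrative only).
       The master spawns one copy of this same program; the pair then
       exchanges a one-integer message ROUNDS times. */
    #include <stdio.h>
    #include <sys/time.h>
    #include "pvm3.h"

    #define ROUNDS 1000   /* round trips to average over (arbitrary) */
    #define TAG    1      /* message tag shared by both sides (arbitrary) */

    int main(void)
    {
        int parent, other, i, token = 0;

        pvm_mytid();                     /* enroll in the virtual machine */
        parent = pvm_parent();

        if (parent == PvmNoParent) {     /* master side: drive the test */
            struct timeval t0, t1;
            double us;

            if (pvm_spawn("pingpong", NULL, PvmTaskDefault, "", 1, &other) != 1) {
                fprintf(stderr, "spawn failed\n");
                pvm_exit();
                return 1;
            }
            gettimeofday(&t0, NULL);
            for (i = 0; i < ROUNDS; i++) {
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&token, 1, 1);
                pvm_send(other, TAG);    /* ping */
                pvm_recv(other, TAG);    /* wait for pong */
                pvm_upkint(&token, 1, 1);
            }
            gettimeofday(&t1, NULL);
            us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
            printf("average round trip: %.1f microseconds\n", us / ROUNDS);
        } else {                         /* spawned side: echo everything */
            for (i = 0; i < ROUNDS; i++) {
                pvm_recv(parent, TAG);
                pvm_upkint(&token, 1, 1);
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&token, 1, 1);
                pvm_send(parent, TAG);
            }
        }
        pvm_exit();
        return 0;
    }

Halving the round-trip time gives an estimate of one-way latency; running the same test between two nodes on the switch versus through the gateway would show what each hop costs.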
Testing Methods
Parallel applications
–AI
–NAS Parallel Benchmarks
–Texture mapping onto elevation maps (decomposition sketch below)
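Texture mapping onto an elevation map parallelizes naturally because each output row depends only on its own slice of the input. One common decomposition, sketched below purely as an illustration (the slides do not say how the team actually split the work), is to hand each worker a contiguous band of rows; NROWS is a made-up value and NWORKERS matches the 10 client nodes.

    /* bands.c - static row decomposition sketch (illustrative only).
       Splits the rows of an elevation map into one contiguous band per
       worker; each band can be texture-mapped independently, with no
       communication until the results are gathered. */
    #include <stdio.h>

    #define NROWS    1024  /* rows in the elevation map (assumed) */
    #define NWORKERS 10    /* one band per Indy client node       */

    int main(void)
    {
        int w;
        for (w = 0; w < NWORKERS; w++) {
            int start =  w      * NROWS / NWORKERS;  /* first row of band */
            int end   = (w + 1) * NROWS / NWORKERS;  /* one past the last */
            printf("worker %d: rows %d..%d\n", w, start, end - 1);
        }
        return 0;
    }

A static split like this assumes all rows cost about the same to render; if some bands are slower, a bag-of-tasks scheme where idle workers request the next unassigned band balances the load better.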
Schedule
End of semester
–Fully configured OS on nodes
–Network infrastructure complete
End of February
–Finish developing parallel applications
–Finish developing testing tools
End of March
–Have all data from tests gathered and compiled