DETER Testbed Status
Kevin Lahey (ISI), Anthony D. Joseph (UCB)
January 31, 2006

Current PC Hardware
ISI
● 64 pc3000 (Dell 1850)
● 11 pc2800 (Sun V65x)
● 64 pc733 (IBM Netfinity 4500R)
UCB
● 32 bpc3000 (Dell 1850)
● 32 bpc3060 (Dell 1850)
● 32 bpc2800 (Sun V60x)
Approx. 1/3 of nodes are currently down for repair or reserved for testing

Special Hardware
ISI
● 4 Juniper M7i routers
● 2 Juniper IDP-200 IDS
● 1 CloudShield
● McAfee IntruShield 2600
UCB Minibed
● 8-32 HP DL360G2 (dual 1.4GHz/512KB PIII)

Current Switches
ISI
● 1 Cisco 6509 (336 GE ports)
● 7 Nortel 5510-48T (48 GE ports each)
● Gigabit switch interconnects
UCB
● 1 Foundry FastIron 1500 (224 GE ports)
● 10 Nortel 5510-48T (48 GE ports each)
● Gigabit switch interconnects
UCB Minibed
● 6 Nortel 5510-48T (48 GE ports each)

Current Configuration
[Diagram] The ISI cluster (pc733s, pc2800s, and pc3000s on the Cisco 6509 and Nortel switches, plus the Junipers) connects to the UCB cluster (bpc2800s, bpc3000s, and bpc3060s on the Foundry FastIron 1500 and Nortel 5510s) over a 1Gb VPN link (expandable).

New Hardware for 2006
ISI
● 64 Dell 1850, identical to the previous pc3000s
  – Dual 3GHz Xeons with 2GB RAM, but with 2MB cache instead of 1MB, and 6 interfaces instead of 5
● 32 IBM x330 (dual 1GHz Pentium IIIs with 1GB RAM)
UCB
● 96+ TBD nodes, depending on overhead recovery
● Full Boss and Users nodes:
  – 2 HP DL360 (dual 3.4GHz/2MB cache Xeon, 800MHz FSB, 2GB RAM)
  – HP Modular Smart Array 20s: 12 x 500GB SATA drives (6TB)
Combined
● Nortel 5510 and 10Gb-capable Nortel switches

New ISI Configuration
[Diagram] pc733s, pc2800s, pc3000s, and the new pc1000s attach to the Cisco 6509 and Nortel 5510 switches, with 2 x 10Gb inter-switch links, 1Gb node links (10Gb later), and the Junipers.

DETER Clusters
[Photos of the ISI and UCB clusters]

Progress (1)
● People:
  – New ops person (Kevin, ISI) getting up to speed
● Reliability:
  – Daily backups for users and boss, one-time tarballs for all other nodes (see the sketch below)
  – More robust Nortel switch configuration
  – ISI or UCB users/boss machines can run either or both clusters
● Security: "Panic switch" to disconnect from the Internet
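A minimal sketch of the backup policy named above (daily tarballs for boss and users, a single one-time tarball for every other node). The host names, destination path, and ssh/tar transport are assumptions for illustration; the actual DETER backup scripts are not shown in these slides.

```python
#!/usr/bin/env python
"""Illustrative sketch only: daily backups for boss/users, one-time tarballs
for all other nodes. Host names, paths, and transport are assumptions."""

import os
import subprocess
import time

DAILY_HOSTS = ["boss", "users"]     # backed up every day
OTHER_HOSTS = ["pc1", "pc2"]        # hypothetical node names: one-time tarball only
BACKUP_DIR = "/backups"             # assumed local destination directory

def tarball_path(host, daily):
    """Name daily backups by date; one-time tarballs get a fixed name."""
    stamp = time.strftime("%Y%m%d") if daily else "baseline"
    return os.path.join(BACKUP_DIR, "%s-%s.tar.gz" % (host, stamp))

def backup(host, daily):
    dest = tarball_path(host, daily)
    if not daily and os.path.exists(dest):
        return  # one-time tarball already taken, skip
    # Stream a compressed tar of the node's root filesystem over ssh.
    with open(dest, "wb") as out:
        subprocess.check_call(
            ["ssh", host, "tar", "czf", "-", "--one-file-system", "/"],
            stdout=out)

if __name__ == "__main__":
    for h in DAILY_HOSTS:
        backup(h, daily=True)
    for h in OTHER_HOSTS:
        backup(h, daily=False)
```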

Progress (2)
● Emulab software:
  – Unified boot image for -com1 and -com2 machines
  – DNS servers and IP addresses in the database
  – Click image with polling
● Incorporated the state of Emulab as of about 9/30/05
  – Debugged at UCB, then installed at ISI
  – Firewall and experimental nodes must be resident on the same switch
  – Release/update procedure is still problematic; for discussion in the testbed breakout

In-Progress (1)
● Reliability:
  – Automating fail-over between clusters (DB mirroring / reconciliation scripts)
● Security:
  – Automatic disk wiping on a project/experiment basis
  – Automating leak testing for control/experiment networks (see the sketch below)
● Performance:
  – Redoing the way emulab-in-emulab handles the control net (saves one experimental node interface)
  – Improving the performance of the VPN/IPsec links
  – Supporting a local tftp/frisbee server at UCB
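A minimal sketch of the kind of control/experiment leak test mentioned above: from a node's experiment-network address it probes addresses that should only be reachable on the control network, and flags any success as a leak. The address ranges, port, and probe method are assumptions, not DETER's actual automation.

```python
#!/usr/bin/env python
"""Illustrative sketch only: probe control-net addresses from the
experiment-net side; any successful connection indicates a leak."""

import socket

CONTROL_NET_TARGETS = ["192.168.1.1", "192.168.1.2"]   # hypothetical control-net addresses
EXPERIMENT_SOURCE_IP = "10.1.1.2"                      # hypothetical experiment-net address on this node
TEST_PORT = 22                                          # a service expected to answer on the control net
TIMEOUT = 2.0

def leaks(target):
    """Return True if `target` is reachable with the experiment-net source
    address, i.e. the control/experiment isolation is leaking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(TIMEOUT)
    try:
        s.bind((EXPERIMENT_SOURCE_IP, 0))   # force the experiment-net source address
        s.connect((target, TEST_PORT))
        return True
    except (socket.timeout, socket.error):
        return False
    finally:
        s.close()

if __name__ == "__main__":
    for t in CONTROL_NET_TARGETS:
        print("%s: %s" % (t, "LEAK" if leaks(t) else "isolated"))
```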

In-Progress (2)
● Federation:
  – Supporting federated experiments run between separately administered Emulabs using emulab-in-emulab
● Netgraph module to rewrite 802.1q tags as they pass through a VPN tunnel (similar to the Berkeley-ISI link; see the sketch below)
● Configuration:
  – Incorporating EMIST setup/visualization tools into the Dashboard
● New Emulab hardware types:
  – Supporting IBM BladeCenters (currently testing with a 12x2 BC)
  – Routers as first-class objects
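The netgraph module itself is FreeBSD kernel code; the following is only a conceptual sketch, in Python, of the frame-level 802.1q rewrite such a module performs. The VLAN-ID mapping is a made-up example, not the real tag map used on the tunnel.

```python
#!/usr/bin/env python
"""Illustrative sketch only: rewrite the 802.1q VLAN ID in an Ethernet frame,
as the netgraph module does for frames crossing the VPN tunnel."""

import struct

# Hypothetical mapping from VLAN IDs used at one site to IDs used at the other.
VLAN_MAP = {100: 200, 101: 201}

def rewrite_vlan(frame, vlan_map):
    """Return `frame` with its 802.1q VLAN ID rewritten via `vlan_map`.

    Frames without an 802.1q tag (TPID 0x8100 at offset 12) are returned
    unchanged, as are frames whose VLAN ID is not in the map."""
    if len(frame) < 16:
        return frame
    tpid = struct.unpack("!H", frame[12:14])[0]
    if tpid != 0x8100:
        return frame                      # untagged frame
    tci = struct.unpack("!H", frame[14:16])[0]
    pcp_dei = tci & 0xF000                # keep priority and DEI bits
    vid = tci & 0x0FFF
    new_vid = vlan_map.get(vid, vid)
    new_tci = struct.pack("!H", pcp_dei | new_vid)
    return frame[:14] + new_tci + frame[16:]
```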

Network Topology
● Open hypothesis: inter-switch links may be a bottleneck (see the sketch below)
  – Foundry/Cisco-Nortel and Nortel-Nortel links
  – Adding multiple 10GE interconnects
● Exploring alternate node interconnection topologies
  – Example: connecting each node to multiple switches
● Potential issue: assign is a very complex program
  – There may be all sorts of gotchas lurking out there
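A back-of-the-envelope sketch of how the bottleneck hypothesis can be checked for a given node placement: sum the experiment bandwidth that must cross each pair of switches and compare it to the trunk capacity. The topology, placement, and capacity below are invented examples; this is not the assign algorithm.

```python
#!/usr/bin/env python
"""Illustrative sketch only: estimate inter-switch trunk load for a given
node-to-switch placement and set of experiment links."""

from collections import defaultdict

# Hypothetical node placement and experiment links (node_a, node_b, Mb/s).
NODE_SWITCH = {"n1": "cisco6509", "n2": "cisco6509", "n3": "nortel1", "n4": "nortel1"}
EXP_LINKS = [("n1", "n3", 1000), ("n2", "n4", 1000), ("n1", "n2", 1000)]
TRUNK_CAPACITY_MBPS = 4000   # assumed aggregate inter-switch trunk bandwidth

def inter_switch_load(node_switch, links):
    """Sum experiment bandwidth crossing each unordered pair of switches."""
    load = defaultdict(int)
    for a, b, mbps in links:
        sa, sb = node_switch[a], node_switch[b]
        if sa != sb:
            load[tuple(sorted((sa, sb)))] += mbps
    return load

if __name__ == "__main__":
    for pair, mbps in inter_switch_load(NODE_SWITCH, EXP_LINKS).items():
        status = "OVERSUBSCRIBED" if mbps > TRUNK_CAPACITY_MBPS else "ok"
        print("%s <-> %s: %d Mb/s (%s)" % (pair[0], pair[1], mbps, status))
```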

Other New Nodes on the Horizon
● Secure64
● NetFPGA2
● pc3000s with 10 interfaces
● Research Accelerator for MultiProcessing (RAMP)
  – On the order of 1,000 FPGA-based CPUs at MHz-class clock rates
  – Some number of elements devoted to FSM traffic generators
  – Many 10GE I/O ports
  – ~$100K for an 8U box at 1.5KW