Tools for TEIN2 operation
Yoshitaka Hattori, Jin Tanaka (APAN-JP)

Which tools?
- Each NREN NOC decides on its own tools.
  – This presentation is only our proposal.
- But we hope the other NREN NOCs can access the remote measurement data via a web page (traffic graphs, raw data…).
  – At least operators and researchers; hopefully the public as well, if allowed.
- Which tools are expected for the common data exchange?
  – Use existing, well-known tools: there is plenty of open-source material for installing and configuring them, and well-known data formats are useful for both operators and researchers.
  – Development of new tools is also expected for TEIN2, to provide functions that the well-known tools cannot cover; this might require more human resources and cost.

Well-known Tools
Rating legend: ☆ = usefulness for operation (☆ lowest … ☆☆☆☆☆ highest); ★ = man-hours and cost (★ lowest … ★★★★★ highest).

- Tracking: Request Tracker or another bug-tracking system, on Machine A (FreeBSD). NOC only: ☆☆☆☆ / ★★★. To be installed at the NOC only, for managing trouble and other operational work.
- Usage data: MRTG, or RRDtool + a front end, on Machine A (FreeBSD). NOC: ☆☆, POP(s): ☆☆☆☆; ★★★★. RRDtool + cricket is recommended for traffic.
- Usage data: animated traffic map, on Machine A (FreeBSD). ☆☆☆ / ★★★.
- Performance data (throughput): Iperf & BWCTL, on Machine B (Linux). POP(s) and all nodes: ☆☆☆☆ ☆☆☆☆ / ★★★★ ★★★★. An NTP source is required; to be installed at a minimum of two locations.
- Performance data (RTT & packet loss): SmokePing, on Machine A (FreeBSD). NOC: ☆☆, POP(s): ☆☆☆; ★★★★. (An illustrative probe sketch follows this table.)
- Performance data (one-way latency): OWAMP, on Machine C (FreeBSD). POP(s) and all nodes: ☆☆☆ ☆☆☆☆ / ★★★★ ★★★★. An NTP source is required; to be installed at a minimum of two locations.
- Flow analysis: flow-tools (NetFlow), on Machine A (FreeBSD). ☆☆☆☆ ☆☆☆☆ / ★★★★ ★★★★. Depends on router capability.
- Equipment status (router data): IU Router Proxy or a looking-glass, on Machine A (FreeBSD). ☆☆☆☆ ☆☆☆☆ / ★★★. Useful for advanced operation.
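For the RTT and packet-loss row above, SmokePing itself does the probing, aggregation and graphing; the sketch below only illustrates the kind of raw measurement behind it. It is a minimal Python example, assuming a Unix-like host with the standard ping utility in the path; the target host name is a placeholder, not an actual TEIN2 address.

```python
# Minimal RTT / packet-loss probe: roughly what SmokePing automates and graphs.
# Assumptions: Unix-like host with "ping" in PATH; the target is a placeholder.
import re
import subprocess

TARGET = "noc.example.net"  # hypothetical measurement peer

def probe(target: str, count: int = 10) -> dict:
    """Send `count` ICMP echoes; return packet loss (%) and average RTT (ms)."""
    out = subprocess.run(
        ["ping", "-c", str(count), target],
        capture_output=True, text=True, check=False,
    ).stdout
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max(/mdev) summary line
    return {
        "loss_pct": float(loss.group(1)) if loss else None,
        "avg_rtt_ms": float(rtt.group(1)) if rtt else None,
    }

if __name__ == "__main__":
    print(probe(TARGET))
```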

Machines
- Machine A, for measurements (traffic etc.) — a simplified polling sketch follows this list.
  – It will be installed at the NOC (and some POPs).
  – FreeBSD- or Linux-based; runs RRDtool + cricket, the animated traffic map, SmokePing, flow-tools and the IU Router Proxy.
- Machine B, for throughput performance.
  – Linux with a TCP-tuned kernel; runs iPerf and BWCTL.
  – It will be installed at each POP (and some nodes, if the cost allows).
- Machine C, for one-way latency performance.
  – FreeBSD- or Linux-based; runs OWAMP.
  – NTP MUST be configured.
  – It will be installed at each POP (and some nodes, if the cost allows).
- In the minimum case:
  – At least one Machine A must be installed at the NOC for measurements.
  – Machines B and C are optional.
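As a concrete illustration of Machine A's traffic-measurement role, here is a simplified polling sketch: it reads interface counters over SNMP and stores them in a round-robin database, which is roughly what MRTG or RRDtool + cricket automate at scale. It assumes the net-snmp and RRDtool command-line tools are installed; the router address, SNMP community string and interface index are placeholders.

```python
# Simplified traffic poller: what MRTG / RRDtool+cricket automate on Machine A.
# Assumptions: net-snmp ("snmpget") and "rrdtool" CLIs installed; the router
# address, SNMP community and ifIndex below are placeholders.
import os
import subprocess

ROUTER = "192.0.2.1"      # hypothetical router
COMMUNITY = "public"      # read-only SNMP community (placeholder)
IFINDEX = 1               # interface to graph
RRD = "traffic.rrd"

def snmp_counter(oid: str) -> int:
    """Fetch one counter value from the router."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", ROUTER, oid],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.split()[-1])

def ensure_rrd() -> None:
    """Create the RRD once: one COUNTER per direction, 5-minute step, ~1 day kept."""
    if os.path.exists(RRD):
        return
    subprocess.run(
        ["rrdtool", "create", RRD, "--step", "300",
         "DS:inoctets:COUNTER:600:0:U",
         "DS:outoctets:COUNTER:600:0:U",
         "RRA:AVERAGE:0.5:1:288"],
        check=True,
    )

def poll_once() -> None:
    rx = snmp_counter(f"IF-MIB::ifHCInOctets.{IFINDEX}")
    tx = snmp_counter(f"IF-MIB::ifHCOutOctets.{IFINDEX}")
    subprocess.run(["rrdtool", "update", RRD, f"N:{rx}:{tx}"], check=True)

if __name__ == "__main__":
    ensure_rrd()
    poll_once()   # run from cron every 5 minutes
```

In a real deployment cricket generates the per-interface configuration and the graphs; the sketch is only meant to show what data is being collected and stored.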

Figure
[Deployment diagram: Machine A at the APAN-JP NOC collects traffic and flow data; Machine B & C pairs at the POPs (JP, HK, SG, BJ) and nodes perform throughput and one-way latency measurements across TEIN2-North, TEIN2-South and TEIN2-JP, with links to the EU and to the US via TransPAC2.]

How to administer the tools?
There are two possible schemes:
1. All tools and servers are administered by the NOC alone.
   – May need more human resources at the NOC.
   – Remotely placed servers are difficult to manage.
2. The NOC administers them in cooperation with the organizations at each POP.
   – The tools and their data are administered by the NOC.
   – The organizations may provide remote-hands service.
   – Server hardware support and OS administration may be provided by the organizations, if their costs allow.
   – The NOC may provide technical support to them.

Implementation plan (proposal)
- First, build a server, deploy standard tools such as traffic measurement at the NOC, and publish the data.
  – At least one Machine A at the NOC.
- Second, install advanced tools such as the Router Proxy at the NOC (and some POPs, if the cost allows).
  – Install them on Machine A.
  – In the minimum case, these tools may be installed only at the NOC.
- Finally, install the servers for advanced tools such as iPerf or OWAMP at each POP and some nodes, if the cost allows.
  – Machines B and C.

Links
- Bug-tracking systems: – / – / – (the last in Japanese only)
- MRTG, RRDtool and cricket: – / – / –
- Animated Traffic Map: –
- iPerf & BWCTL: – / –
- SmokePing: –
- OWAMP: –
- Flow-tools: –
- IU Router Proxy and looking-glass variants: – / –
- Very useful tools page on the Global NOC site: –

Thank you

Appendix. Server specification
Machine A (for measurements)
- OS: FreeBSD.
- Needs enough processing capability, disk and memory space.
  – Dual processors are recommended, if the cost allows.
  – A RAID 1 or RAID 5 system is recommended so that data survive a disk failure.
- Hardware options, ordered from more stability and performance to lower cost:
  – Mid-range server such as a SuperMicro 6023P-8R (2U, 6 disk slots, redundant power supply): about $2,400 in the minimum configuration (single Xeon 2.8 GHz, 1 GB memory, 72 GB disk), over $3,000 with dual processors. It may be a good idea to rent one from your vendor (Dell, HP, ...).
  – Low-end server: lets you reduce rack space and gives more stability than a standard PC.
  – Standard PC system: less than $700 (3 GHz-class processor, 1 GB memory, 250 GB disk). Installing it in a standard office room is *NOT* a good idea: high temperature without air conditioning may cause equipment failure (mostly disk failure), and more rack space may be required.
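To illustrate the flow-analysis duty of Machine A, the sketch below pipes the files written by flow-capture through the flow-tools reporting utilities. The flow directory is a placeholder, and the flow-stat report format and sort options are assumptions, so verify them against the flow-stat manual page on your installation.

```python
# Summarize NetFlow data captured by flow-capture on Machine A.
# Assumptions: flow-tools installed; flow files under FLOW_DIR (placeholder);
# the flow-stat report format / sort options should be checked locally.
import subprocess

FLOW_DIR = "/var/netflow/ft"   # hypothetical flow-capture output directory

def flow_report(report_format: str = "0") -> str:
    """Concatenate captured flow files and run a flow-stat report.
    The report format number is an assumption; check flow-stat(1) locally."""
    cat = subprocess.Popen(["flow-cat", FLOW_DIR], stdout=subprocess.PIPE)
    stat = subprocess.run(
        ["flow-stat", "-f", report_format],
        stdin=cat.stdout, capture_output=True, text=True,
    )
    cat.stdout.close()
    cat.wait()
    return stat.stdout

if __name__ == "__main__":
    print(flow_report())
```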

Appendix. Server specification (cont.)
Machine B (for throughput performance)
- OS: Linux with a tuned kernel (web100).
- Needs enough processing capability, memory and I/O bandwidth.
  – A single processor usually provides enough processing capability; I/O bandwidth is what matters.
  – Intel or Broadcom Gigabit NIC connected via a PCI-X or PCI-Express bus (onboard or add-in card). A legacy 32-bit / 33 MHz PCI bus cannot provide enough bandwidth for 1 Gbps.
  – A cheap IDE disk is fine, because disk speed is not significant here.
- Hardware options, ordered from more stability and performance to lower cost:
  – Mid-range server such as a SuperMicro 6013P-8 (1U, 3 disk slots), with onboard dual Intel PCI-X Gigabit NICs (1000Base-T): about $2,000 in the minimum configuration (single Xeon 2.8 GHz, 1 GB memory, 72 GB disk). It may be a good idea to rent one from your vendor (Dell, HP, ...).
  – Low-end server with an onboard PCI-X or PCI-Express Gigabit NIC: lets you reduce rack space and gives more stability than a standard PC.
  – PCI-Express-based standard PC system: less than $800, good for minimizing cost, but some rack space may be required. Select the main board carefully when relying on an onboard Gigabit NIC: some onboard NICs provide poor performance and stability; an Intel or Broadcom NIC connected via PCI-Express is a good choice.
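As a rough picture of how Machine B gets exercised, the following sketch drives a plain iperf TCP test from Python and returns its report. In a TEIN2 deployment BWCTL would schedule and authenticate such tests between POPs; this is only a minimal sketch assuming classic iperf (v2) on both ends with a server already running on the far end, and the peer host name, duration and parallel-stream count are illustrative assumptions.

```python
# Minimal TCP throughput test for Machine B. In production BWCTL would broker
# the test between POPs; this just drives a plain iperf client.
# Assumptions: classic iperf (v2) installed on both ends, server already
# running ("iperf -s") on PEER; host name and parameters are placeholders.
import subprocess

PEER = "pop-b.example.net"   # hypothetical far-end Machine B

def tcp_throughput(peer: str, seconds: int = 20, streams: int = 4) -> str:
    """Run an iperf TCP test and return its report (throughput in Mbits/sec)."""
    result = subprocess.run(
        ["iperf", "-c", peer, "-t", str(seconds), "-P", str(streams), "-f", "m"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(tcp_throughput(PEER))
```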

Appendix. Server specification (cont.)
Machine C (for one-way latency performance)
- OS: FreeBSD or Linux.
- Needs NTP synchronization.
- Hardware options, ordered from more stability and performance to lower cost:
  – Low-end 1U server: lets you reduce rack space and gives more stability than a standard PC; a low-end processor provides enough processing capability.
  – Standard PC system: less than $600 (mid-range processor, 512 MB memory, 40 GB disk). Installing it in a standard office room is *NOT* a good idea: high temperature without air conditioning may cause equipment failure (mostly disk failure), and more rack space may be required.
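Because one-way latency results are only meaningful with well-synchronized clocks, a Machine C deployment would typically verify the NTP state before each OWAMP run. The sketch below parses ntpq output to confirm that a peer is selected and its offset is small before calling owping; the 10 ms threshold and the peer host name are assumptions for illustration.

```python
# Check NTP synchronization on Machine C before running an OWAMP test.
# Assumptions: ntpd with the "ntpq" query tool and the "owping" client are
# installed; the offset threshold (10 ms) and peer host are placeholders.
import subprocess

PEER = "pop-c.example.net"      # hypothetical far-end OWAMP server
MAX_OFFSET_MS = 10.0            # acceptable clock offset (assumption)

def ntp_offset_ms():
    """Return the offset (ms) to the currently selected NTP peer, or None."""
    out = subprocess.run(["ntpq", "-pn"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith("*"):            # "*" marks the selected peer
            fields = line.split()
            return abs(float(fields[8]))    # offset column, in milliseconds
    return None

if __name__ == "__main__":
    offset = ntp_offset_ms()
    if offset is None or offset > MAX_OFFSET_MS:
        raise SystemExit(f"clock not usable for one-way tests (offset={offset})")
    subprocess.run(["owping", PEER], check=True)
```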