Telescience Project and bandwidth challenge on the TransPAC Toyokazu Akiyama Cybermedia Center, Osaka University.

Overview
- Telemicroscopy
- Globus-enabled computation
- Advanced visualization
- Advanced networking
- SRB-enabled access to distributed/federated databases
- Environment that promotes collaboration, education and outreach
- Partnership
[Diagram labels: Remote Instrumentation, Databases & Digital Libraries, Computation, Visualization, Network Connectivity, Collaboration, Education & Outreach]
Source: Steven Peltier

Telemicroscopy Datagrid
[Architecture diagram spanning three sites: Osaka University, a computational resource provider, and a remote site. Components shown: the UHVEM and its controller, a CCD camera for dynamic images and a CCD camera for static images, Telecontrol Server and Telecontrol Client, Dynamic Image Server and Dynamic Image Client, Static Image Server, Datagrid Storage (shown at multiple sites), a cluster for image analysis, an analyzed-result viewer, and Video Conference Systems at both ends.]
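The control path between the Telecontrol Client and the Telecontrol Server is only drawn as an arrow in the diagram; the snippet below is a purely illustrative sketch of that client side, assuming a hypothetical line-oriented command protocol, a placeholder host and port, and an invented command name. It is not the actual UHVEM telecontrol protocol.

# Illustrative telecontrol client: open a TCP connection to the telecontrol
# server, send one command line, read one reply line. Host, port, and the
# command format are placeholders, not the real system's interface.
import socket

SERVER_HOST = "uhvem-telecontrol.example.org"   # placeholder hostname
SERVER_PORT = 9000                              # placeholder port

def send_command(command: str) -> str:
    """Send a single command line and return the server's one-line reply."""
    with socket.create_connection((SERVER_HOST, SERVER_PORT), timeout=10) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        reply = sock.makefile("r", encoding="ascii").readline()
    return reply.strip()

if __name__ == "__main__":
    # Hypothetical stage-movement command, for illustration only
    print(send_command("STAGE MOVE X=+10um"))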

Recent activities
- Challenges on dynamic image transfer: iGrid2002 and SC2002
- New equipment installation

iGrid2002
- Telecontrol from Amsterdam and SDSC over a global IPv6 network
- DVTS over IPv6 (a transport sketch follows this slide)
- Participants: NCMIR and SDSC (US); NCHC (Taiwan); Research Center for UHVEM and Cybermedia Center (Japan)
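The slide does not describe how DVTS carries DV frames; the sketch below only illustrates the general idea of pushing DV data over IPv6 UDP. The destination address, port, chunk size, and file name are assumptions, and the real DVTS packetisation (RTP framing of DV DIF blocks) is not reproduced.

# Minimal sketch of DV-over-IPv6 transport in the spirit of DVTS: DV data is
# read in fixed-size chunks and sent as UDP datagrams on an IPv6 socket.
import socket

DEST_ADDR = ("2001:db8::1", 8000)   # placeholder IPv6 destination and port
CHUNK = 1440                        # payload bytes per datagram (illustrative)

def stream_dv_file(path: str) -> None:
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        with open(path, "rb") as dv:
            while chunk := dv.read(CHUNK):
                sock.sendto(chunk, DEST_ADDR)
    finally:
        sock.close()

if __name__ == "__main__":
    stream_dv_file("uhvem_live_capture.dv")   # hypothetical captured DV file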

Demonstration configuration: telecontrol and DV images
[Network diagram: Osaka and Tokyo on JGNv6 and WIDE in Japan; Pacific APAN (TransPAC) to Seattle and Sunnyvale; Abilene across the USA to New York and SDSC; SURFnet to iGrid in Amsterdam; Asia APAN and TANet2 to NCHC in Taiwan. Arrows mark the control traffic and DV image streams.]

Demonstration configuration: 3D rendering job
[Same network diagram as the previous slide; here the highlighted traffic is a 3D rendering job rather than control and DV image streams.]
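The slide does not state how the rendering job was dispatched; given the Globus-enabled computation listed in the overview, one plausible sketch is a GRAM submission using Globus Toolkit 2's globus-job-run client. The GRAM contact string, renderer path, and file names below are hypothetical.

# Sketch of dispatching a 3D rendering job to a remote computational resource
# via GT2 GRAM. globus-job-run blocks until the remote job finishes and relays
# its stdout; all names here are placeholders.
import subprocess

GRAM_CONTACT = "compute-cluster.example.org/jobmanager-pbs"  # hypothetical contact string

def submit_render_job(dataset: str, output: str) -> None:
    subprocess.run(
        ["globus-job-run", GRAM_CONTACT,
         "/usr/local/bin/render_volume",   # hypothetical renderer on the remote cluster
         dataset, output],
        check=True,
    )

if __name__ == "__main__":
    submit_render_job("/data/uhvem/tilt_series_0001", "/data/uhvem/volume_0001.raw")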

SC2002
- HDTV over IPv6
- Bandwidth challenge
- Participants: NCMIR and SDSC; KDDI R&D Laboratories Inc.; Research Center for UHVEM and Cybermedia Center

HDTV codec & network adapter: KH-300N MPTS LINK
HDTV over IPv6 requirements:
1. 100 Mbps bandwidth (including 4 channels of sound)
2. Error rate below the business-use specification
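As a back-of-the-envelope check of what the 100 Mbps requirement listed above implies at the packet level, the snippet below estimates the packet rate and the cost of a given loss rate. The 1500-byte Ethernet MTU and UDP-over-IPv6 encapsulation are assumptions for illustration, not the KH-300N's actual framing, and the loss rates shown are examples rather than the specification's figure.

# Packet-rate and loss arithmetic for a 100 Mbps HDTV-over-IPv6 stream
STREAM_BPS = 100_000_000          # 100 Mbps, video plus 4 audio channels
MTU = 1500                        # standard Ethernet MTU (assumed)
IPV6_HEADER = 40
UDP_HEADER = 8
PAYLOAD = MTU - IPV6_HEADER - UDP_HEADER   # 1452 payload bytes per packet

packets_per_second = STREAM_BPS / 8 / PAYLOAD
print(f"~{packets_per_second:,.0f} packets/s")   # roughly 8,600 packets/s

# Even a small loss rate translates into many dropped packets per hour
for loss_rate in (1e-6, 1e-9):
    lost_per_hour = packets_per_second * 3600 * loss_rate
    print(f"loss rate {loss_rate:g}: ~{lost_per_hour:.1f} packets lost/hour")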

Bandwidth challenge results

New equipment installation
- 4k x 4k slow-scan CCD camera (performance verification is required)
- Visualization environment
- Data sharing environment: Storage Resource Broker (see the sketch after this list)
  - MCAT server: Miracle Linux 2.1, Oracle 9.2.0i
  - SRB agents
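As a rough sketch of how the SRB-based data sharing environment could be exercised from a client once the MCAT server and SRB agents are in place, the snippet below drives the standard SRB client "Scommands" (Sinit, Sput, Sls, Sexit) from Python. The collection path and file name are hypothetical, and exact command behaviour depends on the SRB release and the user's SRB environment configuration.

# Sketch of archiving an acquired image into an SRB collection via the
# SRB Scommands, invoked as external processes.
import subprocess

SRB_COLLECTION = "/home/telescience/uhvem"   # hypothetical SRB collection

def archive_image(local_path: str) -> None:
    subprocess.run(["Sinit"], check=True)                       # start an SRB session
    try:
        subprocess.run(["Sput", local_path, SRB_COLLECTION], check=True)
        subprocess.run(["Sls", SRB_COLLECTION], check=True)     # confirm the file is registered
    finally:
        subprocess.run(["Sexit"], check=True)                   # end the SRB session

if __name__ == "__main__":
    archive_image("tilt_series_0001.tif")   # hypothetical acquired image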

Datagrid System (17 Dec 2002), Cybermedia Center, Osaka University

【Datagrid analysis system & storage】
- SGI Onyx300: 16 CPUs (485 SPECfp2000/SPECint2000); memory 16 GB; 3 graphics pipes; texture memory 256 MB/pipe; frame buffer 160 MB/pipe; network 1000SX x 4; other I/O: Fibre Channel x 4, Ultra160 SCSI; software: IRIX, AVS/Express for CAVE, C, C++, Fortran, Globus Toolkit, etc.
- FC RAID disk: disk space 11 TB (RAID5); 4 controllers; cache 128 MB/controller; interface Fibre Channel x 4

【Datagrid visualization system】
- CAVE system: 4 screens; projectors: Christie Digital Systems Marquee 8500/3D

【Datagrid high speed network switches】
- Extreme Summit5i: I/F 1000SX x 12, 1000LX x 1, 1000SX x 2; protocols RIP, RIPv2, OSPF
- Extreme Summit5i: I/F 1000T x 12, 1000LX x 1; protocols RIP, RIPv2

【Datagrid visualization client】
- IBM IntelliStation M Pro: Intel Xeon 2.20 GHz; memory 2 GB; network 100BT, 1000BT; other I/O: Ultra160 SCSI; software: Windows 2000, SGI OpenGL Vizserver client

【Datagrid high quality image generator】
- HITACHI H-3000 UHVEM with H-3061DS high quality image recording system

【Datagrid shared view】
- Icemap IH-S01

【Datagrid dome visualization system】
- Panasonic CyberDome1800

Biogrid System
- NEC SX-5
- NEC Blade Server Express5800/ISS for PC-Cluster (X/2.2G(512)), 8 nodes; blade servers, 78 nodes (156 CPUs)
- AlphaServer GS80 68/1001 model 8 (Tru64 UNIX)
- Express5800/140Ra-4 (III-Xeon/700 (2M)), 3 nodes
- H-3000 model UHVEM

Sites connected over ODINS: Institute of Laser Engineering; Cybermedia Center (Toyonaka); Research Center for Ultra-High Voltage Electron Microscopy; Cybermedia Center (Suita); SX-5 access terminals; supercomputer (SX-5)

Future work
- Development of the data sharing and visualization environment
- Integration of the telemicroscopy system
- Telescience portal
- Development of a QoS-enabled environment