Large Input in DHTC (Thursday AM, Lecture 2) Brian Lin, OSG


Large Input in DHTC
Thursday AM, Lecture 2
Brian Lin, OSG

Transferring More Data
- HTCondor file transfer
- HTTP proxies
- StashCache
- Local storage

Hardware Transfer Limits
[diagram: the submit server (submit file, executable, dir/ with input and output) uses HTCondor file transfer to stage files to multiple execute servers, each with an (exec dir)/ holding the executable, input, and output; limits labeled "<10 MB/file, 1 GB total" and "<1 GB/file and total"]
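For input that fits within these limits, plain HTCondor file transfer from the submit server is the right tool. A minimal submit-file sketch (executable and input names are hypothetical):

    executable = run_job.sh
    transfer_input_files = small_input.txt, params.json
    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT
    queue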

Reducing Data Needs
An HTC best practice!
- Split large input for better throughput and less per-job data
- Eliminate unnecessary data
- Compress and combine files
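As a concrete shell sketch of these practices (all file names hypothetical):

    # split one large input into 10,000-line chunks, one per job
    split -l 10000 -d full_input.txt chunk_
    # compress each chunk to shrink the per-job transfer
    gzip chunk_*
    # combine many small shared files into a single compressed tarball
    tar -czf shared_data.tar.gz shared_dir/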

Large Input in HTC and OSG
file size: method of delivery
- words: within executable or arguments?
- tiny – 100 MB per file: HTCondor file transfer (up to 1 GB total per job)
- 100 MB – 1 GB, shared: download from web server (local caching)
- 1 GB – 10 GB, unique or shared: StashCache (regional replication)
- 10 GB – TBs: shared file system (local copy, local execute servers)

Using a Web Proxy
1. Place the file onto a local, proxy-configured web server
2. Have HTCondor download it via its HTTP address
[diagram, built up over several slides: the file starts on the proxy web server near the submit server; when HTCondor jobs first download it, a copy lands in a proxy web cache near the execute servers; subsequent jobs on other execute servers read the cached copy instead of hitting the web server again]

Downloading HTTP Files
In the HTCondor submit file:
    transfer_input_files = http://host.univ.edu/path/to/shared.tar.gz
Anywhere (in-executable, or as a test download):
    wget http://host.univ.edu/path/to/shared.tar.gz
In-executable: make sure to delete the tarball after un-tarring, or at the end of the job!!! (Otherwise HTCondor thinks it is 'new' output and transfers it back.)
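Inside the executable, the download-unpack-cleanup pattern might look like this (URL from the slide; the unpacked contents depend on your tarball):

    #!/bin/bash
    # download the shared tarball (served through the proxy/cache)
    wget http://host.univ.edu/path/to/shared.tar.gz
    # unpack, then delete the tarball so HTCondor does not
    # transfer it back as 'new' output at job completion
    tar -xzf shared.tar.gz
    rm shared.tar.gz
    # ... job commands using the unpacked files ...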

Web Proxy Considerations
- Managed per-VO
- Memory-limited; max file size: 1 GB
- Local caching at OSG sites is good for shared input files only; perfect for software and common input
- Need to rename changed files!!! (otherwise a stale cached copy may be served)
- Files are downloadable by ANYONE who has the specific HTTP address
- Works at 100% of OSG sites, though not all sites will have a local cache

In the OSG (Exercise 2.1)
- Place files in $HOME/stash/public
- Address: http://stash.osgconnect.net/~user/shared.tar.gz
[diagram: the file on the Stash server is cached by a proxy web cache and delivered by HTCondor from any HTC submit point to execute servers]
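The staging step is an ordinary copy on the OSG Connect login node (tarball name hypothetical; "~user" as in the slide):

    # stage the file where the Stash web server can serve it
    cp shared.tar.gz $HOME/stash/public/
    # then, in the submit file, download it by HTTP address:
    # transfer_input_files = http://stash.osgconnect.net/~user/shared.tar.gz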

Large Input in HTC and OSG
file size: method of delivery
- words: within executable or arguments?
- tiny – 100 MB per file: HTCondor file transfer (up to 1 GB total per job)
- 100 MB – 1 GB, shared: download from web server (local caching)
- 1 GB – 10 GB, unique or shared: StashCache (regional replication)
- 10 GB – TBs: shared file system (local copy, local execute servers)

Using StashCache for Input
A regionally-cached repository managed by OSG Connect

Placing Files in StashCache
Place files in /home/username/stash/public on osgconnect.net
[diagram: the file lives in /home/username/stash/public on the osgconnect.net "Stash" origin; from there it reaches a regional cache and a local server, available to execute servers from any OSG submit point]
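From your own machine, one way to get data into the origin is scp (hypothetical file name and username):

    # copy a large input file into the Stash origin on osgconnect.net
    scp large_input.tar.gz username@login.osgconnect.net:~/stash/public/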

Obtaining Files in StashCache
Use HTCondor file transfer for other (non-StashCache) files
[diagram: HTCondor carries the other files from the submit point, while the StashCache file sits on the "Stash" origin, regional cache, and local server]

Obtaining Files in StashCache
Download using the stashcp command (available as an OASIS software module)
[diagram: stashcp pulls the file from the regional cache/local server onto the execute server, alongside the other files HTCondor transferred from the submit point; origin on login.osgconnect.net]

In the Submit File
Require StashCache sites:
    +WantsStashCache = true
Require sites with OASIS modules (for stashcp):
    Requirements = <OTHER REQUIREMENTS> && (HAS_MODULES =?= true)
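Put together, a minimal submit-file sketch (executable and log names hypothetical; the stashcp download itself happens inside the executable, as on the next slide):

    executable = run_stash_job.sh
    output = job.out
    error = job.err
    log = job.log
    +WantsStashCache = true
    requirements = (HAS_MODULES =?= true)
    queue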

In the Job Executable

    #!/bin/bash
    # setup: make the stashcp command available
    module load stashcache
    # pull the input from StashCache into the working directory
    stashcp /user/username/public/file.tar.gz ./
    # untar, then remove the tarball
    tar -xzf file.tar.gz
    rm file.tar.gz
    # <job commands>
    # remove all files obtained from StashCache before exiting,
    # so HTCondor does not transfer them back as output
    # END

StashCache Considerations
- Available at ~90% of OSG sites
- Regional caches are on very fast networks
- Max file size: 10 GB (shared OR unique data)
- Can copy multiple files totaling >10 GB
- Just like the HTTP proxy: change the file name when you update files

Large Input in HTC and OSG
file size: method of delivery
- words: within executable or arguments?
- tiny – 100 MB per file: HTCondor file transfer (up to 1 GB total per job)
- 100 MB – 1 GB, shared: download from web server (local caching)
- 1 GB – 10 GB, unique or shared: StashCache (regional replication)
- 10 GB – TBs: shared file system (local copy, local execute servers)

Other Options?
- Some distributed projects with LARGE, shared datasets may have project-specific repositories that exist only on certain sites (e.g. CMS, ATLAS, LIGO?, FIFE?, others?). Jobs will require specific sites with local copies and use project-specific access methods.
- OASIS? Best for lots of small files per job (e.g. software); StashCache and web proxies are better for fewer, larger files per job.

Cleaning Up Old Data
For StashCache AND web proxies: make sure to delete data from the origin when you no longer need it!!!
StashCache and VO-managed web proxy servers do NOT have unlimited space! Some may regularly clean old data for you; check with local support.
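On the origin, cleanup is a plain file deletion (hypothetical file name):

    # on the OSG Connect login node: delete data you no longer need
    rm $HOME/stash/public/shared.tar.gz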

Other Considerations
Only use these options if you MUST!! Each comes with limitations on site accessibility and/or job performance, and extra data-management concerns.
file size: method of delivery
- words: within executable or arguments?
- tiny – 10 MB per file: HTCondor file transfer (up to 1 GB total per job)
- 10 MB – 1 GB, shared: download from web server (local caching)
- 1 GB – 10 GB, unique or shared: StashCache (regional replication)
- 10 GB – TBs: shared file system (local copy, local execute servers)

Exercises
- 2.1 Using a web proxy for shared input: place the BLAST database on the web proxy
- 2.2 StashCache for shared input: place the BLAST database in StashCache
- 2.3 StashCache for unique input: convert movie files

Questions?
Next: Exercises 2.1-2.3
Later: Large output and shared file systems