Bryce Carmichael Lee Godley III Diaminatou Goudiaby Unquiea Wade Mentor: Dr. Eric Akers.

OVERVIEW Abstract Background National Data Buoy Center Condor Gnuplot Results Conclusion Future Work References Acknowledgements

ABSTRACT This project presents an example of a real-world data collection and processing problem. To solve it, cluster research was conducted to construct a model that serves as the basis for a supercomputing pool: allocating data to a central location and examining data from the National Data Buoy Center. The model serves as a standard for solving real-world problems using resources such as Condor, which harnesses a six-hundred-plus-core cluster and two computer laboratories on Elizabeth City State University's campus. To standardize the model, its major components must be accessed through Condor: the allocation of data to a central location, the access of data from that location, the distribution of jobs to both the laboratory and cluster computers, and the automatic installation of Gnuplot onto a computer.

NATIONAL DATA BUOY CENTER
An agency within the National Weather Service (NWS)
Deploys buoys at fixed locations for meteorological and environmental observations
Stations measure sea surface temperature, average wave period, dominant wave period, and wave height
Measurements are submitted through Condor for processing and visualized graphically with Gnuplot for analysis

CONDOR
Free, open-source software system
Developed at the University of Wisconsin-Madison
Harnesses the combined power of a cluster of networked computers
Able to perform distributed data collection and processing

GNUPLOT
A widely used, open-source program for plotting and visualizing data, driven from the command line
Used here to graph the statistical buoy data from the NDBC
Capable of producing high-quality, publication-ready graphics and handling large data sets
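The plot script itself is not preserved in the transcript; a minimal Gnuplot script of the kind used for buoy data might look like the following (the file name buoy_data.txt and its column layout are assumptions for illustration, not taken from the project):

```gnuplot
# Plot NDBC buoy measurements; assumes a whitespace-separated data file
# with column 1 = observation index and column 2 = wave height (m).
set terminal png size 800,600
set output 'wave_height.png'
set title "Significant Wave Height"
set xlabel "Observation"
set ylabel "Wave height (m)"
plot 'buoy_data.txt' using 1:2 with lines title "wave height"
```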

Initial Java Program

New C++ Program

Submission File
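The submission file is likewise shown only as a screenshot. A typical Condor submit description for a job like this one might look as follows; the executable name matches the plot_results.exe visible in the results slides, while the universe and the output, error, and log file names are assumptions:

```
universe   = vanilla
executable = plot_results.exe
output     = plot_results.out
error      = plot_results.err
log        = plot_results.log
queue 3
```

Submitting this file with condor_submit and checking the queue with condor_q would produce output of the kind shown on the following results slides.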

RESULTS

-- Submitter: cerser-imac-1 : : cerser-imac-1
 ID    OWNER           SUBMITTED  RUN_TIME  ST  PRI  SIZE  CMD
 25.0  PolarGrid Admi  4/8 15:    :00:15    R   plot_results.exe
 26.0  PolarGrid Admi  4/8 15:    :00:20    R   plot_results.exe
 27.0  PolarGrid Admi  4/8 15:    :00:00    I   plot_results.exe
3 jobs; 1 idle, 2 running, 0 held

RESULTS
$ /cygdrive/c/condor/bin/condor_q
-- Submitter: cerser-imac-1 : : cerser-imac-1
 ID    OWNER           SUBMITTED  RUN_TIME  ST  PRI  SIZE  CMD
0 jobs; 0 idle, 0 running, 0 held

CONCLUSION The results of this research project show that our real-world problem was solved successfully. Once the program was written efficiently, the jobs were correctly submitted to Condor, processed, and plotted with Gnuplot. This work opens the door to larger data submissions and more efficient plotting.

FUTURE WORK
Find and resolve the incompatibility issues between the Java universe and Condor on the Mac OS X computers
Ensure that every submitted C++ program runs successfully within seconds
Make Condor more efficient on our workstations
Research software packages to determine whether any are better suited than Gnuplot for graphing the data

REFERENCES
Z. Juhasz, Distributed and Parallel Systems: Cluster and Grid Computing (The Springer International Series in Engineering and Computer Science). New York, NY: Springer, 2005, pp. 34–57.
H. Bauke, High Performance Cluster Computing: Architectures and Systems, Vol. 1. Upper Saddle River, NJ: Prentice Hall PTR, 1999, pp. 320–339.
R. Buyya, Cluster Computing. Hauppauge, NY: Nova Science Publishers, 2001, pp. 23–45.
G. Pfister, In Search of Clusters, 2nd ed. Upper Saddle River, NJ: Prentice Hall PTR, 1997, pp. 67–89.
"Condor Project Homepage", Retrieved from the World Wide Web
"Building a Linux Cluster on a Budget", Retrieved from the World Wide Web
"GNU Plot Homepage", Retrieved from the World Wide Web

QUESTIONS ?