www.mimos.my © 2010 MIMOS Berhad. All Rights Reserved. Nazarudin Wijee, Mohd Sidek Salleh, Grid Computing Lab, MIMOS Berhad. Key Size Analysis of Brute Force Attack.


Key Size Analysis of Brute Force Attack for CyberSecurity Malaysia in P-GRADE Portal. Nazarudin Wijee, Mohd Sidek Salleh, Grid Computing Lab, MIMOS Berhad.

About Key Size Analysis of Brute Force Attack and CyberSecurity Malaysia

Key Size Analysis of Brute Force Attack is a fast password cracker, currently available for many flavors of Unix, Windows, DOS, BeOS, and OpenVMS. Its primary purpose is to detect weak Unix passwords. CyberSecurity Malaysia began its operation as the Malaysian Computer Emergency Response Team (MyCERT). In 1998, it became the National ICT Security & Emergency Response Centre (NISER), and by 2007 NISER underwent another transformation and was renamed CyberSecurity Malaysia. With a new mandate, CyberSecurity Malaysia is positioned as the national cyber security specialist under the Ministry of Science, Technology and Innovation (MOSTI).
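The brute-force idea the cracker is built on can be sketched in a few lines of shell: enumerate every candidate in a search space and compare each candidate's digest against the target. This is an illustrative toy, not the tool's actual code; the search space is deliberately tiny, and POSIX `cksum` stands in for a real password hash such as FreeBSD MD5 crypt so the sketch runs anywhere.

```shell
#!/bin/sh
# Toy brute-force search: pretend "ab" is the unknown password and recover
# it by trying every 2-character candidate over the alphabet {a, b, c}.
# cksum is a stand-in for a real one-way password hash.
TARGET=$(printf 'ab' | cksum)
FOUND=""
for c1 in a b c; do
  for c2 in a b c; do
    CAND="$c1$c2"
    if [ "$(printf '%s' "$CAND" | cksum)" = "$TARGET" ]; then
      FOUND=$CAND
      break 2                 # stop both loops on the first match
    fi
  done
done
echo "recovered password: $FOUND"
```

The cost of this loop grows exponentially with password length and alphabet size, which is exactly why key (and password) size is the quantity being analyzed.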

Key Size Analysis of Brute Force Attack Benchmark in MIMOS Cluster

Processing speed doubled each time the number of processors was doubled. Using all 128 CPU cores at a 2 GHz clock speed, the cluster achieved a speed of about 63,642 tries per second against FreeBSD-hashed passwords.
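A back-of-the-envelope calculation shows what that benchmark rate means for key-size analysis. This sketch assumes the 63,642 tries/second figure above applies uniformly across the key space, and uses a 56-bit key purely as an illustrative size (the slides do not state which key sizes were analyzed):

```shell
#!/bin/sh
# Rough worst-case time to exhaust a 56-bit key space at the benchmarked rate.
RATE=63642                        # candidate tries per second (from benchmark)
KEYBITS=56                        # illustrative key size, not from the slides
KEYSPACE=$(( 1 << KEYBITS ))      # 2^56 candidates
SECONDS_NEEDED=$(( KEYSPACE / RATE ))
YEARS=$(( SECONDS_NEEDED / 31536000 ))   # 365-day years
echo "Worst-case exhaustive search: $SECONDS_NEEDED seconds (~$YEARS years)"
```

Every extra key bit doubles the search time, mirroring how doubling the processor count halves it: adding one bit to the key cancels out a doubling of the cluster.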

Key Size Analysis of Brute Force Attack Workflow

#!/bin/sh
#PBS -N csm.32cpus
#PBS -l select=4:ncpus=8
export PROJECT_DIR=project_directory
export EXEC_DIR=executable_directory
echo "Project Directory = $PROJECT_DIR"
echo "Executable Directory = $EXEC_DIR"
echo
echo "List of files in the project directory $PROJECT_DIR..."
ls -l $PROJECT_DIR/
echo
echo "List of files in the executable directory $EXEC_DIR/run..."
ls -l $EXEC_DIR/run/
echo
mpirun -np 32 -machinefile $PBS_NODEFILE $EXEC_DIR/run/csm $PROJECT_DIR/mypasswd

What will the Submit job do?
1. It submits (qsub) the PBS script to the local resource manager (PBS Pro).
2. It saves the PBS Job ID of the submission in a file and passes it to the Monitor job.
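Step 2 of the Submit job can be sketched as follows. With PBS Pro, `qsub` prints the new job's ID on stdout, so capturing it is a one-line command substitution. The actual `qsub` call is commented out here and replaced by a hypothetical sample string (`1234.pbsserver`, a made-up server name) so the sketch is self-contained; the file name `pbs_jobid.txt` is likewise an illustrative stand-in, not taken from the slides.

```shell
#!/bin/sh
# Capture the PBS job ID printed by qsub and save it for the Monitor job.
# PBS_JOBID=$(qsub pbs-script)          # real submission on the cluster
PBS_JOBID="1234.pbsserver"              # hypothetical sample qsub output
echo "$PBS_JOBID" > pbs_jobid.txt       # handed to the Monitor job
echo "Submitted as $PBS_JOBID"
```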

Key Size Analysis of Brute Force Attack Workflow

# begin monitoring
FINISH_STATUS="0"
until [[ $FINISH_STATUS -eq "1" ]]
do
  WC=`ssh $PBS_SERVER "tracejob -n 30 $PBS_JOBID | grep 'dequeuing from' | wc -l"`
  if [[ $WC -eq 1 ]]; then
    FINISH_STATUS="1"
  else
    FINISH_STATUS="0"
  fi
done
echo $WC > tracejob.out
echo "Job $PBS_JOBID has finished..."

What will the Monitor job do?
1. It receives the PBS Job ID from the Submit job.
2. During runtime, it goes to the cluster head node and runs PBS Pro's tracejob to check whether the given PBS Job ID has finished.
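One caveat with the loop above: it polls as fast as it can, re-running `ssh` and `tracejob` continuously and loading the head node. A gentler variant sleeps between polls. In this runnable sketch, `job_finished` is a stand-in that reports success on its third call; in the real monitor it would be the `ssh`/`tracejob` pipeline from the slide.

```shell
#!/bin/sh
# Polling loop with a delay between checks. job_finished mocks the real
# tracejob check (here it "finishes" on the third poll) so the sketch runs
# anywhere without a PBS server.
POLLS=0
job_finished() {
  POLLS=$(( POLLS + 1 ))
  [ "$POLLS" -ge 3 ]          # real version: ssh $PBS_SERVER "tracejob ..."
}
FINISH_STATUS=0
until [ "$FINISH_STATUS" -eq 1 ]; do
  if job_finished; then
    FINISH_STATUS=1
  else
    sleep 1                   # a real monitor would sleep 30-60 seconds
  fi
done
echo "Job finished after $POLLS polls"
```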

Key Size Analysis of Brute Force Attack Workflow

Collect:
. executor.info
zip -r $COLLECT_JOB_DIR/project-directory.zip $PROJECT_DIR
echo "Done" > collect.status

CleanUp:
. executor.info
echo "Removing project directory..."
rm -rf $PROJECT_DIR
echo "Removing csm output pot and log files..."
$EXEC_DIR/run/cleanup.sh

What will the Collect and CleanUp jobs do?
1. Collect compresses all the output files into one zip file.
2. CleanUp deletes the job execution directory, which contains the output files, and also deletes all the log files.
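The Collect-then-CleanUp sequence can be exercised end to end with throwaway directories. This is a self-contained sketch, not the portal's actual scripts: all paths and file names are hypothetical stand-ins created on the fly, and `tar` is used in place of the slide's `zip` only because it is more widely preinstalled.

```shell
#!/bin/sh
# Self-contained Collect + CleanUp walkthrough with temporary directories.
PROJECT_DIR=$(mktemp -d)
COLLECT_JOB_DIR=$(mktemp -d)
echo "cracked: user1" > "$PROJECT_DIR/csm.pot"   # pretend cracker output file

# Collect: bundle every output file into one archive, then record status.
tar -czf "$COLLECT_JOB_DIR/project-directory.tar.gz" -C "$PROJECT_DIR" .
echo "Done" > "$COLLECT_JOB_DIR/collect.status"

# CleanUp: remove the execution directory once outputs are safely archived.
rm -rf "$PROJECT_DIR"
```

Running CleanUp only after Collect has written its status file is what makes the sequence safe: the outputs are never deleted before they are archived.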

THANK YOU