IMPROVEMENT OF COMPUTER NETWORKS SECURITY BY USING FAULT TOLERANT CLUSTERS
Prof. SERB AUREL, Ph.D., Prof. PATRICIU VICTOR-VALERIU, Ph.D.
Military Technical Academy, Bucharest, Romania

FAULT TOLERANT SYSTEMS
- A fault tolerant system is one that can continue to operate reliably by producing acceptable outputs in spite of occasional component failures.
- The basic principle of fault tolerant design is the use of redundancy (a minimal illustration follows below).
- A fault tolerant system can be viewed as a nested set of subsystems.
- Fault tolerant architectures package redundant partitions into replaceable units.
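The voting idea behind redundancy can be made concrete with a small, self-contained sketch. The Python example below is illustrative only (the replica functions and names are invented, not taken from the presentation): three replicas compute the same result and a majority vote masks a single faulty component.

```python
# A minimal sketch of redundancy through majority voting (triple modular
# redundancy). The replica functions are invented for illustration.
from collections import Counter

def vote(results):
    """Return the majority value among replica outputs, masking one faulty replica."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority: more than one replica has failed")
    return value

def replica_ok(x):
    return x * x          # correct computation

def replica_faulty(x):
    return x * x + 1      # simulated component failure

if __name__ == "__main__":
    outputs = [replica_ok(7), replica_faulty(7), replica_ok(7)]
    print(vote(outputs))  # 49 -- the faulty replica is outvoted
```

The same principle scales up: in a cluster the redundant "replicas" are whole nodes, disks and network paths rather than function calls.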

CLUSTERS AND FAULT TOLERANT CLUSTERS
- A cluster is a set of computers, connected over a local network, that functions as a single large multicomputer. The cluster software is a layer that runs on top of the local operating system running on each computer.
- A fault tolerant cluster is a cluster with external storage devices connected to the nodes on a common input/output bus. Clients connect over the networks to a server application that is executing on the nodes.

SINGLE POINTS OF FAILURE OF A CLUSTER
- nodes in the cluster;
- disks used to store applications or data, and the adapters, controllers and cables used to connect the nodes to the disks;
- the network backbones over which users access the cluster nodes, and the network adapters attached to each node;
- power sources;
- applications.

A SAMPLE CONFIGURATION FOR A FAULT TOLERANT CLUSTER

ELIMINATING NODES AS SINGLE POINTS OF FAILURE
- When a node providing critical services in a cluster fails, another node in the cluster takes over its resources and provides the same services to the end user, in a process known as failover (sketched below).
- After the failover, clients can access the second node as easily as the first.
- The failover process is handled by special high availability software running on top of the cluster operating system.
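As a rough illustration of how failover is driven, the hedged Python sketch below declares a node failed when its heartbeats stop arriving within a timeout, and moves its resources to a surviving node. Node names, the timeout and the resource list are assumptions; real high availability products are considerably more involved.

```python
# Hedged sketch of heartbeat-based failover; names and timeout values are invented.
import time

HEARTBEAT_TIMEOUT = 1.0   # seconds without a heartbeat before a node is declared failed

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.resources = []               # shared disks, service IPs, applications

    def beat(self):
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now):
        return now - self.last_heartbeat < HEARTBEAT_TIMEOUT

def failover(cluster):
    """Move the resources of failed nodes to the first surviving node."""
    now = time.monotonic()
    survivors = [n for n in cluster if n.is_alive(now)]
    for node in cluster:
        if not node.is_alive(now) and node.resources and survivors:
            takeover = survivors[0]
            print(f"{node.name} failed; {takeover.name} takes over {node.resources}")
            takeover.resources.extend(node.resources)
            node.resources.clear()

if __name__ == "__main__":
    a, b = Node("node-a"), Node("node-b")
    a.resources = ["shared-disk-1", "service-ip 10.0.0.10", "database-server"]
    time.sleep(HEARTBEAT_TIMEOUT + 0.1)   # node-a stops sending heartbeats
    b.beat()                              # node-b is still alive
    failover([a, b])                      # node-b inherits node-a's resources
```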

ELIMINATING DISKS AS SINGLE POINTS OF FAILURE
- Disks are physically connected to all nodes, so that applications and data are also accessible by another node in the event of failover.
- There are two methods available for providing disk redundancy:
  – using disk arrays in a RAID configuration;
  – using software mirroring (see the sketch below).
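Software mirroring can be sketched in a few lines: every write goes to two "disks" and a read is served from whichever copy is still readable. The two plain files that stand in for disks below are purely illustrative.

```python
# Minimal sketch of software mirroring (RAID-1 style) over two stand-in "disks".
import os

class MirroredDisk:
    def __init__(self, member_a, member_b):
        self.members = [member_a, member_b]

    def write(self, data: bytes):
        # Write the same data to both members of the mirror.
        for path in self.members:
            with open(path, "wb") as f:
                f.write(data)

    def read(self) -> bytes:
        # Serve the read from the first member that is still readable.
        for path in self.members:
            try:
                with open(path, "rb") as f:
                    return f.read()
            except OSError:
                continue   # this member has "failed"; fall back to its mirror
        raise RuntimeError("Both mirror members have failed")

if __name__ == "__main__":
    disk = MirroredDisk("member0.img", "member1.img")
    disk.write(b"critical application data")
    os.remove("member0.img")   # simulate the loss of one disk
    print(disk.read())         # the data survives on the mirror copy
```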

ELIMINATING NETWORKS AS SINGLE POINTS OF FAILURE
- To eliminate network failures, fully redundant LAN connections can be provided and local switching of LAN interfaces can be configured.
- To eliminate cable failures, redundant cabling and redundant LAN interface cards can be configured on each node.
- To eliminate the loss of client connectivity, redundant routers, hubs or switches can be configured, through which clients can access the services of the cluster (a client-side fallback sketch follows below).
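On the client side, redundant paths only help if clients actually fall back to them. The sketch below (addresses and port are invented for illustration) tries the primary address of a cluster service and, if that path is down, retries over a secondary address reached through the redundant LAN.

```python
# Hedged sketch of client-side fallback between redundant network paths.
import socket

SERVICE_ADDRESSES = [
    ("10.0.1.10", 8080),   # primary LAN / primary router (illustrative address)
    ("10.0.2.10", 8080),   # redundant LAN / redundant router (illustrative address)
]

def connect_with_fallback(addresses, timeout=2.0):
    """Return a connected socket using the first reachable address."""
    last_error = None
    for host, port in addresses:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err
            print(f"Path {host}:{port} unavailable ({err}); trying the next path")
    raise ConnectionError("All redundant network paths have failed") from last_error

if __name__ == "__main__":
    try:
        sock = connect_with_fallback(SERVICE_ADDRESSES)
        print("Connected via", sock.getpeername())
        sock.close()
    except ConnectionError as err:
        print(err)
```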

ELIMINATING POWER SOURCES AS SINGLE POINTS OF FAILURE
- The use of multiple power circuits with different circuit breakers reduces the likelihood of a complete power outage.
- An uninterruptible power supply provides standby power in the event of an interruption to the power source.
- Small local uninterruptible power supplies can be used to protect individual system processor units and data disks.

ELIMINATING APPLICATIONS AND DATA AS SINGLE POINTS OF FAILURE
- The cluster management software provides services such as failure detection, recovery, load balancing, and the ability to manage the servers as a single system.
- If a node fails, the cluster reconfigures itself, and the applications that were running on the failed node, together with the data they use, are made available on another node.
- Another approach is to run multiple instances of the same application on multiple nodes (see the dispatching sketch below).
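The multiple-instance approach can be sketched as a dispatcher that rotates requests over the instances and skips any that fail a health check. Instance names and the health flag below are assumptions for illustration.

```python
# Hedged sketch of dispatching requests across redundant application instances.
from itertools import cycle

class Instance:
    def __init__(self, node):
        self.node = node
        self.healthy = True            # would be set by an external health check in practice

    def handle(self, request):
        return f"{self.node} served {request!r}"

class Dispatcher:
    def __init__(self, instances):
        self.instances = instances
        self._rotation = cycle(instances)

    def dispatch(self, request):
        # Try at most one full rotation before reporting total failure.
        for _ in range(len(self.instances)):
            instance = next(self._rotation)
            if instance.healthy:
                return instance.handle(request)
        raise RuntimeError("No healthy instance of the application is available")

if __name__ == "__main__":
    pool = [Instance("node-a"), Instance("node-b"), Instance("node-c")]
    pool[0].healthy = False            # the instance on node-a has failed
    dispatcher = Dispatcher(pool)
    for i in range(4):
        print(dispatcher.dispatch(f"request-{i}"))   # served by node-b and node-c only
```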

INTEROPERABILITY BETWEEN M&S AND C4ISR SYSTEMS
- A key task for the M&S community is to link M&S systems with live, real C4ISR systems.
- Within the C4ISR community there is a similarly pressing need to link C4ISR equipment with simulations.

COMMON KEY CONCEPTS IN M&S SYSTEMS, C4ISR SYSTEMS, AND FAULT TOLERANT CLUSTERS
- open and distributed systems;
- networks;
- high level operating systems;
- segments, federates (federations) and packages;
- hierarchical architecture;
- commercial standards, specifications, and products;
- interoperability and reusability;
- high availability systems.

OPEN AND DISTRIBUTED SYSTEMS
- All modern systems used for modeling and simulation and for C4ISR are open and distributed systems.
- The architecture of all modern fault tolerant systems is that of a cluster, which is itself one of the best examples of an open and distributed system.

NETWORKS
- A fault tolerant cluster is a set of independent computers connected over a network, always with external storage devices (containing applications and data) connected to the nodes on a common input/output bus. Clients connect over the networks to a server application that is executing on the nodes.
- The basic High Level Architecture protocol establishes that the communication path between any federates is over the network.

HIGH LEVEL OPERATING SYSTEMS
- In a fault tolerant system, the cluster software is a layer that runs on top of the local operating system running on each computer.
- The high availability applications in the fault tolerant cluster run on top of the cluster software.
- In the High Level Architecture, the Runtime Infrastructure is a high level distributed operating system for the federation.

SEGMENTS, FEDERATES (FEDERATIONS) AND PACKAGES
- The basic components of the High Level Architecture are the simulations themselves or, more generally, the federates.
- In DII-COE-based systems, all software and data are packaged in self-contained units called segments.
- Using the high-level cluster software, application services and all the resources needed to support the application can be grouped into special entities called application packages (illustrated below).
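As a hedged illustration of what an application package bundles together, the sketch below groups a service with its relocatable IP address, shared volumes, start and monitor commands, and the nodes allowed to host it. All field names and values are assumptions, not an actual vendor package format.

```python
# Illustrative (assumed) structure for an "application package" that can fail
# over between nodes as a single unit; not a real cluster product's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApplicationPackage:
    name: str
    service_ip: str                                     # relocatable IP clients connect to
    volumes: List[str] = field(default_factory=list)    # shared disks to activate on the node
    start_command: str = ""                             # how to start the application
    monitor_command: str = ""                           # how to verify it is still healthy
    allowed_nodes: List[str] = field(default_factory=list)

if __name__ == "__main__":
    db_package = ApplicationPackage(
        name="orders-db",
        service_ip="10.0.0.50",
        volumes=["/dev/vg_orders/lv_data"],
        start_command="pg_ctl start -D /data/orders",
        monitor_command="pg_isready -h 10.0.0.50",
        allowed_nodes=["node-a", "node-b"],
    )
    print(db_package)
```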

HIERARCHICAL ARCHITECTURE
- All fault tolerant clusters are partitioned at several levels and, in addition, contain redundant components and recovery mechanisms that may be employed in different ways at different levels.
- Simulations that use the HLA are modular in nature, allowing federates to join and resign from the federation as the simulation executes.
- At the top of any fault tolerant cluster, command and control system, or High Level Architecture compliant system there is a distributed operating system that runs on top of the local operating systems running on each computer, or on top of the federates and federations.

COMMERCIAL STANDARDS, SPECIFICATIONS, AND PRODUCTS
- The commercial marketplace generally moves at a faster pace than the military marketplace.
- Using already built items lowers production costs.
- The probability of product enhancements is increased because the marketplace is larger.
- The probability of standardization is increased because a larger customer base drives it.

INTEROPERABILITY AND REUSABILITY
- The High Level Architecture can be seen as a “software bus” that allows applications and data to communicate with one another, regardless of who designed them, the platform they run on, or the language they are written in.
- The fault tolerant cluster can offer a good architecture for High Level Architecture federations and for applications running on C4ISR systems.

HIGH AVAILABILITY SYSTEMS
- The military systems used in M&S and command and control must not succumb to faults; they must continue to operate reliably in spite of occasional component failures.
- High availability and security must be designed into the architecture.
- Fault tolerance is the best guarantee that the system will be highly available and that essential services will be offered in real time to the users of M&S and C4ISR systems.