DS - IV - TT - 1 HUMBOLDT-UNIVERSITÄT ZU BERLIN INSTITUT FÜR INFORMATIK DEPENDABLE SYSTEMS Lecture 4 Topological Testing Winter Semester 2000/2001 Lecturer: Prof. Dr. Miroslaw Malek

DS - IV - TT - 2 WHAT IS TOPOLOGICAL TESTING? APPLICATION OF FORMAL GRAPH THEORY METHODS TO TEST SYSTEMS WHOSE ORGANIZATION OR BEHAVIOR CAN BE DESCRIBED BY A GRAPH

DS - IV - TT - 3 TOPOLOGY OF A SYSTEM OBJECTIVE: –Given the topology of a system, minimize the test time TOPOLOGY OF A SYSTEM: –A graph description of a system that reflects either its physical organization or its behavior

DS - IV - TT - 4 DOMAINS OF APPLICABILITY BEHAVIOR: –Testing the Finite State Machine representation of a system, e.g., testing protocol conformance ORGANIZATION: –Use the organization of a system to be tested in an optimal fashion, e.g., use of Hamiltonians and Eulerians HIERARCHY: –Partitioning for parallel testing –System integration

DS - IV - TT - 5 METHODS OF TOPOLOGICAL TESTING (1) HAMILTONIAN: Testing the nodes –Switch Model –Graph Model

DS - IV - TT - 6 METHODS OF TOPOLOGICAL TESTING (2) EULERIAN: Testing the edges

DS - IV - TT - 7 METHODS OF TOPOLOGICAL TESTING (3) TRAVELING SALESMAN PROBLEM (TSP): CHINESE POSTMAN PROBLEM:

DS - IV - TT - 8 METHODS OF TOPOLOGICAL TESTING (4) PARTITIONING: COVERING:

DS - IV - TT - 9 METHODS OF TOPOLOGICAL TESTING (5) PATH COVERING: DOMINATING SET:

DS - IV - TT - 10 METHODS OF TOPOLOGICAL TESTING (6) BROADCAST AND COLLECTION SPANNING TREES: COLORING PROBLEM:

DS - IV - TT - 11 THE RUBIK'S CUBE OF TOPOLOGICAL TESTING [Figure: a cube spanned by three axes: graph concept (Hamiltonian + TSP, Eulerian + CPP, Partitioning + Path Covering), type (behavior, organization, hierarchy), and level (PMS, RTL, logic)]

DS - IV - TT - 12 APPLICATION EXAMPLES TESTING MULTISTAGE INTERCONNECTION NETWORKS (BANYAN) TESTING HYPERCUBES PROTOCOL TESTING USING FINITE STATE MACHINES –Uyar and Dahbura MEMORY TESTING –Hayes –Patel

DS - IV - TT - 13 TESTING OF MULTISTAGE INTERCONNECTION NETWORKS (BANYANS) An (f, L) SW-Banyan is an L-level Multistage Interconnection Network having N (= f^L) inputs and outputs and using f x f switches This type of network has been used in ETL's SIGMA-1, the Butterfly, IBM's RP3, TRAC, PASM and other computers
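As a quick aid for these parameters, here is a minimal Python sketch; the per-stage switch count (L stages of N/f switches each) is the usual bookkeeping for such banyan/delta networks and is an assumption here, not a figure taken from the slides.

```python
# Minimal sketch: size of an (f, L) SW-Banyan.
# Assumption (not stated on the slide): each of the L levels contains
# N/f of the f x f switches, the usual layout for banyan/delta networks.

def banyan_size(f: int, L: int) -> dict:
    N = f ** L                     # number of inputs (= number of outputs)
    switches_per_level = N // f    # each f x f switch serves f lines
    return {
        "inputs": N,
        "levels": L,
        "switches_per_level": switches_per_level,
        "total_switches": L * switches_per_level,
    }

if __name__ == "__main__":
    # Example with f = 8, L = 2, the parameters of the SIGMA-1 network below.
    print(banyan_size(f=8, L=2))
    # {'inputs': 64, 'levels': 2, 'switches_per_level': 8, 'total_switches': 16}
```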

DS - IV - TT - 14 A 2 x 2 SWITCH FAULT MODEL: –Stuck-at and bridge faults on data and control lines –Routing faults –Conflict resolution

DS - IV - TT - 15 PROPERTIES OF BANYANS The system graph has a Hamiltonian and an Eulerian. –This property can be used to implement serial (on-line) fault detection on nodes and edges. There exist f pairwise edge-disjoint test graphs, each with n disjoint paths between pairs of processors. –This property can be used to implement parallel (off-line) fault detection.
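The Eulerian property is what makes the serial edge test possible: an Eulerian circuit visits every link exactly once and can serve as a single closed test walk. Below is a minimal, generic sketch of Hierholzer's algorithm in Python; it illustrates the graph-theoretic idea, not the deck's specific banyan test schedule, and the small example graph is my own.

```python
# Minimal sketch: Hierholzer's algorithm produces a closed walk that uses
# every edge exactly once (an Eulerian circuit), i.e. a serial test
# sequence that exercises every link of the system graph.
from collections import defaultdict

def eulerian_circuit(edges, start):
    adj = defaultdict(list)
    for u, v in edges:               # undirected graph, assumed Eulerian
        adj[u].append(v)
        adj[v].append(u)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()         # follow a not-yet-used edge
            adj[u].remove(v)
            stack.append(u)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]             # every edge appears exactly once

if __name__ == "__main__":
    square = [(0, 1), (1, 2), (2, 3), (3, 0)]   # C_4: all degrees are even
    print(eulerian_circuit(square, 0))          # [0, 3, 2, 1, 0]
```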

DS - IV - TT - 16 PARALLEL TESTING OF DATA PATHS Two tests are sufficient to detect any number of stuck-at (s-a-0/1) faults on the data part of the vertices 2f tests are sufficient to detect any number of multiple stuck-at faults on the data part of the edges Fault location: –Vertices: 2f tests can locate up to f-1 stuck-at faults –Edges: 2f + log L tests

DS - IV - TT - 17 TESTING ROUTING Only f tests are needed to test the routing capabilities of the switches in the entire network Use the f edge-disjoint test graphs (shown for f = 2)

DS - IV - TT - 18 TESTING THE CONTROL AND PRIORITY LOGIC (1) Objective: Test the correct behavior of the switches under any input pattern, especially in case of contention on the outputs To test completely for control faults, every mapping of inputs to outputs has to be tested The number of tests needed for an f x f switch is given on the next slide

DS - IV - TT - 19 TESTING THE CONTROL AND PRIORITY LOGIC (2) 2 x 2 switch: T_2 = 8 4 x 4 switch: T_4 = 624 8 x 8 switch: T_8 = 43,046,721 In the case of round-robin priority, the tests should be repeated for every priority state
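The 2 x 2 and 4 x 4 counts are consistent with letting every input either idle or request any one of the f outputs and discarding the all-idle pattern, i.e. T_f = (f+1)^f - 1; this closed form is my reading of those two values, not a formula quoted from the slides. A brute-force sketch in Python:

```python
# Minimal sketch: count the exhaustive control patterns of an f x f switch,
# assuming each input either idles (None) or requests one of the f outputs,
# and excluding the all-idle pattern.  Closed form (my reading of the slide
# values, not quoted from it): T_f = (f + 1)^f - 1.
from itertools import product

def exhaustive_control_tests(f: int) -> int:
    requests = [None] + list(range(f))               # idle, or output 0..f-1
    patterns = [p for p in product(requests, repeat=f)
                if any(r is not None for r in p)]    # drop the all-idle case
    return len(patterns)

if __name__ == "__main__":
    print(exhaustive_control_tests(2))   # 8   -> matches T_2 above
    print(exhaustive_control_tests(4))   # 624 -> matches T_4 above
```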

DS - IV - TT - 20 TESTING TECHNIQUE Test separately the Arbitration Logic Blocks (ALB), the Routing and Storage Blocks, and the SELECT/DESELECT lines [Figure: design of the control part of a 4 x 4 switch] f·2^(f-1) tests are sufficient to test the conflict-resolution capabilities of a switch in the case of fixed priority and f^2·2^f tests are sufficient in the case of round-robin priority

DS - IV - TT - 21 Finite State Machine [Figure: Finite State Machine of the ALB of a 4 x 4 switch using round-robin priority]

DS - IV - TT - 22 TESTING CONFLICT RESOLUTION IN THE ENTIRE NETWORK i-conflict = i inputs requesting the same output in a switch Two tests are sufficient to produce an i-conflict and an (f-i)-conflict in every switch in the network, independently of the number of levels [Figure: 2-conflicts and 1-conflicts produced by the same test on all even-numbered levels of the network]

DS - IV - TT - 23 ESTIMATION OF TESTING TIME FOR THE SIGMA-1 COMPUTER SIGMA-1 interconnection network: L = 2 (two levels), 10 x 10 switches configured as 8 x 8's, round-robin priority Time assumed for traversal of the network and memory access: t = 120 ns Estimated testing time using a pseudoexhaustive method: ~20 hours Actual testing time: ~22.5 hours Estimated testing time using our method: 26 seconds OVER 3000 TIMES BETTER!

DS - IV - TT - 24 TESTING OF HYPERCUBES (1) PROPERTIES OF HYPERCUBES: Distributed-memory, message-passing multiprocessor N = 2^n processors consecutively numbered by binary integers from 0 through 2^n - 1 Each processor connected to all other processors whose binary tags differ from its own by exactly one bit Degree of each vertex = n Homogeneous
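A minimal sketch of this adjacency rule (node labels as n-bit integers; the function name is my choice): flipping each of the n bits yields the n neighbors, which is exactly the degree-n property.

```python
# Minimal sketch: hypercube adjacency as described above.  Nodes are n-bit
# integers; two nodes are connected iff their labels differ in exactly one
# bit, so every node has exactly n neighbors.

def neighbors(node: int, n: int):
    return [node ^ (1 << i) for i in range(n)]   # flip each of the n bits

if __name__ == "__main__":
    n = 4
    for v in range(2 ** n):
        assert len(set(neighbors(v, n))) == n    # degree of each vertex = n
    print([format(u, "04b") for u in neighbors(0b0000, n)])
    # ['0001', '0010', '0100', '1000']
```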

DS - IV - TT - 25 TESTING OF HYPERCUBES (2) A cube of each dimension is obtained by replicating the cube of the next lower dimension, then connecting corresponding nodes Partitioning into smaller sub-cubes is easy [Figure: an example of a hypercube of dimension 4]

DS - IV - TT - 26 ROUTING ON A HYPERCUBE At each stage, the routing scheme is simply to send the message to the neighbor whose binary tag agrees with the tag of the ultimate destination in the next bit position that differs between the sender and the final destination Alternatively: Source s_n s_(n-1) ... s_2 s_1, Destination d_n d_(n-1) ... d_2 d_1, Bitwise EXOR x_n x_(n-1) ... x_2 x_1, where x_i = s_i ⊕ d_i for i = 1, ..., n Those values of i for which x_i = 1 indicate the dimensions that must be traversed to transfer a message from source to destination
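A minimal Python sketch of this bit-fixing rule; crossing the differing dimensions in least-significant-bit-first order is my choice for the example, since the slide leaves the order open.

```python
# Minimal sketch of the routing rule above: XOR the source and destination
# tags and cross one differing dimension at a time (here: lowest bit first).

def hypercube_route(src: int, dst: int, n: int):
    path = [src]
    x = src ^ dst                        # x_i = s_i XOR d_i
    for i in range(n):                   # bit i corresponds to dimension i+1
        if x & (1 << i):                 # this bit differs: cross that dimension
            path.append(path[-1] ^ (1 << i))
    return path

if __name__ == "__main__":
    # Route from 000 to 101 in a 3-cube: dimensions 1 and 3 must be crossed.
    print([format(v, "03b") for v in hypercube_route(0b000, 0b101, 3)])
    # ['000', '001', '101']
```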

DS - IV - TT - 27 COMMUNICATION No shared memory Message-passing communication system Store-and-forward function at each node, e.g., a receiving processor checks the address of the message and reroutes the message if not intended for it

DS - IV - TT - 28 TESTING TECHNIQUE (1) 1. Partition the cube into node- and edge-disjoint Q_2's (or C_4's) 2. Perform a ring test, i.e., each node sends a packet to the diagonally opposite node, all in the same direction, first clockwise, then anti-clockwise 3. Repeat for all C(n,2) partitions

DS - IV - TT - 29 TESTING TECHNIQUE (2) Number of node- and edge-disjoint Q_2's in each partition = 2^(n-2), where n is the dimension of the cube There are C(n,2) such partitions possible; all partitions are to be tested Time is of the order of O(n^2), i.e., O(log^2 N)

DS - IV - TT - 30 PARTITIONING A 3-CUBE First Phase: [000, 001, 011, 010] and [100, 101, 111, 110] Second Phase: [000, 001, 101, 100] and [010, 011, 111, 110] Third Phase: [000, 010, 110, 100] and [001, 011, 111, 101]
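A small Python sketch that generates these partitions for any n; for n = 3 it reproduces the three phases above. The Gray-code ordering of the four nodes in each Q_2 and all names are my own choices.

```python
# Minimal sketch of the partitioning step: for every pair of dimensions
# (i, j) of an n-cube, split the 2^n nodes into 2^(n-2) node- and
# edge-disjoint 4-cycles (Q_2's) by varying bits i and j in Gray-code order.
from itertools import combinations

def q2_partitions(n: int):
    phases = {}
    for i, j in combinations(range(n), 2):           # the C(n,2) dimension pairs
        cycles = []
        for base in range(2 ** n):
            if base & (1 << i) or base & (1 << j):
                continue                             # one representative per cycle
            gray = [0, 1 << i, (1 << i) | (1 << j), 1 << j]
            cycles.append([base | g for g in gray])  # 4-cycle through dims i, j
        phases[(i, j)] = cycles
    return phases

if __name__ == "__main__":
    for dims, cycles in q2_partitions(3).items():    # the three phases of Q_3
        print(dims, [[format(v, "03b") for v in c] for c in cycles])
```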

DS - IV - TT - 31 WHAT IS TESTED BY THE ABOVE TEST? All nodes All communication links All paths between any pair of processors TESTING CONTENTION: To test contention of two (a 2-conflict), partition Q_n into Q_2's as before. Within each Q_2, only one node can be tested at a time

DS - IV - TT - 32 Testing Contention of 3 on a Q_3 (1) Estimation of Test Time: Time to test the hypercube with a pseudoexhaustive test = N·n·(2^(n-1) - 1)·t = O(N^2 log N)

DS - IV - TT - 33 Testing Contention of 3 on a Q_3 (2) IN FACT, TESTING MAXIMAL CONTENTION CAN BE DEFINED AS GRAPH COLORING OF G^2 FOR Q_n → Q_n^2: 2^⌈log2(n+1)⌉ COLORS ARE SUFFICIENT TO COLOR Q_n^2 FOR M_n → M_n^2: Five colors are necessary to color M_n^2
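One standard way to realize the 2^⌈log2(n+1)⌉ bound (the construction is not spelled out on the slide, so this is an illustration rather than the lecture's own proof) is a Hamming-code-style coloring: attach distinct non-zero k-bit labels to the n dimensions, with k = ⌈log2(n+1)⌉, and color each vertex by the XOR of the labels of its set bits. Vertices at Hamming distance 1 or 2 then always receive different colors. A small sketch that also checks this on the squares of small cubes:

```python
# Minimal sketch: Hamming-code-style coloring of Q_n^2 (the square of the
# n-cube).  Each of the n dimensions gets a distinct non-zero k-bit label,
# k = ceil(log2(n+1)); a vertex is colored by the XOR of the labels of its
# set bits, so at most 2^k colors are used.  This standard construction is
# an illustration of the stated bound, not quoted from the slides.
from math import ceil, log2
from itertools import combinations

def square_coloring(n: int):
    k = ceil(log2(n + 1))
    labels = list(range(1, n + 1))         # n distinct non-zero k-bit labels
    colors = {}
    for x in range(2 ** n):
        c = 0
        for i in range(n):
            if x & (1 << i):
                c ^= labels[i]             # XOR the labels of the set bits
        colors[x] = c
    return colors, 2 ** k

def proper_on_square(colors, n):
    # Proper iff nodes within Hamming distance 2 never share a color.
    return all(colors[x] != colors[y]
               for x, y in combinations(range(2 ** n), 2)
               if bin(x ^ y).count("1") <= 2)

if __name__ == "__main__":
    for n in (2, 3, 4):
        colors, bound = square_coloring(n)
        print(n, len(set(colors.values())), bound, proper_on_square(colors, n))
        # n=2: 4 colors, n=3: 4 colors, n=4: 8 colors; all proper on Q_n^2
```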

DS - IV - TT - 34 EXAMPLES (1) 1.A HYPERCUBE

DS - IV - TT - 35 EXAMPLES (2) 2. A MESH A COMMERCIAL-QUALITY TEST DEVELOPED FOR A MESH NETWORK IN SYMULT, A STATE-OF-THE-ART MULTIPROCESSOR, REQUIRES A CONSTANT NUMBER OF TESTS, AND THE TEST TIME IS LESS THAN 2.5 ms REGARDLESS OF THE SYSTEM SIZE

DS - IV - TT - 36 Time Time to test the hypercube with our tests = [ … · C(n,2) + Σ_(i=3..n) C(n,i) · … ] · t For a hypercube of dimension 10 (Q_10), the time to test for contention decreases from the range of about 7 min - 8 hours 40 min to the range of about 5 sec - 6.5 min, which results in about an 80-fold improvement

DS - IV - TT - 37 EXAMPLES (3)

DS - IV - TT - 38 EXAMPLES (4)

DS - IV - TT - 39 CONCLUSIONS TOPOLOGICAL TESTING EXPLORES THE POWER OF GRAPH THEORY TO TEST THE –BEHAVIOR –ORGANIZATION –HIERARCHY OF COMPUTER SYSTEMS AND NETWORKS EFFICIENT ALGORITHMS FOR TESTING MULTISTAGE NETWORKS AND HYPERCUBES ACHIEVE OVER THREE ORDERS OF MAGNITUDE SPEEDUP WITHOUT COMPROMISING TEST COVERAGE