Project 11: Influence of the Number of Processors on the Miss Rate Prepared By: Suhaimi bin Mohd Sukor M031010040.

Project 11
Configure a system with the following architectural characteristics:
- Cache coherence protocol = MESI
- Scheme for bus arbitration = LRU
- Word width (bits) = 16
- Words per block = 32 (block size = 64 bytes)
- Blocks in main memory = 524288 (main memory size = 32 MB)
- Blocks in cache = 256 (cache size = 16 KB)
- Mapping = Set-Associative
- Cache sets = 64 (four-way set-associative caches)
- Replacement policy = LRU
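To make the relationships between these parameters explicit, here is a minimal sketch (my own addition, not part of the original project) that re-derives the block size, cache size, associativity, and main-memory block count from the numbers above:

# Sketch: check the arithmetic of the Project 11 cache configuration.
WORD_WIDTH_BITS = 16      # word width (bits)
WORDS_PER_BLOCK = 32      # words per block
BLOCKS_IN_CACHE = 256     # blocks in each cache
CACHE_SETS = 64           # sets per cache
MAIN_MEMORY_MB = 32       # main memory size (MB)

block_size_bytes = WORDS_PER_BLOCK * WORD_WIDTH_BITS // 8      # 64 bytes
cache_size_kb = BLOCKS_IN_CACHE * block_size_bytes // 1024     # 16 KB
ways = BLOCKS_IN_CACHE // CACHE_SETS                           # 4-way
blocks_in_memory = MAIN_MEMORY_MB * 2**20 // block_size_bytes  # 524288

print(block_size_bytes, cache_size_kb, ways, blocks_in_memory)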

Question: Does the global miss rate increase or decrease as the number of processors increases? Why? Does this increase or decrease happen for all the benchmarks, or does it depend on the different locality grades, shared data, etc.? What happens to the capacity and coherence misses when you enlarge the number of processors? Are there conflict misses in these experiments? Why?
Answer: The global miss rate increases as the number of processors increases, and the results show this increase for all benchmarks. The main reason is coherence: as data is shared among more caches, a write by one processor invalidates the copies held by the other caches under MESI, so coherence misses grow with the processor count. Generally, the greater the number of processors, the higher the miss rate and the network traffic.
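To clarify what "global miss rate" means here, the sketch below (assumed names and layout; SMPCache reports these figures directly in its interface) computes it as total misses over total memory accesses, summed across all processor caches:

# Sketch: global miss rate = total misses / total accesses over all caches.
def global_miss_rate(per_cpu_stats):
    # per_cpu_stats: list of (accesses, misses) pairs, one per processor.
    total_accesses = sum(a for a, _ in per_cpu_stats)
    total_misses = sum(m for _, m in per_cpu_stats)
    return 100.0 * total_misses / total_accesses   # as a percentage

# Illustrative, made-up numbers only (not results from these experiments):
print(global_miss_rate([(10000, 350), (10000, 420)]))   # 3.85 (%)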

Question: Configure the number of processors in the SMP using the following configurations: 1, 2, 4, and 8. For each of the configurations, obtain the global miss rate for the system using the three traces that were generated by the post-mortem scheme: FFT, Simple and Weather. (Note: changed to Comp, Hydro and Nasa7, with your permission.)
Answer: I have attached the results obtained by configuring each processor count in the SMPCache simulator. I chose the Comp, Hydro and Nasa7 traces for my results.

Full Result: summary table of the number of misses and the miss rate (%) for the Comp, Hydro and Nasa7 traces at 1, 2, 4 and 8 processors.

Results using Comp traces

Using Comp traces, No. of Processors: 1

Using Comp traces, No. of Processors: 2

Using Comp traces, No. of Processors: 4

Using Comp traces, No. of Processors: 8

Results using Hydro traces

Using Hydro traces, No. of Processors: 1

Using Hydro traces, No. of Processors: 2

Using Hydro traces, No. of Processors: 4

Using Hydro traces, No. of Processors: 8

Results using Nasa7 traces

Using Nasa7 traces, No. of Processors: 1

Using Nasa7 traces, No. of Processors: 2

Using Nasa7 traces, No. of Processors: 4

Using Nasa7 traces, No. of Processors: 8

Conclusion
Question: In conclusion, does increasing the number of processors improve the multiprocessor system's performance? Why, and in what sense?
Answer: In conclusion, increasing the number of processors can improve the performance of the multiprocessor system. However, it depends on the bus bandwidth: if the multiprocessor system has enough bus bandwidth, performance will increase; if not, the system will slow down.
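As a rough illustration of the bandwidth argument (a back-of-envelope model I am assuming, not data from these experiments), the bus traffic generated by misses grows with the number of processors, and once it approaches the available bus bandwidth, adding processors no longer helps:

# Sketch: approximate bus traffic, assuming every miss transfers one block.
BLOCK_SIZE_BYTES = 64   # from the Project 11 configuration

def bus_traffic_mb_per_s(n_processors, refs_per_s_per_cpu, miss_rate):
    misses_per_s = n_processors * refs_per_s_per_cpu * miss_rate
    return misses_per_s * BLOCK_SIZE_BYTES / 1e6

# Hypothetical figures: 10 million references/s per CPU, 2% miss rate.
for n in (1, 2, 4, 8):
    print(n, "processors ->", bus_traffic_mb_per_s(n, 10e6, 0.02), "MB/s")
# If this traffic exceeds the bus bandwidth, the bus saturates and the
# extra processors no longer translate into higher performance.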