1
Operating System Issues in Multi-Processor Systems
John Sung, Hardware Engineer, Compaq Computer Corporation
www.compaq.com
2
Outline
Multi-Processor Hardware Issues
Snoopy Bus System Architecture
AMD Athlon’s Snoopy Protocol
ccNUMA System Architecture
AMD Athlon’s LDT System Bus
SGI Origin’s ccNUMA System Architecture
Alpha 21364 System Architecture
ccNUMA and CPU Scheduling
Conclusion
3
Multi-Processor Hardware Issues
Bandwidth/Latency
  Processor to Processor
  Processor to Memory
  Processor to I/O
Scalability
  Increase performance as you increase CPUs/memory
Coherency/Synchronization
  Give software a coherent view of memory
  Provide synchronization primitives (sketch below)
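The last point is where hardware and OS meet: the coherency protocol has to make read-modify-write operations appear atomic across all processors so the OS can build locks on top of them. Below is a minimal, illustrative sketch of a spinlock built on a C11 atomic test-and-set; it is not from the original slides, and the names are arbitrary.

  /* Minimal sketch of a spinlock built on an atomic exchange (C11 atomics).
   * Illustrative only; initialize the flag with ATOMIC_FLAG_INIT. */
  #include <stdatomic.h>

  typedef struct {
      atomic_flag locked;
  } spinlock_t;

  static void spin_lock(spinlock_t *l)
  {
      /* atomic_flag_test_and_set is the kind of read-modify-write operation
       * the coherency protocol must make appear atomic across all CPUs. */
      while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
          ;   /* spin; under contention the cache line ping-pongs between CPUs */
  }

  static void spin_unlock(spinlock_t *l)
  {
      atomic_flag_clear_explicit(&l->locked, memory_order_release);
  }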
4
Snoopy Bus System Architecture
5
A bus connects processors, memory, and I/O
  Scales up to ~16 processors
  Limited by bus bandwidth
Cache coherency protocol
  Snoops the bus for memory traffic
  Each processor has to “listen” for addresses in its cache
  Does the “right thing” to give software a coherent view of memory (snoop sketch below)
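As a rough illustration of what “doing the right thing” means, here is a sketch of how a cache controller might respond to a snooped bus transaction, assuming a generic MESI-style protocol (the AMD Athlon actually uses MOESI, which adds an Owned state). The types and function are hypothetical, not from the slides.

  /* Illustrative MESI-style snoop response for one cache line. */
  typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } line_state_t;
  typedef enum { BUS_READ, BUS_READ_EXCLUSIVE } bus_op_t;

  /* Called for every address observed ("snooped") on the shared bus. */
  line_state_t snoop(line_state_t current, bus_op_t op, int *must_write_back)
  {
      *must_write_back = (current == MODIFIED);  /* supply the dirty data */

      switch (op) {
      case BUS_READ:            /* another CPU wants a shared copy */
          return (current == INVALID) ? INVALID : SHARED;
      case BUS_READ_EXCLUSIVE:  /* another CPU wants to write      */
          return INVALID;
      }
      return current;
  }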
6
Snoopy Bus System Architecture (diagram): several CPU core + cache blocks and memory/I/O blocks, all attached to one shared bus.
7
ccNUMA System Architecture
8
Cache-Coherent Non-Uniform Memory Access
Memory is distributed and attached to processors
A network connects the processor/memory sets
Each processor owns part of the memory space
Cache coherency protocol
  Gives software a coherent view of memory
  Protocol primitives for synchronization
  Directory to keep track of who has a copy of memory (sketch below)
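To make the directory idea concrete, here is a sketch of what a per-cache-line directory entry might hold: a state plus a bit vector of sharing nodes, which is enough to drive invalidations on a write. The field and function names are illustrative assumptions, not taken from any particular machine.

  /* Sketch of a per-cache-line directory entry, assuming up to 64 nodes. */
  #include <stdint.h>

  typedef enum { DIR_UNOWNED, DIR_SHARED, DIR_EXCLUSIVE } dir_state_t;

  typedef struct {
      dir_state_t state;     /* who may hold the line, and how           */
      uint64_t    sharers;   /* bit i set => node i has a cached copy    */
      uint8_t     owner;     /* valid when state == DIR_EXCLUSIVE        */
  } dir_entry_t;

  /* On a write request, invalidate every sharer before granting ownership. */
  static void invalidate_sharers(dir_entry_t *e,
                                 void (*send_invalidate)(int node))
  {
      for (int node = 0; node < 64; node++)
          if (e->sharers & (1ULL << node))
              send_invalidate(node);
      e->sharers = 0;
  }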
9
ccNUMA System Architecture (diagram): each node pairs a CPU core + cache with local memory, a directory, and I/O; a network router on each node connects it to the network fabric.
10
SGI Origin System Architecture
11
SGI CrayLink™
Node = 2 CPUs and their caches
Module = Memory + Directory + HUB
2 Modules per Router
System = Modules + Routers + CrayLink™ Network
12
SGI CrayLink™
13
Processor System Network
14
Bisection Bandwidth
15
ccNUMA and CPU Scheduling Issues
16
OS’s Questions
Single CPU System
  What to schedule next?
ccNUMA System
  What to schedule next?
  Which CPU to schedule it on?
  Where should the process information be located?
  One or many instances of the OS?
17
OS’s Choices for a Process
Single CPU System
  Process has 1 choice
  Process information has 1 choice
ccNUMA System with N CPUs and M memories
  Process has N choices
  Process information has M choices per virtual page
  “Distance” between a process and its information (sketch below)
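One way to act on that “distance” is to weight each candidate CPU by how far it sits from the pages the process already uses. The sketch below assumes hypothetical cpu_to_node and node_distance tables with placeholder values; it is a toy scoring loop, not a real scheduler.

  #define NCPUS  4
  #define NNODES 2

  /* Which node each CPU sits on, and hop counts between nodes (placeholders). */
  static const int cpu_to_node[NCPUS] = { 0, 0, 1, 1 };
  static const int node_distance[NNODES][NNODES] = { { 1, 3 }, { 3, 1 } };

  struct process {
      long pages_on_node[NNODES];   /* where this process's pages live */
  };

  /* Pick the CPU that minimizes total "distance" to the process's pages. */
  static int pick_cpu(const struct process *p)
  {
      int best_cpu = 0;
      long best_cost = -1;

      for (int cpu = 0; cpu < NCPUS; cpu++) {
          long cost = 0;
          for (int node = 0; node < NNODES; node++)
              cost += p->pages_on_node[node] *
                      node_distance[cpu_to_node[cpu]][node];
          if (best_cost < 0 || cost < best_cost) {
              best_cost = cost;
              best_cpu  = cpu;
          }
      }
      return best_cpu;   /* one of the N CPU choices, weighted by memory distance */
  }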
18
Context Switch Penalty
Single CPU System
  Saving/restoring process state (PCB)
  Scheduling routine
ccNUMA System
  Saving/restoring process state (PCB)
  Scheduling routine
  Moving the process’s information (cost sketch below)
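A back-of-the-envelope model of that penalty, mirroring the slide’s breakdown: the fixed PCB and scheduler costs, plus a migration term that only appears on ccNUMA when the process’s pages move with it. The structure and field names are placeholders.

  /* Rough cost model for one context switch; constants are made-up placeholders. */
  struct switch_cost {
      long pcb_save_restore;   /* registers, FP state, ...           */
      long scheduler_run;      /* picking the next process           */
      long pages_moved;        /* ccNUMA only: process info migrated */
      long per_page_copy;      /* remote-to-local copy cost per page */
  };

  long context_switch_cost(const struct switch_cost *c, int migrated)
  {
      long cost = c->pcb_save_restore + c->scheduler_run;
      if (migrated)                     /* extra term only on ccNUMA */
          cost += c->pages_moved * c->per_page_copy;
      return cost;
  }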
19
Some Common Sense
Replicate parts of the OS across processors
  System calls will happen often
Minimize process movement
  Cost of moving a process to another CPU is high
  Less than swapping to disk, most of the time
  Higher than a simple context switch
But if you have to move a process
  Minimize the amount of information to move
  Opportunity for a cache? (sketch below)
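One reading of the “opportunity for a cache” is lazy migration: after moving a process, leave its pages on the old node and pull each one over only when it is actually touched, so read-only pages (such as replicated OS text) may never move at all. The sketch below is a hypothetical fault handler along those lines, not anything described in the slides.

  /* Sketch: copy a page to the new node only on first touch after migration. */
  enum page_home { LOCAL, REMOTE };

  struct page_info {
      enum page_home home;
      int  dirty;
  };

  /* Called on the first access fault after the process has moved. */
  void on_remote_fault(struct page_info *pg,
                       void (*copy_page_here)(struct page_info *))
  {
      if (pg->home == REMOTE) {
          copy_page_here(pg);     /* move only what is actually touched */
          pg->home = LOCAL;
      }
      /* Read-only pages (e.g., replicated OS text) could instead be
       * cached locally and never migrated at all. */
  }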
20
Conclusion
Hardware
  Bandwidth and latency for performance
  Cache coherency for correctness
Operating System
  ccNUMA adds complexity to CPU scheduling
  Higher HW performance = lower context switch penalty => more flexibility in scheduling choices for a process
21
References
Alpha
  http://www.digital.com/alphaoem/present/ev7forum98.ppt
  http://www.compaq.com/InnovateForum99/presentation/session31/
  http://www.digital.com/alphaoem/
AMD
  http://www.amd.com/products/cpg/mpf/speech/slides99.ppt
SGI
  http://www-europe.sgi.com/origin/numa_tech.html
Benchmarks
  http://www.spec.org/
  http://www.tpc.org/
22
Abbreviation Index
AMD - Advanced Micro Devices
SGI - Silicon Graphics Inc.
ECC - Error Correction Code
SECDED - Single Error Correct, Double Error Detect
API - Alpha Processor Inc.
AGP - Accelerated Graphics Port
DDR DRAM - Double Data Rate Dynamic RAM
LDT - Lightning Data Transport
PCI - Peripheral Component Interconnect
CMOS - Complementary Metal Oxide Semiconductor
CAS - Column Address Strobe
TPC-C - Transaction Processing Performance Council Benchmark
ccNUMA - Cache-Coherent Non-Uniform Memory Access
SMP - Symmetric Multi-Processing