Published by Eugenia Nora Fowler. Modified over 9 years ago.
+ CS 325: Computer Hardware and Software Organization and Architecture
Introduction
2/2/2016
+ Computer Architecture
- Understand where computers are going: future capabilities drive the computing world.
- Understand high-level design concepts: the best architects understand all the levels (devices, circuits, architecture, compilers, applications).
- Write better software: the best software designers also understand hardware, because you need to understand hardware to write fast software.
+ Thoughts on Computer Architecture
Principle of equivalence of hardware and software:
- Anything that can be done with software can also be done with hardware.
- Anything that can be done with hardware can also be done with software.
+ Computer Components
At the most basic level, a computer is a device consisting of four pieces:
- A CPU to interpret and execute programs
- Memory to store both data and programs
- A system interconnection for communication among the CPU, memory, and I/O devices
- Interfaces for transferring data to and from the outside world
+ An Example System
Consider this advertisement: what does it all mean?
+ Measures of Capacity

Unit            Size               Example
Bit             Single binary digit (0 or 1)
Byte            8 bits             ASCII value with parity
Kilobyte (KB)   1,024 bytes        2 KB of RAM in the Apollo Guidance Computer, which landed on the moon
Megabyte (MB)   1,024 kilobytes    Average MS Word document size
Gigabyte (GB)   1,024 megabytes    4 - 8 GB, typical RAM capacity
Terabyte (TB)   1,024 gigabytes    2 - 4 TB, typical large-capacity HDD
Petabyte (PB)   1,024 terabytes    ~16 PB/week delivered to Steam users
Exabyte (EB)    1,024 petabytes    1 gram of DNA = ~450 EB of data
Zettabyte (ZB)  1,024 exabytes     ~40 ZB of total digital data by 2020 (400,000,000,000 GB!)
Yottabyte (YB)  1,024 zettabytes   ~$100 trillion for 1 YB of data storage
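The 1,024-multiple relationships in the table can be turned into a short conversion helper. This is an illustrative sketch: the function name and the sample byte counts are my own, and only the base-1024 unit ladder comes from the table.

```python
# Each unit in the capacity table is 1,024 times the previous one.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def to_human(n_bytes):
    """Express a byte count in the largest unit whose value is >= 1."""
    value = float(n_bytes)
    for unit in UNITS:
        if value < 1024 or unit == UNITS[-1]:
            return f"{value:.1f} {unit}"
        value /= 1024

print(to_human(8 * 1024**3))   # typical RAM from the table: 8.0 GB
print(to_human(2 * 1024**4))   # large-capacity HDD: 2.0 TB
```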
+ Historical Development
Generation Zero: mechanical calculating machines
- Calculating clock: Wilhelm Schickard (1623)
- Pascaline: Blaise Pascal (1642)
- Difference Engine: Charles Babbage (design begun 1822)
- Punched-card tabulating machines: Herman Hollerith (1890)
Punched cards were commonly used for computer input well into the 1970s.
+ Historical Development
Generation One, vacuum tubes: 1945 - 1953
- Electronic Numerical Integrator and Computer (ENIAC), 1946
- IBM 650 (1953): the first mass-produced computer
+ Historical Development
Generation Two, transistors: 1954 - 1965
Solid-state semiconductor devices used for switching electrical signals and maintaining a digital state.
- IBM 7094 (scientific)
- IBM 1401 (business)
- Digital Equipment Corporation (DEC) PDP-1
+ Historical Development
Generation Three, integrated circuits: 1965 - 1980
Solid-state circuits made of semiconductor material, much smaller than discrete-component circuits.
- IBM 360
- DEC PDP-8, PDP-11
- Cray-1 supercomputer
+ Historical Development
Generation Four, VLSI: 1980 - present
Very Large Scale Integration: billions of transistors on a chip are now common.
VLSI enabled the creation of microprocessors; the first was the 4-bit Intel 4004.
+ Historical Development
Moore's Law (1965)
- Gordon Moore, Intel founder: "The density of transistors in an integrated circuit will double every year."
- Contemporary version: "The density of silicon chips will double every 18 months."
- But this "law" cannot hold forever.
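Moore's Law is just compound doubling, so the contemporary 18-month version can be expressed as a one-line projection. The starting transistor count below is hypothetical, chosen only to make the arithmetic visible.

```python
def projected_density(initial, years, doubling_period_years=1.5):
    """Transistor density after `years`, doubling every 18 months."""
    return initial * 2 ** (years / doubling_period_years)

# Starting from a hypothetical 1 million transistors:
# 6 years = 4 doublings, so density grows 16x.
print(projected_density(1_000_000, 6))  # 16000000.0
```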
+ Historical Development
Rock's Law
- Arthur Rock, Intel financier: "The cost of capital equipment to build semiconductors will double every four years."
- In 1968, a new semiconductor manufacturing plant cost about $12,000; at the time, $12,000 would buy a nice home in the suburbs.
+ Historical Development
Rock's Law (continued)
- In 2015, Intel's D1X fabrication plant in Hillsboro, Oregon cost over $3 billion, which is more than the GDP of some small countries.
- For Moore's Law to hold, Rock's Law must fall, or vice versa; no one can predict which will give out first.
+ Computer Level Hierarchy
Writing complex programs requires a "divide and conquer" approach, where each software module solves a part of the problem. Complex computer systems employ a similar technique through a series of machine layers.
+ Orientation: A Server
+ Orientation: MacBook Air
+ Orientation: iPhone
+ Current Iteration of CPUs
Intel Skylake Core i7 quad-core desktop processor
+ Computer Level Hierarchy
Components at each level execute their own particular instructions, using components at lower levels to perform tasks as required.
+ Computer Level Hierarchy
Level 6: The User Level
- Program execution and GUI
- The most familiar level
+ Computer Level Hierarchy
Level 5: High-Level Language
- Write and interact with languages such as Java, C, and Python
+ Computer Level Hierarchy
Level 4: Assembly Language
- A lower-level programming language with a strong correspondence between its statements and the CPU's machine-code instructions
+ Computer Level Hierarchy
Level 3: System Software
- Controls executing processes on the system
- Protects system resources
- OS kernels
+ Computer Level Hierarchy
Level 2: Machine Level
- Also known as the Instruction Set Architecture (ISA) level
- Consists of instructions that are particular to the architecture of the CPU
- Programs written at this level do not need compilers, interpreters, or assemblers
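To make the idea of ISA-level instructions concrete, here is a sketch of how a made-up 8-bit instruction format might pack an opcode and an operand into a single machine word. The format, field widths, and mnemonics are invented for illustration and do not correspond to any real CPU's ISA.

```python
# Hypothetical 8-bit instruction format: 3-bit opcode, 5-bit operand.
OPCODES = {"LOAD": 0b000, "ADD": 0b001, "STORE": 0b010, "HALT": 0b111}

def encode(mnemonic, operand=0):
    """Pack one instruction into a single byte."""
    return (OPCODES[mnemonic] << 5) | (operand & 0b11111)

def decode(word):
    """Split a byte back into its (opcode, operand) fields."""
    return word >> 5, word & 0b11111

word = encode("ADD", 9)                      # 0b001_01001 = 0x29
assert decode(word) == (OPCODES["ADD"], 9)   # round-trips cleanly
```

At this level a "program" is just the sequence of numeric words; the mnemonics exist only for human convenience at the assembly level above.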
+ Computer Level Hierarchy
Level 1: Control Level
- A control unit decodes and executes instructions and moves data through the system
- Control units can be microprogrammed or hardwired
- A microprogram is a program written in a low-level language that is implemented directly by the hardware
- Hardwired control units consist of hardware that directly executes machine instructions
+ Computer Level Hierarchy
Level 0: Digital Logic Level
- The lowest abstracted level, consisting of digital circuits: gates and connections
- Implements the mathematical logic of all higher levels
+ Computer Level Hierarchy
Level -1: Discrete Component Level
- Resistors, diodes, capacitors, transistors
- Very complex: about 10 components are needed just to create a NOT gate
- Not covered in detail
+ The Von Neumann Model
- On the ENIAC, all programming was done at the digital logic level: programming the computer involved moving plugs and wires!
- A different hardware configuration was needed to solve every unique problem type.
- Configuring the ENIAC to solve a "simple" problem required many days of labor by skilled technicians.
+ The Von Neumann Model
- In 1945, John von Neumann introduced the concept of the stored-program computer: a digital computer that keeps programs and data together in read/write memory, called Random-Access Memory (RAM).
- Today's computer architectures are still based on the von Neumann stored-program concept.
+ The Von Neumann Model
Today's stored-program computers have the following characteristics:
- Three hardware systems: a CPU, a main and secondary memory system, and an I/O system
- The capacity to carry out sequential instruction processing
- A single data path between the CPU and main memory, known as the Von Neumann bottleneck
+ The Von Neumann Model
General depiction of a von Neumann stored-program system. These computers use fetch-decode-execute cycles to run programs.
+ The Von Neumann Model
- The control unit fetches the next instruction from memory, using the program counter to determine where the instruction is located.
+ The Von Neumann Model
- The instruction is decoded into a language that the ALU can understand.
+ The Von Neumann Model
- Any data operands required to execute the instruction are fetched from memory and placed into registers within the CPU.
+ The Von Neumann Model
- The ALU executes the instruction and places results in registers or memory.
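The four steps just described (fetch, decode, operand fetch, execute) can be sketched as a toy stored-program machine in which program and data share one memory. The instruction names and the memory layout are invented for illustration, not taken from any real architecture.

```python
# Toy von Neumann machine: program and data live in the same memory.
# Each instruction is a (opcode, address) pair; opcodes are made up.
def run(memory):
    pc, acc = 0, 0                      # program counter, accumulator
    while True:
        op, addr = memory[pc]           # 1. fetch, located via the program counter
        pc += 1
        if op == "LOAD":                # 2. decode the instruction...
            acc = memory[addr]          # 3. fetch the operand into a "register"
        elif op == "ADD":
            acc += memory[addr]         # 4. execute in the ALU
        elif op == "STORE":
            memory[addr] = acc          # ...and place the result back in memory
        elif op == "HALT":
            return memory

mem = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
       10: 2, 11: 3, 12: 0}            # program at addresses 0-3, data at 10-12
print(run(mem)[12])                     # 2 + 3, prints 5
```

Because the program is just data in memory, changing the computation means changing memory contents, not rewiring hardware, which is exactly the advance over the ENIAC described above.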
+ The Von Neumann Model
- Conventional stored-program computers have undergone many incremental improvements over the years, including specialized buses, floating-point units, and cache memories, to name only a few.
- But enormous improvements in computational power require departures from the classic von Neumann architecture. Adding processors is one approach.
+ The Von Neumann Model
- Late 1960s: high-performance computer systems were equipped with dual processors to increase computational throughput.
- 1970s: supercomputer systems with 32 processors were introduced.
- 1980s: supercomputers with 1,000 processors were built.
- 1999: IBM announced its Blue Gene system containing over 1 million (single-core) processors.
- 2015: MilkyWay-2 in China was the world's most powerful supercomputer, with over 3.1 million Intel Xeon CPU cores and 1 PB of RAM (1 million GB).
+ The Von Neumann Model
- Parallel processing is only one method of providing increased computational power.
- DNA computers, quantum computers, and dataflow systems are also heavily researched.
- At this point, it is unclear whether any of these systems will provide the basis for the next generation of computers.