(2.1) Fundamentals Terms for magnitudes – logarithms and logarithmic graphs Digital representations – Binary numbers – Text – Analog information Boolean algebra Logical expressions and circuits
(2.2) Information Technology Magnitude Terms Large – kilo=1,000 – mega=1,000,000 – giga=1,000,000,000 – tera=1,000,000,000,000 – peta=1,000,000,000,000,000 Small – milli=1/1,000 – micro=1/1,000,000 – nano=1/1,000,000,000 – pico=1/1,000,000,000,000 – femto=1/1,000,000,000,000,000
(2.3) Logarithms Because of these great differences in magnitudes, we often use logarithms to represent values – a logarithm is the power to which some base must be raised to get a particular value For example, the base 10 logarithm of 1000 (written log₁₀ 1000) is 3, since 10³ = 1000
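As an illustrative sketch (not part of the original slides), a few lines of Python showing that the base-10 logarithm recovers the power of 10:

```python
import math

# log10 is the power to which 10 must be raised to get the value:
# 10**3 == 1000, so log10(1000) == 3.0
for value in (1_000, 1_000_000, 1_000_000_000):
    print(value, "-> log10 =", math.log10(value))
```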
(2.4) Logarithm Scale Graphs Graphs often use a logarithmic scale on one axis so that the data fit on a reasonable size graph
(2.5) Logarithm Scale Graphs (continued) The problem with this is that such graphs lose the impact of how rapidly the magnitudes change
(2.6) Binary Numbers Digital systems operate using the binary number system – only two digits, 0 and 1 – can be represented in a computer several ways » voltage high or low » magnetized one direction or another – each digit is a binary digit, or bit – referred to as being in base 2 Magnitudes of binary numbers are determined using positional notation, just like decimal – in decimal, successive columns count 1s, 10s, 100s, 1,000s (powers of 10) – in binary, successive columns count 1s, 2s, 4s, 8s (powers of 2) – e.g., 100101 = 1*2⁵ + 0*2⁴ + 0*2³ + 1*2² + 0*2¹ + 1*2⁰
(2.7) Converting Between Number Systems To convert binary to decimal, simply perform arithmetic in base 10 – 100101 = 1*2⁵ + 0*2⁴ + 0*2³ + 1*2² + 0*2¹ + 1*2⁰ = 32 + 4 + 1 = 37 To convert decimal to binary, divide the decimal value by 2 – remainder is rightmost digit of binary number – repeat on quotient 37/2 = 18 remainder 1 18/2 = 9 remainder 0 9/2 = 4 remainder 1 4/2 = 2 remainder 0 2/2 = 1 remainder 0 1/2 = 0 remainder 1 binary number is 100101
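A minimal Python sketch of the two conversions described above, using the slide's example value 37 (the function names are illustrative, not from the slides):

```python
def binary_to_decimal(bits: str) -> int:
    """Evaluate a binary numeral with positional notation, using base-10 arithmetic."""
    value = 0
    for bit in bits:                  # most significant bit first
        value = value * 2 + int(bit)
    return value

def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2; each remainder is the next bit, rightmost first."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        n, remainder = divmod(n, 2)
        bits = str(remainder) + bits  # prepend: remainders come out right to left
    return bits

print(binary_to_decimal("100101"))    # 37
print(decimal_to_binary(37))          # 100101
```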
(2.8) Converting Between Number Systems (continued) Alternatively, build a table of powers of 2, write a 1 by the largest power less than or equal to the value to convert, then subtract that power from the number and repeat until you get to 0 – produces the number most significant digit first
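The table-of-powers method can be sketched the same way; this hypothetical helper assumes an 8-bit result width:

```python
def decimal_to_binary_greedy(n: int, width: int = 8) -> str:
    """Write a 1 under the largest power of 2 that fits, subtract it, and repeat."""
    bits = []
    for power in range(width - 1, -1, -1):   # scan the powers-of-2 table, largest first
        if n >= 2 ** power:
            bits.append("1")
            n -= 2 ** power
        else:
            bits.append("0")
    return "".join(bits)

print(decimal_to_binary_greedy(37))   # 00100101
```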
(2.9) Binary Arithmetic What happens if you add two digits in base 10 and get a result greater than 9? – generate a carry Same thing happens if you add two binary digits and get a result greater than 1
(2.10) Binary Arithmetic (continued) To do addition, we need just one more piece of information: 1 + 1 + 1 = 11 (result digit 1, carry 1) Then, we can add two binary numbers by using the four cases on the previous slide and the identity above, writing out the carry, addend, addend, and result rows just as in decimal addition
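A sketch of column-by-column addition with a carry, assuming the numbers are given as bit strings (this is not the slides' worked example):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary numerals column by column, carrying into the next column."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):   # rightmost column first
        total = int(x) + int(y) + carry          # 0, 1, 2, or 3
        result.append(str(total % 2))            # digit written in this column
        carry = total // 2                       # carry passed to the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("100101", "1011"))   # 110000  (37 + 11 = 48)
```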
(2.11) Binary Arithmetic (continued) Subtraction uses a similar idea, that of a borrow from the next column left when we’re trying to subtract a larger digit from a smaller With binary digits, the same thing holds
(2.12) Binary Arithmetic (continued) Consider a couple of examples
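For subtraction, a borrow turns into adding 2 to the current column; a small sketch with illustrative values (not taken from the slides):

```python
def subtract_binary(a: str, b: str) -> str:
    """Subtract b from a (assumes a >= b), borrowing from the next column left when needed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):   # rightmost column first
        diff = int(x) - int(y) - borrow
        if diff < 0:                             # can't subtract: borrow a 2 from the left
            diff += 2
            borrow = 1
        else:
            borrow = 0
        result.append(str(diff))
    return "".join(reversed(result)).lstrip("0") or "0"

print(subtract_binary("100101", "1011"))   # 11010  (37 - 11 = 26)
```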
(2.13) Binary Arithmetic (continued) Multiplication is simple – 0 times anything is 0 – 1 times anything is that thing again For example, multiply the top number by each bit of the bottom number and add the shifted partial products, just as in decimal long multiplication
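A shift-and-add sketch of binary multiplication; for brevity the partial products here are accumulated with Python's built-in integers rather than the column adder above:

```python
def multiply_binary(a: str, b: str) -> str:
    """Each 1 bit of the multiplier contributes a copy of the multiplicand, shifted left."""
    total = 0
    for position, bit in enumerate(reversed(b)):   # bit 0 is the rightmost bit of b
        if bit == "1":
            partial = a + "0" * position           # a shifted left by 'position' places
            total += int(partial, 2)               # add this partial product to the total
    return bin(total)[2:]

print(multiply_binary("101", "11"))   # 1111  (5 * 3 = 15)
```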
(2.14) Binary Arithmetic (continued) For division, at each step the divisor is either – less than or equal to what it’s dividing into, so the quotient digit is 1 – greater than what it’s dividing into, so the quotient digit is 0 For example, dividing one binary number by another this way produces a quotient and a remainder
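A sketch of binary long division following that rule (the example values are illustrative, not from the slides):

```python
def divide_binary(dividend: str, divisor: str):
    """At each step the quotient digit is 1 if the divisor fits into the bits
    brought down so far, otherwise 0; whatever is left over is the remainder."""
    quotient, remainder = "", 0
    d = int(divisor, 2)
    for bit in dividend:                  # bring down one bit of the dividend at a time
        remainder = remainder * 2 + int(bit)
        if remainder >= d:                # divisor fits: quotient digit is 1
            quotient += "1"
            remainder -= d
        else:                             # divisor does not fit: quotient digit is 0
            quotient += "0"
    return quotient.lstrip("0") or "0", bin(remainder)[2:]

print(divide_binary("100101", "111"))   # ('101', '10')  -> 37 / 7 = 5 remainder 2
```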
(2.15) Octal and Hexadecimal Reading and writing binary numbers can be confusing, so we often use octal (base 8) or hexadecimal (base 16) numbers – group binary number into sets of 3 (octal) or 4 (hex) bits, then replace each group by its corresponding digit from the tables – to convert back to binary, just replace each octal or hex digit with its binary equivalent
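A sketch of the grouping idea for hexadecimal (octal works the same way with groups of 3 bits); the helper names are illustrative:

```python
def binary_to_hex(bits: str) -> str:
    """Group bits into sets of 4 from the right, then replace each group with a hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)            # pad on the left to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "x") for g in groups)

def hex_to_binary(digits: str) -> str:
    """Replace each hex digit with its 4-bit binary equivalent."""
    return "".join(format(int(d, 16), "04b") for d in digits)

print(binary_to_hex("100101"))   # 25  (100101 binary = 0x25 = 37)
print(hex_to_binary("25"))       # 00100101
```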
(2.16) Real Numbers Previous numeric values were all integers We commonly use real numbers (with a decimal point and fractional part) as well – digits to the right of the point are multiplied by negative powers of 10 (10⁻¹, 10⁻², 10⁻³, ...) Same idea holds for binary numbers – bits to the right of the point are multiplied by 2⁻¹, 2⁻², 2⁻³, ... Can also write these in scientific notation – a magnitude times a power of 10, or a power of 2 for binary Referred to as floating point numbers in “computer speak”
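A sketch of positional notation with a fractional part, using 101.101 as an illustrative binary value (not one of the slides' examples):

```python
def binary_fraction_to_decimal(numeral: str) -> float:
    """Bits left of the point use powers 2**0, 2**1, ...; bits right of it use 2**-1, 2**-2, ..."""
    whole, _, frac = numeral.partition(".")
    value = sum(int(bit) * 2 ** power
                for power, bit in enumerate(reversed(whole)))
    value += sum(int(bit) * 2 ** -(position + 1)
                 for position, bit in enumerate(frac))
    return value

print(binary_fraction_to_decimal("101.101"))   # 5.625 = 4 + 1 + 1/2 + 1/8
```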
(2.17) Holding Binary Numbers in a Computer Computer memory is organized into chunks of 8 bits, called bytes The range of values that an integer can hold depends on how many bytes of memory are used – 1 byte: -128 to 127 – 2 bytes: -32,768 to 32,767 – 4 bytes: -2,147,483,648 to 2,147,483,647 Floating point numbers usually have 4 or 8 byte representations – separate fields for the sign, exponent, and magnitude
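The integer ranges above follow directly from the number of bits; a quick check of the signed two's-complement formula (an assumption the slide itself does not spell out):

```python
# An n-byte signed (two's complement) integer spans -(2**(8n-1)) .. 2**(8n-1) - 1
for n_bytes in (1, 2, 4):
    bits = 8 * n_bytes
    low, high = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    print(f"{n_bytes} byte(s): {low:,} to {high:,}")
# 1 byte(s): -128 to 127
# 2 byte(s): -32,768 to 32,767
# 4 byte(s): -2,147,483,648 to 2,147,483,647
```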
(2.18) Representing Text Text is an example of discrete information – like integers - there are only certain values that are allowed Representing text in a computer is simply a matter of defining a correspondence between each character and a unique binary number – called a code – need different numbers for upper and lower case representation of same letter – need representation for digits as characters – want A to be less than B so it’s possible to alphabetize character information
(2.19) Representing Text (continued) American Standard Code for Information Interchange (ASCII) code is standard for most computers – 7-bit code (128 possible characters) – stored in memory as a single byte Won’t represent non-Roman characters easily – the newer 16-bit Unicode will
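A quick look at the character-to-number correspondence using Python's built-in ord and chr:

```python
# ord() returns a character's code; chr() maps a code back to the character.
for ch in "AaBb09":
    print(ch, ord(ch))
print(ord("A") < ord("B"))   # True: codes are ordered, so text can be alphabetized
print(chr(65), chr(97))      # A a  (upper and lower case have different codes)
```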
(2.20) Representing Analog Information If the data we want to represent in a computer is not discrete but continuous, need to turn it into a sequence of numerical values by sampling – examples are sound, pressures, images, video – sequence of samples approximates original signal
(2.21) Representing Analog Information (continued) Values used for the samples determine precision of measurement – too coarse a division of the range of possible input values yields a poor approximation – too fine a division wastes storage space (since more bits needed for each sample) » 8 bits, 256 levels; 16 bits, 65,536 levels
(2.22) Representing Analog Information (continued) Number of samples in given time period is called the frequency or sample rate – defined by number of measurements per second (Hz) – sample rate needed depends on how rapidly the input signal changes
(2.23) Representing Analog Information (continued) Need to trade off sampling rate and precision to achieve acceptable approximation without letting resulting digital data get too large Audio CD – 44.1 kHz sampling rate – 16 bit precision – 1 minute of CD-quality stereo is almost 10.6 Mbytes For images – resolution (number of samples in horizontal and vertical direction) takes role of sampling rate – precision is measured by number of bits per sample (samples are called pixels) per channel
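The sizes quoted above can be checked directly from the sampling parameters; a small sketch using the slide's own numbers:

```python
# CD-quality audio: 44,100 samples per second, 16 bits (2 bytes) per sample, 2 channels
bytes_per_minute = 44_100 * 2 * 2 * 60
print(f"{bytes_per_minute:,} bytes per minute")   # 10,584,000 (~10.6 Mbytes)

# Uncompressed image: resolution plays the role of sample rate, bits per pixel the role of precision
width, height, bits_per_pixel = 1600, 800, 24
image_bytes = width * height * bits_per_pixel // 8
print(f"{image_bytes:,} bytes for the image")     # 3,840,000 (~3.8 Mbytes)
```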
(2.24) Original – 1600 x 800 Pixels, 24 Bits per Pixel
(2.25) Lower Resolution – 300 x 150 Pixels, 24 Bit
(2.26) Lower Resolution – 150 x 75 Pixels, 24 Bit
(2.27) Lower Resolution – 50 x 25 Pixels, 24 Bit
(2.28) Lower Resolution – 25 x 12 Pixels, 24 Bit
(2.29) Base Image – 300 x 150, 24 Bit
(2.30) Lower Precision – 300 x 200 Pixels, 8 Bit
(2.31) Lower Precision – 300 x 200 Pixels, 4 Bits
(2.32) Lower Precision – 300 x 200 Pixels, 1 Bit
(2.33) Boolean Algebra Developed in 1854 by English mathematician George Boole – logical algebra in which all quantities are either true or false – fits well with binary representations (1 = true, 0 = false) Foundation of all computer hardware design Three fundamental logical operations – and, or, not – each is defined by a truth table giving its output for every combination of input values
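A sketch of how a truth table enumerates every input combination for the fundamental operations (1 = true, 0 = false):

```python
import itertools

print(" A  B | A and B  A or B  not A")
for a, b in itertools.product((0, 1), repeat=2):   # all combinations of the two inputs
    print(f" {a}  {b} |    {a & b}        {a | b}       {1 - a}")
```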
(2.34) Boolean Algebra (continued) It’s important that the possible values for A and B are assigned so they cover all the possible combinations – assign methodically as shown on preceding slide
(2.35) Boolean Algebra (continued) Two other logical operations (combinations of the fundamental ones) are important – not or (nor): think of as or followed by not – not and (nand): think of as and followed by not – any logic function that can be expressed using and, or, not can also be expressed using just one of nand, nor
(2.36) Logical Expressions Can combine these logical operations just as we combine arithmetic expressions, to produce logical expressions – order of operations is not first, then and, then or – do equal precedence operations left to right – change order with parentheses
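A sketch of the precedence rule using a hypothetical expression, not A and B or C, which Python happens to parse exactly as described (not, then and, then or):

```python
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            value = not A and B or C   # parsed as ((not A) and B) or C
            print(A, B, C, "->", value)
```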
(2.37) Implementing Logical Expressions To convert the logical expression to a circuit that calculates the equivalent logical value, simply provide a circuit for each of the terms of the logical expression
(2.38) Implementing Logical Expressions (continued) Of course, it’s not really as simple as this – there may be many possible logical expressions that produce the same output of 0s and 1s – the hardware designer must choose the optimal one based on one or more criteria » minimum number of logic functions » fewest different types of logic functions » fewest levels of logic functions between inputs and outputs
(2.39) Remembering the Past The previous logic circuit is an example of a combinational circuit – the output at any given time depends solely on the current values of the input Another kind of logic circuit is a sequential circuit – the output at any given time depends on the current values of the input and on the previous value of the output (the circuit remembers its state)
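As a hedged illustration (not a circuit from the slides), a NOR-style SR latch simulated in Python shows how a sequential circuit's new output depends on both the inputs and the previous output:

```python
def sr_latch(s: int, r: int, q_prev: int):
    """One update step of an SR latch: set, reset, or hold the previous output."""
    if s and not r:
        return 1        # set
    if r and not s:
        return 0        # reset
    if not s and not r:
        return q_prev   # hold: the output feeds back on itself
    return None         # s = r = 1 is disallowed for this latch

q = 0
for s, r in [(1, 0), (0, 0), (0, 1), (0, 0)]:
    q = sr_latch(s, r, q)
    print(f"S={s} R={r} -> Q={q}")
# The (0, 0) steps show the circuit holding (remembering) its previous value
```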