Binary, Hex, and Arbitrariness


Binary, Hex, and Arbitrariness
CS/COE 0447 – Jarrett Billingsley

Class Announcements
??????

Binary

Positional number systems
the numbers we use are written positionally: the position of a digit within the number has a meaning
1,234 = 1 × 10^3 + 2 × 10^2 + 3 × 10^1 + 4 × 10^0
the places, from least significant (right) to most significant (left), are worth 1, 10, 100, 1000 (10^0, 10^1, 10^2, 10^3)
how many digit symbols do we have in our number system? 10: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
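To make those place values concrete, here's a minimal C sketch (not from the slides) that rebuilds 1,234 from its digits and powers of ten:

    #include <stdio.h>

    int main(void) {
        // digits of 1,234 from most significant to least significant
        int digits[] = {1, 2, 3, 4};
        int places[] = {1000, 100, 10, 1};   // 10^3, 10^2, 10^1, 10^0

        int value = 0;
        for (int i = 0; i < 4; i++)
            value += digits[i] * places[i];  // 1*1000 + 2*100 + 3*10 + 4*1

        printf("%d\n", value);               // prints 1234
        return 0;
    }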

Range of numbers
suppose we have a 4-digit numeric display
what is the smallest number it can show? what is the biggest number it can show?
how many different numbers can it show? 9999 - 0 + 1 = 10,000
what power of 10 is 10,000? 10^4
let's generalize. with n digits:
how many numbers can we represent? 10^n
what is the largest number? 10^n - 1
- smallest is 0 (assuming no negatives…)
- biggest is 9,999

Numeric Bases
we use a base-10 (decimal) numbering system
but we can use (almost) any number as a base!
the most common bases when dealing with computers are base-2 (binary) and base-16 (hexadecimal)
when dealing with multiple bases, you can write the base as a subscript to be explicit about it: 5₁₀ = 101₂
- you can have irrational bases! sure! why not!!!!!!

Rules for Bases
given base B, there are B digit symbols
each place is worth B^i, starting with i = 0 on the right
given n digits: you can represent B^n numbers
the largest representable number is B^n - 1
so how about base-2? how about with… 5 binary digits?
- base 2 has 2 digit symbols (0, 1)
- places are worth 2^0, 2^1, 2^2… = 1, 2, 4, 8, 16…
- with 5 bits: 2^5 (32) numbers, max is 2^5 - 1 = 31
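As a sanity check on those rules, here's a hedged C sketch (the function name is just illustrative) that computes B^n and B^n - 1 for a few base/digit-count pairs:

    #include <stdio.h>

    // how many values can n digits in base B represent, and what's the biggest one?
    static void count_and_max(int base, int n) {
        long count = 1;
        for (int i = 0; i < n; i++)
            count *= base;                      // count = B^n
        printf("base %2d, %d digits: %ld numbers, max %ld\n",
               base, n, count, count - 1);      // largest value = B^n - 1
    }

    int main(void) {
        count_and_max(10, 4);   // 10,000 numbers, max 9,999
        count_and_max(2,  5);   // 32 numbers,     max 31
        count_and_max(16, 4);   // 65,536 numbers, max 65,535
        return 0;
    }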

Binary (base-2)
we call a Binary digIT a bit – a single 1 or 0
when we say an n-bit number, we mean one with n binary digits
1001 0110₂ = 1 × 128 + 0 × 64 + 0 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 0 × 1 = 150₁₀
the places, from MSB to LSB, are worth 2^7, 2^6, 2^5, 2^4, 2^3, 2^2, 2^1, 2^0 = 128, 64, 32, 16, 8, 4, 2, 1
to convert binary to decimal: ignore 0s, add up place values wherever you see a 1.
- MSB and LSB are most/least significant bit
- since there are only 2 choices for each place, the process is super simple
- you gotta know these powers of 2! YOU GOTTA!
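Here's one way "add up place values wherever you see a 1" might look in C – a minimal sketch, not course code:

    #include <stdio.h>

    // convert a string of '0'/'1' characters to its value
    unsigned bin_to_dec(const char *bits) {
        unsigned value = 0;
        for (size_t i = 0; bits[i] != '\0'; i++) {
            value *= 2;             // every 1 we've seen so far moves up one place
            if (bits[i] == '1')
                value += 1;         // this place holds a 1, so add it in
        }
        return value;
    }

    int main(void) {
        printf("%u\n", bin_to_dec("10010110"));  // prints 150
        return 0;
    }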

Making change
you want to give someone $9.63 in change, using the fewest bills and coins possible. how do you count it out?
work from biggest to smallest – most significant to least significant:

  left: $9.63
  $5  × 1    -$5    = $4.63
  $1  × 4    -$4    = $0.63
  25¢ × 2    -50¢   = $0.13
  10¢ × 1    -10¢   = $0.03
  5¢  × 0    -0¢    = $0.03
  1¢  × 3    -3¢    = $0.00

WHERE COULD THIS BE GOING...
- doooooes this process remind you of anything? ;O
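That counting-out process is a greedy algorithm; here's a minimal sketch of it in C (amounts in cents, denominations taken from the slide):

    #include <stdio.h>

    int main(void) {
        int left = 963;                              // $9.63, in cents
        int denoms[]        = {500, 100, 25, 10, 5, 1};
        const char *names[] = {"$5", "$1", "25c", "10c", "5c", "1c"};

        // biggest to smallest – most significant to least significant
        for (int i = 0; i < 6; i++) {
            int count = left / denoms[i];            // take as many as will fit
            left -= count * denoms[i];               // what's left to hand out
            printf("%-3s x %d\n", names[i], count);
        }
        return 0;                                    // prints counts 1, 4, 2, 1, 0, 3
    }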

Converting decimal to binary
you want to convert the number 83₁₀ to binary.

  left: 83
  128s:  83 - 0  = 83   digit 0
  64s:   83 - 64 = 19   digit 1
  32s:   19 - 0  = 19   digit 0
  16s:   19 - 16 = 3    digit 1
  8s:    3  - 0  = 3    digit 0
  4s:    3  - 0  = 3    digit 0
  2s:    3  - 2  = 1    digit 1
  1s:    1  - 1  = 0    digit 1

so 83₁₀ = 0101 0011₂
for each place from MSB:
  if place value ≤ remainder: digit = 1, remainder = remainder - place value
  else, digit = 0
- see how binary makes this so simple: it reduces this "division" problem to a sequence of yes/no questions
- having only 0 and 1 makes a lot of things simpler… think about multiplication!
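A C sketch of that MSB-first subtraction method, for 8-bit values (the helper name is made up):

    #include <stdio.h>

    // print the 8-bit binary form of a value using the subtract-the-place-value method
    void dec_to_bin8(int remainder) {
        for (int place = 128; place >= 1; place /= 2) {   // 128, 64, 32, ..., 1
            if (place <= remainder) {
                putchar('1');
                remainder -= place;                       // this place value fits
            } else {
                putchar('0');                             // it doesn't; move on
            }
        }
        putchar('\n');
    }

    int main(void) {
        dec_to_bin8(83);   // prints 01010011
        return 0;
    }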

Bits, bytes, nybbles, and words
8 bits (8b) = 1 byte (1B)
the bits in a byte are numbered 7 6 5 4 3 2 1 0, from MSB down to LSB
4 bits (4b) = 1 nybble – a byte is two nybbles
a word is the "most comfortable size" of number for a CPU
BUT WATCH OUT: Windows and x86 use word for 16 bits and double word (or dword) for 32 bits
when we say "32-bit CPU," we mean its word size is 32 bits
- we number bits from 0 at the LSB upwards to the left, since those are the powers of the place
- a byte is usually the smallest unit of measurement
- this is why your 50 megabit (Mb/s) internet connection can only give you at most 6.25 megabytes (MB) per second
- a 32-bit machine can, for example, add two 32-bit numbers at once
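Bit numbering matters once you start picking values apart; this small C sketch (variable names are just illustrative) pulls the two nybbles out of a byte with shifts and masks:

    #include <stdio.h>

    int main(void) {
        unsigned char b = 0x96;              // 1001 0110 in binary

        unsigned high = (b >> 4) & 0xF;      // bits 7..4: the high nybble (9)
        unsigned low  = b & 0xF;             // bits 3..0: the low nybble  (6)

        printf("high = %X, low = %X\n", high, low);   // prints "high = 9, low = 6"
        return 0;
    }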

Why binary? Whynary?
cause it's the easiest thing to implement. :P
arithmetic also becomes really easy (as we'll see in several weeks)
so, everything on a computer is represented in binary. everything.
- with binary, you only have to make circuits (or whatever) distinguish between two states
- technically everything is represented as "bit patterns" and we only treat it as "binary numbers" a lot of the time, but yanno

Hexadecimal

Shortcomings of binary and decimal
binary numbers can get really long, really quickly
3,927,664₁₀ = 11 1011 1110 1110 0111 0000₂
but nice "round" numbers in binary look arbitrary in decimal
1000 0000 0000 0000₂ = 32,768₁₀
we could use base-4, base-8, base-16, base-32, etc.
base-4 is not much terser than binary: 3,927,664₁₀ = 323 3232 1300₄
base-32 would require 32 digit symbols
base-8 and base-16 look promising! spoilers, no one uses base-8 anymore
- the "arbitrary look" of decimal is because 10 is not a power of 2 – there's that factor of 5 in there
- in C, C++, Java, I think C# and many others, a numeric literal that starts with 0 is in octal
- it was useful on 9-, 12-, 18-, and 36-bit machines - but those haven't existed in decades
- DNA uses base 4 with a word size of 3 digits! tho we write the digits as "ACGT"
- each "codon" (word) of 3 digits represents one amino acid in a protein chain
- 64 codons map to ~22 amino acids, for some redundancy
- start and stop codons exist too – and likely all sorts of other things, like control flow (decision making)

Let's make a base-16 number system
given base B, there are B digit symbols
each place is worth B^i, starting with i = 0 on the right
given n digits: you can represent B^n numbers
the largest representable number is B^n - 1
so how about base-16? how about with 4 hex digits?
- there are 16 digit symbols (0-9, and then A-F)
- places are 16^0, 16^1, etc. (1, 16, 256, 4096 – interestingly 16, 256, and 4096 all end in 6 in decimal)
- 4 hex digits = 16^4 numbers = 65536 numbers
- biggest is 65535

Hexadecimal or "hex" (base-16)
digit symbols A, B, C, D, E, F mean 10, 11, 12, 13, 14, 15
we call one hexadecimal digit a hex digit. no cute name :(
003B EE70₁₆ = 0 × 16^7 + 0 × 16^6 + 3 × 16^5 + 11 × 16^4 + 14 × 16^3 + 14 × 16^2 + 7 × 16^1 + 0 × 16^0 = 3,927,664₁₀
to convert hex to decimal: use a dang calculator lol
- ACE are even, BDF are odd
- you can kiiiiinda do bytes in your head (2-digit hex numbers) since that only needs one multiplication by 16 but still
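If you don't have a calculator handy, hex-to-decimal is the same place-value loop as binary, just with base 16 – a sketch, not course code (it assumes uppercase digits):

    #include <stdio.h>

    // convert a string of hex digits (0-9, A-F) to its value
    unsigned long hex_to_dec(const char *hex) {
        unsigned long value = 0;
        for (; *hex != '\0'; hex++) {
            int digit;
            if (*hex >= '0' && *hex <= '9')
                digit = *hex - '0';          // '3' -> 3
            else
                digit = *hex - 'A' + 10;     // 'B' -> 11, 'E' -> 14, ...
            value = value * 16 + digit;      // shift left one hex place, add the digit
        }
        return value;
    }

    int main(void) {
        printf("%lu\n", hex_to_dec("3BEE70"));   // prints 3927664
        return 0;
    }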

Getting a feel for it
4 bits (1 nybble) are equivalent to 1 hex digit
1 byte = 8 bits and therefore 2 hex digits (2 nybbles)
how many hex digits is a 32-bit number? how many bits is a 5-digit hex number?
0xB8 = 1011 1000₂ = 184₁₀ (the "0x" prefix is common notation for hex, derived from the C language.)
these are three ways of representing the same value. they're different views of the same data.
- 32 bits = 8 hex digits
- 5 hex digits = 20 bits

Converting between binary and hex
say we had this binary number: 1110111110111001110000₂
starting from the LSB, divide into groups of 4 bits (pad with 0s if needed, like here)
0011 1011 1110 1110 0111 0000
then translate each group using this table – know this table.

  Bin   Hex | Bin   Hex
  0000   0  | 1000   8
  0001   1  | 1001   9
  0010   2  | 1010   A
  0011   3  | 1011   B
  0100   4  | 1100   C
  0101   5  | 1101   D
  0110   6  | 1110   E
  0111   7  | 1111   F

0x 3 B E E 7 0
hex → binary? look at the table!
- 4 bits = 1 hex digit = 4 bits, so all you need is this table to translate between em
- it's easy to recover the table if you forgot it: count in binary up to 15 and write the hex digits next to em
- I kinda remembered A=1010, and then counted up from there to remember lol
- since this is 6 hex digits, how many BYTES is it? (3)
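Since each hex digit is exactly one nybble, the conversion is really just a table lookup; here's a hedged C sketch (it assumes the input length is a multiple of 4):

    #include <stdio.h>
    #include <string.h>

    // print the hex form of a binary string whose length is a multiple of 4
    void bin_to_hex(const char *bits) {
        const char *hexdigits = "0123456789ABCDEF";
        size_t len = strlen(bits);

        printf("0x");
        for (size_t i = 0; i < len; i += 4) {
            int nybble = 0;
            for (int j = 0; j < 4; j++)                     // 4 bits -> one value 0..15
                nybble = nybble * 2 + (bits[i + j] - '0');
            putchar(hexdigits[nybble]);                     // look it up in the table
        }
        putchar('\n');
    }

    int main(void) {
        bin_to_hex("001110111110111001110000");   // prints 0x3BEE70
        return 0;
    }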

Powers of Two
memorize at least the powers up to ~2^10

        Dec      Hex    |         Dec          Hex
  2^0     1      0x1    | 2^8       256        0x100
  2^1     2      0x2    | 2^9       512        0x200
  2^2     4      0x4    | 2^10    1,024        0x400     (1K, like kilobyte)
  2^3     8      0x8    | 2^11    2,048        0x800
  2^4    16      0x10   | 2^12    4,096        0x1000
  2^5    32      0x20   | 2^16    65,536       0x10000
  2^6    64      0x40   | 2^20    1,048,576    0x100000  (1M, like megabyte)
  2^7   128      0x80   | 2^32    4.2 billion  uhhhhh

255 = 0xFF = 2^8 - 1 = max 8-bit
65535 = 0xFFFF = 2^16 - 1 = max 16-bit
0xFFFFFFFF = 2^32 - 1 = max 32-bit
- if you can't remember a power of two, add the previous one to itself!
- notice the hex versions are nice round numbers with a repeating pattern
- if you see numbers like this, you can visualize them as a 1 with 0s after
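Shifting gives you these numbers for free; a quick C sketch of how 2^n and the max n-bit value fall out of the << operator:

    #include <stdio.h>

    int main(void) {
        // 2^n is a 1 followed by n zero bits; 2^n - 1 is n one bits (the max n-bit value)
        for (int n = 0; n <= 16; n++) {
            unsigned long power = 1UL << n;
            printf("2^%-2d = %6lu = 0x%lX\t(max %d-bit value: %lu)\n",
                   n, power, power, n, power - 1);
        }
        return 0;
    }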

Arbitrariness

ok so everything's in binary, and hex is convenient shorthand
how does it know how to add two numbers?
how does it know how to manipulate strings?
how does it know if one pattern of bits is a string or a number or a video or a program or a file or an icon or

IT DOESN'T

What it means to be "arbitrary"
it means there's no reason for it to be that way. we just kinda agree that it's how things are.
one of the biggest things I want you to know is: what a pattern of bits means is arbitrary.
as a corollary: the same pattern of bits can be interpreted many different ways.

One bit pattern, many possible meanings
all information on computers is stored as patterns of bits, but…
how these bits are interpreted, transformed, and displayed is up to the programmer and the user
the single byte 1100 0100 could be any of:
  196        (unsigned integer)
  -60        (signed integer)
  0xC4       (hexadecimal)
  an R3G3B2 color
  call nz    (a z80 instruction)
  Ä          (Unicode)
- you need CONTEXT to know what this value means
- even beyond the simple interpretation, its PURPOSE is another layer of abstraction on top of that
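You can see the same thing in C: one byte, three printouts, depending only on the type you view it through (a minimal sketch; the signed result assumes a two's-complement machine):

    #include <stdio.h>

    int main(void) {
        unsigned char bits = 0xC4;                       // the bit pattern 1100 0100

        printf("unsigned: %u\n", (unsigned)bits);        // 196
        printf("signed:   %d\n", (signed char)bits);     // -60 on a two's-complement machine
        printf("hex:      0x%X\n", (unsigned)bits);      // 0xC4
        return 0;
    }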

The computer doesn't know or care.
when writing assembly (and C!) programs, the computer has no idea what you mean, cause nothing means anything to it
"my program assembles/compiles, why is it crashing?" cause the computer is stupid
there's no difference between nonsense code and useful code
every "smart" thing a computer does, it does because a human programmed it to act like that
(diagram labels: "dumb CPU", "its code", "human", "lovely person")
- it's good at doing fun things with bit patterns
- but don't confuse what it does with intelligence