Computing Environments CSC8304 Marcus Kaiser

About this module
- Module Leader: Dr Marcus Kaiser
- Today's lecture: Introduction to Computing Environments
- Lectures for the next 4 weeks cover the need for database systems, statistical packages, security and poll-based/event-driven software
- Reading Week then follows (no lecture)
- Remaining lectures cover programming languages, scripting and Perl
- Lectures are on Tuesdays, 9:30-11am, CLT.602
- Lecture notes can be found online (Training > Computing Environments for Bioinformatics)
- Practical classes start in DAYSH.821 and are on Wednesdays, 11am-12pm

Assessment
Coursework:
- Databases/SQL: deadline 5 Nov, 15% of final mark
- Scripting/Perl: deadline 10 Dec, 15% of final mark
Exam:
- Exam: January, 70% of final mark

A computing environment
A computing environment connects user requirements to services. Devices enabling the presentation of services to users, and interaction with such services by users, include:
- Mobile devices (e.g., mobile phones)
- Video game consoles
- Printers
- Tactile feedback devices
- Monitors
- Robots
Behind these sits a world of supporting services.

A heterogeneous environment
A modern-day computing environment is made up of many different types of enabling technologies. An enabling technology is the mechanism used to implement a service. Many technologies share the same ultimate goal (e.g., sending a message from one computer to another). However, such technologies may attempt to achieve the same goal in different ways (e.g., the Microsoft Windows and Linux operating systems).

Standards
There are instances when vendors must adhere to some standard to ensure integration (the Internet protocols exemplify this). Standards play a crucial role in computer system development. There are two types of standard:
- Provided by an organisation that the community recognises as assuming the role of identifying standards (members of such an organisation are usually drawn from different vendors).
- Provided by a vendor (or group of vendors) and deployed without international recognition (however, such recognition may occur at a later date).

Computer technology evolution: ENIAC (1945) vs. the Pentium
- Complexity: 18,000 valves vs. … million transistors
- Size: 200 m³ vs. … cm³
- Speed: 150 ops/s vs. … × 10⁹ ops/s
- Consumption: 10 kW vs. … W
- Cost: $… vs. <£1000
- Reliability: hours vs. 1000 years

What if cars improved in a similar fashion?!
- Speed: 70 mph vs. … km/s
- Fuel: 50 mpg vs. …,000 mpg
- Cost: £10,000 vs. £1
- Reliability: 1 year vs. … years
- Weight: 1 ton vs. … mg

Conceptual Levels of Computers
A digital computer is capable of computation, communication and storage.
- Computation is performed by carrying out instructions; a sequence of instructions to solve a problem is called a program.
- Instructions are recognised and executed by the electronic circuits in the computer.
- Instructions only permit simple operations; typically, they can:
  - Add two numbers
  - Check if a number is zero
  - Move data around in memory
- This set of instructions is called the machine language.
- Different types of computers usually have different machine languages.
- The machine language is said to be the interface between the software and the hardware.

Conceptual Levels of Computers contd.
- Most users do not use machine language; they use a high-level language, e.g. Java.
- The high-level language is translated to machine language by a compiler (see the sketch below).
- Computers can be thought of as having different levels, each with its own language. Each level carries out tasks on behalf of the level above it. This helps us cope with the complexity of computing systems.
- From top to bottom (software and/or hardware):
  - Application Software (anybody)
  - High Level Language (Java programmer)
  - Operating System Level (programmer)
  - Assembly Language Level (assembly programmer)
  - Conventional machine level (hardware designer)
  - Integrated circuit level (VLSI designer)
  - Transistor level (physical designer)
  - Silicon + electronics level (chemical engineer)
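
As a concrete illustration (my example, not from the original slides): a tiny Java method, and the kind of lower-level instruction listing that javac produces and that javap -c typically reports for such a method:

```java
// Add.java: a high-level description of addition
public class Add {
    static int add(int a, int b) {
        return a + b;                  // one line of Java...
    }
    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
// Compiling with "javac Add.java" and disassembling with "javap -c Add"
// shows the simple stack-machine instructions the JVM executes for add:
//   iload_0   // push argument a
//   iload_1   // push argument b
//   iadd      // add the two numbers
//   ireturn   // return the result
```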

Data Representation (1)
- Humans count in base 10, using 10 digits; this is difficult to represent electronically.
- Machines count in base 2:
  - Two-state devices are easy to make (transistors)
  - Only two digits are used (0 and 1), called binary digits or bits
  - Electrically represented by 0 volts and 5 volts
  - Each bit occupies one memory cell or wire
- The basic working unit usually consists of 8 bits, called a byte.
- The basic memory unit is a multiple number of bytes, e.g. 2 bytes = 16 bits, 4 bytes = 32 bits, 8 bytes = 64 bits.
- The size of the basic memory unit is called the word length.

Data Representation (2)
- All bytes in the memory are numbered, or addressable: the first byte is numbered 0, the second 1, and so on.
- Memory size is usually expressed in terms of (Mega)bytes.
- It is common practice to:
  - Write the least significant bit (LSB) on the right
  - Write the most significant bit (MSB) on the left
  - Start counting from zero
- All data held within the computer is represented by a number of bits or bytes.
- All high-level objects, such as a Java or C++ class, must be translated into bits.

Data Representation (3)
Data comes in many forms:
- Booleans
- Characters (i.e. text)
- Integers, both positive and negative, e.g. -230, -1, 0, …
- Real numbers, also called floating point numbers, e.g. 3.0, log(13), sin(π/4), 22/7
- Structured data types defined by programming languages, e.g. arrays, strings, classes
Each type is represented by one or more bits.

Bits, Bytes, and Buzzwords
Terms used to describe file size or memory size:
- Byte = 8 bits
- Kilobyte (KB) = 1024 (2^10) bytes
- Megabyte (MB) = 2^20, or about a million, bytes
- Gigabyte (GB) = 2^30, or about a billion, bytes
- Terabyte (TB) = 2^40, or about a trillion, bytes
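
These powers of two are easy to check; a minimal sketch (my addition, not from the slides) using Java's left-shift operator:

```java
// Powers of two via left-shift: 1L << n equals 2^n
public class Sizes {
    public static void main(String[] args) {
        System.out.println(1L << 10); // 1024          (one kilobyte, in bytes)
        System.out.println(1L << 20); // 1048576       (one megabyte)
        System.out.println(1L << 30); // 1073741824    (one gigabyte)
        System.out.println(1L << 40); // 1099511627776 (one terabyte)
    }
}
```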

Integer data
- Integer numbers don't allow fractions.
- Humans use the decimal number system: there are 10 digits, 0-9, and each place within a decimal number represents a power of 10. For example, 236 = 2 × 10² + 3 × 10¹ + 6 × 10⁰.
- 10 is not a 'natural' base (it is an anatomical accident!).
- Computers work more naturally with base 2 because transistors have two states.
- In base 2, only the digits 0 and 1 are used. This greatly simplifies the arithmetic and makes it much faster.

Binary Numbers
Each place within a binary number represents a power of 2, e.g. binary 101 = 1 × 2² + 0 × 2¹ + 1 × 2⁰ (equals five in decimal).
Electrical representation: three wires, ON (5V), OFF (0V), ON (5V).
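
A quick way to experiment with this (my example, not from the slides) is Java's built-in radix conversions:

```java
public class Binary {
    public static void main(String[] args) {
        // interpret the string "101" as a base-2 number
        System.out.println(Integer.parseInt("101", 2)); // 5
        // go the other way: decimal value to binary digits
        System.out.println(Integer.toBinaryString(5));  // "101"
    }
}
```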

Binary Arithmetic
Humans perform decimal addition by:
- Memorising all single-digit additions
- Writing the numbers to be added down right-aligned, one above the other
- Starting at the right and working towards the left
- Adding the digits, writing down the result and propagating any carry to the next column
Subtraction works much the same way, except that you must borrow from the next column. Multiplication by a single-digit number works much the same way too. Multiplication by a multi-digit number is treated as a series of separate single-digit multiplications, the results of which are added together.
Binary addition, subtraction and multiplication can be treated in exactly the same way, except that only the digits 0 and 1 are used.

Basic Binary Arithmetic - examples
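
The worked examples on the original slide were images and did not survive the transcript; here is a comparable example of my own, with a quick Java check:

```java
// Column addition in binary; carries propagate just as in decimal:
//
//   carries: 1 1 1 1
//            0 1 1 0 1    (13)
//          + 0 1 0 1 1    (11)
//          -----------
//            1 1 0 0 0    (24)
public class BinaryAdd {
    public static void main(String[] args) {
        int a = 0b01101;                                   // 13
        int b = 0b01011;                                   // 11
        System.out.println(Integer.toBinaryString(a + b)); // "11000" (= 24)
    }
}
```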

Hexadecimal Numbers
Problem with binary arithmetic: long strings are fine for machines but awkward for humans, e.g. what is the binary number …? Guess, then work it out!
We (humans) therefore often use hexadecimal numbers (or hex for short). This uses base 16. There are 16 "digits" (0-9, A, B, C, D, E, F). Each place represents a power of 16, e.g. 29F = 2 × 16² + 9 × 16¹ + F × 16⁰ (= 671 in decimal).
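
Again, Java can check the arithmetic (my example, not from the slides):

```java
public class Hex {
    public static void main(String[] args) {
        System.out.println(Integer.parseInt("29F", 16));   // 671
        System.out.println(Integer.toHexString(671));      // "29f"
        // hex is just shorthand for binary: each hex digit stands for 4 bits
        System.out.println(Integer.toBinaryString(0x29F)); // "1010011111"
    }
}
```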

Integer Representation
For the sake of economy, different hardware representations are used to implement different integer ranges. The following are commonly found:

Name   Bits   Range (signed)                    Range (unsigned)
Byte   8      -128 … 127                        0 … 255
Word   16     -32,768 … 32,767                  0 … 65,535
Long   32     -2,147,483,648 … 2,147,483,647    0 … 4,294,967,295
Quad   64     -2^63 … 2^63 - 1                  0 … 2^64 - 1
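
Java exposes the signed limits of its own integer types directly; a small sketch (my addition, not from the slides):

```java
public class Ranges {
    public static void main(String[] args) {
        // Java's integer types are signed two's-complement
        System.out.println(Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);       // -128 .. 127
        System.out.println(Short.MIN_VALUE + " .. " + Short.MAX_VALUE);     // -32768 .. 32767
        System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE); // 32-bit range
        System.out.println(Long.MIN_VALUE + " .. " + Long.MAX_VALUE);       // 64-bit range
    }
}
```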

Integer overflow
It is possible that the result of an integer calculation is bigger than the allowed maximum (both positive and negative). Consider an 8-bit addition whose true result is 350 = 256 + 94: only eight bits can be stored, so the final carry (worth 256) "disappears" because there is no hardware provision for it, leaving just 94. The problem is called overflow (or underflow).
Is this serious? Would you like this to happen to your bank account? Overflow is a serious problem. It indicates the presence of a bug in your program. The hardware can detect overflow and will cause your program to crash.
Overflow occurred in the European Space Agency's Ariane 5 rocket when the on-board software attempted to fit a 64-bit number into 16 bits. This did indeed cause the program to crash...
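
In Java (my example, not the lecturer's), int arithmetic silently wraps around rather than crashing, but the standard library can be asked to detect overflow:

```java
public class Overflow {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;   // 2147483647
        System.out.println(max + 1);   // -2147483648: the carry is lost, wrap-around

        // Math.addExact performs the same addition but refuses to overflow silently
        System.out.println(Math.addExact(max, 1)); // throws ArithmeticException
    }
}
```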

Floating Point data
- The range of possible values using 32 bits to represent a number, positive or negative, is large. However, bigger number representations are needed, e.g. numbers that allow fractions and powers, as required by many scientific applications.
- To represent fractions using integers, you would need two of them, one for the numerator and one for the denominator. This would be a major nuisance and not computationally amenable.
- The way to do this is to use floating point numbers. Floating point data types allow a much greater range of possible values. They are represented in floating point notation.

Floating Point Notation
Details of how floating point values are represented vary from one machine to another. The IEEE 754 standard is one of the standard floating point representations.
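
Java's float is an IEEE 754 single-precision value, and its bit pattern can be inspected directly; a minimal sketch (my addition, not from the slides):

```java
public class FloatBits {
    public static void main(String[] args) {
        // layout: 1 sign bit | 8 exponent bits | 23 fraction bits
        int bits = Float.floatToIntBits(3.0f);
        System.out.println(Integer.toBinaryString(bits));
        // prints 1000000010000000000000000000000
        // i.e. sign 0, exponent 10000000 (128 = 127 + 1), fraction 1000...0,
        // so the value is 1.5 x 2^1 = 3.0
    }
}
```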

Character Data
- Used for textual data, but can represent small integers.
- Usually held in a byte, although commonly only 7 bits are needed.
- There are two major character sets:
  - EBCDIC (on IBM mainframe machines)
  - ASCII (on all other machines)
- We concentrate on ASCII (American Standard Code for Information Interchange). It has been standardised by ISO (the International Organization for Standardization).
- ASCII was actually designed for use with teletypes, and so the descriptions are somewhat obscure.
- Often 'text' documents are referred to as being in 'ASCII' format, which is easier for document interchange.

ASCII
The characters are classed as:
- Graphic characters (printable or displayable symbols)
- Control characters (intended to be used for various control functions, such as vertical motion and data communications)
The basic ASCII set uses 7 bits for each character, giving it a total of 128 unique symbols. The extended ASCII character set uses 8 bits, which gives it an additional 128 characters. The extra characters represent characters from foreign languages and special symbols for drawing pictures.
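
Because a character is just a small integer, the code-to-character mapping is easy to explore in Java (my example, not from the slides):

```java
public class Ascii {
    public static void main(String[] args) {
        System.out.println((int) 'A');  // 65: the ASCII code of 'A'
        System.out.println((char) 97);  // 'a': the character with code 97
        // print the printable ASCII range (codes 32 to 126)
        for (int code = 32; code < 127; code++) {
            System.out.print((char) code);
        }
        System.out.println();
    }
}
```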

Unicode
- Unicode is a new system to standardise character representation.
- Unicode provides a unique number for every character, independent of platform, program, or language.
- Adopted by such industry leaders as Apple, HP, IBM, JustSystem, Microsoft, Oracle, SAP, Sun, Sybase and Unisys.
- Required by modern standards such as XML, Java, ECMAScript (JavaScript), LDAP, CORBA 3.0 and WML.
- An implementation of the ISO/IEC 10646 standard.
- Enables internationalisation.

Unicode: How Unicode Works
- It defines a large (and steadily growing) number of characters (> 110,000).
- Each character gets a name and a number, e.g. LATIN CAPITAL LETTER A is 65 and TIBETAN SYLLABLE OM is 3,840.
- It includes a table of useful character properties such as "this is lower case" or "this is a number" or "this is a punctuation mark".
- The Unicode standard also includes a large volume of helpful rules and explanations about how to display these characters properly, and how to do line-breaking, hyphenation and sorting.
- Unicode is important: do some extra reading! Try a Google search.
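
Java strings are Unicode, so the name/number pairing can be checked directly; a small sketch (mine, not from the slides):

```java
public class Uni {
    public static void main(String[] args) {
        System.out.println((int) 'A');                   // 65 (LATIN CAPITAL LETTER A)
        System.out.println(0x0F00);                      // 3840 (TIBETAN SYLLABLE OM)
        System.out.println("\u0F00");                    // the character itself, font permitting
        // character properties come from the Unicode tables
        System.out.println(Character.isLowerCase('a'));  // true
        System.out.println(Character.getName(0x0F00));   // "TIBETAN SYLLABLE OM"
    }
}
```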

Summary
- A modern-day computing environment is made up of many different types of enabling technologies.
- Standards are used to permit interoperability.
- Computers can be thought of as a number of different levels, ranging from the application software that we all use right through to the electronic circuits within the computer.
- Computers count in binary.
- There are various ways of representing numeric and character data.