BITTOR A New Foundation for Logical & Numeric Uncertainty Technical Overview Markov Monoids are used as a Mathematical Foundation for a New Theory of Logical & Numerical Uncertainty.


BITTOR A New Foundation for Logical & Numeric Uncertainty Technical Overview Markov Monoids are used as a Mathematical Foundation for a New Theory of Logical & Numerical Uncertainty Replacing Current Computer Logic & Computation Infrastructures January 4, 2006 Joseph E. Johnson, Ph.D., Professor of Physics University of South Carolina Columbia SC

Discussion I: Background: It is well known that computers are too exacting and do not perform 'estimates' and 'approximations' unless explicitly programmed. A computer could not order food for a party without knowing how many people would come or how much they would eat, yet a person can easily operate with such extremely limited information. It is also realized by many scientists that all observations are estimates, and furthermore that most intelligent decision-making consists of making optimal decisions from very limited knowledge. A good example is the field of pharmacy and medicine, where drug dosages, risk analyses, and procedures carry large errors. Likewise one thinks of the complex error analysis in engineering and architectural efforts, with unknown strengths of materials, temperatures, loads, and stresses. And there are certainly the business, investment, and financial domains, with their uncertain currencies, interest rates, supply, and demand. It is also known that while mathematics has developed a rigorous system of integer, rational, real, and complex numbers, the current practice of truncating real numbers is not a rigorous method of managing uncertainty. Thus the entire field of statistics is overlaid on traditional mathematics to manage these uncertainties. Finally, one notes that all values in the sciences and engineering (length, time, mass, ...) are assumed to be 'real' numbers, in spite of the fact that it would take infinite effort to measure such a number, with a resulting infinite information content, in certain violation of quantum theory, and thus impossible. Real numbers cannot truly represent the results of observations.

Discussion II Our line of thought: A natural (and almost mandatory) line of thought would be to generalize the fundamental piece of information, the bit with values 1 and 0, to continuous probability values. But how? My work with continuous Markov transformations led me to realize that the values (representation space) (x1, x0) upon which they act could provide the natural generalization of 1 and 0, with 1 = (1, 0) and 0 = (0, 1), thus allowing all intermediate values when x1 and x0 range over [0, 1] with x1 + x0 = 1. This occurred to me because the (linear) Markov transformations maintain both the sum of the values (i.e. x1 + x0 = 1) and their non-negativity, thus allowing them to be interpreted as probabilities.

Discussion III - Postulates Postulate 1: The Fundamental Entity I thus postulate that the fundamental entity of information is the bit vector (x1, x0), the 'bittor' (i.e. the pair of numbers), which is formally the representation space of the Markov Lie monoid (part of the general linear group of continuous real transformations), with x1 + x0 = 1 and each xi non-negative. This allows continuous values between 1 and 0 that are to be interpreted as the probability x1 that '1' is the correct value and the probability x0 that the value is '0'. Thus '1' = (1, 0), '0' = (0, 1), and (1/2, 1/2) represents equal probability of a 1 or a 0 and thus zero information. These bittors become the fundamental objects in the new mathematics proposed below.
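As an illustrative sketch only (not from the slides), the bittor of Postulate 1 can be modeled as an ordered pair; the helper name `bittor` and the constants `ONE` and `ZERO` are my own:

```python
# Sketch of Postulate 1: a bittor is a pair (x1, x0) of non-negative
# probabilities summing to one. Names below are illustrative.

ONE = (1.0, 0.0)    # classical '1'
ZERO = (0.0, 1.0)   # classical '0'

def bittor(x1: float) -> tuple:
    """Build a bittor from x1, the probability that the value is 1."""
    if not 0.0 <= x1 <= 1.0:
        raise ValueError("x1 must lie in [0, 1]")
    return (x1, 1.0 - x1)   # components are non-negative and sum to unity

print(bittor(1.0))   # (1.0, 0.0): a certain '1'
print(bittor(0.5))   # (0.5, 0.5): equal probabilities, zero information
```

The exact bittors (1, 0) and (0, 1) recover the classical bit values, while every intermediate pair expresses a partial state of knowledge.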

Discussion III - Postulates Postulate 2: Definition of Logic The next most foundational concept is that the fundamental entities of a system must have rules of combination, and these rules must provide a generalization of Boolean logic (the combinational product rules for AND, OR, NOR, NOT, etc.). Realizing that the components of x represent probabilities, and that independent probabilities multiply, the rule must be a product z = xy where, if x and y are bittors, then z is also a bittor (components non-negative and summing to unity). We postulate z_i = c^α_ijk x_j y_k, generalizing the normal logic (computer) operations of AND, OR, NOR, NAND, NOT, etc., where α = 1, 2, ..., 16 labels the 16 different ways of partitioning the four products x_j y_k into two components (e.g. for AND we have z1 = x1 y1 and z0 = x1 y0 + x0 y1 + x0 y0). The unary NOT operation reverses (x1, x0) into (x0, x1), with an obvious symmetric off-diagonal matrix form.
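A minimal sketch of two of these operations, the AND partition given in the slide and the unary NOT, under the probability-product interpretation; the function names are illustrative:

```python
def AND(x, y):
    """Bittor AND: z1 = x1*y1; z0 collects the other three products."""
    x1, x0 = x
    y1, y0 = y
    return (x1 * y1, x1 * y0 + x0 * y1 + x0 * y0)

def NOT(x):
    """Unary NOT: swap the two components of the bittor."""
    x1, x0 = x
    return (x0, x1)

print(AND((1.0, 0.0), (1.0, 0.0)))  # (1.0, 0.0): classical 1 AND 1 = 1
print(AND((0.5, 0.5), (1.0, 0.0)))  # (0.5, 0.5): uncertainty propagates
print(NOT((0.8, 0.2)))              # (0.2, 0.8)
```

Because each output component is a sum of products of probabilities, and all four products x_j y_k are distributed between z1 and z0, the result is again a bittor: non-negative components summing to one.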

Discussion III - Postulates Postulate 2a: Linear Combination A second operation, similar to a sum, is defined as the weighted linear combination x = a_1 x_1 + a_2 x_2 + ... + a_n x_n, where the x_i are different bittors and the a_i form an n-dimensional Markov Lie monoid representation (i.e. they sum to unity and are non-negative, thus themselves a larger-dimensional bittor). This operation gives a weighted linear combination of bittors, by bittors. The new mathematics thus has 16 independent products and one method of linear 'addition' (combination). Note that bittor objects close under these two operations.

Discussion III - Postulates Postulate 3: Bittor Numbers Just as the binary reals are defined as a sequence of binary values with a binary point, we now define a bittor number to be the outer product of several Markov monoid two-dimensional representations (x1, x0)(y1, y0)... Thus a number is an outer product of Markov monoid representations and is itself such a representation in the product space of Markov transformations. One need only write the upper of the two values in each pair. Also, as one needs only limited accuracy for the 'error', each probability can itself be stored as a short binary value. A bittor number can then take an abbreviated form such as 110.1(011101), where the parenthesized binary digits give the x1 value of the uncertain digit.

Discussion III - Postulates Postulate 4: Bittor Arithmetic Arithmetic is defined with bittors in exactly the same schema as with binary numbers. For example, in adding two bits: 1+0 = 0+1 = 1 and 1+1 = 0+0 = 0 (with a carry of 1 for 1+1). The sum digit is the XOR (exclusive OR) of the bittors: z1 = x1 y0 + x0 y1, showing that the probability of getting a one is the probability that one value is 1 and the other is 0. Likewise, z0 = x1 y1 + x0 y0 gives the probability of getting a 0 upon addition of the two bittors. The other part of addition is the carry, which is computed as z1 = x1 y1, i.e. the AND operation, as both values must be 1 in order to have a carry digit.
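The single-digit addition described above amounts to a half-adder on bittor digits, which can be sketched directly from the XOR and AND partitions (function names are illustrative):

```python
def XOR(x, y):
    """Sum digit: z1 = x1*y0 + x0*y1, z0 = x1*y1 + x0*y0."""
    x1, x0 = x
    y1, y0 = y
    return (x1 * y0 + x0 * y1, x1 * y1 + x0 * y0)

def AND(x, y):
    """Carry digit: z1 = x1*y1; z0 collects the other three products."""
    x1, x0 = x
    y1, y0 = y
    return (x1 * y1, x1 * y0 + x0 * y1 + x0 * y0)

def add_digit(x, y):
    """Half-adder on two bittor digits: (sum, carry)."""
    return XOR(x, y), AND(x, y)

s, c = add_digit((1.0, 0.0), (1.0, 0.0))  # 1 + 1
print(s, c)  # (0.0, 1.0) (1.0, 0.0): sum digit 0, carry 1
```

With uncertain inputs the same formulas propagate the probabilities: for example, adding (0.9, 0.1) and (1.0, 0.0) yields a sum digit of (0.1, 0.9) and a carry of (0.9, 0.1).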

Discussion III - Postulates Postulate 5: Information Defined Shannon's definition assigns an information value of 1 bit to a well-defined binary value, i.e. a definite choice between 1 and 0. It can be shown that the smooth generalization of the Shannon entropy via Renyi's form, I = log2(a(x1^b + x0^b)), determines the constants a and b uniquely to be a = 2 and b = 2, which is already present in the extended bittor logic as the operation EQV of the bittor with itself. Thus I = log2(2(x1^2 + x0^2)) (the log base 2 of self-equivalence) is defined to be the information content of a bittor. The information value in an entire bittor number is then the sum of the information in each component bittor.
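A direct sketch of this information measure, using the I = log2(2(x1^2 + x0^2)) form stated above (the function name is my own):

```python
import math

def info(x):
    """Information content of a bittor: log2 of its self-equivalence,
    I = log2(2*(x1^2 + x0^2))."""
    x1, x0 = x
    return math.log2(2.0 * (x1**2 + x0**2))

print(info((1.0, 0.0)))  # 1.0 bit: a well-defined value
print(info((0.5, 0.5)))  # 0.0 bits: total uncertainty
```

The measure interpolates smoothly: an exact bittor carries one full bit, the maximally uncertain bittor (0.5, 0.5) carries none, and intermediate bittors carry a fractional number of bits.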

Discussion IV Observation: Bittor operations, numbers, and information smoothly reduce to the standard numbers, logic, and binary system in traditional use when the bittors are exact, (1,0) or (0,1). Consequently the proposed system is a smooth generalization of the current logic and arithmetic. Furthermore, the bittor structures include full Boolean logic, binary values, and the existing number systems (integer, rational, real, and complex) with their existing mathematical operations, all as a special case.
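This reduction can be checked mechanically. The following quick check (mine, not from the slides) verifies that the bittor AND agrees with Boolean AND whenever the inputs are exact:

```python
# Verify the reduction claim for AND: on exact bittors, the
# probabilistic product rule reproduces the Boolean truth table.

def AND(x, y):
    x1, x0 = x
    y1, y0 = y
    return (x1 * y1, x1 * y0 + x0 * y1 + x0 * y0)

B = {0: (0.0, 1.0), 1: (1.0, 0.0)}   # exact bittors for the two bit values
for a in (0, 1):
    for b in (0, 1):
        assert AND(B[a], B[b]) == B[a and b]
print("bittor AND agrees with Boolean AND on exact inputs")
```

The same kind of check applies to each of the other 15 product operations, since every partition of the four products x_j y_k reduces to a two-column truth table on exact inputs.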

Technical Summary – I Generalized Logic 1. Basic Entity: I suggest that probability is not a scalar but a component of an ordered 'vector' (n-tuple, a representation space) that behaves as a 'bit vector' or 'bittor' under the Markov Lie monoid of transformations (e.g. {P, (1-P)} = {x1, x0} in two dimensions). These objects generalize the fundamental concepts '1' and '0' of binary logic and arithmetic, allowing continuous intermediary states of logic. 2. A New Mathematics: Boolean logic is generalized to a set of 16 product operations z_i = c^α_ijk x_j y_k, generalizing the normal logic (computer) operations of AND, OR, NOR, NAND, NOT, etc., where α = 1, 2, ..., 16 labels the 16 possible partitions of the four products x_j y_k into two parts. A second operation is defined as the weighted linear combination x = a_1 x_1 + a_2 x_2 + ... + a_n x_n, where the x_i are different bittors and the a_i form an n-dimensional Markov Lie monoid representation (i.e. sum to unity and are non-negative, thus a larger-dimensional bittor).

Technical Summary – II Generalized Arithmetic 3. New Bittor Generalized Numbers: Binary numbers are now defined as outer products of these bittor objects, where '1' = {1,0} and '0' = {0,1}. Total uncertainty is thus {0.5, 0.5}. It is only necessary to show the upper component explicitly to express a value. 4. Arithmetic: Arithmetic operations (+ - * / ^) are now defined by the natural Boolean generalizations between the bittors that comprise a 'number'.

Technical Summary – III Generalized Information 5. Shannon information is normally defined as '1 bit' for a well-defined binary value of 1 or 0. We extend this for a bittor as I = log2(2(P1^2 + P0^2)) via the second-order Renyi entropy. 6. The probability values in the bittor need not have the accuracy of a real number but only that of a binary number of desired length (perhaps 5 or 6 bits), e.g. x = (00110, 01010).

Discussion Summary: The fundamental objects of observational information are proposed to be Markov Lie monoid two-dimensional representations (called bittors, short for 'bit vectors'), with the components of the bittor interpreted as the probabilities of being 1 and 0 respectively. The generalized logic defined by z_i = c^α_ijk x_j y_k provides an entirely new kind of mathematics among the bittor objects, consisting of 16 different products plus bittor-weighted linear combinations. Bittors and bittor-type numbers are capable of automated management of uncertainty and constitute a new kind of mathematical structure that generalizes the existing number systems and contains them.

Proposal We propose that this new type of mathematics be used to partially manage logical and numerical uncertainty by building it into computers in a fundamental way. We propose that this be done in three stages: (1) simulation in C/C++ or Java code for demonstration and testing, (2) building it in an integral way for automatic use in C/C++ and Java programming, and (3) incorporating it in hardware as a chip accelerator to speed the processing. We propose that experts from different areas consider the impact of such an extension to our current mathematics, identify problems, and find new ways of utilizing these structures.

End Technical Overview