Chapter 2: Relational Database Design and Normalization (August 2016)


1 Chapter 2: Relational Database Design and Normalization (August 2016)

2 Conventional Files Versus the Database

Database Design in Perspective
– To fully exploit the advantages of database technology, a database must be carefully designed.
– The end product, called a database schema, is a technical blueprint of the database.
– Database design translates the data models that were developed for the system users during the definition phase into data structures supported by the chosen database technology.
– Subsequent to database design, system builders will construct those data structures using the language and tools of the chosen database technology.

3 Databases

– Database Architecture: A systems analyst, or database analyst, designs the structure of the data in terms of record types, fields contained in those record types, and relationships that exist between record types. These structures are defined to the database management system using its data definition language.
– A data definition language (or DDL) is used to physically establish those record types, fields, and structural relationships in the DBMS. Additionally, the DDL defines views of the database.
– Views restrict the portion of a database that may be used or accessed by different users and programs.
– The DDL definitions are recorded in a permanent data repository.
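The DDL concepts above can be sketched with SQLite, driven from Python's built-in sqlite3 module. The table, field, and view names here are illustrative assumptions, not taken from the text:

```python
import sqlite3

# In-memory database for demonstration purposes.
conn = sqlite3.connect(":memory:")

# DDL: establish record types (tables), their fields, and a structural
# relationship (the foreign key from orders to customer).
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        total       REAL
    )
""")

# The DDL also defines views, which restrict the portion of the
# database that a given user or program may access.
conn.execute("""
    CREATE VIEW customer_names AS
    SELECT customer_id, name FROM customer
""")
```

In SQLite the resulting definitions are recorded in the built-in sqlite_master catalog, which plays the role of the permanent data repository mentioned above.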


5 Database Concepts

– Database Architecture: Some data dictionaries include formal, elaborate software that helps database specialists track metadata – the data about the data – such as record and field definitions, synonyms, data relationships, validation rules, help messages, and so forth. The database management system also provides a data manipulation language to access and use the database in applications.
– A data manipulation language (or DML) is used to create, read, update, and delete records in the database, and to navigate between different records and types of records. The DBMS and DML hide the details of how records are organized and allocated on disk.
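The four DML operations (create, read, update, delete) can be sketched with SQLite's DML; the table and values are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")

# Create a record.
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
# Read it back.
before = conn.execute(
    "SELECT name FROM customer WHERE customer_id = 1").fetchone()[0]
# Update it.
conn.execute("UPDATE customer SET name = 'Grace' WHERE customer_id = 1")
after = conn.execute(
    "SELECT name FROM customer WHERE customer_id = 1").fetchone()[0]
# Delete it.
conn.execute("DELETE FROM customer WHERE customer_id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
```

Note that nothing in these statements says where or how the records are stored; that is exactly the detail the DBMS hides.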

6 Database Concepts

– Database Architecture: Many DBMSs don't require the use of a DDL to construct the database, or a DML to access the database.
– They provide their own tools and commands to perform those tasks. This is especially true of PC-based DBMSs.
– Many DBMSs also include proprietary report-writing and inquiry tools that allow users to access and format data without directly using the DML.
– Some DBMSs include a transaction processing monitor (or TP monitor) that manages on-line access to the database and ensures that transactions impacting multiple tables are fully processed as a single unit.

7 Database Concepts

– Relational Database Management Systems: There are several types of database management systems, and they can be classified according to the way they structure records. Early database management systems organized records in hierarchies or networks implemented with indexes and linked lists. Relational databases implement data in a series of tables that are 'related' to one another via foreign keys.
– Files are seen as simple two-dimensional tables, also known as relations.
– The rows are records.
– The columns correspond to fields.


10 Database Concepts for the Systems Analyst

– Relational Database Management Systems: Both the DDL and the DML of most relational databases are called SQL (Structured Query Language).
– SQL supports not only queries, but complete database creation and maintenance.
– A fundamental characteristic of relational SQL is that commands return a 'set' of records, not necessarily just a single record (as in non-relational database and file technology).
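The set-at-a-time behavior can be seen in a small sketch (table and data are illustrative assumptions): one SELECT returns every matching row, not one record at a time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 20)])

# A single SELECT returns the whole set of rows that satisfy the
# condition, not just the first match.
matching = [row[0] for row in conn.execute(
    "SELECT order_id FROM orders WHERE customer_id = 10 ORDER BY order_id")]
```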

11 Database Concepts for the Systems Analyst

– Relational Database Management Systems: High-end relational databases also extend the SQL language to support triggers and stored procedures.
– Triggers are programs attached to a table that are automatically invoked by updates to that table.
– Stored procedures are programs stored within the database that can be called from an application program.
– Both triggers and stored procedures are reusable because they are stored with the tables themselves. This eliminates the need for application programmers to create the equivalent logic within each application that uses the tables.
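SQLite supports triggers (though not stored procedures), so the trigger idea can be sketched there. The audit-log scenario below is a hypothetical example, not one from the text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (account_id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit_log (account_id INTEGER, old_balance REAL,
                            new_balance REAL);

    -- The trigger is stored with the table and fires automatically on
    -- update, so no application has to duplicate the auditing logic.
    CREATE TRIGGER log_balance_change
    AFTER UPDATE OF balance ON account
    BEGIN
        INSERT INTO audit_log
        VALUES (OLD.account_id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO account VALUES (1, 100.0)")
conn.execute("UPDATE account SET balance = 150.0 WHERE account_id = 1")
log = conn.execute("SELECT * FROM audit_log").fetchall()
```

The UPDATE statement never mentions audit_log; the trigger's reuse across every application that touches the table is exactly the benefit described above.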

12 Data Analysis for Database Design

What is a Good Data Model?
– A good data model is simple. As a general rule, the data attributes that describe an entity should describe only that entity.
– A good data model is essentially non-redundant. This means that each data attribute, other than foreign keys, describes at most one entity.
– A good data model should be flexible and adaptable to future needs. We should make data models as application-independent as possible to encourage database structures that can be extended or modified without impact to current programs.

13 Data Analysis for Database Design

Data Analysis
– Data analysis is a process that prepares a data model for implementation as a simple, non-redundant, flexible, and adaptable database. The specific technique is called normalization.
– Normalization is a technique that organizes data attributes such that they are grouped together to form stable, flexible, and adaptive entities.

14 Data Analysis for Database Design

Data Analysis
– Normalization is a three-step technique that places the data model into first normal form, second normal form, and third normal form.
– An entity is in first normal form (1NF) if there are no attributes that can have more than one value for a single instance of the entity.
– An entity is in second normal form (2NF) if it is already in 1NF and the values of all non-primary-key attributes are dependent on the full primary key – not just part of it.
– An entity is in third normal form (3NF) if it is already in 2NF and the values of its non-primary-key attributes are not dependent on any other non-primary-key attributes.

15 Data Analysis for Database Design

Normalization Example – First Normal Form:
– The first step in data analysis is to place each entity into 1NF.
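The original example slides are not reproduced in this transcript, so here is a hypothetical 1NF worked example: a STUDENT entity with a multivalued 'courses' attribute, decomposed so that no attribute holds more than one value per instance.

```python
# Hypothetical STUDENT entity, not in 1NF: 'courses' holds several
# values for a single student instance (a repeating group).
unnormalized = {
    ("S1", "Ada"):   ["DB101", "CS202"],
    ("S2", "Grace"): ["DB101"],
}

# 1NF: move the repeating group into its own entity, keyed by
# (student_id, course_code), so every attribute is single-valued.
students = [(student_id, name) for (student_id, name) in unnormalized]
enrollments = [(student_id, course)
               for (student_id, _name), courses in unnormalized.items()
               for course in courses]
```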


20 Data Analysis for Database Design

Normalization Example – Second Normal Form:
The next step of data analysis is to place the entities into 2NF.
– It is assumed that you have already placed all entities into 1NF.
– 2NF looks for an anomaly called a partial dependency: an attribute (or attributes) whose value is determined by only part of the primary key.
– Entities that have a single-attribute primary key are already in 2NF.
– Only those entities that have a concatenated key need to be checked.
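A hypothetical partial-dependency example (not from the text): an ORDER-LINE entity keyed by the concatenation (order_id, product_id), where product_name depends on product_id alone.

```python
# ORDER-LINE keyed by (order_id, product_id). product_name is a
# partial dependency: it is determined by product_id alone, not by
# the full concatenated key, so it repeats redundantly.
order_lines = [
    ("O1", "P1", 2, "Widget"),
    ("O1", "P2", 1, "Gadget"),
    ("O2", "P1", 5, "Widget"),   # "Widget" stored again
]

# 2NF decomposition: move the partially dependent attribute into a
# PRODUCT entity keyed by product_id alone.
products = {product_id: name
            for (_order_id, product_id, _qty, name) in order_lines}
order_lines_2nf = [(order_id, product_id, qty)
                   for (order_id, product_id, qty, _name) in order_lines]
```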


22 Data Analysis for Database Design

Normalization Example – Third Normal Form:
Entities are assumed to be in 2NF before beginning 3NF analysis. Third normal form analysis looks for two types of problems: derived data and transitive dependencies.
– In both cases, the fundamental error is that non-key attributes are dependent on other non-key attributes.
– Derived attributes are those whose values can either be calculated from other attributes or derived through logic from the values of other attributes.
– A transitive dependency exists when a non-key attribute is dependent on another non-key attribute (other than by derivation).
– Transitive analysis is only performed on those entities that do not have a concatenated key.

23 Data Analysis for Database Design

Normalization Example – Third Normal Form (continued):
– A transitive dependency exists when a non-key attribute is dependent on another non-key attribute (other than by derivation).
  » This error usually indicates that an undiscovered entity is still embedded within the problem entity.
– Transitive analysis is only performed on those entities that do not have a concatenated key.
"An entity is said to be in third normal form if every non-primary-key attribute is dependent on the primary key, the whole primary key, and nothing but the primary key."
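A hypothetical transitive-dependency example (not from the text): an EMPLOYEE entity carrying dept_name, which depends on the non-key attribute dept_id rather than on the primary key emp_id.

```python
# EMPLOYEE with a transitive dependency: dept_name depends on dept_id,
# which is itself a non-key attribute. The undiscovered DEPARTMENT
# entity is still embedded in EMPLOYEE.
employees = [
    ("E1", "D1", "Sales"),
    ("E2", "D1", "Sales"),      # "Sales" stored again
    ("E3", "D2", "Research"),
]

# 3NF decomposition: extract the embedded entity into its own table.
departments = {dept_id: dept_name
               for (_emp_id, dept_id, dept_name) in employees}
employees_3nf = [(emp_id, dept_id)
                 for (emp_id, dept_id, _dept_name) in employees]
```

After the decomposition, every remaining EMPLOYEE attribute depends on the key, the whole key, and nothing but the key.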


26 Data Analysis for Database Design

Simplification by Inspection:
– When several analysts work on a common application, it is not unusual to create problems that won't be taken care of by normalization.
– These problems are best solved through simplification by inspection, a process wherein a data entity in 3NF is further simplified by such efforts as addressing subtle data redundancy.

27 Data Analysis for Database Design

– CASE Support for Normalization: Most CASE tools can only normalize to first normal form. They accomplish this in one of two ways:
  » They look for many-to-many relationships and resolve those relationships into associative entities.
  » They look for attributes specifically described as having multiple values for a single entity instance.
– It is exceedingly difficult for a CASE tool to identify second and third normal form errors. That would require the CASE tool to have the intelligence to recognize partial and transitive dependencies.

28 Database Design

The Database Schema
– The design of a database is depicted as a special model called a database schema. A database schema is the physical model or blueprint for a database. It represents the technical implementation of the logical data model.
– A relational database schema defines the database structure in terms of tables, keys, indexes, and integrity rules.
– A database schema specifies details based on the capabilities, terminology, and constraints of the chosen database management system.

29 Database Design

The Database Schema – Rules and guidelines for transforming the logical data model into a physical relational database schema:
1. Each fundamental, associative, and weak entity is implemented as a separate table.
   – The primary key is identified as such and implemented as an index into the table.
   – Each secondary key is implemented as its own index into the table.
   – Each foreign key will be implemented as such.
   – Attributes will be implemented with fields, which correspond to columns in the table.

30 Database Design

The Database Schema – Rules and guidelines (continued):
– The following technical details must usually be specified for each attribute:
  » Data type. Each DBMS supports different data types, and different terms for those data types.
  » Size of the field. Different DBMSs express the precision of real numbers differently.
  » NULL or NOT NULL. Must the field have a value before the record can be committed to storage?
  » Domains. Many DBMSs can automatically edit data to ensure that fields contain legal values.
  » Default. Many DBMSs allow a default value to be automatically set in the event that a user or programmer submits a record without a value.
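These per-attribute details map directly onto column clauses in a CREATE TABLE statement. A sketch in SQLite (table and values are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,        -- data type
        name   TEXT NOT NULL,              -- NULL vs. NOT NULL
        salary REAL CHECK (salary >= 0),   -- domain: legal values only
        status TEXT DEFAULT 'active'       -- default value
    )
""")

# The omitted 'status' field receives its default automatically.
conn.execute(
    "INSERT INTO employee (emp_id, name, salary) VALUES (1, 'Ada', 50000)")
status = conn.execute(
    "SELECT status FROM employee WHERE emp_id = 1").fetchone()[0]
```

An INSERT that violates the CHECK domain rule (for example, a negative salary) is rejected by the DBMS rather than by application code.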

31 Database Design

The Database Schema – Rules and guidelines (continued):
2. Supertype/subtype entities present additional options:
   – Most CASE tools do not currently support object-like constructs such as supertypes and subtypes.
   – Most CASE tools default to creating a separate table for each entity supertype and subtype.
   – If the subtypes are of similar size and data content, a database administrator may elect to collapse the subtypes into the supertype to create a single table.
3. Evaluate and specify referential integrity constraints.

32 Database Design

Data and Referential Integrity
– There are at least three types of data integrity that must be designed into any database: key integrity, domain integrity, and referential integrity.
– Key Integrity: Every table should have a primary key (which may be concatenated).
  » The primary key must be controlled such that no two records in the table have the same primary key value.
  » The primary key for a record must never be allowed to have a NULL value.

33 Database Design

Data and Referential Integrity
– Domain Integrity: Appropriate controls must be designed to ensure that no field takes on a value outside the range of legal values.
– Referential Integrity: A referential integrity error exists when a foreign key value in one table has no matching primary key value in the related table.

34 Database Design

Data and Referential Integrity – Referential integrity is specified in the form of deletion rules:
– No restriction. Any record in the table may be deleted without regard to any records in any other tables.
– Delete: Cascade. A deletion of a record in the table must be automatically followed by the deletion of matching records in a related table.
– Delete: Restrict. A deletion of a record in the table must be disallowed until any matching records are deleted from a related table.

35 Database Design

Data and Referential Integrity – Deletion rules (continued):
– Delete: Set Null. A deletion of a record in the table must be automatically followed by setting any matching keys in a related table to the value NULL.
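Relational DBMSs express these deletion rules as ON DELETE actions (CASCADE, RESTRICT, SET NULL) on the foreign key. A sketch of Delete: Cascade in SQLite, which enforces foreign keys only when the pragma is enabled; the tables are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs on request
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER
            REFERENCES customer(customer_id) ON DELETE CASCADE
    );
""")
conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO orders VALUES (100, 1)")

# Delete: Cascade - removing the parent automatically removes the
# matching child records, so no orphaned foreign keys remain.
conn.execute("DELETE FROM customer WHERE customer_id = 1")
orphaned = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Swapping ON DELETE CASCADE for ON DELETE RESTRICT or ON DELETE SET NULL yields the other two rules described above.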

36 Database Design

Roles
– Some database shops insist that no two fields have exactly the same name. This presents an obvious problem with foreign keys.
– A role name is an alternate name for a foreign key that clearly distinguishes the purpose the foreign key serves in the table.
– The decision to require role names or not is usually established by the data or database administrator.
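A hypothetical illustration (not from the text): a FLIGHT table references AIRPORT twice, so the two foreign keys need role names to distinguish the purpose each one serves.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Role names 'origin_airport_code' and 'destination_airport_code'
# replace two identically named 'airport_code' foreign keys.
conn.executescript("""
    CREATE TABLE airport (airport_code TEXT PRIMARY KEY);
    CREATE TABLE flight (
        flight_no                TEXT PRIMARY KEY,
        origin_airport_code      TEXT REFERENCES airport(airport_code),
        destination_airport_code TEXT REFERENCES airport(airport_code)
    );
""")
columns = [row[1] for row in conn.execute("PRAGMA table_info(flight)")]
```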

