Architecture
Database architecture consists of three levels: external, conceptual, and internal. Clearly separating the three levels was a major feature of the relational database model that dominates 21st-century databases.[1]
The external level defines how users understand the organization of the data. A single database can have any number of views at the external level. The internal level defines how the data is physically stored and processed by the computing system. Internal architecture is concerned with cost, performance, scalability, and other operational matters. The conceptual level is a level of indirection between internal and external. It provides a common view of the database, uncomplicated by details of how the data is stored or managed, and able to unify the various external views into a coherent whole.[1]
Database management systems
A database management system (DBMS) consists of software that operates databases, providing storage, access, security, backup, and other facilities. Database management systems can be categorized according to the database model that they support, such as relational or XML; the type(s) of computer they support, such as a server cluster or a mobile phone; the query language(s) that access the database, such as SQL or XQuery; and their performance trade-offs, such as maximum scale or maximum speed. Some DBMSs cover more than one entry in these categories, e.g., supporting multiple query languages. Examples of commonly used DBMSs include MySQL, PostgreSQL, Microsoft Access, SQL Server, FileMaker, Oracle, dBASE, Clipper, and FoxPro. Almost every DBMS comes with an Open Database Connectivity (ODBC) driver that allows the database to interoperate with other databases and applications.
Components of DBMS
Most DBMSs as of 2009 implement a relational model.[2] Other DBMSs, such as object DBMSs, offer specific features for more specialized requirements. Their components are similar, but not identical.
RDBMS components
- Sublanguages—Relational DBMSs (RDBMSs) include a Data Definition Language (DDL) for defining the structure of the database, a Data Control Language (DCL) for defining security and access controls, and a Data Manipulation Language (DML) for querying and updating data (a minimal sketch of these sublanguages follows this list).
- Interface drivers—These drivers are code libraries that provide methods to prepare statements, execute statements, fetch results, etc. Examples include ODBC, JDBC, MySQL/PHP, and Firebird/Python.
- SQL engine—This component interprets and executes the DDL, DCL, and DML statements. It includes three major components (compiler, optimizer, and executor).
- Transaction engine—Ensures that multiple SQL statements either succeed or fail as a group, as the application dictates.
- Relational engine—Relational objects such as Table, Index, and Referential integrity constraints are implemented in this component.
- Storage engine—This component stores and retrieves data from secondary storage and manages transaction commit and rollback, backup and recovery, etc.
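As a minimal sketch of the sublanguages, consider the example below, which uses Python's built-in sqlite3 module; the module also plays the role of an interface driver in the sense above, preparing statements, executing them, and fetching results. The table and column names are invented for illustration, and because SQLite is an embedded engine without user accounts, DCL appears only as a comment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the structure of the database.
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, last_name TEXT)")

# DML: update and query the data (the driver prepares the statement
# and binds the parameter).
conn.execute("INSERT INTO employee (last_name) VALUES (?)", ("codd",))
for row in conn.execute("SELECT id, last_name FROM employee"):
    print(row)  # (1, 'codd')

# DCL: on a server RDBMS one would issue, for example,
#   GRANT SELECT ON employee TO some_user;
# SQLite has no notion of users, so this step is omitted here.
conn.commit()
```

On a client/server system, the same calls would travel through a driver such as ODBC or JDBC to the server rather than into an embedded library.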
ODBMS components
An object DBMS (ODBMS) has transaction and storage components that are analogous to those in an RDBMS. Some ODBMSs handle DDL, DCL, and update tasks differently: instead of using sublanguages, they provide APIs for these purposes. They typically include a sublanguage and accompanying engine for processing queries with interpretive statements analogous to, but not the same as, SQL. Example object query languages are OQL, LINQ, JDOQL, JPAQL, and others. The query engine returns collections of objects instead of relational rows.
Operational database
These databases store detailed data about the operations of an organization. They are typically organized by subject matter and process relatively high volumes of updates using transactions. Essentially every major organization on earth uses such databases. Examples include customer databases that record contact, credit, and demographic information about a business's customers; personnel databases that hold information such as salary, benefits, and skills data about employees; manufacturing databases that record details about product components and parts inventory; and financial databases that keep track of the organization's money, accounting, and financial dealings.
Data warehouse
Data warehouses archive data from operational databases and often from external sources such as market research firms. Operational data often undergoes transformation on its way into the warehouse: it is summarized, anonymized, reclassified, etc. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to UPC codes so that it can be compared with ACNielsen data. Some basic and essential components of data warehousing include retrieving and analyzing data, and transforming, loading, and managing data so as to make it available for further use.
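The following is a sketch of the kind of transformation described above, using Python's sqlite3 as a stand-in for warehouse ETL tooling; the tables, internal codes, and the UPC value are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (internal_code TEXT, sold_on TEXT, amount REAL)")
conn.execute("CREATE TABLE code_to_upc (internal_code TEXT, upc TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("X1", "2009-06-01", 10.0), ("X1", "2009-06-03", 5.0)])
conn.execute("INSERT INTO code_to_upc VALUES ('X1', '012345678905')")

# On the way into the warehouse: translate internal product codes to
# UPCs and summarize individual sales into weekly totals.
weekly = conn.execute("""
    SELECT m.upc, strftime('%Y-%W', s.sold_on) AS week, SUM(s.amount) AS total
    FROM sales s JOIN code_to_upc m USING (internal_code)
    GROUP BY m.upc, week
""").fetchall()
print(weekly)  # [('012345678905', '2009-22', 15.0)]
```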
Analytical database
Analysts may do their work directly against a data warehouse, or create a separate analytic database for Online Analytical Processing (OLAP). For example, a company might extract sales records to analyze the effectiveness of advertising and other sales promotions at an aggregate level.
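As a sketch of a separate analytic database, the example below uses sqlite3's ATTACH to copy an aggregate-level summary of an invented operational sales table into a physically distinct database; real OLAP products add far richer multidimensional operations.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (promo TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("spring_ad", 10.0), ("spring_ad", 20.0), (None, 5.0)])

# A second, separate database holds the aggregate-level extract.
conn.execute("ATTACH DATABASE ':memory:' AS olap")
conn.execute("""
    CREATE TABLE olap.promo_totals AS
    SELECT promo, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales GROUP BY promo
""")
print(conn.execute("SELECT * FROM olap.promo_totals").fetchall())
# e.g. [(None, 1, 5.0), ('spring_ad', 2, 30.0)]
```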
Distributed database
These are databases of local work-groups and departments at regional offices, branch offices, manufacturing plants and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user’s own site.
End-user database
These databases consist of data developed by individual end-users. Examples include collections of documents in spreadsheets and word-processing files, downloaded files, or even a personal baseball card collection.
External database
These databases contain data collected for use across multiple organizations, either freely or via subscription. The Internet Movie Database is one example.
Hypermedia databases
The World Wide Web can be thought of as a database, albeit one spread across millions of independent computing systems. Web browsers "process" this data one page at a time, while web crawlers and other software provide the equivalent of database indexes to support search and other activities.
Database models
Main article: Database model
Post-relational database models
Products offering a more general data model than the relational model are sometimes classified as post-relational.[3] Alternate terms include "hybrid database", "object-enhanced RDBMS", and others. The data model in such products incorporates relations but is not constrained by E.F. Codd's Information Principle, which requires that all information in the database must be cast explicitly in terms of values in relations and in no other way.[4]
Some of these extensions to the relational model integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes.
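For contrast, a purely relational system must encode such a graph as a relation of edges and traverse it with explicit, often recursive, queries. The sketch below does this with sqlite3 and an invented edge table; a product with a graph-aware data model could store and navigate the same structure natively.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edge (parent TEXT, child TEXT)")
conn.executemany("INSERT INTO edge VALUES (?, ?)",
                 [("a", "b"), ("a", "c"), ("b", "d")])

# Walking the directed graph requires a recursive query over the
# edge relation.
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 'a'
        UNION
        SELECT e.child FROM edge e JOIN reachable r ON e.parent = r.node
    )
    SELECT node FROM reachable
""").fetchall()
print(rows)  # e.g. [('a',), ('b',), ('c',), ('d',)]
```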
Some post-relational products extend relational systems with non-relational features. Others arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational.
Object database models
In recent years, the object-oriented paradigm has been applied in areas such as engineering and spatial databases, telecommunications, and various scientific domains. The combination of object-oriented programming and database technology led to this new kind of database. These databases attempt to bring the database world and the application-programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example, as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.
A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application-programming side, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not provide language-level functionality for finding objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.
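Here is a toy sketch of the application-programming approach, using Python's standard shelve module to make ordinary objects persistent; the Customer class and the file name are invented. Note how, absent a query language, finding objects by content degenerates into a manual scan, which is exactly the gap object query languages fill.

```python
import shelve

class Customer:
    def __init__(self, name, credit_limit):
        self.name = name
        self.credit_limit = credit_limit

# Store objects with their native type; no translation into rows.
with shelve.open("customers") as db:
    db["c1"] = Customer("acme", 5000)
    db["c2"] = Customer("initech", 500)

with shelve.open("customers") as db:
    # Content-based lookup without a query language: scan every object.
    rich = [c.name for c in db.values() if c.credit_limit > 1000]
    print(rich)  # ['acme']
```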
Storage structures
Databases may store relational tables and indexes in memory or on hard disk in one of many forms, such as ordered or unordered flat files, ISAM files, heaps, hash buckets, or B+ trees.
Object databases use a range of storage mechanisms. Some use virtual memory-mapped files to make the native language (C++, Java, etc.) objects persistent. This can be highly efficient, but it can make multi-language access more difficult. Others disassemble objects into fixed- and varying-length components that are then clustered in fixed-size blocks on disk and reassembled into the appropriate format in either the client or server address space. Another popular technique involves storing the objects in tuples (much like a relational database), which the database server then reassembles into objects for the client. Other techniques include clustering by category (such as grouping data by month or location), storing pre-computed query results (known as materialized views), and partitioning data by range (e.g., a date range) or by hash.
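A sketch of the pre-computed-query-result idea follows. SQLite has no materialized-view statement, so the summary is kept as an ordinary table and refreshed on demand; the table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("east", 10.0), ("east", 20.0), ("west", 5.0)])

# The pre-computed result, stored so queries need not re-aggregate.
conn.execute("""CREATE TABLE region_totals AS
                SELECT region, SUM(amount) AS total FROM orders GROUP BY region""")

def refresh_region_totals():
    # Re-derive the stored summary after the base table changes.
    conn.execute("DELETE FROM region_totals")
    conn.execute("""INSERT INTO region_totals
                    SELECT region, SUM(amount) FROM orders GROUP BY region""")

print(conn.execute("SELECT * FROM region_totals").fetchall())
# e.g. [('east', 30.0), ('west', 5.0)]
```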
Memory management and storage topology can be important design choices for database designers as well. Whereas normalization is used to reduce storage requirements and improve database design, denormalization is often used to reduce join complexity and query execution time.[5]
Indexing
Indexing is a technique for improving database performance. The many types of index share the common property that they eliminate the need to examine every entry when running a query. In large databases, this can reduce query time or cost by orders of magnitude. The simplest form of index is a sorted list of values that can be searched using a binary search, with an adjacent reference to the location of the entry, analogous to the index in the back of a book. The same data can have multiple indexes (an employee database could be indexed by last name and hire date).
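The effect is easy to see with sqlite3's EXPLAIN QUERY PLAN and an invented employee table: before the index the planner scans every row; afterwards it searches the index, while query results themselves are unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (last_name TEXT, hire_date TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("smith", "2001-04-01"), ("jones", "2003-09-15")])

query = "SELECT * FROM employee WHERE last_name = 'smith'"

# Without an index: a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX idx_employee_last_name ON employee (last_name)")

# With an index: a search, analogous to the index in the back of a book.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```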
Indexes affect performance, but not results. Database designers can add or remove indexes without changing application logic, reducing maintenance costs as the database grows and database usage evolves.
Given a particular query, the DBMS's query optimizer is responsible for devising the most efficient strategy for finding matching data. The optimizer decides which index or indexes to use, how to combine data from different parts of the database, how to provide data in the order requested, etc.
Indexes speed up data access, but they consume space in the database and must be updated each time the data are altered. They therefore speed retrieval but slow data maintenance. These two properties determine whether a given index is worth its cost.
Transactions
Most DBMSs provide some form of support for transactions, which allow multiple data items to be updated in a consistent fashion, such that updates that are part of a transaction succeed or fail in unison. The so-called ACID rules, summarized here, characterize this behavior:
- Atomicity: Either all the data changes in a transaction must happen, or none of them. The transaction must be completed, or else it must be undone (rolled back).
- Consistency: Every transaction must preserve the declared consistency rules for the database.
- Isolation: Two concurrent transactions cannot interfere with one another. Intermediate results within one transaction must remain invisible to other transactions. The most extreme form of isolation is serializability, meaning that transactions that take place concurrently could instead be performed serially without affecting the ultimate result.
- Durability: Completed transactions cannot be aborted later or their results discarded. They must persist through (for instance) DBMS restarts.
In practice, many DBMSs allow the selective relaxation of these rules to balance perfect behavior with optimum performance.
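A minimal atomicity sketch with sqlite3 and an invented account table: the connection's context manager commits on success and rolls back on an exception, so a failure partway through the transfer leaves the database unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # one transaction: commit on success, roll back on error
        conn.execute("UPDATE account SET balance = balance - 50 "
                     "WHERE name = 'alice'")
        raise RuntimeError("simulated crash between the two updates")
except RuntimeError:
    pass

# Atomicity: the half-finished transfer was rolled back in unison.
print(conn.execute("SELECT * FROM account").fetchall())
# [('alice', 100), ('bob', 0)]
```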
Replication
Database replication involves maintaining multiple copies of a database on different computers, to allow more users to access it, or to allow a secondary site to take over immediately if the primary site stops working. Some DBMSs piggyback replication on top of their transaction logging facility, applying the primary's log to the secondary in near real time. Database clustering is a related concept for handling larger databases and user communities by employing a cluster of multiple computers to host a single database, and it can use replication as part of its approach.[6][7]
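Real replication ships the transaction log continuously; as a crude stand-in, the sketch below copies a primary sqlite3 database into a secondary with the standard backup API, producing the kind of snapshot a secondary site could take over from. The table is invented for illustration.

```python
import sqlite3

primary = sqlite3.connect(":memory:")
primary.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
primary.execute("INSERT INTO orders (total) VALUES (19.99)")
primary.commit()

# Copy the primary into a secondary that could serve reads or take over.
secondary = sqlite3.connect(":memory:")
primary.backup(secondary)

print(secondary.execute("SELECT * FROM orders").fetchall())  # [(1, 19.99)]
```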
Security
Database security denotes the system, processes, and procedures that protect a database from unauthorized activity.
- Access control manages who can connect to the database via authentication and what they can do via authorization.
- Auditing records information about database activity: who, what, when, and possibly where.
- Encryption protects data at the lowest possible level by storing and possibly transmitting data in an unreadable form. The DBMS encrypts data when it is added to the database and decrypts it when returning query results. This process can occur on the client side of a network connection to prevent unauthorized access at the point of use.
Confidentiality
Laws and regulations govern the release of information from some databases, protecting medical histories, driving records, telephone logs, etc.
Locking
When a transaction modifies a resource, the DBMS stops other transactions from also modifying it, typically by locking it. Locks also provide one way to ensure that data does not change while a transaction is reading it, or even that it does not change until a transaction that has read it completes.
Granularity
Locks can be coarse, covering an entire database; fine-grained, covering a single data item; or intermediate, covering a collection of data such as all the rows in an RDBMS table.
Lock types
Locks can be shared[9] or exclusive, and can lock out readers and/or writers. Locks can be created implicitly by the DBMS when a transaction performs an operation, or explicitly at the transaction's request.
Shared locks allow multiple transactions to lock the same resource. The lock persists until all such transactions complete. Exclusive locks are held by a single transaction and prevent other transactions from locking the same resource.
Read locks are usually shared, and prevent other transactions from modifying the resource. Write locks are exclusive, and prevent other transactions from modifying the resource. On some systems, write locks also prevent other transactions from reading the resource.
The DBMS implicitly locks data when it is updated, and may also do so when it is read. Transactions explicitly lock data to ensure that they can complete without a deadlock or other complication. Explicit locks may be useful for some administrative tasks.[10][11]
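A small illustration with sqlite3, whose locks are coarse (whole-database, see Granularity above): BEGIN IMMEDIATE explicitly requests the write lock up front, instead of relying on the implicit lock taken at the first update, so the transaction cannot stall later waiting to upgrade a read lock. The file and table names are invented.

```python
import sqlite3

# isolation_level=None keeps the driver out of the way, so transactions
# and locks are controlled entirely by explicit SQL.
conn = sqlite3.connect("inventory.db", timeout=1.0, isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS part (sku TEXT, qty INTEGER)")

# Explicit lock request: claim the write lock before doing any work.
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO part VALUES ('bolt-m3', 100)")
conn.execute("COMMIT")  # releases the lock
```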
Locking can significantly affect database performance, especially with large and complex transactions in highly concurrent environments.
Isolation
Isolation refers to the ability of one transaction to see the results of other transactions. Greater isolation typically reduces performance and/or concurrency, leading DBMSs to provide administrative options to reduce isolation. For example, in a database that analyzes trends rather than looking at low-level detail, increased performance might justify allowing readers to see uncommitted changes ("dirty reads").
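The sketch below shows a dirty read using sqlite3's shared-cache mode and its read_uncommitted pragma; whether and how isolation can be lowered is product-specific, so treat this as illustrative only. The table and values are invented.

```python
import sqlite3

uri = "file:trends?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)

writer.execute("CREATE TABLE page_views (page TEXT, views INTEGER)")
writer.execute("BEGIN")
writer.execute("INSERT INTO page_views VALUES ('home', 42)")  # not committed

# Lowered isolation: the reader may see the uncommitted ("dirty") row.
reader.execute("PRAGMA read_uncommitted = true")
print(reader.execute("SELECT * FROM page_views").fetchall())

writer.rollback()  # the row the reader saw never officially existed
```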
Deadlocks
Deadlocks occur when two transactions each require data that the other has already locked exclusively. Deadlock detection is performed by the DBMS, which then aborts one of the transactions and allows the other to complete.
Further reading
- Liu, Ling and M. Tamer Özsu (eds.). Encyclopedia of Database Systems. Springer, 2009. 4100 pp., 60 illus. ISBN 978-0-387-49616-0. Table of contents available at http://refworks.springer.com/mrw/index.php?id=1217
- Beynon-Davies, P. Database Systems, 3rd edition. Palgrave, Houndmills, Basingstoke, 2004.
- Connolly, Thomas and Carolyn Begg. Database Systems. Harlow: Addison-Wesley, 2002.
- Date, C. J. An Introduction to Database Systems, 8th edition. Addison-Wesley, 2003.
- Date, C. J. An Introduction to Database Systems, 5th edition. Addison-Wesley, 1990. ISBN 0-201-51381-1.
- Galindo, J., A. Urrutia, and M. Piattini. Fuzzy Databases: Modeling, Design and Implementation (FSQL guide). Idea Group Publishing, Hershey, USA, 2006.
- Galindo, J. (ed.). Handbook on Fuzzy Information Processing in Databases. Hershey, PA: Information Science Reference (an imprint of Idea Group Inc.), 2008.
- Gray, J. and A. Reuter. Transaction Processing: Concepts and Techniques, 1st edition. Morgan Kaufmann, 1992.
- Kroenke, David M. Database Processing: Fundamentals, Design, and Implementation. Prentice-Hall, 1997. pp. 130-144.
- Kroenke, David M. and David J. Auer. Database Concepts, 3rd edition. New York: Prentice Hall, 2007.
- Lightstone, S., T. Teorey, and T. Nadeau. Physical Database Design: The Database Professional's Guide to Exploiting Indexes, Views, Storage, and More. Morgan Kaufmann, 2007. ISBN 0-12-369389-6.
- O'Brien, James. Management Information Systems. New York, 1999.
- Shih, J. "Why Synchronous Parallel Transaction Replication is Hard, But Inevitable?" White paper, 2007.
- Teorey, T., S. Lightstone, and T. Nadeau. Database Modeling & Design: Logical Design, 4th edition. Morgan Kaufmann, 2005. ISBN 0-12-685352-5.
- Tukey, John W. Exploratory Data Analysis. Reading, MA: Addison-Wesley, 1977.
- Manovich, Lev. Database as a Symbolic Form. Cambridge: MIT Press, 2001.