Data Model Innovations

Author: Sophia Tutorial
Description:

Recall the major innovations of data model evolution from the 1960s to the 2020s.

Tutorial

what's covered
This tutorial explores the major innovations in data model evolution from the 1960s to the 2020s.

As we saw with the shift from a manual file system to a computerized file system, there has always been a drive to find better ways to manage data. Various data models have been created over the computerized era, each one trying to fix some of the shortcomings of the model before it. You will find that many of the so-called newer database concepts bear a significant resemblance to some of the older data models and concepts.

The first generation of data models consists of file systems, used mostly during the 1960s and 1970s. Some are still in use today, mostly on IBM mainframe systems. They focused on record management rather than on handling relationships, an approach that is quite limited, as we have already seen in prior examples.

The second generation of data models was used in the 1970s. These included the hierarchical and network data models, which were the first true database systems. The network data model in particular helped create the foundation for many of the concepts still used in modern databases today.

The third generation of data models started during the mid-1970s with the relational model. This is one you should be familiar with, as it is what you currently work with in PostgreSQL. The foundations of the relational model are meant to keep the concepts simple and hide the complexities of the database from end users. In the relational data model, we use entities and relationships to support relational data modeling. This is where some of the common database names you know come from, including IBM DB2, Oracle, Microsoft SQL Server, and PostgreSQL.
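As a minimal sketch of how entities and relationships look in a relational database such as PostgreSQL (the table and column names below are purely illustrative, not from any particular system):

-- A "customer" entity and an "order" entity, linked by a foreign key relationship
CREATE TABLE customer (
    customer_id   SERIAL PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL
);

CREATE TABLE customer_order (
    order_id    SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer (customer_id),
    order_date  DATE NOT NULL
);

-- The relationship lets us combine both entities in a single query
SELECT c.customer_name, o.order_date
FROM customer AS c
JOIN customer_order AS o ON o.customer_id = c.customer_id;

The end user only works with simple tables and joins; the complexity of how the data is stored and linked stays hidden inside the database engine.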

The fourth generation of data models emerged in the mid-1980s and focused on object-oriented and object/relational databases. These databases were created to support more complex data through the use of objects. This generation also introduced the star schema to help support data warehouses for analytical processing, and web databases became much more common. Some current examples include Versant, Objectivity/DB, and Oracle 12c.
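To illustrate the star schema idea, here is a hedged sketch in PostgreSQL: one central fact table holding the measurements, surrounded by dimension tables that describe them. The names below are chosen for illustration only.

-- Dimension tables describe the "who, what, when" of each sale
CREATE TABLE dim_product (
    product_id   SERIAL PRIMARY KEY,
    product_name VARCHAR(100)
);

CREATE TABLE dim_date (
    date_id   SERIAL PRIMARY KEY,
    full_date DATE
);

-- The fact table sits at the center of the star and stores the measures
CREATE TABLE fact_sales (
    sales_id     SERIAL PRIMARY KEY,
    product_id   INTEGER REFERENCES dim_product (product_id),
    date_id      INTEGER REFERENCES dim_date (date_id),
    sales_amount NUMERIC(10, 2)
);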

The fifth generation of data models was created in the mid-1990s and focused on XML hybrid database management systems. This generation helped support unstructured data, with object/relational models that could store XML documents, effectively a hybrid of relational and object databases on the front end. Many current databases fall under this model, offering hybrid support for both relational and object-oriented data. These databases typically support data into the terabyte range.
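PostgreSQL itself gives a small taste of this hybrid approach through its built-in xml column type, which stores an XML document inside an ordinary relational row. The table below is an illustrative sketch, not part of any particular system.

-- A relational table that also stores a semi-structured XML document per row
CREATE TABLE product_catalog (
    product_id   SERIAL PRIMARY KEY,
    product_name VARCHAR(100),
    spec_sheet   XML  -- unstructured specification data kept alongside relational columns
);

INSERT INTO product_catalog (product_name, spec_sheet)
VALUES ('Laptop', '<specs><cpu>8-core</cpu><ram>16GB</ram></specs>');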

The current generation of emerging data models focuses on NoSQL, which started in the early 2000s and continues to the present. These are the models we discussed in a prior tutorial: key-value stores, wide-column stores, document-oriented stores, and graph stores. These data models are meant to be distributed and highly scalable. They typically hold data in the petabyte range and use a proprietary API for connections. There are many options available, including SimpleDB from Amazon, BigTable from Google, Cassandra from Apache, and MongoDB.
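PostgreSQL is not a NoSQL system, but its jsonb column type can be used to sketch what a document-oriented record looks like. The document structure and names below are purely illustrative.

-- Each row holds a self-describing JSON document, much like a document store
CREATE TABLE user_profiles (
    profile_id SERIAL PRIMARY KEY,
    profile    JSONB
);

INSERT INTO user_profiles (profile)
VALUES ('{"name": "Ada", "interests": ["databases", "math"], "plan": "free"}');

-- Query inside the document without a fixed relational schema
SELECT profile ->> 'name' AS name
FROM user_profiles
WHERE profile -> 'interests' ? 'databases';

True document stores such as MongoDB take this idea further by making the document, rather than the table row, the basic unit of storage and distribution.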


summary
Many different data models have been created over time, each one trying to improve on the issues of the previous models.