
Data Model Innovations

Author: Sophia

what's covered
In this lesson, you will explore the major innovations and data model evolutions from the 1960s to the 2020s. Specifically, this lesson will cover:
  1. Introduction
  2. File Systems of the 1960s
  3. Hierarchical Data Models of the 1970s
  4. Relational Models of the 1970s
  5. Object-Oriented in the 1980s
  6. XML Hybrids in the 1990s
  7. NoSQL after the 2000s

1. Introduction

As you have seen with the shift from a manual file system to a computerized file system, there is always a focus on finding better ways to manage data. There have been many changes in computerized file systems, with each model trying to fix some of the shortcomings of the previous model. You will find that many of the newer database concepts have a significant resemblance to some of the older data models and concepts.

2. File Systems of the 1960s

The first generation of data models was the file system, used mainly from the 1960s into the 1970s, primarily on IBM mainframe systems. File systems focused on managing individual records rather than on the relationships between data. As we've seen in earlier tutorials, this approach was quite limited.
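To make the limitation concrete, here is a minimal sketch (in Python, purely illustrative of the idea rather than of any specific 1960s system) of flat-file record management: each record is a delimited line, every lookup is a full scan, and any relationship to other files would have to be coded by hand in the application.

```python
# Illustrative flat-file "database": one record per line, fields by position.
# The data and field names here are invented for the example.
customers = [
    "10001,Ada Smith,555-0143",
    "10002,Grace Li,555-0198",
]

def find_customer(cust_id):
    """Scan every record; a file system offers no indexes, no query
    language, and no notion of relationships between files."""
    for record in customers:
        fields = record.split(",")
        if fields[0] == cust_id:
            return fields
    return None

print(find_customer("10002"))  # every lookup is a full scan
```

Any change to the record layout (say, adding a field) would break every program that reads the file, which is one of the shortcomings later models set out to fix.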

3. Hierarchical Data Models of the 1970s

The second generation of data models appeared in the 1970s and included the hierarchical and network data models. These were the first true database systems. The network data model, in particular, laid the foundation for many concepts still used in modern databases today.
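The hierarchical model organizes records as a tree: each child segment has exactly one parent, and access always starts at the root. The following sketch (nested Python dictionaries standing in for segments; the parts-assembly example is invented) shows that shape and the root-down navigation such systems required:

```python
# Illustrative hierarchical (tree) layout: each child has exactly one parent.
hierarchy = {
    "ENGINE": {                          # root segment
        "PISTON": {},                    # child segments
        "CRANKSHAFT": {"BEARING": {}},
    }
}

def find_path(tree, target, path=()):
    """Locate a segment by walking down from the root, the way
    hierarchical DBMSs navigated their record trees."""
    for name, children in tree.items():
        new_path = path + (name,)
        if name == target:
            return new_path
        found = find_path(children, target, new_path)
        if found:
            return found
    return None

print(find_path(hierarchy, "BEARING"))  # ('ENGINE', 'CRANKSHAFT', 'BEARING')
```

The network model relaxed the one-parent rule so a record could participate in several relationships, which is part of why its concepts carried forward.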

4. Relational Models of the 1970s

The third generation of data models started during the mid-1970s with the relational model. This is the model you currently work with in PostgreSQL. The relational model is designed to keep the concepts simple and hide the complexities of the database from end users. In it, data is organized into entities (tables) and the relationships between them.

EXAMPLE

Some of the common names of databases that you may know come from this model, including IBM DB2, Oracle, Microsoft SQL Server, and PostgreSQL.
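A short sketch of entities and a relationship, using Python's built-in sqlite3 module (SQLite stands in here for PostgreSQL; the SQL itself is generic, and the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two entities (tables); the foreign key expresses the relationship.
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE invoice (
    inv_id  INTEGER PRIMARY KEY,
    cust_id INTEGER REFERENCES customer(cust_id),
    total   REAL)""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada Smith')")
conn.execute("INSERT INTO invoice VALUES (100, 1, 59.99)")

# The join states the relationship declaratively; the DBMS hides all
# physical storage details from the end user.
row = conn.execute("""SELECT c.name, i.total
                      FROM customer c
                      JOIN invoice i ON c.cust_id = i.cust_id""").fetchone()
print(row)  # ('Ada Smith', 59.99)
```

Notice that the query never mentions files, records, or pointers, which is exactly the simplicity the relational model was designed to provide.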

5. Object-Oriented in the 1980s

The fourth generation of data models emerged in the mid-1980s with object-oriented and object/relational databases. These databases were created to support more complex data through the use of objects. During this era, the star schema was also developed to support data warehouses and analytical databases, and web databases became much more common.

EXAMPLE

Some of the common databases in this model include Versant, Objectivity/DB, and Oracle 12c.
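The core object-oriented idea is that data and behavior travel together as one object instead of being flattened into rows. This sketch (plain Python classes; an invented order example, not the API of any OODBMS listed above) shows a complex object containing other objects:

```python
# Illustrative only: real OODBMSs such as Versant persisted objects like
# these directly, without mapping them onto relational tables.
class LineItem:
    def __init__(self, product, qty, price):
        self.product, self.qty, self.price = product, qty, price

class Order:
    """A complex object: an order *contains* its line items."""
    def __init__(self, order_id):
        self.order_id = order_id
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        # Behavior lives with the data it operates on.
        return sum(i.qty * i.price for i in self.items)

order = Order(1)
order.add(LineItem("widget", 2, 4.50))
order.add(LineItem("gadget", 1, 10.00))
print(order.total())  # 19.0
```

Representing this nested structure in a pure relational model would require several tables and joins, which is the complexity object databases aimed to avoid.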

6. XML Hybrids in the 1990s

The fifth generation of data models was created in the mid-1990s and focused on XML hybrid database management systems. This generation supported unstructured data by extending object/relational models to handle XML documents, presenting a hybrid of relational and object databases on the front end. Many current databases fall under this model, offering hybrid support for both relational and object-oriented data, and they typically scale into the terabyte range.
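XML mixes structure (tags and attributes) with free-form content, which rigid relational rows handled poorly. This sketch uses Python's standard xml.etree module to parse and query a small invented document, illustrating the kind of semi-structured access these hybrid systems added on top of relational storage:

```python
import xml.etree.ElementTree as ET

# An invented XML order document: nested elements plus attributes.
doc = ET.fromstring("""
<order id="100">
  <customer>Ada Smith</customer>
  <item qty="2">widget</item>
  <item qty="1">gadget</item>
</order>""")

# Navigate by element name and attribute rather than by table and column.
print(doc.find("customer").text)                     # Ada Smith
print([i.get("qty") for i in doc.findall("item")])   # ['2', '1']
```

A hybrid DBMS of this era could store such a document alongside ordinary tables and query both, rather than forcing the document to be shredded into rows first.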

7. NoSQL after the 2000s

The current generation of emerging data models, from the early 2000s to the present, focuses on NoSQL. These include key-value stores, wide-column stores, document-oriented databases, and graph stores. These data models are designed to be distributed and highly scalable, typically handling storage at petabyte scale, and they are often accessed through proprietary APIs rather than standard SQL.

EXAMPLE

There are many available options with this model, including Amazon SimpleDB, Google Bigtable, Apache Cassandra, and MongoDB.
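To show what "distributed and highly scalable" means for the simplest NoSQL variety, here is a toy key-value store sketch (invented for this lesson; real systems such as Cassandra add replication, fault tolerance, and persistence): keys are hashed onto a fixed set of nodes so data spreads horizontally.

```python
import hashlib

# Three pretend storage nodes; in a real cluster these would be machines.
NODES = ["node-a", "node-b", "node-c"]
storage = {node: {} for node in NODES}

def node_for(key):
    """Pick a node deterministically from the key's hash, so every
    client routes the same key to the same node."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def put(key, value):
    storage[node_for(key)][key] = value

def get(key):
    return storage[node_for(key)].get(key)

put("user:42", {"name": "Ada Smith"})
print(get("user:42"))  # {'name': 'Ada Smith'}
```

Adding capacity means adding nodes, not redesigning a schema, which is why these stores scale so readily; the trade-off is that there is no join or general query language in the basic model.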


summary
In this lesson, you learned that different data models have been created over time, each addressing the issues and limitations of the models that came before it.

Source: Authored by Vincent Tran