Donald Feinberg
VP Distinguished Analyst
20 years at Gartner
45 years IT industry

Donald Feinberg is a vice president and distinguished analyst in Gartner Intelligence in the Information Infrastructure group. Mr. Feinberg is responsible for Gartner's research on database management systems and data warehousing infrastructure.

In-Memory DBMS vs In-Memory Marketing

by Donald Feinberg  |  September 28, 2014

Recently, we published a Market Guide for In-Memory Computing. The document covers all forms of IMC, including Database Management Systems (DBMS). Gartner defines In-Memory Computing (IMC) as a computing style in which applications assume that all the data required for processing is located in the main memory of their computing environment. Although we define many styles of IMC (Application Servers, Data Grids, Messaging and Complex Event Processing), I want to concentrate specifically on in-memory DBMS technology. Why? There appears to be some level of misconception about what does and does not qualify as an In-Memory DBMS (IMDBMS).

Our definition of IMDBMS requires the database structure to be in memory, specifically the main memory of the server. Data in the database is accessed through memory instructions, not I/O instructions. This should not be confused with products that buffer data in a disk-block cache. Disk-block caching has been used in the industry for many years, pre-dating relational technology. For example, IBM’s IMS DBMS was, from its introduction in 1968, able to cache data in memory, also referred to as pre-fetch or read-ahead; however, it is not an IMDBMS. While caching does improve performance over accessing disk or flash, it is not IMC.
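To make the distinction concrete, here is a minimal Python sketch of the two read paths, contrasting a disk-block cache in front of a data file with a database structure that lives entirely in main memory. The page layout, file name and table are invented for illustration; real engines are far more sophisticated.

```python
# Minimal sketch contrasting the two read paths; page layout, file name and
# data are hypothetical and purely for illustration.
PAGE_SIZE = 4096
buffer_cache = {}  # page_id -> bytes: the classic disk-block cache

def read_via_cache(datafile, page_id):
    """Disk-based DBMS path: check the block cache, otherwise issue I/O."""
    if page_id not in buffer_cache:                    # cache miss
        with open(datafile, "rb") as f:
            f.seek(page_id * PAGE_SIZE)                # I/O instructions:
            buffer_cache[page_id] = f.read(PAGE_SIZE)  # seek + read from disk
    return buffer_cache[page_id]

# IMDBMS path: the database structure itself is held in main memory, so a
# lookup is a plain memory access with no I/O instructions on the read path.
in_memory_table = {42: {"name": "example row"}}

def read_in_memory(key):
    return in_memory_table[key]
```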

One major difference between traditional disk-based DBMS engines and an IMDBMS is the implementation of the consistency model. IMDBMS covers all DBMS consistency models, from ACID consistency to the eventually consistent models found in many of the NoSQL DBMS engines. However, regardless of the consistency model, a commit operation will be performed. Disk-based systems, even if all the data is cached in memory buffers, require the transaction to be written to disk or flash. Regardless of the length of time taken to perform this operation, it is greater than zero. With IMDBMS products, the commit operation takes place in memory. Because memory is volatile, this requires unique methods for assuring the persistence of the data, such as synchronously writing the data to a second server using Remote Direct Memory Access (RDMA); even so, the latency is less than writing to external media. This illustrates why the performance of an IMDBMS is higher, even compared with using a disk-block buffer.
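The following sketch illustrates the two commit paths described above, again in Python and again only as an illustration: the disk-based commit waits on a log write and fsync, while the in-memory commit is made durable by synchronously shipping the change to a second server. A plain TCP socket stands in here for the RDMA transfer real products use; none of this reflects any vendor's actual implementation.

```python
# Hypothetical commit paths; not any vendor's implementation.
import os
import socket

def commit_disk_based(log_path: str, log_record: bytes) -> None:
    """Disk-based commit: not durable until the log record reaches stable
    media, so the caller waits on write + fsync (latency always > 0)."""
    with open(log_path, "ab") as log:
        log.write(log_record)
        log.flush()
        os.fsync(log.fileno())        # wait for the disk/flash acknowledgement

def commit_in_memory(replica: socket.socket, log_record: bytes) -> None:
    """IMDBMS-style commit: the change is applied in main memory and protected
    against memory volatility by synchronously copying it to a second server."""
    replica.sendall(log_record)       # ship the record to the replica
    ack = replica.recv(1)             # wait for the replica's acknowledgement
    assert ack == b"A", "replica did not confirm the commit"
```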

With our precise definition of a true IMDBMS, we seek to dispel the hype in the market over IMDBMS and the claims made by some vendors that their technology is an IMDBMS when, in fact, it is not.

Category: Analyst Data Management DBMS General In-Memory Computing In-Memory DBMS Operational DBMS

Another New DBMS to Replace Relational?

by Donald Feinberg  |  June 15, 2009

On June 9, Google Labs announced Google Fusion Tables, a new system for managing data in the Google cloud. I want to be clear about one point: this is an experiment from Google Research, not something ready for production systems (Google is clear about this as well). The issue I have is how the press exaggerates the announcement by warning the Database Management System (DBMS) vendors to watch out because they are about to be blindsided by Google. You must be kidding!

First, what is Fusion Tables? It is a system for managing data in the cloud for collaboration with data from disparate sources in a simple way, including the ability to “drill down” to the sources of the data. It allows the user to “join” (in a loose sense) data without the constraints of the data model normally found in a relational DBMS. What it is not is a DBMS for managing data for an On-Line Transaction Processing (OLTP) system or a Data Warehouse. Fusion Tables is based on Data Spaces, defined in Wikipedia as “a container for domain specific data” and further as “a multi-model data management system that manages data sourced from a variety of local or external sources”. Data Spaces were originally defined in the early 1990s, during the Object-Oriented DBMS (OODBMS) era.
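To make the “loose join” idea concrete, here is a minimal Python sketch of combining two disparate sources on a shared attribute, with no schema or foreign-key constraints. The sources, field names and values are invented for illustration; this is not how Fusion Tables is implemented.

```python
# Loose join of two unrelated sources on a shared attribute name.
# All data and field names are hypothetical.
import csv, io

spreadsheet = io.StringIO("city,population\nLisbon,540000\nPorto,230000\n")
web_table = [{"city": "lisbon", "avg_temp_c": 17.4},
             {"city": "porto", "avg_temp_c": 15.4}]

rows = list(csv.DictReader(spreadsheet))

# Match rows case-insensitively on "city"; rows without a partner still pass
# through with whatever attributes they have -- no constraints are enforced.
fused = []
for a in rows:
    match = next((b for b in web_table
                  if b["city"].lower() == a["city"].lower()), {})
    fused.append({**a, **match})

print(fused)  # each fused row mixes attributes from both sources
```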

As with many new ideas, there are elements of the technology that may have value. When this happens, we find that the original relational model evolves to incorporate the new technology or model. We saw this occur with OODBMS: the modern DBMS does use inheritance and user-defined classes. We saw this happen with XML: the modern DBMS now has full native XML as a data type, as robust as the original pure-play XML DBMSs. Today we are seeing this happen with MapReduce, as several DBMS vendors have incorporated it into their DBMS engines. We will also see this happen with the column-store construct, which we believe will be incorporated into many modern DBMS engines as an indexing technique for optimization. As to the validity of Fusion Tables and the ability to mix disparate data sources and types, there is little question as to the usefulness of this. Oracle has already put such a capability in its current release (11g) as SecureFiles, and Microsoft SQL Server 2008 has a feature called FILESTREAM. These are not experimental or beta-test features but are implemented in full production.
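For readers unfamiliar with the column-store construct mentioned above, the following is a minimal Python sketch of the idea, using an invented three-row table. It illustrates only the storage layout, not any vendor's engine.

```python
# The same rows stored row-wise and column-wise; the table and its values
# are hypothetical.
rows = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "APAC", "amount": 75.5},
    {"order_id": 3, "region": "EMEA", "amount": 210.0},
]

# Row store: aggregating "amount" still walks every complete row.
total_row_store = sum(r["amount"] for r in rows)

# Column store: each attribute kept as its own array, so the same aggregate
# reads one contiguous column -- the property that makes it attractive as an
# optimization technique inside a DBMS engine.
columns = {key: [r[key] for r in rows] for key in rows[0]}
total_column_store = sum(columns["amount"])

assert total_row_store == total_column_store
```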

Is Fusion Tables worth watching? Of course! The concept of easily combining disparate sources of data for analysis and collaboration is important and has been around since the inception of IT. Mashups and other Web 2.0 constructs have made some of this available today (see The Rise of Collaborative Decision Making). Google has a good start on this with the ability to use data from Google Apps and other spreadsheet-style data in the initial version of Fusion Tables. Organizations must take care, or these types of applications will cause additional turmoil in the governance and security space (see Developing a Strategy for Dealing With Desktop Database Management System Proliferation). Will this technology replace your DBMS for OLTP and DW systems? Not soon, and not in the foreseeable future. Many have tried (e.g., OODBMS). There are other new techniques and systems being researched today that have promise (e.g., Akiba); however, the relational model continues to demonstrate flexibility and resiliency (over 30 years), and you can expect that to continue. Products like DB2, Informix, Ingres, MySQL, Oracle, PostgreSQL, SQL Server and Sybase ASE will be used in new IT systems for many years to come.

Category: DBMS

Me – A blogger – Who would have thunk?

by Donald Feinberg  |  April 17, 2009

The Merriam-Webster dictionary defines “curmudgeon” as archaic. That's me, sometimes. Many of the open-source bloggers might agree. In years past, they all thought I was a curmudgeon. Go ahead, Tony, laugh. But I can come around, although I still believe that companies like to make money and developers like a paycheck. When it comes to blogging, that is not me, or so I thought. I do not read blogs, and this is my first attempt at writing one. Funny that two years ago I won an award in our group for being mentioned more than anyone else in blogs that year! And this when I never read blogs or comment on them. So the curmudgeon changes again, and here I am with my own blog.

Tchau

Category: Analyst General