The two primary trends in data management that have been happening for as long as I can remember are:
- The expected volume of data we can produce and consume is growing rapidly
- The expected delay between data production and consumption is decreasing rapidly
We have seen 'typical' database volumes grow from MB through GB to the point where TB databases are common and PB databases are the "big guys". At the same time, expectations around the timeliness of response from these databases have also changed. What used to be a monthly report became weekly, then daily, and now near real-time expectations for data retrieval and analysis are not uncommon. We have been on a continual path towards the point where data is consumed at the same moment it is created, either in raw form or in an aggregated or otherwise processed state.
At the other end of the application stack, our ability to move more data around faster has led to new styles of applications that give users near-immediate access to data as it is created. Popular consumer web examples of such applications include Facebook, Twitter, FriendFeed, etc.
But at the moment these applications aren't real-time, they are near real-time. This means there is some delay between data creation and consumption. That delay may be a fraction of a second or several minutes, depending on the particular application and its current workload. It may seem irrelevant for the apps mentioned above, but the difference between "near real-time" and "real-time" can have a significant impact on application functionality. I am sure we have all been frustrated when checking in at the airport and choosing a seat, only to get a "sorry, that seat is no longer available" after clicking OK on our selection.
The Problem with the RDBMS
The problem with the traditional RDBMS is that it is not a real-time system; it is poll based. A query is constructed and submitted, and the results are returned to the application. This may happen very quickly, perhaps taking only a few milliseconds to execute and receive a resultset. The problem, of course, is that the data is only "valid" at the exact moment the query was executed. From that moment onwards the data becomes stale, and numerous changes could be happening to the data within the RDBMS while the extracted resultset is processed.
NOTE: Yes, I am aware that the disconnected approach is modern and that a server-side cursor approach used to be common. We moved away from server-side resultset processing for scalability reasons, but regardless, even with server-side resultset processing you weren't automatically notified when the data changed.
Using my example above, while I am deciding whether I want a window or an aisle, or whether it is better to have a middle seat at the front of the plane or an aisle at the back, the underlying data set could be receiving numerous updates. When I finally make my selection, the dataset could be completely invalid, requiring me to start the whole process again.
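To make that race concrete, here is a minimal Python sketch. The in-memory "seat map" and its version column are my invention for illustration (a common optimistic-locking pattern), not any airline's actual schema:

```python
import threading

# A hypothetical in-memory "seat map" standing in for the database table.
seats = {"12A": {"taken": False, "version": 0}}
lock = threading.Lock()

def read_seat(seat_id):
    """The poll: a snapshot that is only valid the moment it is taken."""
    with lock:
        row = seats[seat_id]
        return row["taken"], row["version"]

def book_seat(seat_id, seen_version):
    """Succeeds only if nobody has changed the row since we read it."""
    with lock:
        row = seats[seat_id]
        if row["taken"] or row["version"] != seen_version:
            return False  # "sorry, that seat is no longer available"
        row["taken"] = True
        row["version"] += 1
        return True

# I read the seat map, then deliberate over window versus aisle...
taken, version = read_seat("12A")

# ...meanwhile another passenger books the same seat.
book_seat("12A", version)

# My click finally arrives, based on a now-stale snapshot.
print(book_seat("12A", version))  # False
```

The version check lets the database detect the stale update, but only a push of the change to my screen would have stopped me from choosing a dead seat in the first place.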
While this is a very simplistic example, the issue is that the trend towards real-time in the user experience layer is not supported by the current interfacing mechanisms to an RDBMS. We are seeing AJAX and similar techniques used to provide interfaces that appear to update in real time, but underneath that data is likely still being collected from polled queries running intermittently.
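As a toy contrast between the two interfacing styles, consider this sketch. The class names and callback API are invented for illustration; neither models a real RDBMS interface:

```python
class PolledTable:
    """Pull model: a client must re-run its query to see changes."""
    def __init__(self):
        self.rows = []

    def insert(self, row):
        self.rows.append(row)

    def query(self):
        # A snapshot, valid only at the instant it is taken.
        return list(self.rows)

class PushTable(PolledTable):
    """Push model: the table notifies subscribers as rows are created."""
    def __init__(self):
        super().__init__()
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def insert(self, row):
        super().insert(row)
        for cb in self.subscribers:
            cb(row)  # data reaches the client the moment it is created

received = []
table = PushTable()
table.subscribe(received.append)
table.insert({"user": "alice", "msg": "hello"})
# received now holds the new row without the client ever polling
```

The pull model forces the client to decide how often to ask; the push model moves that decision to the point of data creation, which is exactly the inversion the rest of this post argues for.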
Real Time & Efficiency
One solution to this problem may simply be to run our polling cycles at such a high rate that the difference between real-time and near real-time becomes indistinguishable. This is possible, but of course it comes at a high cost in terms of scalability.
Let me use a fictitious example to highlight this. Imagine a Twitter-like messaging system that wants to provide a real-time-like experience to its users, so it sets a 2-second polling cycle for all client update queries.
For the purpose of this example, let us assume that we have 1 million users with the following usage profiles:
- 50% of users get 1 message a day
- 20% of users get 10 messages a day
- 15% of users get 30 messages a day
- 10% of users get 200 messages a day
- 4% of users get 1000 messages a day
- 1% of users get 5000 messages a day
Ok, a couple more assumptions:
- To poll and retrieve an empty result requires 5 "resources" (CPU, disk, network)
- To poll and retrieve a message requires 50 "resources" (CPU, disk, network)
Now let's compare a system which polls the database every 2 seconds with an alternative system in which messages are pushed from the database to the client on creation.
| % User Base | Replies per Day | Poll Resources  | Push Resources | Push % of Poll |
|-------------|-----------------|-----------------|----------------|----------------|
| 50          | 1               | 108,025,000,000 | 25,000,000     | 0.0%           |
| 20          | 10              | 43,300,000,000  | 100,000,000    | 0.2%           |
| 15          | 30              | 32,625,000,000  | 225,000,000    | 0.7%           |
| 10          | 200             | 22,600,000,000  | 1,000,000,000  | 4.4%           |
| 4           | 1000            | 10,640,000,000  | 2,000,000,000  | 18.8%          |
| 1           | 5000            | 4,660,000,000   | 2,500,000,000  | 53.6%          |
| 100 (total) |                 | 221,850,000,000 | 5,850,000,000  | 2.6%           |
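The arithmetic behind the table can be reproduced in a few lines of Python, under the stated assumptions (a poll every 2 seconds is 43,200 polls per user per day; following the same simplification the table uses, every poll is charged at the empty rate with the message cost added on top):

```python
USERS = 1_000_000
POLLS_PER_DAY = 24 * 60 * 60 // 2   # one poll every 2 seconds = 43,200
EMPTY_COST, MESSAGE_COST = 5, 50    # "resources" per empty poll / per message

profiles = [  # (% of user base, messages per day)
    (50, 1), (20, 10), (15, 30), (10, 200), (4, 1000), (1, 5000),
]

total_poll = total_push = 0
for pct, msgs in profiles:
    users = USERS * pct // 100
    # Poll model: pay for every poll, plus the cost of each message.
    poll = users * (POLLS_PER_DAY * EMPTY_COST + msgs * MESSAGE_COST)
    # Push model: pay only when a message is actually delivered.
    push = users * msgs * MESSAGE_COST
    total_poll += poll
    total_push += push
    print(f"{pct:>3}% | {msgs:>5}/day | poll {poll:>15,} | push {push:>13,}")

print(f"Poll needs {total_poll / total_push:.1f}x the resources of push")
```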
With the above distribution, a 2-second poll cycle would have a resource requirement roughly 38x that of a push-based database. That overhead is obviously going to be a significant limitation on the upper level of scalability possible.
So What to Do?
I will fully address the resolution path for the limitations of the RDBMS when I complete this series in my summing-up post. However, specific to this issue, there are a couple of things happening which you should be aware of.
Firstly, traditional RDBMS vendors are trying to shoehorn some form of push-based results notification into existing database platforms. For example, SQL Server 2005 and above has query notifications, and Oracle and MySQL have something similar (please post in the comments). Current implementations are rudimentary and not suitable for large-scale deployment; they are meant more as a global cache "refresh" event than a user-specific resultset update.
Also worth watching: a couple of startups have identified the real-time trend happening in Silicon Valley, and have also identified that existing RDBMSs aren't going to be able to fulfil this trend in their current form. They are focusing on re-architecting the RDBMS to be push rather than pull based. GroovyCorp, with their SQL Switch product, is one organization that I have been speaking to recently. Groovy is the furthest down this particular road that I am aware of, with a real-time push-based RDBMS being launched next month.