As discussed elsewhere, I find it useful to consider big data in four separate categories: (1) machine-generated event data, (2) computer-generated log data, (3) user-generated text and (4) user-generated audio, image and video. These categories are based largely on the level of structure within the data and the differing technologies required to store and process it, so the category boundaries are somewhat flexible. For completeness, and to include what we might call small data, let me add here category (0), which covers the traditional transaction and status data that we’ve stored and manipulated by computer for more than half a century. We seem to have a tendency as humans to categorize the world in a binary fashion—good or bad, short or tall, black or white and so on. You are no doubt familiar with the binary classification of structured vs. unstructured information, which splits the above categories somewhere around (2). I myself prefer the equivalent terms hard vs. soft information, simply because “unstructured information” is an oxymoron; information, by definition, has structure.
The other common binary classification is data vs. content, where content starts at category (3) above. The data/content division arises from an old technological boundary between databases and content stores. Databases distinguish between pieces of information depending on their meaning and store them in separate records and fields; access is via query. Content stores do not see or impose lower-level structuring of information; access is via search. This distinction is oversimplified, of course. Databases have been adding features to support large text fields and blobs (binary large objects) for years. Content stores, for their part, support field structures. However, such non-core add-ons have tended to be treated as second-class citizens on both sides.
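To make the query/search distinction concrete, here is a minimal sketch in Python, using invented records and documents: a field-level query against structured records on one side, and a keyword search against a simple inverted index on the other. It illustrates the two access patterns only, not any particular product.

```python
# Query vs. search on small in-memory examples (hypothetical data).

# Query: the store understands the meaning of each field, so we can filter on it.
orders = [
    {"order_id": 1, "customer": "Acme", "amount": 1200},
    {"order_id": 2, "customer": "Globex", "amount": 450},
]
large_orders = [o for o in orders if o["amount"] > 1000]  # query by field value

# Search: the store sees only text; an inverted index maps terms to documents.
documents = {
    "doc1": "complaint about late delivery of an order",
    "doc2": "praise for quick customer support",
}
inverted_index = {}
for doc_id, text in documents.items():
    for term in text.lower().split():
        inverted_index.setdefault(term, set()).add(doc_id)

hits = inverted_index.get("delivery", set())  # search by term
print(large_orders)  # the Acme order
print(hits)          # {'doc1'}
```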
Recent business developments and technological advances have caused vendors on both sides of the fence to look to the other side. My most recent white paper, sponsored by NeutrinoBI, examined how search technology could be effectively applied to corporate data. An earlier white paper with Attivio came at the convergence from the content side, extending the use of inverted indexes from the content world to more structured data.
Coveo has always offered a large number of connectors to a wide variety of content sources, including websites, e-mails and content stores, as well as ODBC connectivity to relational databases. Version 7.0 adds Twitter as a source. Its Enterprise Search 2.0 concept—stop moving data and start accessing knowledge—will make perfect sense to anybody following the push for data virtualization in the BI world by vendors such as Composite Software and Denodo.
From a business perspective, the key point is that business users need integrated access to both data and content in order to understand what’s going on and predict what to do next. The volumes and varieties of big data make it very clear that this need cannot be satisfied by trying to push everything into or through one large information store, whether that be a database or a content store. There are, and will continue to be, optimal storage and processing technologies for different types of information and different purposes. Providing equal access to these different stores and equal priority to different access methods will be key.
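As a rough illustration of what such equal access might look like, the sketch below shows a hypothetical Python façade that federates a single request across a query-style store and a search-style store without copying the data into one repository. All class and method names here are invented for illustration and do not represent any vendor’s API.

```python
# Hypothetical federated-access façade: dispatch one request to several
# specialised stores and merge the results, rather than copying everything
# into a single repository. All names are illustrative only.

class RelationalSource:
    """Query-style access: structured records, filtered by field."""
    def __init__(self, rows):
        self.rows = rows

    def query(self, field, value):
        return [r for r in self.rows if r.get(field) == value]


class ContentSource:
    """Search-style access: free text, matched by keyword."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, term):
        return [d for d in self.docs if term.lower() in d.lower()]


class UnifiedAccessLayer:
    """Present data and content side by side, each via its native access method."""
    def __init__(self, relational, content):
        self.relational = relational
        self.content = content

    def lookup(self, customer):
        return {
            "records": self.relational.query("customer", customer),
            "mentions": self.content.search(customer),
        }


# Usage with small in-memory examples.
layer = UnifiedAccessLayer(
    RelationalSource([{"customer": "Acme", "amount": 1200}]),
    ContentSource(["Email from Acme about delivery", "Unrelated memo"]),
)
print(layer.lookup("Acme"))
```

The design choice the sketch is meant to highlight is that neither store is forced through the other’s access method: records stay queryable, text stays searchable, and the layer on top simply presents both.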