Couldn’t resist that headline! But seriously, if you peel the proverbial onion enough, you will see that the lack of tools to discover and analyze the structure of that data is what lies behind the opaqueness implied by calling the data “unstructured”.
Why take a deeper look at this? See this graph:
A lot of data growth is happening around these so-called unstructured data types. Enterprises that manage to automate the collection, organization and analysis of these data types will derive a competitive advantage.
Every data element does mean something, though what it means may not always be relevant for you. Let me explain with common data sets that are currently labeled “unstructured”.
- Text: Let’s start with the subsets here.
  - Machine-generated data (sensors, etc.) can definitely be deciphered once you get the metadata structures / templates that the machine uses to generate the data. Of course, some of the fields in the stream will need more advanced analysis/discovery capabilities to automate the analysis (a minimal parsing sketch follows this list).
  - Interaction Data: This is the case for social media data, where a lot of the business value lies in long open text fields where people express sentiment about other people and products. To automate the analysis of these, entity recognition and semantic analysis provide the ability to understand the data better. In other words, if you can represent the text data as a collection of entities, the relationships between them, and relationship attributes like sentiment, you are much closer to analyzing the data than you might think (see the sketch after this list)!
- Images: Image recognition algorithms have almost become mainstream (though not always well received, as seen in the reservations about Google and Facebook deploying them at scale). Again, these techniques yield entities, though deriving relationships and sentiment is much more challenging.
- Audio: Again, a lot of research is yielding technology that can decipher the content of audio streams and even annotate the resulting content with the mood of the speaker! You could then leverage the text analysis techniques above to get closer to analyzable data.
- Video: Unarguably, this is the most challenging data type due to the sheer volume of data that needs to be handled. Image recognition techniques can be applied per frame or a series of frames to extract entities. Of course, deciphering the action (the video content) is further out in the future. Audio recognition can be applied to understand part of the “action” content.
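To make the machine-generated case concrete, here is a minimal sketch (in Python) of deciphering a raw stream once the metadata template is known. The record format, field names and sample line are hypothetical, not taken from any particular device.

```python
# Hypothetical metadata template for a sensor's pipe-delimited records:
# field name -> parser for that position. The names and format are
# illustrative only; a real device would publish its own schema.
SENSOR_TEMPLATE = [
    ("device_id", str),
    ("timestamp", str),          # e.g. ISO-8601; could be parsed further
    ("temperature_c", float),
    ("humidity_pct", float),
    ("status", str),
]

def parse_record(raw_line, template=SENSOR_TEMPLATE):
    """Apply the known template to one raw record, yielding structured data."""
    values = raw_line.strip().split("|")
    return {name: cast(value) for (name, cast), value in zip(template, values)}

# Example usage with a made-up record:
record = parse_record("sensor-42|2015-06-01T10:15:00Z|21.7|48.0|OK")
print(record["temperature_c"])   # -> 21.7, now queryable like any structured field
```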
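And for the interaction-data case, a toy sketch of the “entities + relationships + sentiment” representation. The entity list and sentiment lexicon here are tiny, hand-made stand-ins for a real entity-recognition and semantic-analysis pipeline.

```python
# Toy stand-ins for real NLP models: a known-entity list and a sentiment lexicon.
KNOWN_ENTITIES = {"acme phone": "PRODUCT", "alice": "PERSON"}   # illustrative only
SENTIMENT_WORDS = {"love": 1, "great": 1, "hate": -1, "broken": -1}

def analyze(post):
    """Reduce free text to (entity, attribute) structures we can actually query."""
    text = post.lower()
    entities = [(name, label) for name, label in KNOWN_ENTITIES.items() if name in text]
    sentiment = sum(score for word, score in SENTIMENT_WORDS.items() if word in text)
    # Relationship: the post's author expresses `sentiment` toward each entity found.
    return [{"entity": e, "type": t, "sentiment": sentiment} for e, t in entities]

print(analyze("Alice says she loves her new Acme Phone, it is great"))
```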
Based on the above, some new data handling and analysis capabilities are required to extract more value out of these new data types.
- Dynamic Metadata Discovery: This is mainly for text data. It includes the ability to:
  - Dynamically derive metadata from sample result sets, e.g. responses from new REST endpoints (see the schema-inference sketch after this list)
  - Maintain / master metadata on an ongoing basis
  - At run time, choose the appropriate / best-matching metadata set out of several possible options
- Taxonomy Setup: You need to be able to capture / represent your business and its entities so that other analysis layers can reference them and annotate incoming data. As your business evolves, this taxonomy will get richer (a minimal sketch follows this list).
- Entity Extraction and Semantic Analysis: This provides the ability to apply the taxonomy to any text data stream and derive the entities and relationships expressed in that stream. The results can then be stored either in a relational database or as a graph (see the graph sketch after this list).
- Multimedia Recognition Techniques: As described earlier, various techniques for deciphering the content of images, audio and video are required to analyze these data types.
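A minimal sketch of the “dynamically derive metadata out of sample result sets” idea: walk a sample JSON payload (here a made-up REST response) and record the field paths and types that were observed. A real implementation would also version and master this metadata over time.

```python
import json

def infer_schema(value, path=""):
    """Walk a sample payload and record field paths with their observed types."""
    schema = {}
    if isinstance(value, dict):
        for key, child in value.items():
            schema.update(infer_schema(child, f"{path}.{key}" if path else key))
    elif isinstance(value, list) and value:
        schema.update(infer_schema(value[0], f"{path}[]"))
    else:
        schema[path] = type(value).__name__
    return schema

# Made-up sample response from a new REST endpoint:
sample = json.loads('{"user": {"id": 7, "name": "Ann"}, "tags": ["vip"], "score": 4.5}')
print(infer_schema(sample))
# -> {'user.id': 'int', 'user.name': 'str', 'tags[]': 'str', 'score': 'float'}
```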
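For the taxonomy setup, here is one simple way to capture a business's entities and their hierarchy so that downstream analysis can reference them. The category and term names are invented for illustration.

```python
# A tiny taxonomy: category -> parent category, plus known terms per category.
# All names here are illustrative; yours would come from your own business model.
taxonomy = {
    "Product":   {"parent": None,      "terms": {"acme phone", "acme tablet"}},
    "Accessory": {"parent": "Product", "terms": {"acme charger"}},
    "Customer":  {"parent": None,      "terms": set()},
}

def categorize(term):
    """Look up which taxonomy category a term belongs to (None if unknown)."""
    for category, node in taxonomy.items():
        if term.lower() in node["terms"]:
            return category
    return None

print(categorize("Acme Charger"))   # -> "Accessory"
# As the business evolves, new categories and terms are added to this structure.
```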
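Finally, a sketch of storing the entity-and-relationship output as a graph (using the networkx library here, though a relational schema of entity and relationship tables works just as well). The entities and sentiment values are made-up examples.

```python
import networkx as nx

# Each node is an entity discovered in the text stream; each edge is a relationship,
# annotated with attributes such as sentiment. Values below are illustrative.
g = nx.DiGraph()
g.add_node("alice", type="PERSON")
g.add_node("acme phone", type="PRODUCT")
g.add_edge("alice", "acme phone", relation="expresses_sentiment_about", sentiment=0.8)

# Downstream analysis can now query the structure directly:
for source, target, attrs in g.edges(data=True):
    print(source, "->", target, attrs["relation"], attrs["sentiment"])
```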
The layering is along the following lines:
A lot of the action is still in the top layers, but eventually it will encompass audio and video as well.
Do you still believe all of this data deserves the opaque-sounding “unstructured” tag? Are you building the capabilities to put the structure back into this data?