Why would we want to reduce the data footprint?
- Years of knowledge and experience in information management strongly suggest that more data does not necessarily lead to better data.
- The more data there is to generate, move and manage, the greater the development and administrative overheads.
- The more data we generate, store, replicate, move and transform, the bigger the data, energy and carbon footprints will become.
How can Big Data reduce Big Data?
- We can use it for profiling, to identify the data that could be useful (a minimal profiling sketch follows this list).
- We can use it to identify immaterial, surplus and redundant data.
- We can use it to catalogue, categorise and classify high-volume data sources.
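As an illustration of the profiling idea, the following minimal Python sketch builds a simple profile of a record stream; the field names, sample data and redundancy heuristics are illustrative assumptions, not part of any particular product or method.

```python
from collections import Counter

def profile_records(records):
    """Build a simple profile of a record stream: volume, null rates and
    repeated values per field. The heuristics here are illustrative only."""
    total = 0
    nulls = Counter()
    values = {}
    for rec in records:
        total += 1
        for field, value in rec.items():
            if value is None or value == "":
                nulls[field] += 1
            values.setdefault(field, Counter())[value] += 1
    profile = {}
    for field, counts in values.items():
        top_value, freq = counts.most_common(1)[0]
        profile[field] = {
            "null_rate": nulls[field] / total,
            "distinct": len(counts),
            "top_value": top_value,
            "top_value_share": freq / total,
        }
    return profile

# Fields with a very high top_value_share or null_rate become candidates
# for exclusion or summarisation further upstream.
sample = [
    {"symbol": "XYZ", "price": 10.0, "note": ""},
    {"symbol": "XYZ", "price": 10.0, "note": ""},
    {"symbol": "XYZ", "price": 10.1, "note": "late tick"},
]
print(profile_records(sample))
```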
What can we do with the Big Data profile data?
- We can use it to audit, analyse and review the generation, storage and transmission of data.
- We can use the data to parameterise data generators and filters, and
- We can use it to generate ‘Big-Data-by-exception’ discrimination rules and as the basis for data discrimination driven by directed machine-learning approaches (see the sketch after this list).
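To make the ‘Big-Data-by-exception’ idea concrete, here is a minimal Python sketch in which simple exception rules, derived from what the profile tells us to expect, admit only the records that deviate from the norm; the rule format, field names and example values are illustrative assumptions rather than a prescribed design.

```python
def make_exception_rule(field, expected_value):
    """Return a predicate that admits a record only when `field` deviates
    from the value the profile tells us to expect."""
    def rule(record):
        return record.get(field) != expected_value
    return rule

def discriminate(records, rules):
    """Yield only the records that trigger at least one exception rule."""
    for record in records:
        if any(rule(record) for rule in rules):
            yield record

# Example: the profile says the 'status' field is 'OK' almost all of the
# time, so only non-OK records are worth storing or transmitting.
rules = [make_exception_rule("status", "OK")]
stream = [{"status": "OK"}, {"status": "FAIL"}, {"status": "OK"}]
print(list(discriminate(stream, rules)))   # -> [{'status': 'FAIL'}]
```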
So why would we do all of this?
- We hear that Big Data represents a significant challenge.
- The best way of dealing with significant challenges is to formulate an appropriate, coherent and realisable response – a strategy.
- By addressing the data problems upstream we can turn the Big Data problem into a more manageable data problem, or remove the problem altogether.
How does this work in practice?
- We can reduce the amount of data we generate in the first place by eliminating the unnecessary generation, storage and transmission of superfluous data. We can change logging, monitoring and signal data generators (applications and devices) so that they produce only concise and usable data. This requires modifications to parts of existing applications and application servers.
- We can introduce data governors as intelligent data filters that actively include or exclude data in data flows. This is particularly relevant where we are dealing with very high-volume data throughput, and where the release of data into the data streams is subject to rules of exception. For example, we may decide to exclude any market signal data that simply repeats the price stated in previous data (a minimal sketch follows this list).
- We can also filter data dimensionally: by association and abstraction of discrete phases, events, facets and values; and by time, affinity and proximity.
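As a concrete illustration of a data governor, the following minimal Python sketch suppresses market signal data that simply repeats the last price seen for the same instrument; the class name, field names and the in-process, stateful design are illustrative assumptions rather than a prescribed implementation.

```python
class DataGovernor:
    """A minimal sketch of an in-stream data governor: it releases a market
    signal only when the price differs from the last price seen for the
    same symbol. Field names ('symbol', 'price') are illustrative."""

    def __init__(self):
        self._last_price = {}

    def admit(self, signal):
        symbol, price = signal["symbol"], signal["price"]
        if self._last_price.get(symbol) == price:
            return False          # repeated price: exclude from the stream
        self._last_price[symbol] = price
        return True               # new information: release downstream

def govern(stream, governor):
    """Apply the governor as close to the source as possible, so that
    downstream storage and transmission shrink accordingly."""
    return (signal for signal in stream if governor.admit(signal))

ticks = [
    {"symbol": "XYZ", "price": 10.0},
    {"symbol": "XYZ", "price": 10.0},   # repeated price, filtered out
    {"symbol": "XYZ", "price": 10.1},
]
print(list(govern(ticks, DataGovernor())))
```

The closer such a governor sits to the data generator, the smaller the downstream data footprint becomes.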
What are the benefits?
- Making data smaller reduces the data footprint – lower cost, less operational complexity and greater focus.
- The earlier you filter data, the smaller the data footprint – lower costs, less operational complexity and greater focus.
- A smaller data footprint accelerates the processing of the data that does have potential business value – lower cost, higher value, less complexity and sharper focus.
So how do we tame Big Data?
- We should only generate data that is required, that has value, and that has a business purpose – whether management-oriented, business-oriented or technical in nature.
- We should filter Big Data, early and often.
- We should store, transmit and analyse Big Data only when there is a real business imperative that prompts us to do so.
Conclusions?
- Taming Big Data is a business, management and technical imperative.
- The best approach to taming the data avalanche is to ensure there is no data avalanche – this is referred to as moving the problem upstream.
- The use of smart ‘data governors’ will provide a practical way to control the flow of high volumes of data.
Next steps?
If you are interested in the approach to Big Data mentioned here and in particular want to know more about the definition, architecture and use of ‘data governors’ applied to data, then please leave a comment below.
Many thanks for reading.