I have been given permission to re-publish an interview I did with the Data Protection & Law Policy newsletter. It will also appear in the e-Finance and Payments Law & Policy newsletter.
It may be of interest to some of my readers.
[INTERVIEW]
1. Data protection challenge of the future: what is Big Data?
The three V’s – Volume, Velocity, and Variety – are the essential characteristics of “Big Data”. While data protection and privacy laws are still catching up with yesterday’s technologies, Big Data is growing at lightning speed every day. How can companies deal with the data protection challenges brought about by Big Data in order to truly benefit from the opportunities it introduces? First, one must truly grasp what Big Data is. We interview Jeff Jonas, Chief Scientist at IBM Entity Analytics, to obtain his perspective on and definition of Big Data, and his experience handling it.
2. When did data become big?
Big Data did not become big overnight. What I think happened is that data started getting generated faster than organizations could get their hands around it. Then one day you simply wake up and feel like you are drowning in data. On that day, data felt big.
3. Please explain and elaborate on the characteristics of Big Data?
Big Data means different things to different people.
Personally, my favorite definition is: “something magical happens when very large corpuses of data come together.” Some examples of this can be seen at Google, for instance Google Flu Trends and Google Translate. In my own work, I witnessed this first in 2006, in a system that started producing higher quality predictions, and faster, as it ingested more data. This is so counterintuitive. The easiest way to explain it, though, is to consider the familiar process of putting a puzzle together at home. Why do you think the last few pieces are as easy as the first few – even though you have more data in front of you than ever before? The same thing is really happening in my systems these days. It’s rather exciting, to tell you the truth.
To elaborate briefly on the new physics of Big Data, I pinpointed three phenomena in my blog entry – Big Data. New Physics – drawing on my personal experience of 14 years designing and deploying a number of multi-billion row context accumulating systems:
1. Better Prediction. Simultaneously lower false positives and lower false negatives
2. Bad data good. More specifically, natural variability in data including spelling errors, transposition errors, and even professionally fabricated lies – all helpful.
3. More data faster. Less compute effort as the database gets bigger.
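To make the puzzle analogy – and the “more data faster” phenomenon – a little more concrete, here is a toy Python sketch. It is an illustration of the idea only, not of how an actual context-accumulating or entity analytics engine is built: each piece fits exactly one open slot, and because placed pieces accumulate as context, later pieces have fewer candidate slots to check.

```python
import random

def place_puzzle(n_pieces=1000, seed=42):
    """Toy model of the puzzle analogy: every piece fits exactly one slot,
    and we find that slot by scanning the slots that are still open. As the
    board fills up, each new piece has fewer candidates to check, so the
    effort per piece goes down even though more "data" has accumulated."""
    random.seed(seed)
    open_slots = list(range(n_pieces))
    pieces = random.sample(range(n_pieces), n_pieces)  # pieces arrive in random order
    checks_per_piece = []
    for piece in pieces:
        # count how many open slots we inspect before finding the right one
        checks = next(i for i, slot in enumerate(open_slots) if slot == piece) + 1
        checks_per_piece.append(checks)
        open_slots.remove(piece)
    return checks_per_piece

effort = place_puzzle()
print("average checks, first 100 pieces:", sum(effort[:100]) / 100)   # high
print("average checks, last 100 pieces: ", sum(effort[-100:]) / 100)  # low
```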
Another definition of Big Data relates to the ability of organizations to harness data sets previously believed to be “too large to handle.” Historically, Big Data meant too many rows, too much storage and too much cost for organizations that lacked the tools and ability to really handle data of such quantity. Today, we are seeing ways to explore and iterate cheaply over Big Data.
4. When did data become big for you? What is your “Big Data” processing experience?
As previously mentioned, for me, Big Data is about the magical things that happen when a critical mass is reached. To be honest, Big Data does not feel big to me unless it is hard to process and make sense of. A few billion rows here and a few billion rows there – such volumes once seemed like a lot of data to me. Then helping organizations think about dealing with volumes of 100 million or more records a day seemed like a lot. Today, when I think about the volumes at Google and Facebook, I think: “Now that really is Big Data!”
My personal interest and primary focus on Big Data these days is how to make sense of data in real time – fast enough to do something about a transaction while the transaction is still happening. While you swipe that credit card, there are only a few seconds to decide whether it is really you or maybe someone pretending to be you. If an unauthorized user is inside your network and data starts getting pumped out, an organization needs sub-second “sense and respond” capabilities. End-of-day batch processes producing great answers are simply too late!
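As a rough illustration of what “sense and respond while the transaction is still happening” might look like, here is a minimal Python sketch. It is not Jonas’s or IBM’s actual approach; the card IDs, cities and the two-hour travel threshold are invented for the example – the point is only that the decision is made from accumulated context at swipe time rather than in an end-of-day batch.

```python
from datetime import datetime, timedelta

# Hypothetical in-memory context: the last city and time each card was seen.
last_seen = {}

def score_swipe(card_id, city, ts, max_plausible_hours=2.0):
    """Toy in-flight check: if the same card shows up in a different city
    sooner than the cardholder could plausibly have travelled, flag the
    swipe for review instead of silently approving it."""
    prior = last_seen.get(card_id)
    last_seen[card_id] = (city, ts)
    if prior is None:
        return "approve"  # no context accumulated yet
    prior_city, prior_ts = prior
    hours_apart = (ts - prior_ts).total_seconds() / 3600.0
    if city != prior_city and hours_apart < max_plausible_hours:
        return "flag_for_review"  # implausible travel velocity
    return "approve"

now = datetime(2012, 6, 1, 12, 0)
print(score_swipe("card-123", "Las Vegas", now))                          # approve
print(score_swipe("card-123", "Singapore", now + timedelta(minutes=30)))  # flag_for_review
```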
5. What are the technologies currently adopted to process Big Data?
The availability of Big Data technologies seems to be growing by leaps and bounds on many fronts. We are seeing large corporate investments resulting in commercial products – at IBM, two examples would be IBM InfoSphere Streams for Big Data in motion and IBM InfoSphere BigInsights for pattern discovery over data at rest. There are also many Big Data open source efforts under way, for example Hadoop, Cassandra and Lucene. If one were to divide these into types, one would find some well suited for streaming analytics and others for batch analytics. Some help organizations harness structured data while others are ideal for unstructured data. One thing is for sure – there are many options, and there will be many more choices to come as Big Data continues to attract investment.
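The streaming-versus-batch distinction can be shown with a deliberately tiny Python sketch. It stands in for the idea only and does not use the IBM products or open source frameworks named above: the same count is computed once over data at rest, and incrementally as each event arrives, so an answer is available while the stream is still flowing.

```python
from collections import Counter

events = ["login", "purchase", "login", "refund", "login"]  # stand-in for a data feed

# Batch analytics (data at rest): one pass over the whole corpus after the fact.
batch_counts = Counter(events)

# Streaming analytics (data in motion): update state as each event arrives,
# so the current answer is available mid-stream rather than at end of day.
stream_counts = Counter()
for event in events:
    stream_counts[event] += 1
    # a real streaming engine would act on stream_counts right here

assert batch_counts == stream_counts
print(dict(stream_counts))
```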
6. How can companies benefit from the use of Big Data?
I’d like to think consumers benefit too, just to be clear. To illustrate my point, I find it very helpful when Google responds to my search with “did you mean ______”. To pull off this very smart stunt, Google must remember the typographical errors of the world, and that I do believe would qualify as Big Data. Moreover, I think health care is benefiting from Big Data, or let’s hope so. Organizations like financial institutions and insurance companies are also benefiting from Big Data, using these insights to run more efficient operations and mitigate risks.
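For readers who want a feel for how a “did you mean” suggestion can fall out of simply remembering past queries, here is a minimal Python sketch. The query log and its counts are invented, and Google’s real approach is far more sophisticated; the sketch only shows that a remembered corpus of queries plus a similarity measure is enough to offer a correction.

```python
import difflib

# Hypothetical log of previously seen queries (the "remembered" corpus).
query_log = {"data protection": 9120, "big data": 15400, "entity analytics": 310}

def did_you_mean(query, cutoff=0.75):
    """Suggest the closest previously seen query when the input looks like
    a near miss; a toy stand-in for what Google does at vastly larger scale."""
    candidates = difflib.get_close_matches(query, query_log, n=1, cutoff=cutoff)
    if candidates and candidates[0] != query:
        return f'did you mean "{candidates[0]}"?'
    return None

print(did_you_mean("dta protection"))  # did you mean "data protection"?
```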
We, you and I, are responsible in part for generating so much Big Data. These social media platforms we use to speak our mind and stay connected are responsible for massive volumes of data. Companies know this and are paying attention. For example, my friend’s wife complained on Twitter about a specific company’s service. Not long thereafter they reached out to her because they too were listening. They fixed the problem and she was as happy as ever. How did the company benefit? They kept a customer.
7. What is the trend of processing Big Data?
I think a lot of Big Data systems are running as periodic batch processes, for example, once a week or once a month. My suspicion is as these systems begin to generate more and more relevant insight, it will not be long before the users say: “Why did I have to wait until the end of the week to learn that? They already left the web site.”; or, “I already denied their loan when it is now clear I should have granted them that loan.”
8. What are the complications of dealing with the privacy implications brought about by Big Data, compared to average-sized data?
There are lots of privacy complications that come along with Big Data. Consumers, for example, often want to know what data an organization collects and the purpose of the collection. Something that further complicates this: I think many consumers would be surprised to know what is computationally possible with Big Data. For example, where you are going to be next Thursday at 5:35pm, or who your three best friends are, and which two of them are not on Facebook. Big Data is making it harder to have secrets. To illustrate, using lines from my blog entry – Using Transparency As A Mask – ‘Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more. With more data, comes better understanding and prediction. The convergence of data might reveal your “discreet” rendezvous or the fact you are no longer on speaking terms with your best friend. No longer secret is your visit to the porn store and the subsequent change in your home’s late night energy profile, another telling story about who you are … again out of the bag, and little you can do about it. Pity … you thought that all of this information was secret.’
9. What are the privacy concerns & threats Big Data might bring about – to companies and to individuals whose data are contained in ‘Big Data’?
My number one recommendation to organizations is “Avoid Consumer Surprise.”
That said, my concern is that many consumers don’t seem to give a hoot. When was the last time you actually read the privacy statement or terms of use on your favourite social media site? I think in the future we’ll see Big Data being used to make the services offered even more irresistible. Your Internet searches will become custom crafted lenses. As a student of privacy and someone building Privacy by Design (PbD) into my inventions, I think about these things all the time.
10. How are companies currently applying privacy protection principles before/after Big Data has been processed?
I think there are many best practices being adopted. One of my favorites involves letting consumers opt in, instead of opting them in automatically and then requiring them to opt out. One thing I would like to see become a new best practice is a place on the web site – my bank’s, for example – where I can see a list of the third parties with which my bank has shared my data. I think this transparency would be good and would certainly make consumers more aware.
11. What is “Big Data”, according to Jeff Jonas?
Big Data is a pile of data so big – and harnessed so well – that it becomes possible to make substantially better predictions, for example, which web page would be the absolute best one to place first in your results, just for you.