5 Hidden Skills for Big Data Scientists

Here are five hidden skills for big data scientists: 

1. Be Clear: Is Your Problem Really A Big Data Problem?

There are many big data problems out there requiring huge compute scale, innovations in computation paradigms, vast storage space and so on. But just because your data takes up lots of disk space does not mean that you have a big data problem. Firstly, your data may be encoded in an inefficient format. XML, for example, can be incredibly verbose (all those closing tags and human-readable text). Secondly, if your data changes over time, it may change very slowly, indicating that monitoring the differences between data sets matters more than importing complete data sets. Thirdly, you may be processing your information on a legacy architecture designed for low-power CPUs or cores. Architecture should be data-driven, meaning that you need to deeply understand the informational aspects of your data and not just the size of the data as it comes to you on disk.
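
To make the second point concrete, here is a minimal Python sketch of delta-based ingestion: rather than re-importing a full snapshot, hash each record and load only what changed. The file names, CSV layout, and the assumption of a stable "id" column are all hypothetical.

```python
# A minimal sketch of delta-based ingestion: instead of reloading a full
# snapshot each time, hash each record and ingest only what changed.
# File names and the CSV layout are hypothetical.
import csv
import hashlib


def record_hashes(path):
    """Map a stable record key to a hash of the full row."""
    hashes = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = row["id"]  # assumes each record carries a stable id
            digest = hashlib.sha256(
                "|".join(row[col] for col in sorted(row)).encode()
            ).hexdigest()
            hashes[key] = digest
    return hashes


def delta(old_path, new_path):
    """Return the keys that were added or modified between two snapshots."""
    old, new = record_hashes(old_path), record_hashes(new_path)
    return [k for k, h in new.items() if old.get(k) != h]


if __name__ == "__main__":
    changed = delta("snapshot_monday.csv", "snapshot_tuesday.csv")
    print(f"{len(changed)} records need re-ingestion")
```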

2. Communicating About Your Data

Often, in large organizations (I work for Microsoft and have worked at IBM in the past), the product requirements for data deliverables are high level. For example: we need these variables to be 99% accurate. This simplistic view of data – that a level of quality can be delivered in a specified time frame – ignores the highly opportunistic nature of the processes that improve data quality. Consequently, a data scientist needs to aggressively manage communication around projects that transform and improve data sets. Do as much research as possible to minimize unknowns, but don’t sign contracts that involve both time and quality metrics!

3. Invest in Interactive Analytics, not Reporting

When you construct reports about your data products, you are answering a fixed set of questions. This is useful for monitoring, but it doesn’t provide a way to get at the unknown unknowns. It is only through interaction with data (often called slicing and dicing) that pockets of interest (problems and opportunities) are discovered. Rich, interactive tools may be perceived as a low priority and never quite get built. Avoid this peril!
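
As a rough illustration of the difference, the pandas sketch below contrasts a fixed report (one aggregate number) with an interactive slice that surfaces the worst pockets. The file and column names ("requests.csv", "region", "timestamp", "error") are hypothetical.

```python
# A minimal sketch of interactive slicing with pandas. A fixed report would
# stop at the overall mean; slicing by dimension is what surfaces the
# pocket of interest.
import pandas as pd

df = pd.read_csv("requests.csv")  # hypothetical log extract

# The "report" view: one number, nothing alarming.
print("overall error rate:", df["error"].mean())

# The interactive view: slice by region and hour until something stands out.
by_slice = (
    df.assign(hour=pd.to_datetime(df["timestamp"]).dt.hour)
      .groupby(["region", "hour"])["error"]
      .mean()
      .sort_values(ascending=False)
)
print(by_slice.head(10))  # the worst slices, often invisible in the aggregate
```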

4. Understand the Role and Quality of Human Evaluations of Data

When trying to determine how good your data product is, it is common to employ an array of human judges to evaluate a sample of the data. The higher up the management chain you go, the more respect you tend to find for human judgement. There are many studies, however, that show that human judgements are not always as good as they are cracked up to be. In many cases, machines can do better than humans; they just tend to make different types of errors. On deeper inspection, human errors can often be traced to the structure of incentives around the judgement process. Innovate in methods for comparing data sets that help distinguish their relative quality without necessarily incurring the expense of human assessment.
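
One direction, sketched below, is to compare two versions of a data set using cheap automated proxies for quality (completeness, duplicate rate) instead of sending both to human judges. The file names are hypothetical, and a real comparison would add referential checks, distribution drift and so on.

```python
# A minimal sketch of judging two data set versions against each other
# without human raters, using cheap automated proxies for quality.
import pandas as pd


def quality_profile(df: pd.DataFrame) -> dict:
    """Cheap, automated quality signals for one data set version."""
    return {
        "rows": len(df),
        "duplicate_rate": df.duplicated().mean(),
        "completeness": 1.0 - df.isna().mean().mean(),  # share of non-null cells
    }


old = pd.read_csv("catalog_v1.csv")  # hypothetical versions of the same data set
new = pd.read_csv("catalog_v2.csv")

for name, df in [("v1", old), ("v2", new)]:
    print(name, quality_profile(df))
```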

5. Spend Time on the Plumbing

How does data get into your system? How does it flow? Are you sure every bit of information got in? With large-scale data loading and processing systems, one doesn’t want a small number of failures to tip over the entire run. However, silently failing components can cause big headaches down the line when you are reporting your summary findings. Make sure there are no leaks in your pipeline!
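
A minimal sketch of the reconciliation habit this implies: every record that enters a stage must come out either processed or explicitly rejected, so nothing disappears silently. The stage and transform here are stand-ins, not a real pipeline framework.

```python
# A minimal sketch of reconciling a pipeline stage so failures are counted
# rather than silent: every input record must end up either processed or
# in an explicit reject list.
def run_stage(records, transform):
    processed, rejected = [], []
    for rec in records:
        try:
            processed.append(transform(rec))
        except Exception as exc:  # a real pipeline would narrow this
            rejected.append((rec, str(exc)))

    # Reconciliation: nothing may vanish between input and output.
    assert len(processed) + len(rejected) == len(records), "pipeline is leaking records"
    return processed, rejected


if __name__ == "__main__":
    raw = ["3", "7", "oops", "11"]
    ok, bad = run_stage(raw, int)
    print(f"{len(ok)} processed, {len(bad)} rejected of {len(raw)} total")
```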

 
