Google has announced Google Refine 2.0, “a power tool for data wranglers”. It’s an open-source tool for cleaning and enhancing “messy” data sets: fixing inconsistencies, transforming data from one format into another, and extending it with new data from external web services or other databases.
How does it shape up from a corporate point of view? Is this another step for Google into enterprise information management?
The new version includes:
“new extensions architecture, a reconciliation framework for linking records to other databases (like Freebase), and a ton of new transformation commands and expressions.”
The tool runs on your desktop (even though you access it through a browser), so your data stays on your own machine and you don’t have to worry about data security.
Here are some videos about the new product. First, an introduction to how you can use the tool to do basic manual cleansing of data:
- Converting various badly-entered variants of “FFP” into “Firm Fixed Price”
- Helping “cluster” groups of similar values together using heuristics (a rough sketch of this idea follows the list)
- Using expressions to transform numeric distributions with log functions
- Identifying problems (zero values, figures mixed up between millions and billions, etc.)
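To give a flavour of those clustering heuristics, here is a rough Python sketch of the “key collision” idea that fingerprint-style clustering is built on. This is an illustration of the technique rather than Refine’s actual code, and the sample values are invented:

```python
import re
import unicodedata
from collections import defaultdict

def fingerprint(value):
    """Reduce a string to a normalised key: strip accents, lowercase,
    drop punctuation, then sort and de-duplicate the remaining tokens."""
    value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
    value = re.sub(r"[^\w\s]", "", value.lower()).strip()
    return " ".join(sorted(set(value.split())))

def cluster(values):
    """Group values whose fingerprints collide; each group is a candidate
    cluster of 'the same thing, spelled differently'."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [group for group in groups.values() if len(group) > 1]

# Invented example: badly-entered variants of the same contract type.
messy = ["Firm Fixed Price", "firm fixed price", "Fixed  Price, Firm", "FFP"]
print(cluster(messy))
# [['Firm Fixed Price', 'firm fixed price', 'Fixed  Price, Firm']]
# An acronym like "FFP" never collides with the spelled-out form, so it
# still needs the manual edit shown in the video.
```

In Refine itself this happens in a clustering dialog, where each suggested cluster can be merged to a single value.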
Next, data transformations. The tool makes it easy to convert information in a basic HTML list into a nicely formatted table, using filtering and an expression language:
- Isolating certain rows
- Simple conversions, e.g. removing bolding
- Extracting part of a value into a new column
- Splitting existing values into new columns (see the sketch after this list)
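For a rough sense of what those transformations amount to, here is a Python sketch of the same kind of clean-up (in Refine you would do this in the browser with its expression language rather than in code); the sample rows and column names are invented:

```python
import re

# Invented rows scraped from a simple HTML list:
# "<b>Name</b> - City, Country (founded YYYY)"
rows = [
    "<b>Acme Corp</b> - Austin, USA (founded 1998)",
    "<b>Globex</b> - Springfield, USA (founded 1989)",
]

table = []
for row in rows:
    text = re.sub(r"</?b>", "", row)                # simple conversion: remove bolding
    name, rest = [p.strip() for p in text.split(" - ", 1)]
    founded = re.search(r"founded (\d{4})", rest)   # extract part of the value into a new column
    location = re.sub(r"\s*\(founded \d{4}\)", "", rest)
    city, country = [p.strip() for p in location.split(",", 1)]  # split into new columns
    table.append({"name": name, "city": city, "country": country,
                  "founded": founded.group(1) if founded else None})

for record in table:
    print(record)
# {'name': 'Acme Corp', 'city': 'Austin', 'country': 'USA', 'founded': '1998'}
# {'name': 'Globex', 'city': 'Springfield', 'country': 'USA', 'founded': '1989'}
```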
The transformation steps used can then be exported as JSON and replayed to convert other, similarly formatted datasets.
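Conceptually, what gets exported is a recorded list of operations, serialised as JSON, that can be replayed against another dataset of the same shape. Here is a minimal Python sketch of that idea; the operation names and format are invented for illustration and are not Refine’s actual JSON schema:

```python
import json

# Invented, simplified "operation history"; Refine's real JSON schema differs.
history_json = json.dumps([
    {"op": "trim", "column": "name"},
    {"op": "replace", "column": "type", "find": "FFP", "with": "Firm Fixed Price"},
])

def apply_step(row, step):
    """Apply a single recorded operation to one row."""
    column = step["column"]
    if step["op"] == "trim":
        row[column] = row[column].strip()
    elif step["op"] == "replace":
        row[column] = row[column].replace(step["find"], step["with"])

def apply_history(rows, history):
    """Replay the recorded operations, in order, over a similarly formatted dataset."""
    steps = json.loads(history)
    for row in rows:
        for step in steps:
            apply_step(row, step)
    return rows

new_dataset = [{"name": "  Acme Corp ", "type": "FFP"}]
print(apply_history(new_dataset, history_json))
# [{'name': 'Acme Corp', 'type': 'Firm Fixed Price'}]
```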
Finally, you can use Google Refine to augment your data with data from web services and link it to databases such as Freebase:
- Calling web services to add geo-coding to address information (a sketch of this follows the list)
- Using Google’s language detection service to identify the language of different values
- Doing database joins with external data sources (Google calls this “reconciliation”)
- Freebase has a nice service that automatically figures out what type of values your data contains (e.g. movie names), matches them to the appropriate records, and offers you options in cases of ambiguity (e.g. there are several movies containing the word “Terminator”)
- Once you have a match, you can pull in other fields from the matched record (such as the movie’s release date)
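To make the web-service augmentation concrete, here is a rough Python sketch of adding geo-coding columns to address rows, in the spirit of the per-row lookups shown in the video. The endpoint URL and response fields are hypothetical placeholders, not a real API:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical geocoding endpoint; a placeholder, not a real service.
GEOCODE_URL = "https://geocoder.example.com/lookup?q="

def geocode(address):
    """Call the (hypothetical) web service and pull latitude/longitude
    out of its JSON response."""
    with urllib.request.urlopen(GEOCODE_URL + urllib.parse.quote(address)) as response:
        result = json.load(response)
    return result.get("lat"), result.get("lng")

def augment(rows):
    """Add geo-coding columns to each address row."""
    for row in rows:
        row["lat"], row["lng"] = geocode(row["address"])
    return rows

rows = [{"name": "Head office", "address": "1600 Amphitheatre Parkway, Mountain View, CA"}]
# augment(rows)  # would issue one HTTP request per row against the placeholder endpoint
```

Reconciliation works in the same spirit: send a value to the service, get back candidate matching records, and let a human resolve the ambiguous cases.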
Overall, Google Refine 2.0 looks like a great free option for correcting and correlating small, real-world messy data sets from the web, though you have to be a JSON ninja to get the most out of it.
It’s clear that you could already use this in a corporate context — there are many one-off projects that require manual collection of information from different sources. And organizations suffer from just the same types of information fragmentation as the general web.
With a more user-friendly interface and (presumably) more robust scalability, this type of tool could be of great benefit to people trying to pragmatically cobble together information from various data sources (a few spreadsheets, an external web site, the corporate data warehouse, etc.).
It’s not clear that corporate data is or ever will be a priority for Google — the entire enterprise information management market is a rounding error compared to their advertising empire. In the meantime, anything that can help people get the data they need, cheaply, should be welcomed.
This type of solution will never replace the need for a robust enterprise information platform, but the demand for quick, “messy” answers to real-world business questions is frequently underestimated in business analytics deployments, and Google Refine 2.0 looks like a great tool to add to the workbench.