Regular readers know that I am not a Google fan boy, and that much of my commentary on Google focuses on their neglect of exploratory search. Nonetheless, when I saw the initial Youtubeware describing Google Squared a few weeks ago, my ears perked up. I decided to wait until it went live to assess it. Well, it’s live now.
The idea of Google Squared is simple: it “collects facts from the web and presents them in an organized collection, similar to a spreadsheet.” The best way to understand it is to try it. For example, search for hybrid car, and you’ll see a table of hybrids, with columns corresponding to image, description, type of transmission, and, yeah, height. Add a price column if you’d like, and it will populate it for you. Very slick.
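To make the spreadsheet metaphor concrete, here is a minimal Python sketch of what a “square” amounts to: rows are instances of the query category, columns are attributes, and adding a column asks the system to fill it in for every row. Everything here is a hypothetical stand-in, not Google’s code or data, and the values in the fact table are illustrative placeholders rather than verified specs.

```python
# Illustrative fact source: a plain dict standing in for whatever Google
# mines from the web. Values are placeholders, not real specifications.
FACTS = {
    "Toyota Prius":       {"transmission": "CVT",  "height": "58.7 in", "price": "$22,000"},
    "Honda Insight":      {"transmission": "CVT",  "height": "56.2 in", "price": "$19,800"},
    "Ford Fusion Hybrid": {"transmission": "eCVT", "height": "56.9 in", "price": "$27,300"},
}

class Square:
    """A 'square': rows are instances, columns are attributes."""
    def __init__(self, items, columns):
        self.items = list(items)
        self.columns = list(columns)

    def add_column(self, name):
        # Analogous to the user adding a column in the UI: the system
        # then tries to populate it for every row.
        self.columns.append(name)

    def rows(self):
        for item in self.items:
            yield [item] + [FACTS.get(item, {}).get(col, "?") for col in self.columns]

square = Square(FACTS.keys(), ["transmission", "height"])
square.add_column("price")
for row in square.rows():
    print(row)
```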
Of course, it is, as Google admits, “by no means perfect”. Most queries will show its warts, and some, like information scientists, are way off (it doesn’t even try to return results for library scientists). But it does pretty well when there is structured data out there, and it makes an admirable attempt to find it! I suspect the real trick here is that it does a decent job of determining instances of the query category (perhaps a souped-up version of work they started discussing back in 2004), and then mining structured content about those instances from repositories like Freebase.
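Here is a rough sketch of that suspected two-stage pipeline, just to pin down the idea. Both helper functions are hypothetical stubs of my own; nothing here reflects Google’s actual system or Freebase’s real API.

```python
def expand_category(query):
    """Stage 1 (hypothetical): determine instances of the query category,
    i.e. set expansion in the spirit of the 2004 work linked above.
    Stubbed with canned results for illustration."""
    return ["Toyota Prius", "Honda Insight", "Ford Fusion Hybrid"]

def lookup_attributes(instance, attributes):
    """Stage 2 (hypothetical): mine structured facts about an instance from
    a repository such as Freebase. Stubbed with empty values here."""
    return {attr: None for attr in attributes}

def build_square(query, attributes=("description", "transmission", "height")):
    # Combine the two stages: find the instances, then fill in their attributes.
    instances = expand_category(query)
    return {inst: lookup_attributes(inst, attributes) for inst in instances}

print(build_square("hybrid car"))
```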
I mean, look at these results:
To be clear, I picked these examples after a fair amount of trial and error. Like Wolfram Alpha, it is hit and miss, with more miss than hit. But, as Seth Grimes said at the recent Text Analytics Summit, when Wolfram Alpha is good, it’s very, very good, but when it’s bad, it’s horrid. Google Squared doesn’t fail quite so spectacularly, and it gives you a lot more of a chance to interact with it.
This is, by far, the best step I’ve seen Google take towards HCIR, and I’m impressed. It’s still a toy at this stage, but I think it has a future. My warmest congratulations to Daniel Dulitz and the rest of the magpie team that developed it; I’m looking forward to seeing it evolve.