The uncanny valley — a sociotechnological reality in which tech becomes so sophisticated that it mimics humanity to a confusing, often frightening, degree. This is the ledge on which our society now teeters, at least according to Sara Watson, a technology critic, Fellow at the Berkman Center for Internet and Society at Harvard University, and contributor to The Atlantic.
In her newest piece, Watson argues that our data doppelgängers have created an uncanny valley of personalization in which who we are in reality is mirrored back to us, often without consent, understanding or even context, via online advertisements.
The culprit? Big data.
Targeted Ads, Big Data and Why We Think It’s All a Little Creepy
Data collection, as it currently stands, is a never-ending process. From the moment you check your news feed in the morning, to when you purchase lunch with your credit card, to Googling your next vacation locale in the evening, the sum of your digital touch points is being harvested by multiple companies, often without your permission, and then sold to third-party vendors, or data brokers, for the purposes of ad targeting.
This isn’t news. This is simply the way of the current digital world. And if you weren’t sure that ad targeting using third-party data, much of it scraped and thus unreliable, is such a prominent business practice, consider the data management platform (DMP).
DMPs are, by definition alone, ad targeting tools. They pull in third-party data from multiple sources (and first-party data, if you choose to tag your site accordingly), allowing advertisers to segment audiences by traits like purchase history, geography and demographics, and to better serve ads to those audiences. DMPs are often why you see an ad on Facebook for the shoes you bought last week. DMPs are often the platforms to blame when the question arises, “Why am I even being served this ad? I’m not [insert random characteristic like ‘pregnant,’ ‘looking to purchase shoes,’ or ‘bipolar’ here].”
The answer is simple: you meet demographic criteria on a particular platform (Google, Facebook, etc.) and once searched for a term or bought a particular item that hinted at your being interested in said advertisement, however vague, and perhaps personally irrelevant, that search or purchase may have been.
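The targeting logic described above can be sketched in a few lines of code. This is a toy illustration only, not any vendor’s actual API: the profile fields, segment format and `should_serve_ad` function are all hypothetical, invented here to show how coarse the “in segment plus vaguely hinted interest” rule can be.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """A toy 'data doppelgänger': traits a DMP might aggregate (hypothetical)."""
    user_id: str
    geography: str
    age_range: str
    recent_searches: list = field(default_factory=list)
    purchases: list = field(default_factory=list)

def matches_segment(profile, segment):
    """True if the profile falls inside an advertiser's segment.

    A segment is a dict mapping a trait name to its allowed values,
    e.g. {"geography": {"US"}, "age_range": {"25-34"}}.
    """
    return all(getattr(profile, trait) in allowed
               for trait, allowed in segment.items())

def should_serve_ad(profile, segment, ad_keywords):
    """Serve the ad if the user is in the segment AND any past search
    or purchase merely mentions one of the ad's keywords -- however
    vague or personally irrelevant that signal may be."""
    hinted_interest = any(
        kw in item
        for kw in ad_keywords
        for item in profile.recent_searches + profile.purchases
    )
    return matches_segment(profile, segment) and hinted_interest
```

Note that a single week-old purchase of trail shoes is enough for `should_serve_ad` to keep matching every shoe ad thereafter, which is exactly the “I already bought these” experience the article describes.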
“Right now we don’t have many tools for understanding the causal relationship between our data and how third parties use it,” writes Watson. “When we try to figure out why creepy ads follow us around the Internet, or why certain friends show up in our news feeds more than others, it’s difficult to discern coarse algorithms from hyper-targeted machine learning that may be generating the information we see. We don’t often get to ask our machines, ‘What makes you think that about me?'”
And here is where we begin to experience the uncanny valley, a space in which our personalized ads make us question ourselves. “Am I pregnant?” “Do I want these shoes?” “Am I bipolar?” And the real kicker: “Is this just a hilarious mistake or does big data know more about me than I know about me?”
Personal Agency in the Digital Age
This entrance into the uncanny valley is serious business, to be sure. It shines a light on human psychology, including doubt, curiosity, defensiveness and secrecy. Of course, not all big data is part of the uncanny valley phenomenon. Moreover, first- and second-party data are arguably our defense against the uncanny valley, giving humans the opportunity to own their doppelgängers rather than the other way around.
See, third-party data is in the business of up-selling, and it is the reason big data faces so much scrutiny. When data is collected without user consent, those users lose all control of their digital identity; all they see is the finished product, a machine-generated advertisement based on personality traits both relevant and irrelevant. It’s the generator of the oft-asked question: “Why am I seeing this?”
“Personally targeted digital experiences present a likeness of our needs and wants, but the contours of our data are obscured by a black box of algorithms,” writes Watson. “Based on an unknown set of prior behaviors, these systems anticipate intentions we might not even know we have.”
The key here is the “unknown,” because along with the brokerage of data collected without a user’s consent comes the breakdown of that user’s personal agency to define who she is online, and soon offline, as the Internet of Things takes shape.
Instead, what first- and second-party data allow is a data democracy, sparing us from having to live in the uncanny valley. They give agency back to users, to people, essentially saying, “You have the power, not the tech.” Because in the end, people generate data, and people thus own the right to share that data with the brands they trust, and to withhold it, if they so choose, from those they do not. This is our way out of the uncanny valley. This is a world where customer experience and lifetime value matter more than quarterly sales.
Are we there yet? Probably not. But the cookie is indeed crumbling and the voice of the people fighting for their data rights will be heard. When that happens, which side will you be on?