Right now, big data and artificial intelligence (AI) give business leaders and marketers in the U.S. enormous leverage. But that could change quickly. All it takes is one bad actor, such as Russia. As a result, we're currently witnessing the beginning of a clampdown on data-driven marketing. What's more, a new report from more than two dozen experts doesn't bode well for AI or for marketing activities that are now commonplace, such as geotargeting.
As Bill Hess points out, hackers are already using big data and AI against us, primarily in the form of data poisoning: tampering with the data a system collects or learns from so that it produces skewed results. Data integrity hacks are a reality companies have to deal with, and Russia's meddling in the 2016 election was part of what led Mark Zuckerberg to change Facebook's algorithm, a change that directly affects marketers.
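To make the data-poisoning idea concrete, here is a minimal, hypothetical sketch of a label-flipping attack. Everything in it is an illustrative assumption (the synthetic dataset, the logistic-regression model, the poisoning rates); the point is simply that an attacker who can tamper with a slice of the data a model learns from can quietly drag down the quality of every decision built on top of it.

```python
# Hypothetical illustration of label-flipping data poisoning: an attacker who
# corrupts part of the training labels degrades the model that analysts and
# marketers later rely on. Dataset and model are synthetic and illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "customer" data: 2,000 rows, binary outcome (e.g., buys / doesn't buy).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

def accuracy_after_poisoning(poison_rate):
    """Flip the labels on a fraction of the training rows, then train and evaluate."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1 on the attacker's rows
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for rate in (0.0, 0.1, 0.3):
    print(f"{rate:.0%} of training labels flipped -> test accuracy {accuracy_after_poisoning(rate):.2f}")
```

Even modest tampering measurably erodes accuracy, which is why data integrity hacks are worth taking seriously.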
Big Data Faces Rising Regulations
Now, in large part because of Russian trolls who used big-data marketing tactics to promote fake news and influence the election, Facebook will no longer give brands' posts prominent placement in people's news feeds unless those posts spark meaningful engagement. We've crossed a threshold: big data marketing tactics won't go unchecked now that people are seeing their dark side.
In Europe, starting in May, the General Data Protection Regulation (GDPR) will impose stiff penalties on companies that process consumers' data without respecting their right to privacy and their right to control their own data, regardless of where the company is located. That means if your company has a website and people in the EU visit it, you'll have to pay strict attention to what the GDPR says you can and cannot do with their data.
People are growing more and more suspicious of AI. The new malicious AI report, written by 26 experts spanning academia, civil society, and industry, identifies AI as a potential threat to "political security." The report says that AI "can automate tasks involved in surveillance" by analyzing "mass-collected data," and that the same capabilities can be used to create propaganda and deceptive content, such as misleading videos and fake news. The more trolls and hackers use AI to threaten the political security of democracies, the more likely democracies and companies are to regulate the use of big data.
The Danger to Marketers
This is of concern to marketers, because geotargeting is a common practice that walks the line between personalization and surveillance. With some 95 percent of Americans owning a cell phone and about 10 percent accessing the internet only through smartphones, it's only logical for marketers to want to use geographic data for effective marketing. Denny's used geotargeting to increase in-store visits by 11.6 percent, and Starbucks and The Container Store have both used it to improve convenience for their customers without privacy complaints. But the practice has also drawn scrutiny: Geofeedia, a social media monitoring platform, provided user location information to police so they could surveil protesters.
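To see how simple the underlying mechanics are, here is a minimal sketch of the kind of geofence check that sits behind offers like Denny's. The coordinates, radius, and function names are illustrative assumptions, not any brand's actual implementation.

```python
# Minimal geofencing check of the kind that underlies geotargeted promotions.
# Coordinates and radius are made up for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def should_show_promo(user_lat, user_lon, store_lat, store_lon, radius_km=1.5):
    """Serve the in-store offer only when the device reports a location inside the fence."""
    return haversine_km(user_lat, user_lon, store_lat, store_lon) <= radius_km

# A device a few blocks from a (made-up) store location gets the offer; one across town does not.
print(should_show_promo(40.7580, -73.9855, 40.7527, -73.9772))  # True: inside the fence
print(should_show_promo(40.6892, -74.0445, 40.7527, -73.9772))  # False: too far away
```

The same distance check that serves a coupon can just as easily report who is standing inside a geofenced protest, which is exactly what made the Geofeedia disclosure so alarming.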
As a result, Twitter, Facebook, and Instagram suspended Geofeedia's access to their data, and the company's CEO had to backpedal and alter its mission. To this day, it's a public relations nightmare for Geofeedia: when you Google the name, the scandal immediately pops up. Forbes declared this the era of social surveillance, saying it was astounding that the social networks claimed ignorance of the unethical monitoring.
Although no geotargeting legislation ensued, revelations like the Geofeedia scandal, Russian trolls' influence on the election, and the malicious AI report don't look promising for marketers who rely on the practice. Furthermore, bad actors could use people's location data for a very different kind of geotargeting: drone attacks.
AI Drones and Geotargeting
According to Ohio University, “One of the most common concerns from the public about UAVs is privacy. Drones can collect data and images without drawing attention, leading many Americans to fear their Fourth Amendment rights of privacy may be in jeopardy if government entities were to use drones to monitor the public.”
According to the malicious AI report, government monitoring is the least of our concerns when it comes to drones. The authors note that both commercial and military drones are becoming more autonomous, and that "open source face detection algorithms, navigation and planning algorithms, and multi-agent swarming frameworks that could be leveraged towards malicious ends can easily be found."
AI drones could easily identify a target, monitor the target's location, and plan and carry out an assassination or bombing. Soon, AI drones could do this on their own with little more than a target's name in a database. That would enable large-scale geotargeting by terrorists and other militarized actors, including the kind of home-grown attackers who shoot up schools. Terrorists could let AI drones do their bidding while they engage in seemingly unrelated activities, or in other forms of violence that distract law enforcement from the drone threat.
Geolocation Data Is Readily Available
Thanks to social media and GPS-equipped cell phones, there's no shortage of geolocation data for malicious AI to work with. The malicious AI report might sound like something out of the TV show Black Mirror, but anyone with experience in the field knows that AI is growing ever more sophisticated. The best bet for companies and marketers is to steer as clear as possible of any geotargeting that resembles surveillance. If any red flags pop up when you're considering geotargeting, err on the side of caution and transparency.
Sadly, if big threats arise due to the ubiquity of geolocation data and AI, it will be extremely hard to catch the humans behind the threats and extremely easy to blame the technology. When people use guns for malicious purposes, the backlash is immediate, and in America, only the Second Amendment and the NRA’s lobbying efforts keep guns in households. There’s no amendment protecting AI.
We all want to use technological advancements for good, and of course to boost business. But as the bad guys learn to use AI and big data for their own purposes, it serves the greater good for business people to be wary and take care in how they implement these kinds of strategies.