
The clear and present danger of artificial intelligence

Here are all the nefarious ways AI can be used against you.


Ben Dickson


Are we slowly inching toward a robot apocalypse? Or is it already here?


Since the term “artificial intelligence” was coined in the 1950s, we’ve been warned about an ominous future in which computers become smarter than humans and eventually turn against their creators in a Matrix- or Terminator-like scenario. But the threat of a super-intelligent AI that will drive humans into slavery remains remote: it’s decades away by even the most pessimistic estimates.

But the danger of humans using our current “dumb” AI for evil purposes is very real, and it’s growing rapidly as AI applications become more capable and more integral to the things we do and the decisions we make. These are the threats we should be concerned with right now, not just the future dystopia that Elon Musk, Nick Bostrom, and other experts warn of.

Advanced social engineering attacks

Spear-phishing remains one of the most effective tools in the hacker’s arsenal. It’s a kind of social engineering attack that involves sending highly targeted emails to victims and tricking them into installing malware or giving away critical information to the attackers. In 2016, a spear-phishing attack gave hackers access to the email account of John Podesta, the campaign chairman of presidential candidate Hillary Clinton.


Successful phishing attacks hinge on very good knowledge of the victim and require meticulous preparation. Hackers can spend months studying their targets and gathering information. AI can cut that time dramatically by automating much of the process: scraping information from social media and other online sources, then finding the correlations that help attackers sharpen their disguise.

Branches of AI like natural language processing let hackers analyze large stores of unstructured data (online articles, web pages, and social media posts) at very high speed and extract useful information from them, such as the habits and preferences of an attack’s target.
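To get a sense of how little effort this kind of profiling takes, here is a minimal sketch using the spaCy NLP library to pull named entities out of a target’s public posts and rank them by frequency. The posts, and the crude “interest profile” that falls out, are invented for illustration.

```python
from collections import Counter

import spacy  # pip install spacy && python -m spacy download en_core_web_sm

# Hypothetical scraped posts standing in for a target's public feed.
posts = [
    "Great week at DEF CON in Las Vegas with the Acme Corp security team!",
    "Can't wait for the Lakers game on Friday.",
    "Proud to start my new role at Acme Corp next month.",
]

nlp = spacy.load("en_core_web_sm")

# Count the named entities (people, organizations, places, events)
# the model recognizes across all posts.
entities = Counter()
for post in posts:
    for ent in nlp(post).ents:
        entities[(ent.text, ent.label_)] += 1

# The most frequent entities form a crude "interest profile" an attacker
# could use to make a phishing email look plausible.
for (text, label), count in entities.most_common(5):
    print(f"{text} ({label}): {count}")
```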

Advanced forgery

Earlier this year, an AI tool for creating fake porn gained popularity on the social news site Reddit. Called “deepfakes,” the app enabled users to swap the faces of porn actresses with those of famous actresses and singers, using computing resources that are available to anyone with an internet connection. The technology is still a bit crude and suffers from occasional glitches and artifacts, but with enough effort, deepfakes can be turned into a dangerous tool and a new type of revenge porn. Lawmakers are already warning about how the technology could be used to forge video evidence and spread misinformation and propaganda.

A deepfake video featuring Emma Watson (wakashaka12/Reddit)
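The original deepfakes tool rested on a simple autoencoder trick: a single shared encoder learns pose and expression from images of both faces, while a separate decoder per identity learns to render each person. Swapping decoders at inference time produces the face swap. The PyTorch sketch below is a deliberately simplified, untrained illustration of that idea; real implementations use convolutional networks and face-alignment preprocessing.

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Toy shared-encoder / two-decoder face-swap model (untrained sketch)."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # A single encoder learns identity-agnostic facial structure
        # (pose, expression, lighting) from images of both people.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # One decoder per identity learns to render that person's face.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def swap(self, frames_a: torch.Tensor) -> torch.Tensor:
        """Encode person A's frames, decode with person B's decoder:
        the result is B's face wearing A's expression and pose."""
        z = self.encoder(frames_a)
        return self.decoder_b(z).view(-1, 3, 64, 64)

model = FaceSwapAutoencoder()
fake = model.swap(torch.rand(1, 3, 64, 64))  # untrained demo forward pass
print(fake.shape)  # torch.Size([1, 3, 64, 64])
```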

Deepfakes and other AI-powered tools are making it easy for anyone with minimal skills to impersonate other people. Lyrebird, another application that uses AI and machine learning, takes a few samples of a person’s voice and generates fake voice recordings. “My Text in Your Handwriting,” a program developed by researchers at University College London, uses machine learning to analyze a small sample of handwritten script and generate new text in that handwriting. Another company has created AI-powered chatbots that can imitate anyone’s conversation style, provided they’re fed enough transcripts of that person’s conversations.

Used together, these tools can usher in a new era of fraud, forgery, and fake news. Countering their nefarious effects will be frustratingly difficult.

Advanced cyberattacks

Hackers have been poking at software and hardware for security vulnerabilities for as long as computers have existed. However, discovering and exploiting security holes used to be a painstaking manual process, one in which hackers patiently probed different parts of a system or application until they found an exploitable flaw.


Now hackers can enlist machine-learning bots to automate the process. In 2016, the Defense Advanced Research Projects Agency (DARPA) hosted the Cyber Grand Challenge, a competition in which human contestants sat back and let their AI bots compete. The bots automatically probed each other’s systems, then found and exploited vulnerabilities. The competition offered a glimpse of how cyberwars might be fought in the near future.
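The Cyber Grand Challenge systems combined fuzzing, symbolic execution, and automatic patching. The hedged sketch below shows only the most basic building block, a random mutation fuzzer; the `./parser` target and the crash criterion are assumptions for illustration, and ML-guided fuzzers replace the blind mutate step with a model that learns which inputs reach new code paths.

```python
import random
import subprocess

def mutate(seed: bytes, n_flips: int = 8) -> bytes:
    """Return a copy of the seed with a few bytes randomly overwritten."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target_cmd: list[str], seed: bytes, iterations: int = 10_000):
    """Feed mutated inputs to a target program and collect crashing inputs."""
    crashes = []
    for i in range(iterations):
        candidate = mutate(seed)
        try:
            proc = subprocess.run(target_cmd, input=candidate,
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            continue  # hangs are interesting too, but we skip them here
        if proc.returncode < 0:  # negative = killed by a signal (e.g. SIGSEGV)
            crashes.append((i, candidate))
    return crashes

# "./parser" is a hypothetical vulnerable binary used only for illustration.
print(fuzz(["./parser"], seed=b'{"name": "test"}'))
```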

Enhanced with AI tools, hackers will become much faster and more capable.

A line of identical robots, one with evil eyes (Luis Colindres/The Daily Dot)

Advanced surveillance and repression

Many governments are tapping the capabilities of AI-powered facial recognition for state-sponsored surveillance. U.S. law enforcement agencies maintain facial-recognition databases that contain and process information on more than half of the country’s adult population. China’s AI-powered video surveillance system draws on 170 million CCTV cameras across the country and is efficient enough to identify and locate a new target in mere minutes. Other governments are developing similar programs.
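Mechanically, large-scale face recognition boils down to a nearest-neighbor search over face embeddings: a neural network maps each face image to a vector, and identification is a similarity lookup against a database of known vectors. The sketch below assumes the embeddings already come from some FaceNet-style model; the toy database and threshold are invented for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the best-matching identity, or None if nothing clears the bar."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of 128-dimensional embeddings (real ones come from a
# face-embedding network such as a FaceNet-style model).
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ["alice", "bob"]}
probe = database["alice"] + rng.normal(scale=0.1, size=128)  # noisy re-sighting
print(identify(probe, database))  # -> "alice"
```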


While states claim such programs will help them maintain security and capture criminals, the same technology can very well be used to identify and target dissidents, activists, and protesters.

Face recognition is not the only area where AI can serve creepy surveillance purposes. U.S. Customs and Border Protection (CBP) is developing a new AI-powered tool that analyzes social networks and other public sources to identify individuals who pose security risks. China is creating an invasive “Sesame Credit” program, which uses AI algorithms to rate citizens based on their online activities, including their shopping habits, the content of their social media posts, and their contacts. This Orwellian surveillance program, which is being developed with the cooperation of the country’s largest tech firms, would give the Chinese government visibility into virtually everything its citizens do.
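How such a score would actually be computed is not public, but the basic mechanics can be as simple as a weighted sum over behavioral features. The sketch below is purely speculative; every feature name and weight is an invented assumption.

```python
# Purely illustrative: a linear score over invented behavioral features.
# No real system's features or weights are public; these are assumptions.
WEIGHTS = {
    "on_time_payments":    2.0,   # rewarded behavior
    "flagged_posts":      -5.0,   # penalized speech
    "low_score_contacts": -1.5,   # guilt by association
    "luxury_purchases":    0.5,
}

def citizen_score(features: dict[str, float], base: float = 600.0) -> float:
    """Weighted sum of behavioral features on top of a base score."""
    return base + sum(w * features.get(name, 0.0)
                      for name, w in WEIGHTS.items())

print(citizen_score({"on_time_payments": 12, "flagged_posts": 2}))  # 614.0
```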

All of this doesn’t mean that AI has already gone haywire. Like most technologies, AI’s benefits far outweigh its potential for malicious use. But we must acknowledge those potential abuses and take measures to mitigate them.


In a paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” researchers from the Electronic Frontier Foundation, OpenAI, the Future of Humanity Institute, and several other organizations point out the threats of contemporary AI and outline potential solutions, as well as guidelines to help keep AI-powered tools safe and secure. The paper also calls for more involvement from policymakers and for the development of ethical frameworks for AI developers to follow.

But first, the researchers note, AI researchers have to acknowledge that their work can be put to malicious use and take the necessary measures to keep it from being exploited.

“The point here is not to paint a doom-and-gloom picture—there are many defenses that can be developed and there’s much for us to learn,” Miles Brundage, a co-author of the paper, said in an interview with The Verge. “I don’t think it’s hopeless at all, but I do see this paper as a call to action.”

The manipulation of the 2016 U.S. presidential election and the recent Cambridge Analytica and Facebook scandal showed how big data and AI can serve questionable goals. Perhaps that will push the organizations and institutions involved in the industry to prevent the next catastrophe.
