The U.S. Senate passed legislation on Thursday that aims to monitor how deepfake videos are used online. Known as the “Deepfake Report Act of 2019,” the bill calls on the Department of Homeland Security to produce an annual report outlining the state of the new technology as well as any potential harms being caused by these types of videos.
The Deepfake Report Act would also examine whether foreign governments or non-governmental entities are using manipulated videos to harm national security, and provide insights into the latest techniques for detecting digital content generated by artificial intelligence.
The bill, which received bipartisan support in the Senate, is now headed to the House for consideration. One of the bill’s sponsors, Sen. Gary Peters (D-Mich.), argued that lawmakers “must ensure Americans are aware of the risks this new technology poses, and are empowered to recognize misinformation.”
The legislation comes amid growing concern over deepfakes that has caused everyone from politicians to tech giants to take notice. Companies including Twitter, Facebook, and Google have either created new policies for dealing with deepfakes or invested in developing new tools to detect these manipulated videos.
Although Washington, D.C. has put most of its attention on the mere potential of deepfake videos being used to alter an election or spread misinformation, nearly all of the harm created by deepfakes thus far has targeted ordinary women, not politicians.
As noted in a report this month from Netherlands-based cybersecurity company Deeptrace, 96% of all deepfakes online are related to nonconsensual porn, not politics. Even more telling, 100% of those videos involved placing women, more often than not celebrities, into pornographic videos.
H/T The Hill