The technical, societal, and cultural challenges that come with the rise of fake media

The O’Reilly Data Show Podcast: Siwei Lyu on machine learning for digital media forensics and image synthesis.

By Ben Lorica
February 14, 2019


In this episode of the Data Show, I spoke with Siwei Lyu, associate professor of computer science at the University at Albany, State University of New York. Lyu is a leading expert in digital media forensics, a field of research into tools and techniques for analyzing the authenticity of media files. Over the past year, there have been many stories written about the rise of tools for creating fake media (mainly images, video, audio files). Researchers in digital image forensics haven’t exactly been standing still, though. As Lyu notes, advances in machine learning and deep learning have also found a receptive audience among the forensics community.

We had a great conversation spanning many topics including:

  • The many indicators used by forensic experts and forgery detection systems
  • Balancing "open" research with the risks that come with it, including "tipping off" adversaries
  • State-of-the-art detection tools today, and what the research community and funding agencies will be working on over the next few years
  • The technical, societal, and cultural challenges that come with the rise of fake media

Here are some highlights from our conversation:

Imbalance between digital forensics researchers and forgers

In theory, it looks difficult to synthesize media. That's true, but on the other hand, there are factors to consider on the forgers' side. The first is the fact that most people working in forensics, like myself, usually just write a paper and publish it, so the details of our detection algorithms become available immediately. On the other hand, people making fake media are usually secretive; they don't typically publish the details of their algorithms. So, there's a kind of imbalance between the information on the forensic side and the forgery side.

The other issue is user habit. Even if some of the fakes are very low quality, a typical user looks at them for just a second; sees something interesting, exciting, or sensational; and helps distribute them without actually checking their authenticity. This helps fake media spread very, very fast. Even though we have algorithms to detect fake media, these tools are probably not fast enough to actually stop the spread.

… Then there are the actual incentives for this kind of work. For forensics, even if we have the tools and the time to catch a piece of fake media, we don't get anything. But for the people actually making the fake media, there is more financial or other incentive to do so.

