AI makes it easier to spot fake photos. Source: Shutterstock

How businesses can benefit from DARPA’s media forensics project

BUSINESSES are new to the term media forensics, or as the US Department of Defense’s (DOD) Defense Advanced Research Projects Agency (DARPA) puts it, “MediFor”.

However, in the next couple of years, media forensics is expected to play a critical role in the digital world, especially after recent success at a lab funded by DARPA and run by Adobe Research and UC Berkeley.

The press release announcing and explaining the project spearheaded by Adobe didn’t provide any dates for a commercial launch of the technology, but experts believe demand will accelerate timelines.

While the media forensics project is aimed at defusing fake news, the technology itself has the power to not only boost security projects in commercial establishments but also play a significant role in artificial intelligence (AI) developments that learn from media content.

Wait, what exactly is media forensics?

A picture is worth a thousand words, but in the past few years, pictures have become harder to trust.

The problem isn’t limited to the fashion and cosmetics industries where photos are “touched-up” and “augmented” to make models look better and the results of skin-care products look (instantly) appealing — it’s spread to politics and now even business.

What’s worrying specialists is that manipulating photos has become quite easy, and recently, advancements in AI have made it possible to “edit” videos just as seamlessly.

To be honest, the manipulation of photos and videos using AI is something even social media applications offer these days — it’s a major part of their appeal.

According to DARPA, this manipulation of visual media is enabled by the wide-scale availability of sophisticated image and video editing applications, as well as automated manipulation algorithms. These permit editing in ways that are very difficult to detect, either visually or with current image analysis and visual media forensics tools.

The forensic tools used today lack robustness and scalability, and address only some aspects of media authentication; an end-to-end platform to perform a complete and automated forensic analysis does not exist.

It’s why DARPA chose to invest in media forensics (MediFor).

The US Government agency-led program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator. The goal is to develop technologies for the automated assessment of the integrity of an image or video, and to integrate these into an end-to-end media forensics platform.

If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.

Did Adobe and UC Berkeley make MediFor a success?

No. There’s a long way to go and the recent success was just a small step forward. Here’s what they achieved:

Adobe researchers Richard Zhang and Oliver Wang, along with their UC Berkeley collaborators, Sheng-Yu Wang, Dr. Andrew Owens, and Professor Alexei A. Efros, developed a method for detecting edits to images that were made using Photoshop’s Face Aware Liquify feature.

“While still in its early stages, this collaboration between Adobe Research and UC Berkeley, is a step towards democratizing image forensics, the science of uncovering and analyzing changes to digital images,” said Adobe.

“We started by showing image pairs (an original and an alteration) to people who knew that one of the faces was altered. For this approach to be useful, it should be able to perform significantly better than the human eye at identifying edited faces,” explained Adobe’s Wang.

By the end of the project, human judges could identify the altered face only 53 percent of the time, barely better than chance, while the tool the researchers developed achieved accuracy as high as 99 percent.
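The researchers’ actual detector is a deep neural network trained on Photoshop-scripted edits, which is beyond the scope of this article. But the core intuition, that editing leaves subtle statistical traces a machine can pick up far more reliably than the human eye, can be sketched with a toy example. Everything below (the synthetic “face patches”, the gradient-energy feature, and the simple threshold classifier) is an illustrative assumption, not the Adobe/Berkeley method:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(manipulated: bool, size: int = 16) -> np.ndarray:
    """Toy stand-in for a face crop: random texture, optionally 'warped'."""
    patch = rng.normal(size=(size, size))
    if manipulated:
        # Crude stand-in for a Liquify-style edit: local smoothing,
        # which flattens high-frequency detail in one region.
        patch[4:12, 4:12] = patch[4:12, 4:12].mean()
    return patch

def feature(patch: np.ndarray) -> float:
    """High-frequency (gradient) energy; local smoothing lowers it."""
    gx = np.diff(patch, axis=0)
    gy = np.diff(patch, axis=1)
    return float((gx ** 2).mean() + (gy ** 2).mean())

# "Train" a threshold classifier: midpoint between class means.
train = [(feature(make_patch(m)), m) for m in [False, True] * 200]
real_mean = np.mean([f for f, m in train if not m])
fake_mean = np.mean([f for f, m in train if m])
threshold = (real_mean + fake_mean) / 2

def detect(patch: np.ndarray) -> bool:
    # Low gradient energy suggests the patch was smoothed (manipulated).
    return feature(patch) < threshold

# Evaluate on fresh samples.
test = [(make_patch(m), m) for m in [False, True] * 100]
accuracy = np.mean([detect(p) == m for p, m in test])
print(f"toy detector accuracy: {accuracy:.2f}")
```

Even this crude single-feature detector separates the two classes well above chance on its synthetic data, which is the point of the experiment quoted above: statistical regularities that are invisible to the eye are easy pickings for a trained model.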

Looks cool, but how does this help businesses?

Businesses are currently refining their AI solutions, and use cases such as facial recognition, image matching, and ‘human v. human-billboard’ differentiation by autonomous vehicles are all going to get an immediate boost as a result of advancements in media forensics.

Further, as organizations train their AI algorithms to understand customers and learn from their facial expressions to make improvements to their products and processes, learnings from the media forensics project overall will provide great insights to businesses.

Finally, for organizations exploring the use of AI in physical enterprise security, media forensics will help developers take a giant leap forward in ensuring that their offering is foolproof.

Overall, experts believe media forensics will eventually be tied into all social media platforms that want to avoid fake news, and into all kinds of media dissemination tools that want to ensure people know what they’re watching. Their biggest challenge, however, will be balancing creative or artistic license with fact.