Smart Home? Adversarial Machine Learning Could Protect Your Privacy

November 14, 2018

Tags: AI & Machine Learning, Computer Vision, Imaging & Video, Intelligent Agents & Assistants

By Meredith Alexander Kunz, Adobe Research

Do you ever wonder if your smart home device is watching you, even when you don’t want it to? Do you worry that you will be recorded on video, your likeness shared in the cloud for analytics or other, more nefarious purposes?

Smart home cameras are now found in millions of locations. While we might want video surveillance to capture intruders, and video-enabled gaming devices to track our movements for interactive gameplay, we don’t necessarily want cameras to record and share identifiable images of us engaged in private activities.

To address these concerns, Adobe Research scientists Zhaowen Wang and Hailin Jin have partnered with Texas A&M University’s Zhenyu Wu, Zhangyang Wang, and Haotao Wang to find new ways to protect our privacy from video-enabled in-home devices.

“The goal is to keep useful info—the action recognition—but to remove personal, identifiable, private information,” explains Zhaowen Wang.

[Image: a young man playing a golf video game; the new video privacy technology shows a non-identifiable image of the golfer.]

Their work builds on adversarial machine learning, a research field at the intersection of machine learning and computer security. The approach essentially uses one network to try to “trick” a second network: only video content free of private, identifying information can get past the second network’s scrutiny.

“Traditional machine learning tries to preserve and extract information—to maximize it,” Wang says. “Our approach is different. Adversarial learning helps us minimize recognizing a person’s identity, while still seeing and understanding their actions.”
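In rough terms, this trade-off can be written as a two-player objective. The formulation below is a sketch in our own notation, not taken from the paper: a filter f and an action recognizer T are trained to keep the action loss low, while the filter also tries to raise the loss of the best identity classifier D that can be trained against it.

```latex
\min_{f,\,T}\;\mathcal{L}_{\mathrm{action}}\big(T(f(x)),\,y_{\mathrm{action}}\big)
\;-\;\gamma\,\min_{D}\;\mathcal{L}_{\mathrm{identity}}\big(D(f(x)),\,y_{\mathrm{identity}}\big)
```

Here x is a raw video frame, f(x) is its filtered version, and the weight γ controls how aggressively identity information is suppressed relative to how much action information is kept.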

The authors’ adversarial system learns a smart “filtering” mechanism that automatically converts a raw image into a privacy-preserving version. The learned filter can be embedded in the camera front end, so that privacy information is removed from captured images at the very beginning, before any transmission, storage, or analytics. The approach is described in the team’s 2018 European Conference on Computer Vision (ECCV) paper, Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study.
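To make the training idea concrete, here is a minimal PyTorch-style sketch of this kind of adversarial setup. The tiny networks, image size, class counts, and trade-off weight are illustrative assumptions, not the architecture or hyperparameters from the paper.

```python
import torch
import torch.nn as nn

# Three players (architectures are placeholders, not the authors' models):
#   filter_net   - learned "filtering" applied to each raw frame
#   action_net   - tries to recognize the action from the filtered frame
#   identity_net - adversary that tries to identify the person from the filtered frame
filter_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 3, 3, padding=1))
action_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))     # 10 action classes (assumed)
identity_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 100))  # 100 identities (assumed)

ce = nn.CrossEntropyLoss()
opt_filter = torch.optim.Adam(list(filter_net.parameters()) + list(action_net.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(identity_net.parameters(), lr=1e-4)
gamma = 0.5  # privacy/utility trade-off weight (assumed)

def training_step(frames, action_labels, identity_labels):
    # 1) Train the adversary to identify people from *filtered* frames.
    with torch.no_grad():
        filtered = filter_net(frames)
    opt_adv.zero_grad()
    adv_loss = ce(identity_net(filtered), identity_labels)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the filter (and action recognizer) to keep the action
    #    recognizable while *raising* the adversary's identity loss.
    opt_filter.zero_grad()
    filtered = filter_net(frames)
    action_loss = ce(action_net(filtered), action_labels)
    identity_loss = ce(identity_net(filtered), identity_labels)
    (action_loss - gamma * identity_loss).backward()
    opt_filter.step()
    return action_loss.item(), identity_loss.item()
```

The two updates alternate: the adversary keeps getting better at re-identifying people from filtered frames, which in turn forces the filter to strip identity cues more thoroughly while still leaving enough signal for the action recognizer.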

Here’s an example of how it works. Say you want a smart camera to recognize an action performed by two people in a room of your home, but you don’t want to reveal their identities to third parties. The image must effectively be blurred to obscure facial features, skin tone, clothing, and other identifiable elements. The research team’s learned filter does just that, producing an image of moving shapes. The group is working on making this kind of output more human-readable while still concealing identities.
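At inference time, the idea is simply to pass every captured frame through the learned filter before anything else sees it. Below is a hypothetical on-device sketch that reuses the placeholder filter_net from above; the camera and uploader objects and their read/send methods are invented for illustration.

```python
import torch

def capture_and_send(camera, uploader, filter_net):
    """Filter each frame on-device so only privacy-preserving pixels leave the camera."""
    frame = camera.read()  # raw frame tensor of shape (3, H, W) - hypothetical camera API
    with torch.no_grad():
        safe_frame = filter_net(frame.unsqueeze(0)).squeeze(0)
    uploader.send(safe_frame)  # downstream analytics only ever see the filtered frame
```

Only the small filter network needs to ship on the device itself; the action recognition and any other analytics can run in the cloud on frames that no longer carry identifying detail.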

The technique is promising not just for smart cameras. It could also help de-identify imagery for data sharing, long a headache for behavioral and medical data science research. And it could potentially be applied beyond video, to voice recordings, too.

The authors are now extending their work to more scenarios and are also working to build a new benchmark dataset for the task of privacy protection in computer vision.

Collaborators:

Zhaowen Wang, Hailin Jin (Adobe Research)

Zhenyu Wu, Zhangyang Wang, Haotao Wang (Texas A&M University)

Learn More:

https://github.com/wuzhenyusjtu/Privacy-AdversarialLearning
