Faculty Advisor

Whitehill, Jacob Richard

Abstract

In this project, we investigate the viability of collecting annotations for face images while preserving privacy by using synthesized images as surrogates. We compare two approaches: a deep learning model that renders a detailed 3D reconstruction of the face from an input image, and a novel generative adversarial network architecture that extends BEGAN-CS to generate images conditioned on desired facial features. Using these two models, we conduct an experiment with crowdsourced workers to compare the annotation quality of original face images and their synthesized versions. Across 60 workers annotating a total of 180 images (60 of each version), we find that while the original versions yield the best annotation accuracy (84.5%), the 3D (75.9%) and GAN (75.6%) versions show promising results.
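A minimal sketch of one common way to condition a BEGAN-style generator on desired facial attributes, as described above: the attribute vector is concatenated with the latent code before decoding. The layer sizes, dimensions, and names here are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Illustrative BEGAN-style decoder conditioned on a facial-attribute vector."""

    def __init__(self, latent_dim=64, attr_dim=10, img_channels=3):
        super().__init__()
        # Project concatenated (latent code, attribute vector) to an 8x8 feature map.
        self.fc = nn.Linear(latent_dim + attr_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ELU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ELU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1),  # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z, attrs):
        # Conditioning step: concatenate latent code and attribute vector.
        h = self.fc(torch.cat([z, attrs], dim=1)).view(-1, 128, 8, 8)
        return self.decoder(h)

# Example: generate a batch of 4 surrogate faces with specified (hypothetical) attributes.
z = torch.randn(4, 64)
attrs = torch.rand(4, 10)  # encoding of the desired facial features (assumed format)
fake_faces = ConditionalGenerator()(z, attrs)  # shape: (4, 3, 64, 64)
```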

Publisher

Worcester Polytechnic Institute

Date Accepted

March 2019

Major

Computer Science

Project Type

Major Qualifying Project

Accessibility

Unrestricted

Advisor Department

Computer Science
